News

Tesla Autopilot veterans launch company to accelerate self-driving development

After working on Tesla’s Autopilot team for 2.5 years, Andrew Kouri and Erik Reed decided to start their own AI-based self-driving company, aptly named lvl5. Together with former iRobot engineer George Tall, lvl5 aims to develop advanced vision software and HD maps for self-driving cars.

Founded in 2016, lvl5 was incubated at the renowned Silicon Valley accelerator Y Combinator and later raised $2 million in seed funding from Paul Buchheit, a Y Combinator partner and the creator of Gmail, and Max Altman’s 9Point Ventures.

In just 3 months, lvl5 racked up almost 500,000 miles of US roadway coverage with Payver. (Photo: lvl5)

“Working with lvl5’s founders while they were at Y Combinator, it was clear they have unmatched expertise in computer vision, which is the secret sauce of their solution,” said Buchheit. “I have no doubt this is the team to make self-driving a reality in the near term.”

At the center of lvl5’s technology are its computer vision algorithms. Co-founder and CTO George Tall previously specialized in computer vision at iRobot, and Kouri and Reed’s time on Tesla’s Autopilot team gave them similarly deep experience in the field.

Instead of turning to expensive LiDAR technology, lvl5’s computer vision analyzes the car’s environment for stoplights, signs, potholes, and other objects. The system can be accurate to within 10 cm, a notable figure considering the data is derived from simple cameras and smartphones. In comparison, LiDAR systems can cost over $80,000 but are accurate to within 3 cm.

So how will lvl5 map the world’s roadways using its computer vision technology? Smartphones, for now at least. The company has released an app called Payver that lets anyone collect data with their smartphone while driving and get paid between $0.01 and $0.05 per mile, depending on a number of factors. Users place their phone in a mount on the dashboard and let the app gather driving data.

The data is sent to lvl5’s central hub and processed by their computer vision technology. “Lvl5 is solving one of the biggest obstacles to widespread availability of self-driving technology,” said Max Altman, one of lvl5’s seed round investors and partner at 9Point Ventures. “Without accurate and efficient HD mapping, as well as the computer vision software that enables it, self-driving vehicles will take much longer to reach mass-market. This will delay everything from safer roads to efficient delivery services.”

GIF: lvl5

“We have to make self-driving available worldwide – not just in California,” Co-Founder and CEO Andrew Kouri said in a company statement. “Our approach, which combines computer vision software, crowdsourcing and widely available, affordable hardware, means our technology is accessible and will make self-driving a reality today, rather than five years from now.”

The company has already established pilot programs with major automakers and both Uber and Lyft. Companies will pay lvl5 an initial fee to use the maps, along with a monthly subscription to keep the maps continuously updated. “Through its OEM-agnostic approach, lvl5 will be able to collect significant amounts of mapping data from millions of cars in order to scale the technology for the benefit of drivers and pedestrians around the world,” the company’s press release states.

Christian Prenzler is currently the VP of Business Development at Teslarati, leading strategic partnerships, content development, email newsletters, and subscription programs. Additionally, Christian thoroughly enjoys investigating pivotal moments in the emerging mobility sector and sharing these stories with Teslarati's readers. He has been closely following and writing on Tesla and disruptive technology for over seven years. You can contact Christian here: christian@teslarati.com


News

Tesla Model 3 gets perfect 5-star Euro NCAP safety rating

Tesla prides itself on producing some of the safest vehicles on the road today.

Credit: Tesla Singapore/X

Tesla prides itself on producing some of the safest vehicles on the road today. Based on recent findings from the Euro NCAP, the 2025 Model 3 sedan continues this tradition, with the vehicle earning a 5-star overall safety rating from the agency.

Standout Safety Features

As listed on the Euro NCAP’s official website, the 2025 Model 3 achieved scores of 90% for Adult Occupant protection, 93% for Child Occupant protection, 89% for Vulnerable Road Users, and 87% for Safety Assist. The rating, per the Euro NCAP, applies to the Model 3 Rear Wheel Drive, Long Range Rear Wheel Drive, Long Range All Wheel Drive, and Performance All Wheel Drive.

The Euro NCAP highlighted a number of the Model 3’s safety features, such as its Active Hood, which automatically lifts during collisions to mitigate injury risks to vulnerable road users, and Automatic Emergency Braking System, which now detects motorcycles through an upgraded algorithm. The Euro NCAP also mentioned the Model 3’s feature that prevents initial door opening if someone is approaching the vehicle’s blind spot.

New Safety Features

In a post on its official Tesla Europe & Middle East account, Tesla noted that the company is also introducing new features that make the Model 3 even safer than it is today. These include functions like head-on collision avoidance and crossing traffic AEB, as well as Child Left Alone Detection, among other safety features.

“We also introduced new features to improve Safety Assist functionality even further – like head-on collision avoidance & crossing traffic AEB – to detect & respond to potential hazards faster, helping avoid accidents in the first place. 

“Lastly, we released Child Left Alone Detection – if an unattended child is detected, the vehicle will turn on HVAC & alert caregivers via phone app & the vehicle itself (flashing lights/audible alert). Because we’re using novel in-cabin radar sensing, your Tesla is able to distinguish between adult vs child – reduced annoyance to adults, yet critical safety feature for kids,” Tesla wrote in its post on X.

Below is the Euro NCAP’s safety report on the 2025 Tesla Model 3 sedan.

Euro NCAP 2025 Tesla Model 3 Datasheet by Simon Alvarez on Scribd

Elon Musk

USDOT Secretary visits Tesla Giga Texas, hints at national autonomous vehicle standards

The Transportation Secretary also toured the factory’s production lines and spoke with CEO Elon Musk.

Credit: Elon Musk/X

United States Department of Transportation (USDOT) Secretary Sean Duffy recently visited Tesla’s Gigafactory Texas complex, where he toured the factory’s production lines and spoke with CEO Elon Musk. In a video posted following his Giga Texas visit, Duffy noted that he believes there should be a national standard for autonomous vehicles in the United States.

Duffy’s Giga Texas Visit

As seen in videos of his Giga Texas visit, the Transportation Secretary appeared to appreciate the work Tesla has been doing to put the United States at the forefront of innovation. “Tesla is one of the many companies helping our country reach new heights. USDOT will be right there all the way to make sure Americans stay safe,” Duffy wrote in a post on X.

He also praised Tesla for its autonomous vehicle program, highlighting that “We need American companies to keep innovating so we can outcompete the rest of the world.”

National Standard

While speaking with Tesla CEO Elon Musk, the Transportation Secretary stated that other autonomous ride-hailing companies have been lobbying for a national standard for self-driving cars. Musk shared the sentiment, stating that “It’d be wonderful for the United States to have a national set of rules for autonomous driving as opposed to 50 independent sets of rules on a state-by-state rules basis.”

Duffy agreed with the CEO’s point, stating that, “You can’t have 50 different rules for 50 different states. You need one standard.” He also noted that the Transportation Department has asked autonomous vehicle companies to submit data. By doing so, the USDOT could develop a standard for the entire United States, allowing self-driving cars to operate in a manner that is natural and safe.


News

Tesla posts Optimus’ most impressive video demonstration yet

The humanoid robot was able to complete all the tasks through a single neural network.

Credit: Tesla Optimus/X

When Elon Musk spoke with CNBC’s David Faber in an interview at Giga Texas, he reiterated the idea that Optimus will be one of Tesla’s biggest products. Seemingly to highlight the CEO’s point, the official Tesla Optimus account on social media platform X shared what could very well be the most impressive demonstration of the humanoid robot’s capabilities to date.

Optimus’ Newest Demonstration

In its recent video demonstration, the Tesla Optimus team featured the humanoid robot performing a variety of tasks. These include household chores such as taking out the trash, using a broom and a vacuum cleaner, tearing a paper towel, stirring a pot of food, opening a cabinet, and closing a curtain, among others. The video also featured Optimus picking up a Model X fore link and placing it on a dolly.

What was most notable in the Tesla Optimus team’s demonstration was that the humanoid robot completed all the tasks through a single neural network. The robot also learned its actions directly from first-person videos of humans performing similar tasks. This approach should pave the way for Optimus to learn and refine new skills quickly and reliably.

Tesla VP for Optimus Shares Insight

In a follow-up post on X, Tesla Vice President of Optimus (Tesla Bot) Milan Kovac stated that one of the team’s goals is to have Optimus learn straight from internet videos of humans performing tasks, including footage captured in third person or by random cameras.

“We recently had a significant breakthrough along that journey, and can now transfer a big chunk of the learning directly from human videos to the bots (1st person views for now). This allows us to bootstrap new tasks much faster compared to teleoperated bot data alone (heavier operationally).

“Many new skills are emerging through this process, are called for via natural language (voice/text), and are run by a single neural network on the bot (multi-tasking). Next: expand to 3rd person video transfer (aka random internet), and push reliability via self-play (RL) in the real-, and/or synthetic- (sim / world models) world,” Kovac wrote in his post on X.
