News
Tesla Autopilot veterans launch company to accelerate self-driving development
After working on Tesla’s Autopilot team for 2.5 years, Andrew Kouri and Erik Reed decided to start their own AI-based self-driving company, aptly named lvl5. Together with former iRobot engineer George Tall, lvl5 aims to develop advanced vision software and HD maps for self-driving cars.
Founded in 2016, lvl5 was incubated at the renowned Silicon Valley incubator Y Combinator and later raised $2 million in seed funding from Paul Buchheit, a Y Combinator partner and the creator of Gmail, and Max Altman’s 9Point Ventures.

In just 3 months, lvl5 racked up almost 500,000 miles of US roadway coverage with Payver. (Photo: lvl5)
“Working with lvl5’s founders while they were at Y Combinator, it was clear they have unmatched expertise in computer vision, which is the secret sauce of their solution,” said Buchheit. “I have no doubt this is the team to make self-driving a reality in the near term.”
At the center of lvl5’s technology are its computer vision algorithms. Founder and CTO George Tall previously specialized in computer vision at iRobot, and Kouri and Reed’s time on Tesla’s Autopilot team likewise left them with deep expertise in the field.
Instead of turning to expensive LiDAR sensors, lvl5’s computer vision system scans the environment for stoplights, signs, potholes, and other objects. The system can be accurate to 10 cm, a notable figure considering it is derived from simple cameras and smartphones. By comparison, LiDAR systems can cost over $80,000 but are accurate to 3 cm.
- Each purple trace through the intersection contributes to building the 3D map from 2D images. For each frame, lvl5’s computer vision technology computes the position of the vehicle relative to other objects in the intersection and creates a point cloud that resembles the output from LiDAR. Each white sideways “pyramid” represents the location of a captured frame in the video trace. (Photo: lvl5)
- This image is taken from one of lvl5’s neural nets, which is designed to draw a box around the position of traffic lights in an image. (Photo: lvl5)
- With only two trips through this intersection, lvl5 can start to extract semantic features such as a stop sign. (Photo: lvl5)
- The three founders of lvl5 in front of their San Francisco home. Left to right: Erik Reed, Andrew Kouri, George Tall (Photo: lvl5)
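The captions above describe how lvl5 turns 2D video frames into a LiDAR-like 3D point cloud once the camera’s motion between frames is known. A minimal sketch of the core geometric step, two-view linear (DLT) triangulation with NumPy; the cameras and point used here are invented for illustration and are not lvl5’s actual pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two camera views.

    P1, P2: 3x4 projection matrices; x1, x2: 2D observations of the
    same point in each camera (normalized image coordinates).
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] . X) = P[0..1] . X
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A (up to scale)
    return X[:3] / X[3]        # homogeneous -> Euclidean

# Two cameras: first at the origin, second shifted 1 unit along x
# (the baseline a moving car provides between frames).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])            # a point 4 units ahead
x1 = X_true[:2] / X_true[2]                    # projection in camera 1
x2 = (X_true - [1.0, 0.0, 0.0])[:2] / X_true[2]  # projection in camera 2

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))  # prints True
```

In a real structure-from-motion pipeline, the camera poses themselves are estimated from the imagery, thousands of such points are triangulated per trace, and everything is refined jointly with bundle adjustment.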
So how will lvl5 map the world’s roadways using its computer vision technology? Smartphones. Well, for now at least. The company has released an app called Payver that lets anyone’s smartphone collect data while driving, paying between $0.01 and $0.05 per mile depending on a number of factors. Users place their phone in a mount on the dashboard and let the app gather driving data.
The data is sent to lvl5’s central hub and processed by their computer vision technology. “Lvl5 is solving one of the biggest obstacles to widespread availability of self-driving technology,” said Max Altman, one of lvl5’s seed round investors and partner at 9Point Ventures. “Without accurate and efficient HD mapping, as well as the computer vision software that enables it, self-driving vehicles will take much longer to reach mass-market. This will delay everything from safer roads to efficient delivery services.”
“We have to make self-driving available worldwide – not just in California,” Co-Founder and CEO Andrew Kouri said in a company statement. “Our approach, which combines computer vision software, crowdsourcing and widely available, affordable hardware, means our technology is accessible and will make self-driving a reality today, rather than five years from now.”
The company has already established pilot programs with major automakers and both Uber and Lyft. Companies will pay lvl5 an initial fee to use the maps, along with a monthly subscription to keep the maps continuously updated. “Through its OEM-agnostic approach, lvl5 will be able to collect significant amounts of mapping data from millions of cars in order to scale the technology for the benefit of drivers and pedestrians around the world,” the company’s press release states.
News
Tesla Cybercab display highlights interior wizardry in the small two-seater
Photos and videos of the production Cybercab were shared in posts on social media platform X.
The Tesla Cybercab is currently on display at the U.S. Department of Transportation in Washington, D.C., and observations of the production vehicle are highlighting some of its notable design details.
Observers of the Cybercab display unit noted that the two-seat Robotaxi provides unusually generous legroom for a vehicle of its size. Based on video of the display unit, the compact two-seater appears to offer more legroom than larger Tesla vehicles such as the Model Y, Model X, and Cybertruck.
The Cybercab’s layout allows Tesla to dedicate nearly the entire cabin to passengers. The vehicle is designed without a steering wheel or pedals, which helps maximize interior space.
Footage from the display also highlights the Cybercab’s large center screen, which is positioned prominently in front of the passenger bench. The display appears intended to provide entertainment and ride information while the vehicle operates autonomously.
Images of the vehicle also show an additional camera integrated into the Cybercab’s C-pillar. The extra camera appears to expand the vehicle’s field of view, which would be useful as Tesla works toward fully unsupervised Full Self-Driving.
Tesla engineers have previously explained that the Cybercab was designed to be highly efficient both in manufacturing and in operation. Cybercab Lead Engineer Eric E. stated in 2024 that the Robotaxi would be built with roughly half the number of parts used in a Model 3 sedan.
“Two seats unlocks a lot of opportunity aerodynamically. It also means we cut the part count of Cybercab down by a substantial margin. We’re gonna be delivering a car that has roughly half the parts of Model 3 today,” the Tesla engineer said.
The Tesla engineer also noted that the Cybercab’s cargo area can accommodate multiple golf bags, two carry-on suitcases, and two full-size checked bags. Depending on their size, the trunk can also fit certain bicycles and a foldable wheelchair, impressive capacity for a car as compact as the Cybercab.
Elon Musk
Elon Musk’s xAI wins permit for power plant supporting AI data centers
The development was reported by CNBC, citing confirmation from the Mississippi Department of Environmental Quality (MDEQ).
Mississippi regulators have approved a permit allowing Elon Musk’s artificial intelligence company xAI to construct a natural gas power plant in Southaven. The facility is expected to support the company’s expanding AI infrastructure tied to its Colossus data center operations near Memphis.
According to the report, regulators “voted to approve the permit” of xAI subsidiary MZX Tech LLC to construct a power plant featuring 41 natural gas-burning turbines “after careful consideration of all public comments and community concerns.”
The Mississippi Department of Environmental Quality stated that the permit followed a regulatory review process that included public comments and community input. Jaricus Whitlock, air division chief for the MDEQ, stated that the project met all applicable environmental standards.
“The proposed PSD permit in front of the board today not only meets all state and federal permitting regulations, but goes above and beyond what is required by law. MDEQ and the EPA agree that not a single person around our facilities will be exposed to unhealthy levels of air pollution,” Whitlock stated.
The planned facility will help power xAI’s AI computing infrastructure in the Memphis region, forming part of the company’s effort to scale computing capacity for its artificial intelligence systems.
The company currently operates two major data centers in Memphis, known as Colossus 1 and Colossus 2, which provide computing power for xAI’s Grok AI models. xAI is also planning to build another large data center in Southaven called Macrohardrr, which would be located in a warehouse previously used by GXO Logistics.
Large-scale AI training requires substantial computing power and electricity, prompting technology companies to develop dedicated energy infrastructure for their data centers.
SpaceX President Gwynne Shotwell previously stated that xAI plans to develop 1.2 gigawatts of power capacity for its Memphis-area AI supercomputer site as part of the federal government’s Ratepayer Protection Pledge. The commitment was announced during an event with United States President Donald Trump.
“As part of today’s commitment, we will take extensive additional steps to continue to reduce the costs of electricity for our neighbors. xAI will therefore commit to develop 1.2 GW of power as our supercomputer’s primary power source. That will be for every additional data center as well. We will expand what is already the largest global Megapack power installation in the world,” Shotwell said.
“The installation will provide enough backup power to power the city of Memphis, and more than sufficient energy to power the town of Southaven, Mississippi where the data center resides. We will build new substations and invest in electrical infrastructure to provide stability to the area’s grid.”
Elon Musk
Tesla China teases Optimus robot’s human-looking next-gen hands
The image was shared by Tesla AI’s account on Weibo and later reposted by Tesla community members on X.
A new teaser shared by Tesla’s China team appears to show a pair of unusually human-like hands for Optimus.
As seen in the teaser image, the new version of Optimus’ hands features proportions and finger structures that look strikingly similar to those of a human hand, suggesting dexterity that could approach human levels.
If the image reflects a new generation of Optimus’ hands, it could indicate Tesla is continuing to refine one of the most critical components of its humanoid robot.
Hands are widely viewed as one of the most difficult engineering challenges in robotics. For Optimus to perform complex real-world work, from manufacturing tasks to household activities, its hands would need to approach human-level dexterity and fine motor control.
Elon Musk has repeatedly described Optimus as Tesla’s most important long-term product. In posts on social media platform X, Musk has stated that Optimus could eventually become the first real-world Von Neumann machine.
In theory, a Von Neumann machine is a self-replicating system capable of building copies of itself using available materials. The concept was originally proposed by mathematician John von Neumann in the mid-20th century.
“Optimus will be the first Von Neumann machine, capable of building civilization by itself on any viable planet,” Musk wrote in a post on X.
If Optimus is expected to carry out complex work autonomously in the future, high levels of dexterity will likely be essential. This makes the development of advanced robotic hands a key step towards Musk’s long-term expectations for the product.


