News
Tesla is patenting a clever way to train Autopilot with augmented camera images
Tesla is currently tackling what could only be described as its biggest challenge to date. In his Master Plan, Part Deux, CEO Elon Musk envisioned a fleet of zero-emissions vehicles that are capable of driving on their own. Tesla has made steps towards this goal with improvements and refinements to its Autopilot and Full Self-Driving suites, but a lot of work remains to be done.
As noted by Tesla during its Autonomy Day presentation last year, attaining Full Self-Driving is largely a matter of training the neural networks used by the company. Tesla adopts what could be described as a somewhat organic approach to autonomy, with the company using a system centered on cameras and artificial intelligence, much like a human driver relying primarily on eyes and brain.
Tesla’s camera-centric approach may be quite controversial due to Elon Musk’s strong stance against LiDAR, but it is gaining ground, with other autonomous vehicle companies such as Mobileye developing self-driving systems that rely primarily on visual data and a trained neural network. This approach does come with its challenges, as training neural networks requires massive amounts of data. Tesla emphasized as much during its Autonomy Day presentation.
With this in mind, it is pertinent for the electric car maker to train its neural networks in a way that is as efficient as possible with zero compromises. To help accomplish this, Tesla seems to be looking into the utilization of augmented data, as described in a recently published patent titled “Systems and Methods for Training Machine Models with Augmented Data.”

Teslas are equipped with a suite of cameras that provide 360-degree visual coverage for the vehicle. In the patent’s description, Tesla noted that images used for neural network training are usually captured by various sensors, which, at times, have different characteristics. An example of this may lie in a Tesla’s three forward-facing cameras, each of which has a different field of view and range from the other two.
Tesla’s recent patent describes a system that allows the company to process these images in an optimized manner. Part of how this is done is through augmentation, which opens the door to flexible and widespread neural network training, even when it involves vehicles equipped with differently specced cameras. The electric car maker describes the process as follows:
“Augmentation may provide generalization and greater robustness to the model prediction, particularly when images are clouded, occluded, or otherwise do not provide clear views of the detectable objects. These approaches may be particularly useful for object detection and in autonomous vehicles. This approach may also be beneficial for other situations in which the same camera configurations may be deployed to many devices. Since these devices may have a consistent set of sensors in a consistent orientation, the training data may be collected with a given configuration, a model may be trained with augmented data from the collected training data, and the trained model may be deployed to devices having the same configuration.”
Among the most notable aspects of Tesla’s recent patent is the use of “cutouts,” which allow Tesla’s neural networks to be trained on an optimized set of images. Former Tesla Autopilot engineer Eshak Mir discussed this in a Third Row Podcast interview, where he hinted at a system adopted in the electric car maker’s ongoing Autopilot rewrite that helped lay out “all the camera images” from a vehicle “into one view.” Such a process has the potential to help Tesla with 3D labeling, especially since the images used for neural network training are stitched together. Tesla’s patent seems to reference a system very similar to the one described by the former Autopilot engineer.
“As a further example, the images may be augmented with a “cutout” function that removes a portion of the original image. The removed portion of the image may then be replaced with other image content, such as a specified color, blur, noise, or from another image. The number, size, region, and replacement content for cutouts may be varied and may be based on the label of the image (e.g., the region of interest in the image, or a bounding box for an object).”
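The cutout operation the patent describes can be illustrated with a few lines of array code. The sketch below is a minimal NumPy example of the general idea only; the function name, parameters, and fill modes are illustrative assumptions, not Tesla’s actual implementation.

```python
import numpy as np

def cutout(image, top, left, height, width, fill="color", value=0, rng=None):
    """Replace a rectangular region of an H x W x C image with new content.

    fill="color" fills the region with a constant value (e.g. black);
    fill="noise" fills it with random pixel noise. A real system could
    also vary the number, size, and position of cutouts per the image's
    labels (e.g. avoiding or targeting an object's bounding box).
    """
    out = image.copy()
    region = out[top:top + height, left:left + width]  # view into the copy
    if fill == "color":
        region[...] = value
    elif fill == "noise":
        rng = rng or np.random.default_rng()
        region[...] = rng.integers(0, 256, size=region.shape, dtype=image.dtype)
    else:
        raise ValueError(f"unknown fill mode: {fill}")
    return out

# Example: black out a 32x32 patch of a dummy all-white 128x128 RGB image
img = np.full((128, 128, 3), 255, dtype=np.uint8)
aug = cutout(img, top=48, left=48, height=32, width=32, fill="color", value=0)
```

Training on images augmented this way forces the network to rely on surrounding context rather than any single region, which is why the patent notes the technique adds robustness when views are clouded or occluded.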
Tesla is aiming to release a feature-complete version of its Full Self-Driving suite as soon as possible. Elon Musk remains optimistic about this, despite the company missing its initial timeline, which was set for the end of 2019. That being said, Musk did mention previously that Tesla is working on a foundational rewrite of Autopilot. In a tweet early last month, Musk stated that an essential part of the rewrite involves work on Autopilot’s core foundation code and 3D labeling. Once done, the CEO indicated, additional functionalities could be rolled out quickly. This recent patent, if anything, seems to give a glimpse at how these improvements are being done.
News
Tesla Cybercab display highlights interior wizardry in the small two-seater
Photos and videos of the production Cybercab were shared in posts on social media platform X.
The Tesla Cybercab is currently on display at the U.S. Department of Transportation in Washington, D.C., and observations of the production vehicle are highlighting some of its notable design details.
Observers of the Cybercab display unit noted that the two-seat Robotaxi provides unusually generous legroom for a vehicle of its size. Based on video of the vehicle, the compact two-seater appears to offer more legroom than larger Teslas such as the Model Y, Model X, and Cybertruck.
The Cybercab’s layout allows Tesla to dedicate nearly the entire cabin to passengers. The vehicle is designed without a steering wheel or pedals, which helps maximize interior space.
Footage from the display also highlights the Cybercab’s large center screen, which is positioned prominently in front of the passenger bench. The display appears intended to provide entertainment and ride information while the vehicle operates autonomously.
Images of the vehicle also show an additional camera integrated into the Cybercab’s C-pillar. The extra camera appears to expand the vehicle’s field of view, which would be useful as Tesla works toward fully unsupervised Full Self-Driving.
Tesla engineers have previously explained that the Cybercab was designed to be highly efficient both in manufacturing and in operation. Cybercab Lead Engineer Eric E. stated in 2024 that the Robotaxi would be built with roughly half the number of parts used in a Model 3 sedan.
“Two seats unlocks a lot of opportunity aerodynamically. It also means we cut the part count of Cybercab down by a substantial margin. We’re gonna be delivering a car that has roughly half the parts of Model 3 today,” the Tesla engineer said.
The Tesla engineer also noted that the Cybercab’s cargo area can accommodate multiple golf bags, two carry-on suitcases, and two full-size checked bags. The trunk can also fit certain bicycles and a foldable wheelchair depending on size, which is quite impressive for a small car like the Cybercab.
Elon Musk
Elon Musk’s xAI wins permit for power plant supporting AI data centers
The development was reported by CNBC, citing confirmation from the Mississippi Department of Environmental Quality (MDEQ).
Mississippi regulators have approved a permit allowing Elon Musk’s artificial intelligence company xAI to construct a natural gas power plant in Southaven. The facility is expected to support the company’s expanding AI infrastructure tied to its Colossus data center operations near Memphis.
According to the report, regulators “voted to approve the permit” of xAI subsidiary MZX Tech LLC to construct a power plant featuring 41 natural gas-burning turbines “after careful consideration of all public comments and community concerns.”
The Mississippi Department of Environmental Quality stated that the permit followed a regulatory review process that included public comments and community input. Jaricus Whitlock, air division chief for the MDEQ, stated that the project met all applicable environmental standards.
“The proposed PSD permit in front of the board today not only meets all state and federal permitting regulations, but goes above and beyond what is required by law. MDEQ and the EPA agree that not a single person around our facilities will be exposed to unhealthy levels of air pollution,” Whitlock stated.
The planned facility will help provide electricity for xAI’s AI computing infrastructure in the Memphis region.
The Southaven project forms part of xAI’s efforts to scale computing capacity for its artificial intelligence systems.
The company currently operates two major data centers in Memphis, known as Colossus 1 and Colossus 2, which provide computing power for xAI’s Grok AI models. xAI is also planning to build another large data center in Southaven called Macrohardrr, which would be located in a warehouse previously used by GXO Logistics.
Large-scale AI training requires substantial computing power and electricity, prompting technology companies to develop dedicated energy infrastructure for their data centers.
SpaceX President Gwynne Shotwell previously stated that xAI plans to develop 1.2 gigawatts of power capacity for its Memphis-area AI supercomputer site as part of the federal government’s Ratepayer Protection Pledge. The commitment was announced during an event with United States President Donald Trump.
“As part of today’s commitment, we will take extensive additional steps to continue to reduce the costs of electricity for our neighbors. xAI will therefore commit to develop 1.2 GW of power as our supercomputer’s primary power source. That will be for every additional data center as well. We will expand what is already the largest global Megapack power installation in the world,” Shotwell said.
“The installation will provide enough backup power to power the city of Memphis, and more than sufficient energy to power the town of Southaven, Mississippi where the data center resides. We will build new substations and invest in electrical infrastructure to provide stability to the area’s grid.”
Elon Musk
Tesla China teases Optimus robot’s human-looking next-gen hands
The image was shared by Tesla AI’s account on Weibo and later reposted by Tesla community members on X.
A new teaser shared by Tesla’s China team appears to show a pair of unusually human-like hands for Optimus.
As can be seen in the teaser image, the new version of Optimus’ hands features proportions and finger structures that look strikingly similar to those of a human hand. Their appearance suggests that the hands might offer dexterity approaching human levels.
If the image reflects a new generation of Optimus’ hands, it could indicate Tesla is continuing to refine one of the most critical components of its humanoid robot.
Hands are widely viewed as one of the most difficult engineering challenges in robotics. For Optimus to perform complex real-world work, from manufacturing tasks to household activities, its hands would need to be among the most capable in the industry.
Elon Musk has repeatedly described Optimus as Tesla’s most important long-term product. In posts on social media platform X, Musk has stated that Optimus could eventually become the first real-world Von Neumann machine.
In theory, a Von Neumann machine is a self-replicating system capable of building copies of itself using available materials. The concept was originally proposed by mathematician John von Neumann in the mid-20th century.
“Optimus will be the first Von Neumann machine, capable of building civilization by itself on any viable planet,” Musk wrote in a post on X.
If Optimus is expected to carry out complex work autonomously in the future, high levels of dexterity will likely be essential. This makes the development of advanced robotic hands a key step towards Musk’s long-term expectations for the product.