
Tesla FSD Beta 10.69.2.2 extending to 160k owners in US and Canada: Elon Musk

Credit: Whole Mars Catalog

It appears that after several iterations and adjustments, FSD Beta 10.69 is ready to roll out to the wider FSD Beta program. Elon Musk mentioned the update on Twitter, stating that v10.69.2.2 should extend to 160,000 owners in the United States and Canada. 

As with his other announcements about the FSD Beta program, Musk shared the news on Twitter. “FSD Beta 10.69.2.1 looks good, extending to 160k owners in US & Canada,” Musk wrote, before correcting himself and clarifying that he was referring to FSD Beta 10.69.2.2, not v10.69.2.1. 

While Elon Musk has a known tendency to be extremely optimistic in his FSD Beta-related statements, his comments about v10.69.2.2 do reflect observations from some of the program’s longtime members. Veteran FSD Beta tester @WholeMarsBlog, who does not shy away from criticizing the system when it does not work well, noted that his takeovers with v10.69.2.2 have been minimal. Fellow FSD Beta tester @GailAlfarATX reported similar observations. 

Tesla does appear to be pushing to release FSD to its fleet. Recent comments from Tesla’s Senior Director of Investor Relations Martin Viecha during an invite-only Goldman Sachs tech conference hinted that the electric vehicle maker is on track to release “supervised” FSD around the end of the year. That is roughly the same timeframe as Elon Musk’s estimate for FSD’s wide release. 

It should be noted, of course, that even if Tesla manages to release “supervised” FSD to consumers by the end of the year, the version of the advanced driver-assist system would still require drivers to pay attention to the road and follow proper driving practices. With a feature-complete “supervised” FSD, however, Teslas would be able to navigate on their own regardless of whether they are on the highway or on inner-city streets. And that, ultimately, is a feature that will be extremely hard to beat. 

Following are the release notes of FSD Beta v10.69.2.2, as retrieved by NotaTeslaApp:

– Added a new “deep lane guidance” module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.

– Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.
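The latency modeling described in this item can be illustrated with a simple forward prediction: before planning, estimate where the vehicle will actually be when new commands take effect, using separate delays for the steering channel and the acceleration/brake channel. The delay values and the minimal kinematic model below are illustrative assumptions for the sake of the sketch, not Tesla's implementation.

```python
# Illustrative actuation delays in seconds. These values, and the
# simple kinematic model, are assumptions for illustration only.
STEER_LATENCY = 0.10   # steering command -> steering actuation
ACCEL_LATENCY = 0.30   # accel/brake command -> actuation

def latency_compensated_state(heading, yaw_rate, speed, accel):
    """Predict the state the planner should plan from: heading keeps
    evolving under the current yaw rate until a new steering command
    can take effect, and speed keeps evolving under the current
    acceleration until a new accel/brake command can take effect."""
    predicted_heading = heading + yaw_rate * STEER_LATENCY
    predicted_speed = max(0.0, speed + accel * ACCEL_LATENCY)
    return predicted_heading, predicted_speed
```

For example, a car traveling at 10 m/s while braking at 2 m/s² with a 0.3 s longitudinal delay should be planned from roughly 9.4 m/s rather than 10 m/s; planning from the stale state is what produces jerky downstream corrections.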

– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic (“Chuck Cook style” unprotected left turns). This was done by allowing optimizable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.

– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.

– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.

– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.

– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.

– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.

– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.

– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.

– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.

– Enabled creeping for visibility at any intersection where objects might cross ego’s path, regardless of presence of traffic controls.

– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.

– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.

– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

– Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.

– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego’s lane, especially in intersections or cut-in scenarios.

– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.

– Reduced latency when starting from a stop by accounting for lead vehicle jerk.

– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.
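That last check is essentially kinematic: compare the deceleration a vehicle would need in order to stop at the line against a plausible braking profile. A minimal sketch of the idea follows; the 4.0 m/s² threshold and the 0.8 margin are illustrative assumptions, not Tesla's values.

```python
def likely_red_light_runner(speed_mps, dist_to_stop_line_m,
                            current_decel_mps2,
                            max_expected_decel_mps2=4.0):
    """Heuristic: the deceleration required to stop at the line is
    v^2 / (2d). If that exceeds what a stopping driver would plausibly
    apply, and the vehicle is not already braking anywhere near the
    required rate, treat it as a likely runner. All thresholds here
    are assumptions for illustration."""
    if dist_to_stop_line_m <= 0.0:
        # Already past the line and still moving.
        return speed_mps > 0.5
    required = speed_mps ** 2 / (2.0 * dist_to_stop_line_m)
    return (required > max_expected_decel_mps2
            and current_decel_mps2 < 0.8 * required)
```

With these numbers, a vehicle doing 15 m/s just 10 m from the line (needing 11.25 m/s² to stop) while barely braking is flagged, whereas the same vehicle 40 m out is not.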

Press the “Video Record” button on the top bar UI to share your feedback. When pressed, your vehicle’s external cameras will share a short VIN-associated Autopilot Snapshot with the Tesla engineering team to help make improvements to FSD. You will not be able to view the clip.

Don’t hesitate to contact us with news tips. Just send a message to simon@teslarati.com to give us a heads up.

Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips--or even to just say a simple hello--send a message to his email, simon@teslarati.com or his handle on X, @ResidentSponge.

Tesla Cybercab display highlights interior wizardry in the small two-seater

Photos and videos of the production Cybercab were shared in posts on social media platform X.

Credit: Tesla Robotaxi/X

The Tesla Cybercab is currently on display at the U.S. Department of Transportation in Washington, D.C., and observations of the production vehicle are highlighting some of its notable design details. 

Observers of the Cybercab display unit noted that the two-seat Robotaxi provides unusually generous legroom for a vehicle of its size. Based on video of the vehicle, the compact two-seater appears to offer more legroom than larger Teslas such as the Model Y, Model X, and Cybertruck.

The Cybercab’s layout allows Tesla to dedicate nearly the entire cabin to passengers. The vehicle is designed without a steering wheel or pedals, which helps maximize interior space.

Footage from the display also highlights the Cybercab’s large center screen, which is positioned prominently in front of the passenger bench. The display appears intended to provide entertainment and ride information while the vehicle operates autonomously.

Images of the vehicle also show an additional camera integrated into the Cybercab’s C-pillar. The extra camera appears to expand the vehicle’s field of view, which would be useful as Tesla works toward fully unsupervised Full Self-Driving.

Tesla engineers have previously explained that the Cybercab was designed to be highly efficient both in manufacturing and in operation. Cybercab Lead Engineer Eric E. stated in 2024 that the Robotaxi would be built with roughly half the number of parts used in a Model 3 sedan.

“Two seats unlocks a lot of opportunity aerodynamically. It also means we cut the part count of Cybercab down by a substantial margin. We’re gonna be delivering a car that has roughly half the parts of Model 3 today,” the Tesla engineer said.

The Tesla engineer also noted that the Cybercab’s cargo area can accommodate multiple golf bags, two carry-on suitcases, and two full-size checked bags. The trunk can also fit certain bicycles and a foldable wheelchair depending on size, which is quite impressive for a small car like the Cybercab.

Elon Musk’s xAI wins permit for power plant supporting AI data centers

Mississippi regulators have approved a permit allowing Elon Musk’s artificial intelligence company xAI to construct a natural gas power plant in Southaven. The facility is expected to support the company’s expanding AI infrastructure tied to its Colossus data center operations near Memphis.

The development was reported by CNBC, citing confirmation from the Mississippi Department of Environmental Quality (MDEQ).

According to the report, regulators voted to approve the permit for xAI subsidiary MZX Tech LLC to construct a power plant featuring 41 natural gas-burning turbines “after careful consideration of all public comments and community concerns.”

The Mississippi Department of Environmental Quality stated that the permit followed a regulatory review process that included public comments and community input. Jaricus Whitlock, air division chief for the MDEQ, stated that the project met all applicable environmental standards.

“The proposed PSD permit in front of the board today not only meets all state and federal permitting regulations, but goes above and beyond what is required by law. MDEQ and the EPA agree that not a single person around our facilities will be exposed to unhealthy levels of air pollution,” Whitlock stated.

The planned facility will help provide electricity for xAI’s AI computing infrastructure in the Memphis region.

The Southaven project forms part of xAI’s efforts to scale computing capacity for its artificial intelligence systems.

The company currently operates two major data centers in Memphis, known as Colossus 1 and Colossus 2, which provide computing power for xAI’s Grok AI models. xAI is also planning to build another large data center in Southaven called Macrohardrr, which would be located in a warehouse previously used by GXO Logistics.

Large-scale AI training requires substantial computing power and electricity, prompting technology companies to develop dedicated energy infrastructure for their data centers.

SpaceX President Gwynne Shotwell previously stated that xAI plans to develop 1.2 gigawatts of power capacity for its Memphis-area AI supercomputer site as part of the federal government’s Ratepayer Protection Pledge. The commitment was announced during an event with United States President Donald Trump.

“As part of today’s commitment, we will take extensive additional steps to continue to reduce the costs of electricity for our neighbors. xAI will therefore commit to develop 1.2 GW of power as our supercomputer’s primary power source. That will be for every additional data center as well. We will expand what is already the largest global Megapack power installation in the world,” Shotwell said.

“The installation will provide enough backup power to power the city of Memphis, and more than sufficient energy to power the town of Southaven, Mississippi where the data center resides. We will build new substations and invest in electrical infrastructure to provide stability to the area’s grid.”

Tesla China teases Optimus robot’s human-looking next-gen hands

The image was shared by Tesla AI’s account on Weibo and later reposted by Tesla community members on X.

Credit: Tesla China

A new teaser shared by Tesla’s China team appears to show a pair of unusually human-like hands for Optimus. 

As seen in the teaser image, the new version of Optimus’ hands features proportions and finger structures strikingly similar to those of a human hand, suggesting dexterity that could approach a human’s.

If the image reflects a new generation of Optimus’ hands, it could indicate Tesla is continuing to refine one of the most critical components of its humanoid robot.

Hands are widely viewed as one of the most difficult engineering challenges in robotics. For Optimus to perform complex real-world work, from manufacturing tasks to household activities, its hands would need best-in-class dexterity and reliability.

Elon Musk has repeatedly described Optimus as Tesla’s most important long-term product. In posts on social media platform X, Musk has stated that Optimus could eventually become the first real-world Von Neumann machine.

In theory, a Von Neumann machine is a self-replicating system capable of building copies of itself using available materials. The concept was originally proposed by mathematician John von Neumann in the mid-20th century.

“Optimus will be the first Von Neumann machine, capable of building civilization by itself on any viable planet,” Musk wrote in a post on X.

If Optimus is expected to carry out complex work autonomously in the future, high levels of dexterity will likely be essential. This makes the development of advanced robotic hands a key step towards Musk’s long-term expectations for the product.
