

Tesla FSD Beta 10.69.2.2 extending to 160k owners in US and Canada: Elon Musk

Credit: Whole Mars Catalog


It appears that after several iterations and adjustments, FSD Beta 10.69 is ready to roll out to the wider FSD Beta program. Elon Musk mentioned the update on Twitter, with the CEO stating that v10.69.2.2 should extend to 160,000 owners in the United States and Canada.

As with his other announcements about the FSD Beta program, Musk shared the news on Twitter. “FSD Beta 10.69.2.1 looks good, extending to 160k owners in US & Canada,” Musk wrote, before correcting himself and clarifying that he was referring to FSD Beta 10.69.2.2, not v10.69.2.1.

While Elon Musk has a known tendency to be extremely optimistic about FSD Beta-related statements, his comments about v10.69.2.2 do reflect observations from some of the program’s longtime members. Veteran FSD Beta tester @WholeMarsBlog, who does not shy away from criticizing the system when it performs poorly, noted that his takeovers with v10.69.2.2 have been minimal. Fellow FSD Beta tester @GailAlfarATX reported similar observations.

Tesla definitely seems to be pushing to release FSD to its fleet. Recent comments from Tesla’s Senior Director of Investor Relations Martin Viecha during an invite-only Goldman Sachs tech conference have hinted that the electric vehicle maker is on track to release “supervised” FSD around the end of the year. That’s around the same time as Elon Musk’s estimate for FSD’s wide release. 

It should be noted, of course, that even if Tesla manages to release “supervised” FSD to consumers by the end of the year, the version of the advanced driver-assist system would still require drivers to pay attention to the road and follow proper driving practices. With a feature-complete “supervised” FSD, however, Teslas would be able to navigate on their own regardless of whether they are on the highway or on inner-city streets. And that, ultimately, is a feature that will be extremely hard to beat.


The following are the release notes for FSD Beta v10.69.2.2, as retrieved by NotaTeslaApp:

– Added a new “deep lane guidance” module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.
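
Tesla has not published the architecture behind this module, but the general idea of fusing visual features with coarse map hints can be sketched. The minimal PyTorch example below is purely illustrative: the class name, dimensions, and the simple concatenate-and-project design are assumptions, not Tesla’s implementation.

```python
import torch
import torch.nn as nn

class CoarseMapFusion(nn.Module):
    """Illustrative fusion of visual features with coarse map hints (lane count and
    lane connectivity), loosely mirroring the 'deep lane guidance' idea described
    above. Dimensions and the concatenate-then-project design are assumptions."""

    def __init__(self, visual_dim: int = 256, max_lanes: int = 8, map_dim: int = 32):
        super().__init__()
        self.lane_count_embed = nn.Embedding(max_lanes + 1, map_dim)       # lane count hint
        self.connectivity_proj = nn.Linear(max_lanes * max_lanes, map_dim) # adjacency hint
        self.fuse = nn.Linear(visual_dim + 2 * map_dim, visual_dim)

    def forward(self, visual_feats, lane_count, connectivity):
        # visual_feats: (B, visual_dim) pooled video features
        # lane_count: (B,) integer lane counts; connectivity: (B, max_lanes * max_lanes) 0/1 matrix
        map_feats = torch.cat([self.lane_count_embed(lane_count),
                               self.connectivity_proj(connectivity)], dim=-1)
        return torch.relu(self.fuse(torch.cat([visual_feats, map_feats], dim=-1)))
```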

– Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.
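
As an illustration of what “modeling system and actuation latency” can mean in a trajectory planner, here is a minimal Python sketch under simple assumptions (constant acceleration and yaw rate over the delay). It is not Tesla’s code; it only shows the general trick of rolling the current state forward by the estimated delays so that commands are optimized for the state the car will be in when they actually take effect.

```python
from dataclasses import dataclass

@dataclass
class EgoState:
    x: float         # longitudinal position (m)
    v: float         # speed (m/s)
    a: float         # acceleration (m/s^2)
    heading: float   # heading (rad)
    yaw_rate: float  # yaw rate (rad/s)

def compensate_latency(state: EgoState, steer_latency: float, accel_latency: float) -> EgoState:
    """Roll the current state forward by the estimated actuation delays before planning,
    so the planned trajectory starts from where the vehicle will actually be when its
    commands take effect (illustrative constant-acceleration, constant-yaw-rate model)."""
    dt = accel_latency
    x = state.x + state.v * dt + 0.5 * state.a * dt ** 2
    v = state.v + state.a * dt
    heading = state.heading + state.yaw_rate * steer_latency
    return EgoState(x=x, v=v, a=state.a, heading=heading, yaw_rate=state.yaw_rate)
```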

– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic (“Chuck Cook style” unprotected left turns). This was done by allowing optimizable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.

– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.


– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.

– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.
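
The release note only describes the compute scaling, but the shape of a two-stage design like this can be sketched quickly. The PyTorch example below is an assumption-laden illustration (the class, layer sizes, and thresholds are invented for clarity): a dense first stage scores object centers on a bird’s-eye-view grid, and a second stage regresses kinematics only at those centers, so its cost grows with the number of objects rather than with the size of the grid.

```python
import torch
import torch.nn as nn

class TwoStageKinematics(nn.Module):
    """Illustrative two-stage design: stage one finds object centers on a dense BEV
    feature grid; stage two regresses kinematics (velocity, acceleration, yaw rate)
    only at those centers, making the expensive head O(objects) instead of O(space)."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.center_head = nn.Conv2d(feat_dim, 1, kernel_size=1)  # per-cell objectness
        self.kinematics_head = nn.Sequential(                     # per-object regression
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 4),  # vx, vy, accel, yaw_rate
        )

    def forward(self, bev_feats: torch.Tensor, score_thresh: float = 0.5):
        # bev_feats: (C, H, W) feature map for a single frame
        scores = torch.sigmoid(self.center_head(bev_feats.unsqueeze(0)))[0, 0]  # (H, W)
        ys, xs = torch.nonzero(scores > score_thresh, as_tuple=True)
        object_feats = bev_feats[:, ys, xs].T       # (num_objects, C)
        return self.kinematics_head(object_feats)   # (num_objects, 4)
```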

– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.

– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.

– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.


– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.

– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.

– Enabled creeping for visibility at any intersection where objects might cross ego’s path, regardless of presence of traffic controls.

– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.

– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.


– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

– Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.

– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego’s lane, especially in intersections or cut-in scenarios.
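
For readers unfamiliar with how yaw rate feeds into path prediction, a standard constant turn-rate and velocity (CTRV) rollout illustrates the idea. The Python sketch below is a textbook motion model, not Tesla’s likelihood estimator; it simply shows how including yaw rate bends a predicted path toward or away from ego’s lane.

```python
import math

def predict_ctrv(x, y, v, heading, yaw_rate, dt):
    """Constant turn-rate and velocity (CTRV) rollout: predict an object's position
    dt seconds ahead. A standard textbook model used here purely for illustration."""
    if abs(yaw_rate) < 1e-3:  # nearly straight: fall back to constant velocity
        return x + v * math.cos(heading) * dt, y + v * math.sin(heading) * dt
    x_next = x + (v / yaw_rate) * (math.sin(heading + yaw_rate * dt) - math.sin(heading))
    y_next = y + (v / yaw_rate) * (math.cos(heading) - math.cos(heading + yaw_rate * dt))
    return x_next, y_next
```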

– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.

– Reduced latency when starting from a stop by accounting for lead vehicle jerk.
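
A small worked example shows why accounting for lead-vehicle jerk reduces start-up latency. The Python snippet below uses a simple constant-jerk kinematic model (an assumption for illustration, not Tesla’s controller): a lead car that is still stationary but beginning to accelerate opens roughly a meter of extra gap within 1.5 seconds, so a follower that models jerk can begin rolling sooner than one that only looks at current speed and acceleration.

```python
def predict_lead_gap(gap: float, v_lead: float, a_lead: float, j_lead: float, t: float) -> float:
    """Predict the gap to a lead vehicle t seconds from now with a constant-jerk model
    (illustrative): gap + v*t + a*t^2/2 + j*t^3/6."""
    return gap + v_lead * t + 0.5 * a_lead * t ** 2 + (1.0 / 6.0) * j_lead * t ** 3

# Example: lead car at rest (v = 0, a = 0) but starting to accelerate (jerk = 2 m/s^3).
# Over 1.5 s it opens about 1.1 m of extra gap; a jerk-blind model predicts no movement.
print(predict_lead_gap(gap=5.0, v_lead=0.0, a_lead=0.0, j_lead=2.0, t=1.5))
```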


– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.
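
Conceptually, this kind of check can be reduced to a simple kinematic feasibility test: given a crossing vehicle’s speed and its remaining distance to the stop line, how hard would it have to brake to stop in time? The Python sketch below is an illustrative simplification with assumed thresholds, not Tesla’s actual heuristic.

```python
def likely_red_light_runner(dist_to_stop_line: float, speed: float,
                            max_comfortable_decel: float = 4.0) -> bool:
    """Flag a crossing vehicle as a likely red-light runner if the deceleration it would
    need to stop before the line exceeds a plausible braking effort.
    Required decel to stop from speed v over distance d: a = v^2 / (2 d).
    The constant-deceleration model and the 4 m/s^2 threshold are illustrative assumptions."""
    if dist_to_stop_line <= 0.0:
        return True  # already past the stop line while the light is red
    required_decel = speed ** 2 / (2.0 * dist_to_stop_line)
    return required_decel > max_comfortable_decel

# Example: 15 m/s (~54 km/h) with only 10 m left requires 11.25 m/s^2 of braking,
# far beyond normal effort, so the vehicle is treated as likely to run the light.
print(likely_red_light_runner(dist_to_stop_line=10.0, speed=15.0))
```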

Press the “Video Record” button on the top bar UI to share your feedback. When pressed, your vehicle’s external cameras will share a short VIN-associated Autopilot Snapshot with the Tesla engineering team to help make improvements to FSD. You will not be able to view the clip.

Don’t hesitate to contact us with news tips. Just send a message to simon@teslarati.com to give us a heads up.

Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips, or even just to say hello, send a message to his email, simon@teslarati.com, or his handle on X, @ResidentSponge.



Tesla’s Elon Musk: 10 billion miles needed for safe Unsupervised FSD



Credit: @BLKMDL3/X

Tesla CEO Elon Musk has provided an updated estimate for the training data needed to achieve truly safe unsupervised Full Self-Driving (FSD). 

As per the CEO, roughly 10 billion miles of training data are required due to reality’s “super long tail of complexity.” 

10 billion miles of training data

Musk’s comment came as a reply to Apple and Rivian alum Paul Beisel, who posted an analysis on X about the gap between tech demonstrations and real-world products. In his post, Beisel highlighted Tesla’s data-driven lead in autonomy and argued that rivals would not find it easy to quickly become legitimate competitors to FSD.

“The notion that someone can ‘catch up’ to this problem primarily through simulation and limited on-road exposure strikes me as deeply naive. This is not a demo problem. It is a scale, data, and iteration problem— and Tesla is already far, far down that road while others are just getting started,” Beisel wrote. 

Musk responded to Beisel’s post, stating that “Roughly 10 billion miles of training data is needed to achieve safe unsupervised self-driving. Reality has a super long tail of complexity.” This is quite interesting considering that in his Master Plan Part Deux, Elon Musk estimated that worldwide regulatory approval for autonomous driving would require around 6 billion miles. 


FSD’s total training miles

As 2025 came to a close, Tesla community members observed that FSD was already nearing 7 billion miles driven, with over 2.5 billion of those miles logged on inner-city roads. The 7-billion-mile mark was passed just a few days later. This suggests that Tesla is likely the company with the most training data for its autonomous driving program today.

The difficulties of achieving autonomy were referenced by Elon Musk recently, when he commented on Nvidia’s Alpamayo program. As per Musk, “they will find that it’s easy to get to 99% and then super hard to solve the long tail of the distribution.” These sentiments were echoed by Tesla VP for AI software Ashok Elluswamy, who also noted on X that “the long tail is sooo long, that most people can’t grasp it.”



Tesla earns top honors at MotorTrend’s SDV Innovator Awards



Credit: Tesla China

Tesla emerged as one of the most recognized automakers at MotorTrend’s 2026 Software-Defined Vehicle (SDV) Innovator Awards.

As noted in a press release from the publication, two key Tesla employees were honored for their work on AI, autonomy, and vehicle software. MotorTrend’s SDV Awards were presented during CES 2026 in Las Vegas.

Tesla leaders and engineers recognized

The fourth annual SDV Innovator Awards celebrate pioneers and experts who are pushing the automotive industry deeper into software-driven development. Among the most notable honorees for this year was Ashok Elluswamy, Tesla’s Vice President of AI Software, who received a Pioneer Award for his role in advancing artificial intelligence and autonomy across the company’s vehicle lineup.

Tesla also secured recognition in the Expert category, with Lawson Fulton, a staff Autopilot machine learning engineer, honored for his contributions to Tesla’s driver-assistance and autonomous systems.

Tesla’s software-first strategy

While automakers like General Motors, Ford, and Rivian also received recognition, Tesla’s multiple awards stood out given the company’s outsized role in popularizing software-defined vehicles over the past decade. From frequent OTA updates to its data-driven approach to autonomy, Tesla has consistently treated vehicles as evolving software platforms rather than static products.


This has made Tesla’s vehicles stand out in their respective segments, as they are arguably the only cars that objectively get better over time. This is especially true for vehicles equipped with the company’s Full Self-Driving system, which become progressively more intelligent and autonomous with each update. The majority of Tesla’s updates are free as well, which is very much appreciated by customers worldwide.



Judge clears path for Elon Musk’s OpenAI lawsuit to go before a jury



Credit: Gage Skidmore, CC BY-SA 4.0, via Wikimedia Commons

A U.S. judge has ruled that Elon Musk’s lawsuit accusing OpenAI of abandoning its founding nonprofit mission can proceed to a jury trial. 

The decision keeps alive Musk’s claims that OpenAI’s shift toward a for-profit structure violated early assurances made to him as a co-founder, claims that OpenAI directly disputes.

Judge says disputed facts warrant a trial

At a hearing in Oakland, U.S. District Judge Yvonne Gonzalez Rogers stated that there was “plenty of evidence” suggesting that OpenAI leaders had promised that the organization’s original nonprofit structure would be maintained. She ruled that those disputed facts should be evaluated by a jury at a trial in March rather than decided by the court at this stage, as noted in a Reuters report.

Musk helped co-found OpenAI in 2015 but left the organization in 2018. In his lawsuit, he argued that he contributed roughly $38 million, or about 60% of OpenAI’s early funding, based on assurances that the company would remain a nonprofit dedicated to the public benefit. He is seeking unspecified monetary damages tied to what he describes as “ill-gotten gains.”

OpenAI, however, has repeatedly rejected Musk’s allegations. The company has stated that Musk’s claims were baseless and part of a pattern of harassment.


Rivalries and Microsoft ties

The case unfolds against the backdrop of intensifying competition in generative artificial intelligence. Musk now runs xAI, whose Grok chatbot competes directly with OpenAI’s flagship ChatGPT. OpenAI has argued that Musk is a frustrated commercial rival who is simply attempting to slow down a market leader.

The lawsuit also names Microsoft as a defendant, citing its multibillion-dollar partnerships with OpenAI. Microsoft has urged the court to dismiss the claims against it, arguing there is no evidence it aided or abetted any alleged misconduct. Lawyers for OpenAI have also pushed for the case to be thrown out, claiming that Musk failed to show sufficient factual basis for claims such as fraud and breach of contract.

Judge Gonzalez Rogers, however, declined to end the case at this stage, noting that a jury would also need to consider whether Musk filed the lawsuit within the applicable statute of limitations. Still, the dispute between Elon Musk and OpenAI is now headed for a high-profile jury trial in the coming months.
