News
Tesla FSD Beta 10.69.2.2 extending to 160k owners in US and Canada: Elon Musk
It appears that after several iterations and adjustments, FSD Beta 10.69 is ready to roll out to the greater FSD Beta program. Elon Musk mentioned the update on Twitter, stating that v10.69.2.2 should extend to 160,000 owners in the United States and Canada.
As with his other announcements about the FSD Beta program, Musk’s comments were posted on Twitter. “FSD Beta 10.69.2.1 looks good, extending to 160k owners in US & Canada,” Musk wrote, before correcting himself and clarifying that he was referring to FSD Beta 10.69.2.2, not v10.69.2.1.
While Elon Musk has a known tendency to be extremely optimistic in his FSD Beta-related statements, his comments about v10.69.2.2 do reflect observations from some of the program’s longtime members. Veteran FSD Beta tester @WholeMarsBlog, who does not shy away from criticizing the system when it does not work well, noted that his takeovers with v10.69.2.2 have been minimal. Fellow FSD Beta tester @GailAlfarATX reported similar observations.
Tesla definitely seems to be pushing to release FSD to its fleet. Recent comments from Tesla’s Senior Director of Investor Relations Martin Viecha during an invite-only Goldman Sachs tech conference have hinted that the electric vehicle maker is on track to release “supervised” FSD around the end of the year. That’s around the same time as Elon Musk’s estimate for FSD’s wide release.
It should be noted, of course, that even if Tesla manages to release “supervised” FSD to consumers by the end of the year, the advanced driver-assist system would still require drivers to pay attention to the road and follow proper driving practices. With a feature-complete “supervised” FSD, however, Teslas would be able to navigate on their own whether they are on the highway or on inner-city streets. And that, ultimately, is a feature that will be extremely hard to beat.
Following are the release notes of FSD Beta v10.69.2.2, as retrieved by NotaTeslaApp:
– Added a new “deep lane guidance” module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.
– Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.
– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic (“Chuck Cook style” unprotected left turns). This was done by allowing optimisable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.
– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.
– Upgraded Occupancy Network to use video instead of images from a single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.
– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.
– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.
– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.
– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.
– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.
– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.
– Enabled creeping for visibility at any intersection where objects might cross ego’s path, regardless of presence of traffic controls.
– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.
– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.
– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.
– Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.
– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego’s lane, especially in intersections or cut-in scenarios.
– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.
– Reduced latency when starting from a stop by accounting for lead vehicle jerk.
– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.
Press the “Video Record” button on the top bar UI to share your feedback. When pressed, your vehicle’s external cameras will share a short VIN-associated Autopilot Snapshot with the Tesla engineering team to help make improvements to FSD. You will not be able to view the clip.
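Tesla has not published how its red-light-runner check is implemented, but the idea described in the release notes (evaluating a vehicle's current kinematic state against the braking profile it would need to stop) can be sketched with basic constant-deceleration kinematics. Everything below, including the function names and deceleration thresholds, is an illustrative assumption, not Tesla's code:

```python
# Toy sketch of a red-light-runner heuristic (assumed values throughout).
# A vehicle at speed v with distance d to the stop line needs a constant
# deceleration of v^2 / (2 * d) to stop exactly at the line. If that
# figure exceeds what normal braking can deliver, the vehicle is unlikely
# to stop in time.

HARD_DECEL = 7.0  # m/s^2, roughly the limit of hard normal braking (assumed)

def required_decel(speed_mps: float, dist_to_stop_line_m: float) -> float:
    """Constant deceleration needed to stop exactly at the stop line."""
    if dist_to_stop_line_m <= 0:
        return float("inf")  # already at or past the line while still moving
    return speed_mps ** 2 / (2 * dist_to_stop_line_m)

def likely_red_light_runner(speed_mps: float, dist_m: float) -> bool:
    """Flag a vehicle whose stop would demand implausibly hard braking."""
    return required_decel(speed_mps, dist_m) > HARD_DECEL

# 20 m/s (~45 mph) with 25 m to the line requires 8 m/s^2: likely a runner.
print(likely_red_light_runner(20.0, 25.0))   # True
# Same speed with 100 m to the line requires only 2 m/s^2: a plausible stop.
print(likely_red_light_runner(20.0, 100.0))  # False
```

The appeal of this kind of check is that it fires before the vehicle actually enters the intersection: the physics rules out a timely stop well in advance, which is presumably what "faster identification" refers to.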
Don’t hesitate to contact us with news tips. Just send a message to simon@teslarati.com to give us a heads up.
What is Digital Optimus? The new Tesla and xAI project explained
At its core, Digital Optimus operates through a dual-process architecture inspired by human cognition.
Tesla and xAI announced their groundbreaking joint project, Digital Optimus, also nicknamed “Macrohard” in a humorous jab at Microsoft, earlier this week.
This software-based AI agent is designed to automate complex office workflows by observing and replicating human interactions with computers. As the first major outcome of Tesla’s $2 billion investment in xAI, it represents a powerful fusion of hardware efficiency and advanced reasoning.
Grok is the master conductor/navigator with deep understanding of the world to direct digital Optimus, which is processing and actioning the past 5 secs of…
— Elon Musk (@elonmusk) March 11, 2026
Tesla’s specialized AI acts as “System 1”—the fast, instinctive executor—processing the past five seconds of real-time computer screen video along with keyboard and mouse actions to perform immediate tasks.
xAI’s Grok model serves as “System 2,” the strategic “master conductor” or navigator, providing high-level reasoning, world understanding, and directional oversight, much like an advanced turn-by-turn navigation system.
When combined, the two form a powerful AI-based assistant capable of completing everything from accounting work to HR tasks.
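Neither Tesla nor xAI has shared any code, but the System 1 / System 2 split Musk describes can be sketched in toy form: a slow planner sets high-level directives while a fast executor acts on a rolling window of the most recent observations. All class names, the sampling rate, and the behavior below are invented purely for illustration:

```python
# Purely speculative sketch of a dual-process agent loop; this reflects
# nothing of the actual Tesla/xAI implementation.

from collections import deque

WINDOW_SECONDS = 5
FRAMES_PER_SECOND = 2  # assumed sampling rate for the toy example

class System2Planner:
    """Stands in for Grok: turns a task into high-level directives."""
    def plan(self, task: str) -> list[str]:
        return [f"open {task}", f"fill {task} fields", f"submit {task}"]

class System1Executor:
    """Fast executor acting only on the last ~5 seconds of observations."""
    def __init__(self) -> None:
        # Bounded buffer: old frames fall out automatically.
        self.buffer = deque(maxlen=WINDOW_SECONDS * FRAMES_PER_SECOND)

    def observe(self, frame: str) -> None:
        self.buffer.append(frame)

    def act(self, directive: str) -> str:
        context = self.buffer[-1] if self.buffer else "blank screen"
        return f"performed '{directive}' given '{context}'"

planner = System2Planner()
executor = System1Executor()
for step in planner.plan("expense report"):
    executor.observe(f"screen while doing: {step}")
    print(executor.act(step))
```

The design point worth noting is the bounded observation buffer: the fast executor never reasons over the whole task history, only the recent window, while long-horizon context lives entirely in the planner, which mirrors the "past five seconds" framing in Musk's description.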
The system runs primarily on Tesla’s low-cost AI4 inference chip, minimizing reliance on xAI’s expensive Nvidia resources while maintaining competitive, real-time performance.
Elon Musk described it as “the only real-time smart AI system” capable, in principle, of emulating the functions of entire companies, handling everything from accounting and HR to repetitive digital operations.
Timelines point to swift deployment. Though the project was announced just days ago, Musk expects Digital Optimus to be ready for users within about six months, targeting a rollout around September 2026.
It will integrate into all AI4-equipped Tesla vehicles, enabling parked cars to handle office work during downtime. Millions of dedicated units are also planned for deployment at Supercharger stations, tapping into roughly 7 gigawatts of available power.
Oh and it works in all AI4-equipped cars, so your car can do office work for you when not driving.
We’re also deploying millions of dedicated Digital Optimus units in the field at Superchargers where we have ~7 gigawatts of available power.
— Elon Musk (@elonmusk) March 12, 2026
Digital Optimus directly supports Tesla’s broader autonomy strategy. It leverages the same end-to-end neural networks, computer vision, and real-time decision-making tech that power Full Self-Driving (FSD) software and the physical Optimus humanoid robot.
By repurposing idle vehicle compute and extending AI4 hardware beyond driving, the project scales Tesla’s autonomy ecosystem from roads to digital workspaces.
As a virtual counterpart to physical Optimus, it divides labor: software agents manage screen-based tasks while humanoid robots tackle physical ones, accelerating Tesla’s vision of general-purpose AI for productivity, Robotaxi fleets, and beyond.
In essence, Digital Optimus bridges Tesla’s vehicle and robotics autonomy with enterprise-scale AI, promising massive efficiency gains. No other company currently matches its real-time capabilities on such accessible hardware.
It could prove to be one of the most consequential projects Tesla and xAI integrate, as it has the potential to revolutionize how people work and travel.
Tesla adds awesome new driving feature to Model Y
Tesla is rolling out a new “Comfort Braking” feature with Software Update 2026.8. The feature is exclusive to the new Model Y, and is currently unavailable for any other vehicle in the Tesla lineup.
Tesla is adding an awesome new driving feature to Model Y vehicles, effective on “Juniper” refreshed models designated model year 2026 or newer.
Tesla writes in the release notes for the feature:
“Your Tesla now provides a smoother feel as you come to a complete stop during routine braking.”
🚨 Tesla has added a new “Comfort Braking” update with 2026.8
“Your Tesla provides a smoother feel as you come to a complete stop during routine braking.” https://t.co/afqCpBSVeA pic.twitter.com/C6MRmzfzls
— TESLARATI (@Teslarati) March 13, 2026
Interestingly, it is not entirely clear what prompted Tesla to improve braking smoothness, as braking has not seemed overly abrupt or rough in my experience. The brake pedal in my Model Y is rarely used thanks to regenerative braking, but it seems Tesla wanted to make the ride even smoother for owners.
There is always room for improvement, though, and it seems that there is a way to make braking smoother for passengers while the vehicle is coming to a stop.
This is far from the first time Tesla has improved ride comfort through Over-the-Air updates: it has rolled out updates to improve regenerative braking performance, handling while using Full Self-Driving, and the Cybertruck’s Steer-by-Wire system, as well as recent releases that have combated active road noise.
Tesla holds a unique ability to change the functionality of its vehicles through software updates, which have come in handy for many things, including remedying certain recalls and shipping new features to the Full Self-Driving suite.
While many automakers can now ship improvements through a simple software update, Tesla’s OTA process remains among the most seamless in the industry.
We’re really excited to test Comfort Braking as soon as the update makes it to our Model Y.
Tesla finally brings a Robotaxi update that Android users will love
Tesla is finally bringing an update to its Robotaxi platform that Android users will love, mostly because it seems they will at last be able to use the ride-hailing platform that the company has had active since last June.
Based on a decompile of software version 26.2.0 of the Robotaxi app, Tesla looks to be ready to roll out access to Android users.
According to the breakdown, performed by Tesla App Updates, the company is preparing to roll out an Android version of the app as it is developing several features for that operating system.
🚨 It looks like Tesla is preparing to launch the Robotaxi app for Android users at last!
A decompile of v26.2.0 of the Robotaxi app shows some progress on the Android side for Robotaxi 🤖 🚗 https://t.co/mThmoYuVLy
— TESLARATI (@Teslarati) March 13, 2026
The breakdown of the software version shows that Tesla is actively developing an Android-compatible version of the Robotaxi app, including an Android counterpart to iOS Live Activities:
“Strings like notification_channel_robotaxid_trip_name and android_native_alicorn_eta_text show exactly how Tesla plans to replicate the iOS Live Activities experience. Instead of standard push alerts, Android users are getting a persistent, dynamically updating notification channel.”
This is a big step forward for several reasons. First, at face value, Tesla is finally ready to offer Robotaxi to Android users.
The company has routinely prioritized iOS releases because there is a higher concentration of iPhone users in its ownership base. Additionally, the development process for iOS is simply less laborious.
Secondly, the Robotaxi rollout has been a typical example of “slowly then all at once.”
Tesla initially released Robotaxi access to a handful of media members and influencers. Eventually, it was expanded to more users, so that anyone using an iOS device could download the app and hail a semi-autonomous ride in Austin or the Bay Area.
Opening access to Android users may show that Tesla is preparing to let even more people use its Robotaxi platform. While the company still appears to be a few months away from offering fully autonomous rides to anyone with app access, expanding to an entirely new user base certainly seems like a step in the right direction.