News
Tesla FSD Beta 10.69.2.2 extending to 160k owners in US and Canada: Elon Musk
It appears that after several iterations and adjustments, FSD Beta 10.69 is ready to roll out to the greater FSD Beta program. As with his other announcements about the program, Elon Musk shared the news on Twitter, stating that v10.69.2.2 should extend to 160,000 owners in the United States and Canada.
“FSD Beta 10.69.2.1 looks good, extending to 160k owners in US & Canada,” Musk wrote, before correcting himself and clarifying that he was referring to FSD Beta 10.69.2.2, not v10.69.2.1.
While Elon Musk has a known tendency to be extremely optimistic in his FSD Beta-related statements, his comments about v10.69.2.2 do reflect observations from some of the program’s longtime members. Veteran FSD Beta tester @WholeMarsBlog, who does not shy away from criticizing the system when it does not work well, noted that his takeovers with v10.69.2.2 have been minimal. Fellow FSD Beta tester @GailAlfarATX reported similar observations.
Tesla definitely seems to be pushing to release FSD to its fleet. Recent comments from Tesla’s Senior Director of Investor Relations Martin Viecha during an invite-only Goldman Sachs tech conference have hinted that the electric vehicle maker is on track to release “supervised” FSD around the end of the year. That’s around the same time as Elon Musk’s estimate for FSD’s wide release.
It should be noted, of course, that even if Tesla manages to release “supervised” FSD to consumers by the end of the year, the version of the advanced driver-assist system would still require drivers to pay attention to the road and follow proper driving practices. With a feature-complete “supervised” FSD, however, Teslas would be able to navigate on their own regardless of whether they are on the highway or on inner-city streets. And that, ultimately, is a feature that will be extremely hard to beat.
Following are the release notes of FSD Beta v10.69.2.2, as retrieved by NotaTeslaApp:
– Added a new “deep lane guidance” module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.
– Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.
– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic (“Chuck Cook style” unprotected left turns). This was done by allowing optimizable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.
– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.
– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.
– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.
– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.
– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.
– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.
– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.
– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.
– Enabled creeping for visibility at any intersection where objects might cross ego’s path, regardless of presence of traffic controls.
– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.
– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.
– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.
– Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.
– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego’s lane, especially in intersections or cut-in scenarios.
– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.
– Reduced latency when starting from a stop by accounting for lead vehicle jerk.
– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.
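The red-light-runner item above describes comparing a vehicle's current kinematic state against its expected braking profile. A minimal sketch of that idea, assuming a constant-deceleration stopping model and an illustrative comfortable-braking threshold (none of these names or numbers come from Tesla), might look like this:

```python
# Hedged sketch: flag a likely red-light runner by comparing a vehicle's
# current kinematic state against the deceleration it would need to stop
# before the intersection. The threshold and all names are illustrative
# assumptions, not Tesla's actual implementation.

COMFORTABLE_DECEL = 3.0  # m/s^2, a typical comfortable braking rate (assumption)

def required_decel(speed: float, distance_to_line: float) -> float:
    """Constant deceleration needed to stop within `distance_to_line`
    meters from `speed` m/s, from v^2 = 2 * a * d."""
    if distance_to_line <= 0:
        return float("inf")
    return speed ** 2 / (2 * distance_to_line)

def likely_red_light_runner(speed: float, accel: float,
                            distance_to_line: float) -> bool:
    """Flag the object if it is not braking anywhere near hard enough
    to stop before the stop line."""
    needed = required_decel(speed, distance_to_line)
    # Already braking at least as hard as needed -> probably stopping.
    if -accel >= needed:
        return False
    # Would need harder-than-comfortable braking and isn't applying it.
    return needed > COMFORTABLE_DECEL
```

For example, a car doing 15 m/s while coasting 10 m from the stop line would need 11.25 m/s² of braking to stop, far beyond a comfortable rate, so this check would flag it well before it enters the intersection.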
Press the “Video Record” button on the top bar UI to share your feedback. When pressed, your vehicle’s external cameras will share a short VIN-associated Autopilot Snapshot with the Tesla engineering team to help make improvements to FSD. You will not be able to view the clip.
Tesla Optimus dramatically collapses after teleoperator mishap
Tesla Optimus dramatically collapsed after a teleoperator mishap at the company’s “Future of Autonomy Visualized” event in Miami this past weekend.
It seemed blatantly obvious that whoever was controlling the Optimus robot from behind the scenes left the controls without first disconnecting their ability to manipulate its movements, causing Optimus to collapse.
A video captured at the event shows Optimus doing a movement similar to taking a headset off, likely what the teleoperator uses to hear guest requests and communicate with other staff:
🚨 Tesla Optimus mishap at the Miami event
To be fair, don’t we all want to do this around the Holidays? pic.twitter.com/EJ5QKenqQd
— TESLARATI (@Teslarati) December 8, 2025
After completing the headset-removal motion, Optimus simply collapsed backward, making for an interesting bit of conversation. While it was a mishap, it was genuinely funny to watch thanks to the drama the robot displayed.
This was clearly a mistake by the teleoperator, not a failure of Optimus itself. Had the teleoperator disconnected from the robot correctly, it would likely have simply stood in place and waited for control to resume.
However, details remain slim, and Tesla has not released anything explaining the situation, likely because the incident speaks for itself.
The Tesla Optimus program has been among the most hyped projects that the company has been working on, as CEO Elon Musk has extremely high hopes for what it could do for people on Earth. He has said on several occasions that Optimus should be the most popular product of all time, considering its capabilities.
Obviously, the project is still a work in progress, and growing pains are going to be part of the development of Optimus.
In its development of Optimus Gen 3, Tesla has been refining the robot’s forearm, hand, and fingers, something Musk has described as extremely difficult. It is a necessary step, however, if Optimus’s capabilities are not to be limited by its hardware.
All in all, Optimus has still been a very successful project for Tesla, especially in the early stages. The company has done an excellent job of keeping Optimus busy, as it helps with serving customers at events and the Tesla Diner, and is also performing tasks across the company’s manufacturing plants.
Tesla 2025 Holiday Update: Here’s what it includes, and what it’s missing
Tesla has finally announced the features for the 2025 Holiday Update, which includes a wide variety of new additions that are both functional and just for fun.
The new features are plentiful, but a handful of things we expected to see did not make the cut. We don’t want to sound ungrateful, because there is a lot of great stuff on the way with this update.
Here’s what was included:
Grok with Navigation Commands (Beta)
Grok will now be able to add and edit navigation destinations, a drastic improvement over the standard voice commands Tesla owners had to use for this in the past.

The utilization of Grok will likely improve the navigation experience by offering some insight into your destination, including reviews and other points of interest nearby.
It will be enabled by using Grok’s “Assistant” personality.
Tesla Photobooth
“Turn your car into a photobooth! Take selfies from inside your Tesla & give yourself a makeover with fun filters, stickers, and emojis. Share with others right from the Tesla app.”
This feature will be available within the Toybox.
Dog Mode Live Activity
When using Dog Mode to keep your four-legged friend comfortable in the car, you’ll now be able to check in on them as it will share periodic snapshots of the cabin, along with live updates on temperature, battery, and climate conditions.

Dashcam Viewer Update
Dashcam clips are awesome, but they are devoid of a lot of information that could be useful in some instances, especially if there is an accident.

Now, there will be additional details included on each Dashcam clip, like speed, steering wheel angle, and Self-Driving state.
Santa Mode
New graphics, trees, and a lock chime are now available.

Light Show Update
A new Light Show, called Jingle Rush, will be available.
Custom Wraps and License Plates in Colorizer
Colorizer will now be known as “Paint Shop” in the Toybox. You will now be able to personalize your Tesla Avatar with window tints, custom wraps, and license plates. Preloaded designs will be available, but owners will be able to use their USB Flash Drives to create one that suits their style.

Navigation Improvements
Changing the order of your destinations will be easier through a new “Favorites” tab, and Home and Work can now be set by dropping a pin.
There will also be “Suggested Destinations,” which will be determined through recent trips and habits while parked.
Supercharger Site Map
Perhaps the most significant feature of the Holiday Update, Tesla is adding a 3D view of select Tesla Superchargers by tapping “View Site Map.”
When navigating to a location with this capability, the site layout, live occupancy, and nearby amenities will be available. Drivers will also be able to choose which stall to Supercharge.

This is only available at a handful of locations currently, but it will expand to more Superchargers as it becomes more robust.
Automatic Carpool Lane Routing
Navigation will include an option to utilize carpool lanes. Your route will automatically choose the carpool lane when eligible.
Phone Left Behind Chime
If the cabin occupant-detection system sees no one in the car but a phone key (or a phone) has been left inside, your Tesla will chime a few seconds after the doors close.
Charge Limit Per Location
You can now save a charge limit for the current location while parked and it will be applied automatically the next time you charge there.
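The per-location behavior described here boils down to keying a saved limit on the car's parked position. A minimal sketch of that lookup, assuming coordinates rounded to roughly 100 m act as the "location" key (the rounding scheme and all names are illustrative assumptions, not Tesla's implementation):

```python
# Hedged sketch of a per-location charge limit: save a limit while parked,
# then apply it automatically the next time the car charges at roughly the
# same spot. All names and the rounding precision are assumptions.

DEFAULT_LIMIT = 80  # percent, a common default charge limit (assumption)

def location_key(lat: float, lon: float) -> tuple:
    # Round to 3 decimal places (~100 m) so small parking variations
    # still map to the same saved location.
    return (round(lat, 3), round(lon, 3))

class ChargeLimits:
    def __init__(self):
        self.saved = {}

    def save_limit(self, lat: float, lon: float, limit_pct: int) -> None:
        """Remember the chosen limit for the current parked location."""
        self.saved[location_key(lat, lon)] = limit_pct

    def limit_for(self, lat: float, lon: float) -> int:
        """Return the saved limit for this spot, or the default."""
        return self.saved.get(location_key(lat, lon), DEFAULT_LIMIT)
```

With this scheme, parking a few meters away from where the limit was saved still resolves to the same key, while charging in a new city falls back to the default.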
ISS Docking Simulator
In a SpaceX collaboration, Tesla has added this game to its in-car Arcade:
“Become an astronaut and prove your skills by docking with the International Space Station. Control & guide the rocket in this 3D docking simulator game using a set of controls based on actual interfaces used by NASA astronauts.”
Additional Improvements
- Enable or disable wireless phone charging pads in Controls > Charging (S3XY) or Controls > Outlets & Mods (Cybertruck)
- Add Spotify tracks to your queue right from the search screen & scroll through large Spotify playlists, albums, podcasts, audiobooks & your library seamlessly, without paging
- Take the vibes up another level with rainbow colors during Rave Cave. Accent light colors will change along with the beats of your music. App Launcher > Toybox > Light Sync
- Lock Sound now includes Light Cycle from Tron Mode. Toybox > Boombox > Lock Sound
What’s Missing
There are a handful of features we expected to see with the Holiday Update, but were not included.
Banish Feature
Tesla has been teasing the Banish functionality for quite a few years, but evidently, it is not quite there yet.
Banish will allow owners to get out of their vehicle at the entrance of their destination, and the car will go find a spot and park itself. Some refer to it as “Reverse Summon.”
Apple CarPlay
Given all the rumors regarding Apple CarPlay, and the evidence that Tesla was working to bring it to its vehicles, we genuinely expected it to arrive with the Holiday Update.
We’re not upset it isn’t here, though; Tesla’s in-car UI is significantly better, at least in our opinion.
Parking Spot Selection
One of the biggest gripes about the new Arrival Features in Full Self-Driving v14 is that choosing a specific parking spot is not possible. This is especially frustrating for Tesla owners who live in townhouse neighborhoods or apartment complexes with assigned parking.
Tesla seems to be working on this based on the release notes for v14.2, where it said future capabilities would include Parking Spot Selection.
Man credits Grok AI with saving his life after ER missed near-ruptured appendix
The AI flagged some of the man’s symptoms and urged him to return to the ER immediately and demand a CT scan.
A 49-year-old man has stated that xAI’s Grok ended up saving his life when the large language model identified a near-ruptured appendix that his first ER visit dismissed as acid reflux.
After being sent home from the ER, the man asked Grok to analyze his symptoms. The AI flagged some of the man’s symptoms and urged him to return immediately and demand a CT scan. The scan confirmed that something far worse than acid reflux was indeed going on.
Grok spotted what a doctor missed
In a post on Reddit, u/Tykjen noted that for 24 hours straight, he had a constant “razor-blade-level” abdominal pain that forced him into a fetal position. He had no fever or visible signs. He went to the ER, where a doctor pressed his soft belly, prescribed acid blockers, and sent him home.
The acid blockers didn’t work, and the man’s pain remained intense. He then decided to open a year-long chat he had with Grok and listed every detail of what he was experiencing. The AI responded quickly. “Grok immediately flagged perforated ulcer or atypical appendicitis, told me the exact red-flag pattern I was describing, and basically said ‘go back right now and ask for a CT,’” the man wrote in his post.
He copied Grok’s reasoning, returned to the ER, and insisted on the scan. The CT scan ultimately showed an inflamed appendix on the verge of rupture. Six hours later, the appendix was out. The man said the pain has completely vanished, and he woke up laughing under anesthesia. He was discharged the next day.
How a late-night conversation with Grok got me to demand the CT scan that saved my life from a ruptured appendix (December 2025)
by u/Tykjen in r/grok
AI doctors could very well be welcomed
In the replies to his Reddit post, u/Tykjen further explained that he specifically avoided telling doctors that Grok, an AI, suggested he get a CT scan. “I did not tell them on the second visit that Grok recommended the CT scan. I had to lie. I told them my sister who’s a nurse told me to ask for the scan,” the man wrote.
One commenter noted that the use of AI in medicine will likely be welcomed, stating that “If AI could take doctors’ jobs one day, I will be happy. Doctors just don’t care anymore. It’s all a paycheck.” The Redditor replied, “Sadly yes. That is what it felt like after the first visit. And the following night could have been my last.”
Elon Musk has been very optimistic about the potential of robots like Tesla Optimus in the medical field. Provided that they are able to achieve human-level articulation in their hands, and Tesla is able to bring down their cost through mass manufacturing, the era of AI-powered medical care could very well be closer than expected.