Tesla FSD Beta 10.69.2.2 extending to 160k owners in US and Canada: Elon Musk
It appears that after several iterations and adjustments, FSD Beta 10.69 is ready to roll out to the greater FSD Beta program. Elon Musk mentioned the update on Twitter, with the CEO stating that v10.69.2.2 should extend to 160,000 owners in the United States and Canada.
As with his other announcements about the FSD Beta program, Musk shared the news on Twitter. “FSD Beta 10.69.2.1 looks good, extending to 160k owners in US & Canada,” Musk wrote before correcting himself and clarifying that he was referring to FSD Beta 10.69.2.2, not v10.69.2.1.
While Elon Musk has a known tendency to be extremely optimistic in FSD Beta-related statements, his comments about v10.69.2.2 do reflect observations from some of the program’s longtime members. Veteran FSD Beta tester @WholeMarsBlog, who does not shy away from criticizing the system when it does not work well, noted that his takeovers with v10.69.2.2 have been minimal. Fellow FSD Beta tester @GailAlfarATX reported similar observations.
Tesla definitely seems to be pushing to release FSD to its fleet. Recent comments from Tesla’s Senior Director of Investor Relations Martin Viecha during an invite-only Goldman Sachs tech conference have hinted that the electric vehicle maker is on track to release “supervised” FSD around the end of the year. That’s around the same time as Elon Musk’s estimate for FSD’s wide release.
It should be noted, of course, that even if Tesla manages to release “supervised” FSD to consumers by the end of the year, the version of the advanced driver-assist system would still require drivers to pay attention to the road and follow proper driving practices. With a feature-complete “supervised” FSD, however, Teslas would be able to navigate on their own, whether on the highway or on inner-city streets. And that, ultimately, is a feature that will be extremely hard to beat.
Following are the release notes of FSD Beta v10.69.2.2, as retrieved by NotaTeslaApp:
– Added a new “deep lane guidance” module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.
– Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.
– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic (“Chuck Cook style” unprotected left turns). This was done by allowing optimisable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.
– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.
– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.
– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.
– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.
– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.
– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.
– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.
– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.
– Enabled creeping for visibility at any intersection where objects might cross ego’s path, regardless of presence of traffic controls.
– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.
– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.
– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.
– Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.
– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego’s lane, especially in intersections or cut-in scenarios.
– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.
– Reduced latency when starting from a stop by accounting for lead vehicle jerk.
– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.
Press the “Video Record” button on the top bar UI to share your feedback. When pressed, your vehicle’s external cameras will share a short VIN-associated Autopilot Snapshot with the Tesla engineering team to help make improvements to FSD. You will not be able to view the clip.
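The release notes describe these techniques only at a high level, but the red-light-runner item boils down to simple kinematics: a vehicle that would need to brake harder than is physically plausible to stop at the line is probably not going to stop. The sketch below is a hypothetical illustration of that idea, not Tesla’s actual implementation; the function names and deceleration thresholds are assumptions for the example.

```python
# Illustrative sketch of a kinematic red-light-runner check, assuming a
# simple constant-deceleration model. Thresholds are made up for the example.

MAX_PLAUSIBLE_DECEL_MPS2 = 8.0  # assumed upper bound for hard braking on dry pavement


def required_decel(speed_mps: float, dist_to_line_m: float) -> float:
    """Constant deceleration needed to stop exactly at the stop line: v^2 / (2d)."""
    if dist_to_line_m <= 0:
        return float("inf")  # already at or past the line while still moving
    return speed_mps ** 2 / (2 * dist_to_line_m)


def likely_red_light_runner(speed_mps: float, dist_to_line_m: float) -> bool:
    """Flag a vehicle whose current kinematic state no longer fits a braking profile."""
    return required_decel(speed_mps, dist_to_line_m) > MAX_PLAUSIBLE_DECEL_MPS2
```

Under this model, a car doing 20 m/s just 15 m from the line would need roughly 13.3 m/s² of braking, well past the assumed limit, so it gets flagged early; the same car 100 m out needs only 2 m/s² and does not.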
Tesla starts rolling out FSD V14.2.1 to AI4 vehicles including Cybertruck
FSD V14.2.1 was released just about a week after the initial FSD V14.2 update was rolled out.
It appears that the Tesla AI team burned the midnight oil, allowing them to release FSD V14.2.1 on Thanksgiving. The update has been reported by owners of AI4 vehicles, including the Cybertruck.
For the Tesla AI team, at least, it appears that work really does not stop.
FSD V14.2.1
Initial posts about FSD V14.2.1 were shared by Tesla owners on social media platform X. As per the Tesla owners, V14.2.1 appears to be a point update that’s designed to polish the features and capabilities that have been available in FSD V14. A look at the release notes for FSD V14.2.1, however, shows that an extra line has been added.
“Camera visibility can lead to increased attention monitoring sensitivity.”
Whether this leads to more drivers being alerted to pay attention to the road remains to be seen. It will likely become evident once the first batch of Tesla owners who received V14.2.1 start sharing their first drive impressions of the update. Despite the update being released on Thanksgiving, it would not be surprising if first impressions videos of FSD V14.2.1 are shared today just the same.
Rapid FSD releases
What is rather interesting and impressive is the fact that FSD V14.2.1 was released just about a week after the initial FSD V14.2 update was rolled out. This bodes well for Tesla’s FSD users, especially since CEO Elon Musk has stated in the past that the V14.2 series will be for “widespread use.”
FSD V14 has so far received numerous positive reviews from Tesla owners, with many drivers noting that the system now drives better than most human drivers because it is cautious, confident, and considerate at the same time. The only question now, really, is whether the V14.2 series makes it to the company’s wider FSD fleet, which is still populated by numerous HW3 vehicles.
Waymo rider data hints that Tesla’s Cybercab strategy might be the smartest, after all
These observations all but validate Tesla’s controversial two-seat Cybercab strategy, which has caught a lot of criticism since it was unveiled last year.
Toyota Connected Europe designer Karim Dia Toubajie has highlighted a particular trend that became evident in Waymo’s Q3 2025 occupancy stats. As it turned out, 90% of the trips taken by the driverless taxis carried two or fewer passengers.
Toyota designer observes a trend
Karim Dia Toubajie, Lead Product Designer (Sustainable Mobility) at Toyota Connected Europe, analyzed Waymo’s latest California Public Utilities Commission filings and posted the results on LinkedIn this week.
“90% of robotaxi trips have 2 or less passengers, so why are we using 5-seater vehicles?” Toubajie asked. He continued: “90% of trips have 2 or less people, 75% of trips have 1 or less people.” He accompanied his comments with a graphic showing Waymo’s occupancy rates, which showed 71% of trips having one passenger, 15% of trips having two passengers, 6% of trips having three passengers, 5% of trips having zero passengers, and only 3% of trips having four passengers.
The data excludes operational trips such as depot runs or charging. Toubajie pointed out that most of the time, Waymo’s large self-driving taxis are really just transporting one or two people, and at times no passengers at all. “This means that most of the time, the vehicle being used significantly outweighs the needs of the trip,” the Toyota designer wrote in his post.
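The headline figures follow directly from the occupancy breakdown quoted above. Summing the quoted shares actually gives 91% for two or fewer passengers and 76% for one or fewer; Toubajie’s 90%/75% figures presumably reflect rounding in the underlying filings. A quick sketch (the variable names are mine, and the percentages are those from the graphic):

```python
# Share of Waymo trips by passenger count, as quoted from Toubajie's graphic.
occupancy = {0: 0.05, 1: 0.71, 2: 0.15, 3: 0.06, 4: 0.03}

# Cumulative shares for the "two or fewer" and "one or fewer" claims.
share_two_or_fewer = sum(p for n, p in occupancy.items() if n <= 2)
share_one_or_fewer = sum(p for n, p in occupancy.items() if n <= 1)

print(f"<= 2 passengers: {share_two_or_fewer:.0%}")  # 91%
print(f"<= 1 passenger:  {share_one_or_fewer:.0%}")  # 76%
```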
Cybercab suddenly looks perfectly sized
Toubajie gave a nod to Tesla’s approach. “The Tesla Cybercab announced in 2024, is a 2-seater robotaxi with a 50kWh battery but I still believe this is on the larger side of what’s required for most trips,” he wrote.
With Waymo’s own numbers now showing that 90% of demand fits two seats or fewer, the steering wheel-free, lidar-free Cybercab looks like the smartest play in the room. The Cybercab is designed to be easy to produce, with CEO Elon Musk commenting that its production line would resemble a consumer electronics factory more than an automotive plant. This means that the Cybercab could saturate the roads quickly once it is deployed.
While the Cybercab will likely take the lion’s share of Tesla’s ride-hailing passengers, the Model 3 sedan and Model Y crossover would be perfect for the remaining 9% of riders who require larger vehicles. This should be easy to implement for Tesla, as the Model Y and Model 3 are both mass-market vehicles.
Elon Musk and James Cameron find middle ground in space and AI despite political differences
Musk responded with some positive words for the director on X.
Avatar director James Cameron has stated that he can still agree with Elon Musk on space exploration and AI safety despite their stark political differences.
In an interview with Puck’s The Town podcast, the liberal director praised Musk’s SpaceX achievements and said higher priorities must unite them, such as space travel and artificial intelligence. Musk responded with some positive words for the director on X.
A longtime mutual respect
Cameron and Musk have bonded over technology for years. As far back as 2011, Cameron told NBC News that “Elon is making very strong strides. I think he’s the likeliest person to step into the shoes of the shuttle program and actually provide human access to low Earth orbit. So… go, Elon.” Cameron was right, as SpaceX would go on to become the dominant force in spaceflight over the years.
Even after Musk’s embrace of conservative politics and his stint as a senior advisor and DOGE head, Cameron refused to cancel his relationship with the CEO. “I can separate a person and their politics from the things that they want to accomplish if they’re aligned with what I think are good goals,” Cameron said. Musk appreciated the director’s comments, stating that “Jim understands physics, which is rare in Hollywood.”
Shared AI warnings
Both men have stated that artificial intelligence could be an existential threat to humanity, though Musk has noted that Tesla’s products such as Optimus could usher in an era of sustainable abundance. Musk recently predicted that money and jobs could become irrelevant with advancing AI, while Cameron warned of a deeper crisis, as noted in a Fox News report.
“Because the overall risk of AI in general… is that we lose purpose as people. We lose jobs. We lose a sense of, ‘Well, what are we here for?’” Cameron said. “We are these flawed biological machines, and a computer can be theoretically more precise, more correct, faster, all of those things. And that’s going to be a threshold existential issue.”
He concluded: “I just think it’s important for us as a human civilization to prioritize. We’ve got to make this Earth our spaceship. That’s really what we need to be thinking.”
