Tesla FSD Beta 10.69.2.2 extending to 160k owners in US and Canada: Elon Musk
It appears that after several iterations and adjustments, FSD Beta 10.69 is ready to roll out to the wider FSD Beta program. Elon Musk announced the update himself, stating that v10.69.2.2 should extend to 160,000 owners in the United States and Canada.
Similar to his other announcements about the FSD Beta program, Musk’s comments were posted on Twitter. “FSD Beta 10.69.2.1 looks good, extending to 160k owners in US & Canada,” Musk wrote before correcting himself and clarifying that he was talking about FSD Beta 10.69.2.2, not v10.69.2.1.
While Elon Musk has a known tendency to be extremely optimistic about FSD Beta-related statements, his comments about v10.69.2.2 do reflect observations from some of the program’s longtime members. Veteran FSD Beta tester @WholeMarsBlog, who does not shy away from criticizing the system when it performs poorly, noted that he has needed only minimal takeovers with v10.69.2.2. Fellow FSD Beta tester @GailAlfarATX reported similar observations.
Tesla definitely seems to be pushing to release FSD to its fleet. Recent comments from Tesla’s Senior Director of Investor Relations Martin Viecha during an invite-only Goldman Sachs tech conference have hinted that the electric vehicle maker is on track to release “supervised” FSD around the end of the year. That’s around the same time as Elon Musk’s estimate for FSD’s wide release.
It should be noted, of course, that even if Tesla manages to release “supervised” FSD to consumers by the end of the year, the advanced driver-assist system would still require drivers to pay attention to the road and follow proper driving practices. With a feature-complete “supervised” FSD, however, Teslas would be able to navigate on their own whether they are on the highway or on inner-city streets. And that, ultimately, is a feature that will be extremely hard to beat.
Following are the release notes of FSD Beta v10.69.2.2, as retrieved by NotaTeslaApp; a few of the more technical items are unpacked with rough illustrative sketches after the notes:
– Added a new “deep lane guidance” module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.
– Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.
– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic (“Chuck Cook style” unprotected left turns). This was done by allowing optimisable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.
– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.
– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.
– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.
– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.
– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.
– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.
– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.
– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.
– Enabled creeping for visibility at any intersection where objects might cross ego’s path, regardless of presence of traffic controls.
– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.
– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.
– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.
– Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.
– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego’s lane, especially in intersections or cut-in scenarios.
– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.
– Reduced latency when starting from a stop by accounting for lead vehicle jerk.
– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.
Press the “Video Record” button on the top bar UI to share your feedback. When pressed, your vehicle’s external cameras will share a short VIN-associated Autopilot Snapshot with the Tesla engineering team to help make improvements to FSD. You will not be able to view the clip.
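To make the actuation-latency note a bit more concrete, here is a minimal Python sketch of the underlying idea: plan from the state the car will actually be in when a new command takes effect, rather than from the raw measured state. The delay values, the simple kinematic model, and the function names below are illustrative assumptions, not Tesla’s implementation.

```python
import math
from dataclasses import dataclass

# Assumed values for illustration only; real delays and geometry would come
# from measurements of the actual vehicle.
WHEELBASE = 2.9      # m
STEER_DELAY = 0.10   # s from steering command to steering actuation
ACCEL_DELAY = 0.30   # s from accel/brake command to actuation
DT = 0.01            # integration step, s

@dataclass
class EgoState:
    x: float      # longitudinal position, m
    y: float      # lateral position, m
    yaw: float    # heading, rad
    v: float      # speed, m/s
    accel: float  # currently applied acceleration, m/s^2
    steer: float  # currently applied steering angle, rad

def state_at(s: EgoState, horizon: float) -> EgoState:
    """Dead-reckon the measured state forward by `horizon` seconds, assuming
    the actuators keep doing what they are currently doing."""
    out = EgoState(s.x, s.y, s.yaw, s.v, s.accel, s.steer)
    t = 0.0
    while t < horizon:
        out.x += out.v * math.cos(out.yaw) * DT
        out.y += out.v * math.sin(out.yaw) * DT
        out.yaw += out.v / WHEELBASE * math.tan(out.steer) * DT
        out.v = max(0.0, out.v + out.accel * DT)
        t += DT
    return out

measured = EgoState(x=0.0, y=0.0, yaw=0.0, v=20.0, accel=-1.0, steer=0.02)

# A new steering command first takes effect after STEER_DELAY, while a new
# accel/brake command only takes effect after ACCEL_DELAY, so each part of
# the plan starts from the state the car will be in at its own actuation time.
steer_plan_start = state_at(measured, STEER_DELAY)
accel_plan_start = state_at(measured, ACCEL_DELAY)
print(steer_plan_start)
print(accel_plan_start)
```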
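The Occupancy Network item essentially describes a 3D grid around the car where every voxel carries an occupancy probability and a velocity estimate. The toy sketch below shows how slow-moving occupied volumes could be picked out of such a grid; the grid size, thresholds, and random data are placeholders, not anything from Tesla’s stack.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy occupancy grid around the ego vehicle: an occupancy probability plus a
# 3D velocity estimate for every voxel (all values randomly generated here).
X, Y, Z = 80, 80, 8
occupancy = rng.random((X, Y, Z)).astype(np.float32)
velocity = (rng.standard_normal((X, Y, Z, 3)) * 0.5).astype(np.float32)  # m/s

speed = np.linalg.norm(velocity, axis=-1)
occupied = occupancy > 0.9

# "Low-speed moving volumes": occupied voxels that are drifting slowly. A
# planner can treat these as obstacles with a velocity even when their shape
# does not fit a cuboid bounding box.
slow_moving = occupied & (speed > 0.2) & (speed < 2.0)
print(f"{int(slow_moving.sum())} voxels belong to slow-moving occupied volumes")
```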
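The two-stage kinematics note is about where compute is spent: a dense first stage finds candidate objects, and a small second-stage head then runs only on the features gathered at those locations, so its cost grows with the number of objects rather than the size of the grid. The sketch below, with made-up shapes and random weights standing in for trained networks, illustrates that pattern only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (dense, O(space)): a bird's-eye-view feature map plus a per-cell
# objectness score, as a cheap dense detector might produce. All shapes and
# thresholds here are made up for illustration.
H, W, C = 200, 200, 32
bev_features = rng.standard_normal((H, W, C)).astype(np.float32)
objectness = rng.random((H, W)).astype(np.float32)

# Keep only the cells that look like objects; everything downstream now
# scales with the number of objects rather than the size of the grid.
rows, cols = np.nonzero(objectness > 0.995)
obj_feats = bev_features[rows, cols]                 # (N_objects, C)

# Stage 2 (sparse, O(objects)): a tiny per-object head that regresses
# kinematics (vx, vy, accel, yaw rate) from the gathered features.
W1 = rng.standard_normal((C, 64)).astype(np.float32) * 0.1
W2 = rng.standard_normal((64, 4)).astype(np.float32) * 0.1
kinematics = np.maximum(obj_feats @ W1, 0.0) @ W2    # (N_objects, 4)

print(f"kinematics head ran on {obj_feats.shape[0]} objects "
      f"instead of {H * W} grid cells")
```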
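The lead-vehicle-jerk item boils down to simple kinematics: when the car ahead starts to pull away from a stop, its jerk (the rate of change of its acceleration) shows up before its speed does, so a constant-jerk prediction of the gap lets the follower start moving sooner. The helper below is hypothetical and only illustrates the arithmetic.

```python
def predict_gap(gap_now: float, lead_speed: float, lead_accel: float,
                lead_jerk: float, horizon: float = 1.5) -> float:
    """Predict the gap to a stopped-or-launching lead vehicle a short time
    ahead using constant-jerk kinematics (ego assumed still stationary).
    Purely illustrative; the horizon and the helper itself are assumptions."""
    t = horizon
    lead_travel = lead_speed * t + 0.5 * lead_accel * t ** 2 + lead_jerk * t ** 3 / 6.0
    return gap_now + lead_travel

# With zero jerk the lead car looks parked and the follower keeps waiting;
# with positive jerk the predicted gap opens up and the follower can start sooner.
print(predict_gap(6.0, 0.0, 0.0, 0.0))  # 6.0 m
print(predict_gap(6.0, 0.0, 0.0, 2.5))  # ~7.4 m
```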
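Finally, the red-light-runner note can be read as a comparison between the braking a crossing vehicle would need in order to stop at the line and the braking it is actually doing. The check below is a purely illustrative sketch with assumed thresholds, not the logic Tesla ships.

```python
def likely_red_light_runner(speed: float, dist_to_stop_line: float,
                            measured_decel: float,
                            comfortable_decel: float = 3.5) -> bool:
    """Flag a crossing vehicle as a likely red-light runner by comparing the
    deceleration it would need to stop at the line (v^2 / 2d) against a
    comfortable braking profile and against what it is actually doing.
    All thresholds are assumptions for illustration."""
    if dist_to_stop_line <= 0.0:
        return speed > 1.0  # already past the line and still moving
    required_decel = speed ** 2 / (2.0 * dist_to_stop_line)
    return required_decel > comfortable_decel and measured_decel < 0.5 * required_decel

# A car doing 15 m/s about 10 m from the line would need 11.25 m/s^2 to stop;
# if it is only braking at 2 m/s^2, treat it as a runner and yield.
print(likely_red_light_runner(15.0, 10.0, 2.0))  # True
```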
Don’t hesitate to contact us with news tips. Just send a message to simon@teslarati.com to give us a heads up.
Even Tesla China is feeling the Optimus V3 fever
As per Tesla China, Optimus V3 is “about to be unveiled.”
Even Tesla China seems to have caught the Optimus V3 fever, with the electric vehicle maker teasing the impending arrival of the humanoid robot on its official Weibo account.
Tesla China hypes up Optimus V3
Tesla China noted on its Weibo post that Optimus V3 is redesigned from first principles and is capable of learning new tasks by observing human behavior. The company has stated that it is targeting annual production capacity of up to one million humanoid robots once manufacturing scales.
During the Q4 and FY 2025 earnings call, CEO Elon Musk stated that Tesla will wind down Model S and Model X production to free up factory space for the pilot production line of Optimus V3.
Musk later noted that Giga Texas should have a significantly larger Optimus line, though that will produce Optimus V4. He also made it a point to set expectations with Optimus’ production ramp, stating that the “normal S curve of manufacturing ramp will be longer for Optimus.”

Tesla China’s potential role
Tesla’s decision to announce the Optimus update on Weibo highlights the importance of the humanoid robot in the company’s global operations. Giga Shanghai is already Tesla’s largest manufacturing hub by volume, and Musk has repeatedly described China’s manufacturers as Tesla’s most legitimate competitors.
While Tesla has not confirmed where Optimus V3 will be produced or deployed first, the scale and efficiency of Gigafactory Shanghai make it a plausible candidate for future humanoid robot manufacturing or in-factory deployment. Musk has also suggested that Optimus could become available for public purchase as early as 2027, as noted in a CNEV Post report.
“It’s going to be a very capable robot. I think long-term Optimus will have a very significant impact on the US GDP. It will actually move the needle on US GDP significantly. In conclusion, there are still many who doubt our ambitions for creating amazing abundance. We are confident it can be done, and we are making the right moves technologically to ensure that it does,” Musk said during the earnings call.
Tesla director pay lawsuit sees lawyer fees slashed by $100 million
The ruling leaves the case’s underlying settlement intact while significantly reducing what the plaintiffs’ attorneys will receive.
The Delaware Supreme Court has cut more than $100 million from a legal fee award tied to a shareholder lawsuit challenging compensation paid to Tesla directors between 2017 and 2020.
Delaware Supreme Court trims legal fees
As noted in a Bloomberg Law report, the case targeted pay granted to Tesla directors, including CEO Elon Musk, Oracle founder Larry Ellison, Kimbal Musk, and Rupert Murdoch. The Delaware Chancery Court had awarded $176 million in fees to the plaintiffs’ attorneys, and under the settlement, Tesla’s board members must also return stock options and forgo years’ worth of pay.
However, Chief Justice Collins J. Seitz Jr., writing for the Delaware Supreme Court’s full five-member panel, held that the Chancery Court “erred by including in its financial benefit analysis the intrinsic value” of the options being returned by Tesla’s board when it set the $176 million fee for the pension fund’s law firm.
The justices then reduced the fee award from $176 million to $70.9 million. “As we measure it, $71 million reflects a reasonable fee for counsel’s efforts and does not result in a windfall,” Chief Justice Seitz wrote.
Other settlement terms still intact
The Supreme Court upheld the settlement itself, which requires Tesla’s board to return stock and options valued at up to $735 million and to forgo three years of additional compensation worth about $184 million.
Tesla argued during oral arguments that a fee award closer to $70 million would be appropriate. Interestingly, back in October, Justice Karen L. Valihura noted that the $176 million award was $60 million more than the Delaware judiciary’s entire budget from the previous year, a striking comparison for a case that was “settled midstream.”
The lawsuit was brought by a pension fund on behalf of Tesla shareholders and focused exclusively on director pay during the 2017–2020 period. The case is separate from other high-profile compensation disputes involving Elon Musk.
SpaceX-xAI merger discussions in advanced stage: report
The update was initially reported by Bloomberg News, which cited people reportedly familiar with the matter.
SpaceX is reportedly in advanced discussions to merge with artificial intelligence startup xAI. The talks could reportedly result in an agreement as soon as this week, though discussions remain ongoing.
SpaceX and xAI advanced merger talks
SpaceX and xAI have reportedly informed some investors about plans to potentially combine the two privately held companies, Bloomberg’s sources claimed. Representatives for both companies did not immediately respond to requests for comment.
A merger would unite two of the world’s largest private firms. xAI raised capital at a valuation of about $200 billion in September, while SpaceX was preparing a share sale late last year that valued the rocket company at roughly $800 billion.
If completed, the merger would bring together SpaceX’s launch and satellite infrastructure with xAI’s computing and model development. This could pave the way for Musk’s vision of deploying data centers in orbit to support large-scale AI workloads.
Musk’s broader consolidation efforts
Elon Musk has increasingly linked his companies around autonomy, AI, and space-based infrastructure. SpaceX is seeking regulatory approval to launch up to one million satellites as part of its long-term plans, as per a recent filing. Such a scale could support space-based computing concepts.
SpaceX has also discussed the feasibility of a potential tie-up with electric vehicle maker Tesla, Bloomberg previously reported. SpaceX has reportedly been preparing for a possible initial public offering (IPO) as well, which could value the company at up to $1.5 trillion. No timeline for SpaceX’s reported IPO plans has been announced yet, however.