
Tesla FSD Beta 10.69.2.2 extending to 160k owners in US and Canada: Elon Musk

Credit: Whole Mars Catalog


It appears that after several iterations and adjustments, FSD Beta 10.69 is ready to roll out to the greater FSD Beta program. Elon Musk mentioned the update on Twitter, with the CEO stating that v10.69.2.2 should extend to 160,000 owners in the United States and Canada. 

Similar to his other announcements about the FSD Beta program, Musk’s comments were posted on Twitter. “FSD Beta 10.69.2.1 looks good, extending to 160k owners in US & Canada,” Musk wrote before correcting himself and clarifying that he was talking about FSD Beta 10.69.2.2, not v10.69.2.1. 

While Elon Musk has a known tendency to be extremely optimistic about FSD Beta-related statements, his comments about v10.69.2.2 do reflect observations from some of the program’s longtime members. Veteran FSD Beta tester @WholeMarsBlog, who does not shy away from criticizing the system if it does not work well, noted that his takeovers with v10.69.2.2 have been minimal. Fellow FSD Beta tester @GailAlfarATX reported similar observations. 

Tesla definitely seems to be pushing to release FSD to its fleet. Recent comments from Tesla’s Senior Director of Investor Relations Martin Viecha during an invite-only Goldman Sachs tech conference have hinted that the electric vehicle maker is on track to release “supervised” FSD around the end of the year. That’s around the same time as Elon Musk’s estimate for FSD’s wide release. 


It should be noted, of course, that even if Tesla manages to release “supervised” FSD to consumers by the end of the year, the version of the advanced driver-assist system would still require drivers to pay attention to the road and follow proper driving practices. With a feature-complete “supervised” FSD, however, Teslas would be able to navigate on their own whether they are on the highway or on inner-city streets. And that, ultimately, is a feature that will be extremely hard to beat. 

Following are the release notes of FSD Beta v10.69.2.2, as retrieved by NotaTeslaApp:

– Added a new “deep lane guidance” module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.

– Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.
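The latency-modeling idea in this note can be illustrated with a toy sketch. The code below is hypothetical and not Tesla’s implementation: the latency constants, the 1-D ego state, and the `plan_with_latency` helper are all assumptions used to show why a planner would forward-simulate the vehicle by the command-to-actuation delay before optimizing.

```python
# Toy sketch (not Tesla's code): compensate for actuation latency by
# forward-simulating the ego state before planning. The latency values
# and the 1-D (position, velocity) state are illustrative assumptions.

STEER_LATENCY_S = 0.10   # steering command -> actuation (assumed)
ACCEL_LATENCY_S = 0.05   # accel/brake command -> actuation (assumed)

def forward_simulate(state, accel_cmd, dt):
    """Advance a 1-D ego state (position m, velocity m/s) by dt seconds."""
    x, v = state
    x += v * dt + 0.5 * accel_cmd * dt * dt
    v += accel_cmd * dt
    return (x, v)

def plan_with_latency(state, accel_cmd):
    """Plan from the state the vehicle will actually be in once the
    command takes effect, rather than the state it is in right now."""
    predicted = forward_simulate(state, accel_cmd, ACCEL_LATENCY_S)
    # ... a real planner would optimize a full trajectory from `predicted` ...
    return predicted

# At 20 m/s with 2 m/s^2 commanded, planning starts from the state
# roughly 50 ms in the future instead of the stale current state.
print(plan_with_latency((0.0, 20.0), 2.0))
```

Planning from the predicted state rather than the measured one is what lets the downstream controller track the trajectory smoothly during harsh maneuvers.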


– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic (“Chuck Cook style” unprotected left turns). This was done by allowing optimizable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.

– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.

– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.

– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.
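The O(objects) vs. O(space) distinction above can be made concrete with a back-of-the-envelope sketch. Everything below is a made-up illustration of the scaling argument, not Tesla’s architecture or its real costs: a dense design pays the kinematics-head cost at every spatial cell, while a two-stage design pays it only once per detected object.

```python
# Toy illustration (not Tesla's architecture): allocate kinematics
# compute per detected object instead of per spatial cell. The grid
# size and per-head op count are invented numbers to show the scaling.

GRID_CELLS = 200 * 200   # dense spatial grid -> O(space) evaluations
HEAD_COST = 1_000        # ops per kinematics-head evaluation (assumed)

def dense_cost():
    """One-stage design: estimate kinematics everywhere in space."""
    return GRID_CELLS * HEAD_COST

def two_stage_cost(num_objects):
    """Two-stage design: stage 1 detects objects, stage 2 runs the
    kinematics head once per detected object."""
    return num_objects * HEAD_COST

print(dense_cost())        # cost independent of how many objects exist
print(two_stage_cost(25))  # cost scales with the 25 detected objects
```

The point is that scene complexity (a few dozen objects) is usually far smaller than spatial resolution (tens of thousands of cells), which is how the second stage can spend more compute per object while using less compute overall.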


– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.

– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.

– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.

– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.


– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.

– Enabled creeping for visibility at any intersection where objects might cross ego’s path, regardless of presence of traffic controls.

– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.

– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.


– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

– Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.

– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego’s lane, especially in intersections or cut-in scenarios.

– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.


– Reduced latency when starting from a stop by accounting for lead vehicle jerk.

– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.
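The kinematic check described in this last note boils down to simple physics: a vehicle that cannot stop before the line without braking harder than any plausible braking profile is probably going to run the light. The sketch below is a hypothetical toy, not Tesla’s logic; the 4.5 m/s² threshold and the `is_likely_runner` helper are assumptions.

```python
# Toy sketch (not Tesla's logic): flag a vehicle as a likely red-light
# runner when stopping before the line would require deceleration beyond
# a plausible braking profile. Threshold is an assumed illustrative value.

MAX_EXPECTED_BRAKING = 4.5  # m/s^2, firm-but-plausible braking (assumed)

def is_likely_runner(speed_mps, dist_to_line_m):
    """Required constant deceleration to stop in dist: a = v^2 / (2d)."""
    if dist_to_line_m <= 0:
        # Already at or past the line: a runner only if still moving.
        return speed_mps > 0.5
    required_decel = speed_mps ** 2 / (2.0 * dist_to_line_m)
    return required_decel > MAX_EXPECTED_BRAKING

# 15 m/s (~34 mph) with 20 m to the line needs ~5.6 m/s^2 -> likely runner
print(is_likely_runner(15.0, 20.0))
# Same speed with 40 m to go needs only ~2.8 m/s^2 -> can still stop
print(is_likely_runner(15.0, 40.0))
```

Comparing the observed state against an expected braking envelope lets the system flag a runner while the light is still red, earlier than waiting for the vehicle to actually cross the line.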

Press the “Video Record” button on the top bar UI to share your feedback. When pressed, your vehicle’s external cameras will share a short VIN-associated Autopilot Snapshot with the Tesla engineering team to help make improvements to FSD. You will not be able to view the clip.

Don’t hesitate to contact us with news tips. Just send a message to simon@teslarati.com to give us a heads up.


Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips, or even just to say a simple hello, send a message to his email, simon@teslarati.com, or his handle on X, @ResidentSponge.



Tesla confirmed HW3 can’t do Unsupervised FSD but there’s more to the story

Tesla confirmed HW3 vehicles cannot run unsupervised FSD, replacing its free upgrade promise with a discounted trade-in.


Tesla has officially confirmed that early vehicles with its Autopilot Hardware 3 (HW3) will not be capable of unsupervised Full Self-Driving, while extending a path forward for legacy owners through a discounted trade-in program. The announcement came by way of Elon Musk during today’s Tesla Q1 2026 earnings call.

The history here matters. HW3 launched in April 2019, and Tesla sold Full Self-Driving packages to owners on the understanding that the hardware was sufficient for full autonomy. Some owners paid between $8,000 and $15,000 for FSD during that period. For years, as FSD’s AI models grew more demanding, HW3 vehicles fell progressively further behind, eventually landing on FSD v12.6 in January 2025 while AI4 vehicles moved to v13 and then v14. Musk acknowledged in January 2025 that HW3 simply could not reach unsupervised operation, alluding to a difficult hardware retrofit.

The near-term offering is more concrete. Tesla’s head of Autopilot Ashok Elluswamy confirmed on today’s call that a V14-lite will be coming to HW3 vehicles in late June, bringing all the V14 features currently running on AI4 hardware. That is a meaningful software update for owners who have been frozen at v12.6 for over a year, and it represents a genuine effort to keep older hardware relevant. Unsupervised FSD is now targeted for Q4 2026 at the earliest, with Musk describing it as a gradual, geography-limited rollout.

For HW3 owners, the over-the-air V14-lite update is welcome news, and the discounted trade-in path at least acknowledges an old obligation. What happens next with the trade-in pricing will define how this chapter ultimately gets written. If Tesla prices the hardware path fairly, acknowledges what early adopters are owed, and delivers V14-lite on the June timeline it committed to today, it has a real opportunity to convert one of the longest-running sore subjects among early adopters into a loyalty story.


Tesla isn’t joking about building Optimus at an industrial scale: Here we go

Tesla’s Optimus factory in Texas targets 10 million robots yearly, with 5.2 million square feet under construction.


Tesla’s Q1 2026 Update Letter, released today, confirms that first-generation Optimus production lines are now well underway at its Fremont, California factory, with a pilot line targeting one million robots per year to start. More notable is a shared aerial image of a large piece of land adjacent to Gigafactory Texas, which Tesla has prominently labeled “Optimus factory site preparation.”

Permit documents show Tesla is seeking to add over 5.2 million square feet of new building space to the Giga Texas North Campus by the end of 2026, at an estimated construction investment of $5 billion to $10 billion. The longer term production target for that facility is 10 million Optimus units per year. Giga Texas already sits on 2,500 acres with over 10 million square feet of existing factory floor, and the North Campus expansion is being built to support multiple projects, including the dedicated Optimus factory, the Terafab chip fabrication facility (a joint Tesla/SpaceX/xAI venture), a Cybercab test track, road infrastructure, and supporting facilities.

Credit: Tesla

Texas makes strategic sense beyond the existing infrastructure. The state’s tax structure, lower labor costs relative to California, and the proximity to Tesla’s AI training clusters Cortex 1 and 2, both located at Giga Texas and now totaling over 230,000 H100-equivalent GPUs, mean the Optimus software stack and the factory producing the hardware will share the same campus. Tesla’s Q1 report also confirmed completion of the AI5 chip tape-out in April, the inference processor designed specifically to power Optimus units in the field.

As Teslarati reported, the Texas facility is intended to house Optimus V4 production at full scale. Musk told the World Economic Forum in January that Tesla plans to sell Optimus to the public by the end of 2027 at a price between $20,000 and $30,000, stating, “I think everyone on earth is going to have one and want one.” He has previously pegged long-term demand for general purpose humanoid robots at over 20 billion units globally, citing both consumer and industrial use cases.


Tesla (TSLA) Q1 2026 earnings results: beat on EPS and revenues


Credit: Tesla

Tesla (NASDAQ: TSLA) reported its earnings for the first quarter of 2026 on Wednesday afternoon. Here’s what the company reported compared to what Wall Street analysts expected.

The earnings results come after Tesla reported a miss on vehicle deliveries for the first quarter, delivering 358,023 vehicles and building 408,386 cars during the three-month span.

As Tesla transitions more toward AI and positions itself as less of a car company, delivery figures will become less central to how each quarter is perceived.

Nevertheless, Tesla is leaning on its strong foundation as a car company to carry forward its AI ambitions. The first quarter lays solid groundwork for the rest of the year.

Tesla Q1 2026 Earnings Results

Tesla’s Earnings Results are as follows:

  • Non-GAAP EPS – $0.41 Reported vs. $0.36 Expected
  • Revenues – $22.387 billion vs. $22.35 billion Expected
  • Free Cash Flow – $1.444 billion
  • Profit – $4.72 billion

Tesla beat analyst expectations, so it will be interesting to see how the stock responds. In the past, we’ve seen Tesla beat analyst expectations considerably, followed by a sharp drop in stock price.

By the same token, we’ve seen Tesla miss and the stock price go up the following trading session.

Tesla will hold its Q1 2026 Earnings Call in about 90 minutes at 5:30 p.m. on the East Coast. Remarks will be made by CEO Elon Musk and other executives, who will shed some light on the investor questions that we covered earlier this week.

You can stream it below. Additionally, we will be doing our Live Blog on X and Facebook.
