
News

Tesla FSD Beta 10.69.2.2 extending to 160k owners in US and Canada: Elon Musk

Credit: Whole Mars Catalog


It appears that after several iterations and adjustments, FSD Beta 10.69 is ready to roll out to the greater FSD Beta program. Elon Musk mentioned the update on Twitter, with the CEO stating that v10.69.2.2 should extend to 160,000 owners in the United States and Canada.

Similar to his other announcements about the FSD Beta program, Musk’s comments were posted on Twitter. “FSD Beta 10.69.2.1 looks good, extending to 160k owners in US & Canada,” Musk wrote before correcting himself and clarifying that he was talking about FSD Beta 10.69.2.2, not v10.69.2.1. 

While Elon Musk has a known tendency to be extremely optimistic in his FSD Beta-related statements, his comments about v10.69.2.2 do reflect observations from some of the program’s longtime members. Veteran FSD Beta tester @WholeMarsBlog, who does not shy away from criticizing the system when it does not work well, noted that he has needed only minimal takeovers with v10.69.2.2. Fellow FSD Beta tester @GailAlfarATX reported similar observations.

Tesla definitely seems to be pushing to release FSD to its fleet. Recent comments from Tesla’s Senior Director of Investor Relations Martin Viecha during an invite-only Goldman Sachs tech conference have hinted that the electric vehicle maker is on track to release “supervised” FSD around the end of the year. That’s around the same time as Elon Musk’s estimate for FSD’s wide release. 


It should be noted, of course, that even if Tesla manages to release “supervised” FSD to consumers by the end of the year, this version of the advanced driver-assist system would still require drivers to pay attention to the road and follow proper driving practices. With a feature-complete “supervised” FSD, however, Teslas would be able to navigate on their own regardless of whether they are on the highway or on inner-city streets. And that, ultimately, is a feature that will be extremely hard to beat.

Following are the release notes of FSD Beta v10.69.2.2, as retrieved by NotaTeslaApp:

– Added a new “deep lane guidance” module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.

– Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.
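The latency-compensation idea in that note can be sketched in a few lines: plan from an ego state rolled forward by each channel’s actuation delay, rather than from the measured state. The latency values, the simple kinematic model, and the names below are invented for illustration; they are not Tesla’s implementation.

```python
# Illustrative sketch (not Tesla's code): separately account for steering and
# accel/brake actuation latency by forward-predicting the ego state before
# trajectory planning. All constants are assumed values.
from dataclasses import dataclass

@dataclass
class EgoState:
    x: float        # position along path (m)
    v: float        # speed (m/s)
    heading: float  # heading (rad)

STEERING_LATENCY_S = 0.15  # assumed command-to-actuation delay for steering
ACCEL_LATENCY_S = 0.30     # assumed delay for accel/brake actuation

def predict_forward(state: EgoState, accel_cmd: float, yaw_rate: float) -> EgoState:
    """Roll the ego state forward by each channel's latency so the planner
    optimizes a trajectory starting where the car will actually be when its
    commands take effect."""
    # Longitudinal: speed and position keep evolving under the currently
    # applied acceleration until a new command bites.
    v = state.v + accel_cmd * ACCEL_LATENCY_S
    x = state.x + state.v * ACCEL_LATENCY_S + 0.5 * accel_cmd * ACCEL_LATENCY_S ** 2
    # Lateral: heading keeps changing under the current yaw rate for the
    # (shorter) steering latency.
    heading = state.heading + yaw_rate * STEERING_LATENCY_S
    return EgoState(x=x, v=v, heading=heading)

start = predict_forward(EgoState(x=0.0, v=10.0, heading=0.0),
                        accel_cmd=2.0, yaw_rate=0.1)
```

Planning from `start` instead of the raw measurement is what lets the downstream controller track the trajectory without fighting its own delays.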


– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic (“Chuck Cook style” unprotected left turns). This was done by allowing optimisable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.

– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.
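The per-voxel velocity idea can be illustrated with a toy occupancy grid. The grid size, thresholds, and helper below are assumptions for illustration, not Tesla’s network:

```python
# Illustrative sketch: an occupancy grid where each 3D voxel carries an
# occupancy probability plus a predicted velocity vector, letting a planner
# treat slow-moving occupied volumes as obstacles even when no detector
# assigns them a cuboid. All shapes and thresholds are invented.
import numpy as np

GRID = (4, 4, 2)                      # toy voxel grid (x, y, z)
occupancy = np.zeros(GRID)            # P(occupied) per voxel
velocity = np.zeros(GRID + (3,))      # predicted (vx, vy, vz) per voxel (m/s)

# Mark a small occupied volume drifting slowly in +x.
occupancy[1:3, 1:3, 0] = 0.9
velocity[1:3, 1:3, 0] = [0.5, 0.0, 0.0]

def slow_moving_voxels(occ, vel, occ_thresh=0.5, speed_max=2.0):
    """Indices of voxels that are likely occupied and moving slowly:
    exactly the volumes the planner must yield to."""
    speed = np.linalg.norm(vel, axis=-1)
    mask = (occ > occ_thresh) & (speed > 0.0) & (speed < speed_max)
    return np.argwhere(mask)

vox = slow_moving_voxels(occupancy, velocity)
```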

– Upgraded Occupancy Network to use video instead of images from a single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.

– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.


– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.

– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.

– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.

– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.


– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.

– Enabled creeping for visibility at any intersection where objects might cross ego’s path, regardless of presence of traffic controls.

– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.

– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.


– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

– Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.

– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego’s lane, especially in intersections or cut-in scenarios.

– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.


– Reduced latency when starting from a stop by accounting for lead vehicle jerk.

– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.
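The kinematic check described in that last note reduces to simple physics: if stopping before the line would require more deceleration than any plausible braking profile, the vehicle is probably going to run the light. A minimal sketch, with invented thresholds:

```python
# Illustrative sketch (thresholds are assumed, not Tesla's): flag a crossing
# vehicle as a likely red-light runner when the deceleration required to stop
# at the line exceeds a plausible hard-braking limit.

MAX_DECEL = 7.0  # m/s^2, assumed hard-braking limit for a typical vehicle

def likely_red_light_runner(speed_mps: float, dist_to_line_m: float) -> bool:
    """True if stopping before the stop line would require braking harder
    than the assumed limit (required decel = v^2 / (2 * d))."""
    if dist_to_line_m <= 0.0:
        return True  # already past the line while the light is red
    required_decel = speed_mps ** 2 / (2.0 * dist_to_line_m)
    return required_decel > MAX_DECEL

likely_red_light_runner(20.0, 20.0)  # needs 10 m/s^2 to stop -> True
likely_red_light_runner(10.0, 30.0)  # needs ~1.7 m/s^2 -> False
```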

Press the “Video Record” button on the top bar UI to share your feedback. When pressed, your vehicle’s external cameras will share a short VIN-associated Autopilot Snapshot with the Tesla engineering team to help make improvements to FSD. You will not be able to view the clip.

Don’t hesitate to contact us with news tips. Just send a message to simon@teslarati.com to give us a heads up.


Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips, or even just to say a simple hello, send a message to his email, simon@teslarati.com, or his handle on X, @ResidentSponge.


News

Tesla’s troublesome Auto Wipers get a major upgrade

Tesla has quietly deployed a major over-the-air (OTA) update across its entire fleet, implementing a new patent that could finally solve one of the most complained-about features in its vehicles: the Auto Wipers.


One of Tesla’s most complained-about features, the Auto Wipers, has recently received a major upgrade that impacts every vehicle in the company’s fleet, a company executive confirmed.


Confirmed by senior Tesla AI engineer Yun-Ta Tsai on April 10, the improvement is based on patent US 20260097742 A1. It introduces an “energy balance model” that adds a tactile, physics-driven layer to the existing camera-based system—without requiring any new hardware.

Tesla drivers have griped about auto wipers since the company ditched traditional rain sensors in favor of Tesla Vision around 2018.

Owners routinely report the wipers failing to activate in light drizzle or mist, leaving windshields streaked and visibility dangerously reduced. Just as often, the wipers would blast into high-speed mode on dry, sunny days, screeching across the glass and risking scratches or premature blade wear.

Such failures are rarer these days, but many owners still report the wipers running at the wrong speed or frequency when precipitation is falling.

Tesla has tried repeatedly to fix the problem through software alone.

Early “Deep Rain” initiatives and the 2023 Autowiper v4 update used multi-camera video and refined neural networks, with Elon Musk promising “super good” performance. The 2024.14 update added manual sensitivity boosts, and later FSD versions claimed further gains. Yet complaints persisted.


Vision systems struggle with edge cases—glare, bugs, reflections, or faint mist—because they rely purely on visual inference rather than physical detection.

The new patent takes a different approach. The car’s computer constantly measures electrical power delivered to the wiper motor. It subtracts predictable losses—internal motor friction, linkage drag, and aerodynamic resistance—leaving only the friction force between the rubber blade and windshield glass.

Water lubricates the glass, sharply reducing friction; dry or icy surfaces increase it dramatically. This real-time “tactile” data acts as an independent check on the camera’s visual cues, instantly shutting down false triggers on dry glass and fine-tuning speed for actual rain.
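A minimal sketch of that energy-balance reasoning, with all loss models and thresholds invented for illustration (the patent’s actual models are not public in this detail):

```python
# Hypothetical sketch of the "energy balance" idea: subtract modeled losses
# from measured electrical power to isolate blade-on-glass friction, then use
# low friction as evidence of a lubricated (wet) windshield. Loss coefficients
# and thresholds are assumptions, not values from the patent.

def blade_friction_power(p_electrical_w: float, wiper_speed: float,
                         motor_loss_w: float = 5.0,
                         linkage_loss_coeff: float = 2.0,
                         aero_loss_coeff: float = 0.5) -> float:
    """Residual power (W) attributable to blade-on-glass friction."""
    linkage_loss = linkage_loss_coeff * wiper_speed     # assumed linear in speed
    aero_loss = aero_loss_coeff * wiper_speed ** 2      # assumed quadratic
    return p_electrical_w - motor_loss_w - linkage_loss - aero_loss

def surface_state(friction_w: float) -> str:
    # Water lubricates the glass (low friction); ice raises friction sharply.
    if friction_w < 4.0:
        return "wet"
    if friction_w > 20.0:
        return "icy"
    return "dry"

f = blade_friction_power(p_electrical_w=12.0, wiper_speed=1.0)  # 12 - 5 - 2 - 0.5 = 4.5 W
```

The appeal of the approach is that the same residual-friction signal drives several behaviors: suppressing false wipes on dry glass, triggering defrost on ice, and tracking blade wear over time.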

The system can also detect ice and auto-activate defrost heaters, while long-term friction trends alert drivers when blades need replacing.

By fusing vision with precise motor-load physics, Tesla has created a hybrid sensor that is both elegant and cost-free. Owners have waited years for reliable auto wipers; this OTA rollout may finally deliver them.


News

Tesla Roadster unveiling set for this month: what to expect

As Tesla finally edges toward production and an updated reveal, enthusiasts aren’t asking for compromises; they’re demanding the original vision be honored. Here are five clear expectations that will come with the vehicle’s unveiling, which is still set for later this month, hopefully.


Tesla Roadster at Tesla Battery Day 2020 Credit: @BLKMDL3 | Twitter

The Tesla Roadster has been the ultimate carrot on a stick since its 2017 unveiling. Promised as the fastest production car ever made, with 0-60 mph in under two seconds and a top speed over 250 mph, it has endured years of delays.


Performance and Safety Do Not Go Hand in Hand, and That’s the Point

The Roadster is not a family sedan or a daily commuter. It is a no-holds-barred supercar meant to embarrass six-figure exotics on track days. Tesla should resist the temptation to load it with every passive-safety nanny and electronic guardian that dulls the raw feedback drivers crave.

Owners want to feel the road, not be shielded from it. Strip away unnecessary electronic limits so the car can deliver the visceral thrill Elon Musk originally described. Safety ratings will still be strong because of Tesla’s structural excellence, but the Roadster’s mission is speed, not coddling.

He said late last year:

“This is not a…safety is not the main goal. If you buy a Ferrari, safety is not the number one goal. I say, if safety is your number one goal, do not buy the Roadster…We’ll aspire not to kill anyone in this car. It’ll be the best of the last of the human-driven cars. The best of the last.”

Musk was clear that this will not be a car that will be the safest in Tesla’s lineup, but that’s the point. It’s not made for anything other than pushing the limits.

Tesla Needs to Come Through on a HUGE Feature

The Roadster unveiling would be wildly disappointing if the car were only capable of driving. Tesla has long teased the potential ability to float or hover, and it needs to come through on something along those lines.

The SpaceX cold-gas thruster package was never a joke. Musk, at one time, explicitly said owners could opt for a set of thrusters capable of lifting the car off the ground for short hops or dramatic launches. That feature is what separates the Roadster from every other hypercar on the planet.

If the production version arrives without it—or with a watered-down “maybe later” version—enthusiasts will feel betrayed. Deliver the thrusters, make them functional, and let the Roadster literally hover above the competition.

An Updated Design Might Be Warranted

It’s been nine years since Tesla first rolled out the next-gen Roadster design and showed it to the world.

The 2017 concept still looks sharp, but nine years is an eternity in automotive styling. The sharp lines and aggressive stance now compete against the angular Cybertruck and the next-generation vehicles rolling out of Fremont and Austin.


A subtle refresh, maybe with sharper headlights, revised aero elements, and modern materials, would keep the Roadster feeling current without losing its identity. Fans don’t want a complete redesign, just enough evolution to prove Tesla still cares.

Self-Driving Isn’t a Necessity for the Tesla Roadster

Full Self-Driving hardware and software belong in the Model 3, Model Y, and the upcoming robotaxi—not in a two-seat rocket built for canyon carving. The Roadster’s entire appeal is the direct connection between driver, steering wheel, and asphalt.

Offering FSD as standard would dilute the purity that separates it from every other Tesla. Make autonomy an optional delete or simply omit it. Let the Roadster remain the purest driving machine in the lineup, because that’s what it is all about.

Tesla Needs to Come Through on the Unveiling Timeline

The last thing Tesla needs right now is another complaint about not hitting timelines or expectations. This unveiling has already been pushed back one time, from April 1 to “probably in late April.”

Repeated delays have tested even the most patient fans. Whatever date the company now sets for the next major reveal or start of production must be met. No more “next year” promises. The Roadster has waited long enough. When it finally arrives, it must feel worth every extra month.

If Tesla hits these five marks, the Roadster won’t just be another fast car—it will be the machine that redefines what a Tesla can be. The world is watching.


News

Tesla Cabin Camera gets an incredible new feature for added driver safety

The company quietly expanded the capabilities of its in-cabin camera with the rollout of Software Update 2026.8.6. Tesla hacker greentheonly revealed that code in the update shows the system now tracking the driver’s age.


Tesla's Cabin-facing camera is used to monitor driver attentiveness. (Credit: Andy Slye/YouTube)

Tesla’s interior Cabin-facing Camera just got a brand-new feature that adds yet another layer of safety.


The camera, which is positioned just above the rearview mirror, is now performing facial analysis to estimate the driver’s age. While not yet user-facing, the feature is the latest example of Tesla’s ongoing push to refine its driver monitoring system for both everyday safety and future Robotaxi operations.

The cabin camera already processes images entirely onboard the vehicle for privacy, sharing data with Tesla only if owners enable it during safety-critical events.

Age estimation likely uses computer vision to classify facial features, similar to existing attention-tracking algorithms. Potential applications include preventing underage drivers from engaging Full Self-Driving (FSD) or shifting into drive, acting as a secondary safety lock.

It could also be linked to Robotaxi readiness: the upcoming Cybercab will need robust occupant verification to ensure children cannot hail or ride unsupervised.

In consumer vehicles, it could enable tailored FSD behaviors—more conservative acceleration and braking for elderly drivers, for instance—or simply block unauthorized use by minors.

Beyond age checks, the cabin camera powers Tesla’s comprehensive driver monitoring system, introduced years earlier and continuously improved. It first gained prominence for detecting inattentiveness. When Autopilot or FSD is active, the camera tracks eye gaze, head position, and steering inputs in real time.

If the driver looks away too long or fails to keep their hands ready, the system issues escalating visual and audible alerts before disengaging assistance. This has dramatically reduced misuse cases and helped Tesla meet stricter regulatory demands for hands-on supervision.

The camera also monitors for drowsiness. Activated above roughly 40 mph (65 km/h) after at least 10 minutes of manual driving, the Driver Drowsiness Warning analyzes facial cues—frequency of yawns and blinks—alongside driving patterns like lane drifting or erratic steering.

When fatigue is detected, a clear on-screen message and chime prompt the driver to pull over and rest, or even to activate Full Self-Driving. Tesla explicitly states this feature enhances active safety without relying on facial recognition for identity.
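The activation gate and cue fusion described above can be sketched as follows. Apart from the roughly 40 mph / 10-minute gate quoted in the text, every threshold and weight below is an assumption, and the scoring function is invented for illustration:

```python
# Illustrative sketch (not Tesla's algorithm): a drowsiness warning that is
# only armed above ~40 mph after at least 10 minutes of manual driving, then
# fuses facial cues into a toy score. Weights/thresholds are assumed values.

def drowsiness_warning(speed_mph: float, manual_minutes: float,
                       yawns_per_min: float, blinks_per_min: float) -> bool:
    # Activation gate: feature is off at low speed or early in the drive.
    if speed_mph < 40.0 or manual_minutes < 10.0:
        return False
    # Toy cue fusion: frequent yawning, or a blink rate well above a normal
    # baseline (~20/min assumed), pushes the score over threshold.
    score = 0.6 * yawns_per_min + 0.05 * max(0.0, blinks_per_min - 20.0)
    return score > 1.0

drowsiness_warning(65.0, 25.0, yawns_per_min=2.0, blinks_per_min=30.0)  # True
drowsiness_warning(30.0, 25.0, yawns_per_min=2.0, blinks_per_min=30.0)  # gated off -> False
```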

These layered capabilities create a robust safety net. Inattentiveness detection alone has curbed distracted driving during assisted operation. Drowsiness alerts address a leading cause of highway crashes by intervening before impairment escalates.

Adding age verification extends this protection: it could flag inexperienced young drivers for extra caution or restrict high-autonomy features, while preparing vehicles for a future where robotaxis must safely manage passengers of all ages.

With privacy safeguards intact and processing done locally, Tesla’s cabin camera continues evolving from a simple attention monitor into a sophisticated guardian—advancing safer roads today and autonomous mobility tomorrow.
