
News

Tesla FSD Beta 10.69.2.2 extending to 160k owners in US and Canada: Elon Musk

Credit: Whole Mars Catalog


It appears that after several iterations and adjustments, FSD Beta 10.69 is ready to roll out to the greater FSD Beta program. Elon Musk mentioned the update on Twitter, with the CEO stating that v10.69.2.2 should extend to 160,000 owners in the United States and Canada.

As with his other announcements about the FSD Beta program, Musk posted his comments on Twitter. “FSD Beta 10.69.2.1 looks good, extending to 160k owners in US & Canada,” Musk wrote, before correcting himself and clarifying that he was referring to FSD Beta 10.69.2.2, not v10.69.2.1.

While Elon Musk has a known tendency to be extremely optimistic in his FSD Beta-related statements, his comments about v10.69.2.2 do reflect observations from some of the program’s longtime members. Veteran FSD Beta tester @WholeMarsBlog, who does not shy away from criticizing the system when it does not work well, noted that takeovers with v10.69.2.2 have been minimal. Fellow FSD Beta tester @GailAlfarATX reported similar observations.

Tesla definitely seems to be pushing to release FSD to its fleet. Recent comments from Tesla’s Senior Director of Investor Relations Martin Viecha during an invite-only Goldman Sachs tech conference have hinted that the electric vehicle maker is on track to release “supervised” FSD around the end of the year. That’s around the same time as Elon Musk’s estimate for FSD’s wide release. 

It should be noted, of course, that even if Tesla manages to release “supervised” FSD to consumers by the end of the year, the version of the advanced driver-assist system would still require drivers to pay attention to the road and follow proper driving practices. With a feature-complete “supervised” FSD, however, Teslas would be able to navigate on their own regardless of whether they are on the highway or on inner-city streets. And that, ultimately, is a feature that will be extremely hard to beat.


Following are the release notes of FSD Beta v10.69.2.2, as retrieved by Not a Tesla App:

– Added a new “deep lane guidance” module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.
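Tesla has not published the architecture behind this module, but the general idea of fusing camera-derived features with coarse map priors can be illustrated with a minimal, purely hypothetical PyTorch sketch. The module names, dimensions, and output head below are assumptions for illustration, not Tesla’s code.

```python
# Illustrative sketch only: fusing pooled video features with coarse map
# priors (lane count, connectivity flags) to predict lane topology.
# All names and dimensions are assumptions, not Tesla's implementation.
import torch
import torch.nn as nn


class LaneGuidanceFusion(nn.Module):
    def __init__(self, video_dim=256, map_dim=16, hidden=128, num_topologies=32):
        super().__init__()
        # Embed coarse map attributes (e.g. lane count, connectivity flags).
        self.map_encoder = nn.Sequential(nn.Linear(map_dim, hidden), nn.ReLU())
        # Fuse video features with the map embedding and classify lane topology.
        self.head = nn.Sequential(
            nn.Linear(video_dim + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_topologies),
        )

    def forward(self, video_features, map_features):
        fused = torch.cat([video_features, self.map_encoder(map_features)], dim=-1)
        return self.head(fused)  # logits over candidate lane topologies


# Example: a batch of 4 pooled video feature vectors plus coarse map vectors.
model = LaneGuidanceFusion()
logits = model(torch.randn(4, 256), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 32])
```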

– Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh maneuvers.
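To get a sense of what modeling actuation latency in trajectory planning means in practice, here is a rough, hypothetical sketch: the planner rolls the ego state forward by assumed steering and acceleration delays before planning, so the trajectory begins from where the car will actually be when its commands take effect. The latency values below are illustrative assumptions, not Tesla’s numbers.

```python
# Illustrative sketch only: propagate the ego state over separate steering and
# accel/brake actuation delays before planning. Latency values are made up.
from dataclasses import dataclass


@dataclass
class EgoState:
    x: float         # longitudinal position (m)
    v: float         # speed (m/s)
    a: float         # current acceleration (m/s^2)
    heading: float   # heading angle (rad)
    yaw_rate: float  # yaw rate (rad/s)


def predict_state_after_latency(state, steer_latency=0.15, accel_latency=0.30):
    """Roll the state forward independently over steering and accel delays."""
    # Longitudinal: commands issued now only take effect after accel_latency.
    v = state.v + state.a * accel_latency
    x = state.x + state.v * accel_latency + 0.5 * state.a * accel_latency ** 2
    # Lateral: heading keeps evolving at the current yaw rate during steer_latency.
    heading = state.heading + state.yaw_rate * steer_latency
    return EgoState(x=x, v=max(v, 0.0), a=state.a,
                    heading=heading, yaw_rate=state.yaw_rate)


now = EgoState(x=0.0, v=15.0, a=-1.0, heading=0.0, yaw_rate=0.05)
print(predict_state_after_latency(now))
```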

– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic (“Chuck Cook style” unprotected left turns). This was done by allowing optimisable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modeling of their future intent.
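The “optimizable initial jerk” described here is essentially a launch profile that ramps acceleration up very quickly when the car must clear ahead of fast cross traffic. A toy, purely illustrative calculation shows why that matters: the harder the launch, the smaller the gap in cross traffic the maneuver requires. The jerk and acceleration limits below are made-up values, not Tesla’s parameters.

```python
# Illustrative sketch only: a jerk-limited launch across a median crossover.
# Limits are assumptions chosen for the example.

def time_to_clear(distance, v0=0.0, jerk=8.0, a_max=4.0, dt=0.01):
    """Integrate a jerk-limited acceleration ramp until `distance` is covered."""
    t, x, v, a = 0.0, 0.0, v0, 0.0
    while x < distance:
        a = min(a + jerk * dt, a_max)  # ramp acceleration up at the jerk limit
        v += a * dt
        x += v * dt
        t += dt
    return t


# Time to traverse a 20 m crossover from a stop: an aggressive launch needs a
# noticeably smaller gap in cross traffic than a gentle one.
print(f"aggressive launch: {time_to_clear(20.0, jerk=8.0):.2f} s")
print(f"gentle launch:     {time_to_clear(20.0, jerk=2.0, a_max=2.0):.2f} s")
```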

– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.
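For readers unfamiliar with occupancy networks, the idea of controlling for arbitrary low-speed moving volumes can be illustrated with a toy example: every occupied voxel carries a velocity estimate, and voxels with meaningful motion form moving volumes the planner can yield to, even when the shape fits no cuboid. The grid sizes and thresholds below are illustrative assumptions only.

```python
# Illustrative sketch only: per-voxel occupancy plus a per-voxel velocity field,
# split into "moving" and "static" volumes. Shapes and thresholds are made up.
import numpy as np

# Toy occupancy grid (X x Y x Z) with a 3D velocity vector per voxel (m/s).
occupancy = np.zeros((40, 40, 8), dtype=bool)
velocity = np.zeros((40, 40, 8, 3), dtype=float)

# Mark an irregular, slow-moving blob (the release notes' "slow-moving UFO").
occupancy[10:14, 20:29, 0:3] = True
velocity[10:14, 20:29, 0:3] = [0.6, 0.0, 0.0]

speed = np.linalg.norm(velocity, axis=-1)
moving = occupancy & (speed > 0.2)   # occupied voxels with meaningful motion
static = occupancy & ~moving         # everything else is treated as static

print(f"moving voxels: {moving.sum()}, static voxels: {static.sum()}")
```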


– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.

– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.
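The O(objects)-versus-O(space) distinction is easier to see in code: a dense first stage finds object centers, and a small second-stage head regresses kinematics only at those centers, so its cost grows with the number of detected objects rather than with the size of the spatial grid. The following PyTorch sketch is purely illustrative; the dimensions and head design are assumptions, not Tesla’s implementation.

```python
# Illustrative sketch only: gather one feature vector per detected object and
# run a compact kinematics head on those vectors (O(objects), not O(space)).
import torch
import torch.nn as nn

feature_map = torch.randn(1, 64, 100, 100)          # stage-1 dense BEV features
object_centers = torch.tensor([[12, 40], [87, 3]])  # detected cells (row, col)

# Gather per-object features from the dense map: shape (num_objects, 64).
obj_feats = feature_map[0, :, object_centers[:, 0], object_centers[:, 1]].T

kinematics_head = nn.Sequential(
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 4),  # e.g. vx, vy, acceleration, yaw rate per object
)
print(kinematics_head(obj_feats).shape)  # torch.Size([2, 4])
```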

– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.

– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.

– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.


– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.

– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.

– Enabled creeping for visibility at any intersection where objects might cross ego’s path, regardless of presence of traffic controls.

– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.

– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.


– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

– Improved recall of object detection, eliminating 26% of missing detections for far away crossing vehicles by tuning the loss function used during training and improving label quality.

– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego’s lane, especially in intersections or cut-in scenarios.

– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.

– Reduced latency when starting from a stop by accounting for lead vehicle jerk.


– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.
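The underlying check here is simple physics: if stopping at the line would require implausibly hard braking given a vehicle’s current speed and distance, it probably is not going to stop. Below is a rough, hypothetical sketch of that comparison; the deceleration threshold and margin are made-up values, not Tesla’s.

```python
# Illustrative sketch only: flag a likely red-light runner when the deceleration
# needed to stop at the line exceeds a plausible braking limit (assumed values).

def is_likely_red_light_runner(speed_mps, distance_to_line_m,
                               comfortable_decel=3.0, margin=1.5):
    """Return True if stopping would require implausibly hard braking."""
    if distance_to_line_m <= 0.0:
        return True  # already at or past the stop line while still moving
    required_decel = speed_mps ** 2 / (2.0 * distance_to_line_m)  # v^2 = 2ad
    return required_decel > comfortable_decel * margin


# A car doing 20 m/s (~45 mph) just 15 m from the line would need ~13 m/s^2:
print(is_likely_red_light_runner(20.0, 15.0))   # True
print(is_likely_red_light_runner(10.0, 40.0))   # False
```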

Press the “Video Record” button on the top bar UI to share your feedback. When pressed, your vehicle’s external cameras will share a short VIN-associated Autopilot Snapshot with the Tesla engineering team to help make improvements to FSD. You will not be able to view the clip.

Don’t hesitate to contact us with news tips. Just send a message to simon@teslarati.com to give us a heads up.

Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips--or even to just say a simple hello--send a message to his email, simon@teslarati.com or his handle on X, @ResidentSponge.


News

Tesla aims to combat common Full Self-Driving problem with new patent

Tesla writes in the patent that its autonomous and semi-autonomous vehicles are heavily reliant on camera systems to navigate and interact with their environment.


Credit: @samsheffer | X

Tesla is aiming to combat a common Full Self-Driving problem with a new patent.

One issue with Tesla’s vision-based approach is that sunlight glare can interfere with everyday driving. Full Self-Driving is certainly an impressive technology, but there are still problems Tesla is working to solve as development continues.

Unfortunately, glare is extremely difficult to work around. Even human drivers need ways to combat it, commonly relying on sunglasses or sun visors for better visibility.

Cameras obviously have no such options for fighting sun glare, but a recently published Tesla patent aims to address the problem with a “glare shield.”

Tesla writes in the patent that its autonomous and semi-autonomous vehicles are heavily reliant on camera systems to navigate and interact with their environment.

The ability to see their surroundings is crucial for accurate performance, and glare is one source of interference that has yet to be fully addressed.

The patent describes a shield that utilizes “a textured surface composed of an array of micro-cones, or cone-shaped formations, which serve to scatter incident light in various directions, thereby reducing glare and improving camera vision.”

The patent was first spotted by Not a Tesla App.

The design of the micro-cones is the first piece of the puzzle in fighting excess glare. The patent says they are “optimized in size, angle, and orientation to minimize Total Hemispherical Reflectance (THR) and reflection penalty, enhancing the camera’s ability to accurately interpret visual data.”

Additionally, there is an electromechanical system for dynamic orientation adjustment, which will allow the micro-cones to move based on the angle of external light sources.

The glare shield is not the only solution Tesla is mulling for sunlight glare, as the company has also explored two other ways to combat the problem. One approach it has discussed is direct photon counting.

CEO Elon Musk said during the Q2 Earnings Call:

“We use an approach which is direct photon count. When you see a processed image, so the image that goes from the sort of photon counter — the silicon photon counter — that then goes through a digital signal processor or image signal processor, that’s normally what happens. And then the image that you see looks all washed out, because if you point the camera at the sun, the post-processing of the photon counting washes things out.”

Future hardware iterations, such as Hardware 5 and Hardware 6, could also integrate solutions like neutral density filters or heated lenses to address glare more effectively.


Elon Musk

Delaware Supreme Court reinstates Elon Musk’s 2018 Tesla CEO pay package

The unanimous decision criticized the prior total rescission as “improper and inequitable,” arguing that it left Musk uncompensated for six years of transformative leadership at Tesla.


Credit: Gage Skidmore, CC BY-SA 4.0, via Wikimedia Commons

The Delaware Supreme Court has overturned a lower court ruling, reinstating Elon Musk’s 2018 compensation package originally valued at $56 billion but now worth approximately $139 billion due to Tesla’s soaring stock price. 

The unanimous decision criticized the prior total rescission as “improper and inequitable,” arguing that it left Musk uncompensated for six years of transformative leadership at Tesla. Musk quickly celebrated the outcome on X, stating that he felt “vindicated.” He also expressed his gratitude to TSLA shareholders.

Delaware Supreme Court makes a decision

In a 49-page ruling Friday, the Delaware Supreme Court reversed Chancellor Kathaleen McCormick’s 2024 decision that voided the 2018 package over alleged board conflicts and inadequate shareholder disclosures. The high court acknowledged varying views on liability but agreed rescission was excessive, stating it “leaves Musk uncompensated for his time and efforts over a period of six years.”

The 2018 plan granted Musk options on about 304 million shares upon hitting aggressive milestones, all of which were achieved ahead of schedule. Shareholders overwhelmingly approved it initially in 2018 and ratified it once again in 2024 after the Delaware lower court struck it down. The case against Musk’s 2018 pay package was filed by plaintiff Richard Tornetta, who held just nine shares when the compensation plan was approved.

A hard-fought victory

As noted in a Reuters report, Tesla’s win avoids a potential $26 billion earnings hit from replacing the award at current prices. Tesla, now Texas-incorporated, had hedged with interim plans, including a November 2025 shareholder-approved package potentially worth $878 billion tied to Robotaxi and Optimus goals and other extremely aggressive operational milestones.


The saga surrounding Elon Musk’s 2018 pay package ultimately damaged Delaware’s corporate appeal, prompting a number of high-profile firms, such as Dropbox, Roblox, Trade Desk, and Coinbase, to follow Tesla out of the state. What added more fuel to the issue was the fact that Tornetta’s legal team, following the lower court’s 2024 decision, submitted a fee request for more than $5.1 billion worth of TSLA stock, equal to an hourly rate of over $200,000.



News

Tesla Cybercab tests are going into overdrive with production-ready units

Tesla is ramping up its real-world tests of the Cybercab, with multiple sightings of the vehicle being reported across social media this week.


Credit: @JT59052914/X

Tesla is ramping up its real-world tests of the Cybercab, with multiple sightings of the autonomous two-seater being reported across social media this week. Based on videos of the vehicle that have been shared online, it appears that Cybercab tests are underway across multiple states.

Recent Cybercab sightings

Reports of Cybercab tests have ramped up this week, with a vehicle that looked like a production-ready prototype being spotted at Apple’s Visitor Center in California. Interestingly, the vehicle in this sighting was equipped with a steering wheel. It also featured some changes to the design of its brake lights.

The Cybercab was also filmed testing at the Fremont factory’s test track, in what appeared to be another production-ready vehicle. The same seemed true of a Cybercab spotted undergoing real-world tests in Austin, Texas. Overall, these sightings suggest that Cybercab testing is fully underway and that the vehicle is steadily moving toward production.

Production design all but finalized?

Recently, a near-production-ready Cybercab was showcased at Tesla’s Santana Row showroom in San Jose. The vehicle was equipped with frameless windows, dual windshield wipers, powered butterfly door struts, an extended front splitter, an updated lightbar, new wheel covers, and a license plate bracket. Interior updates include redesigned dash/door panels, refined seats with center cupholders, updated carpet, and what appeared to be improved legroom.

There seems to be a pretty good chance that the Cybercab’s design has been all but finalized, at least considering Elon Musk’s comments at the 2025 Annual Shareholder Meeting. During the event, Musk confirmed that the vehicle will enter production around April 2026, and its production targets will be quite ambitious. 
