News
Stanford studies human impact when self-driving car returns control to driver
Researchers involved with the Stanford University Dynamic Design Lab have completed a study that examines how human drivers respond when an autonomous driving system returns control of a car to them. The Lab’s mission, according to its website, is to “study the design and control of motion, especially as it relates to cars and vehicle safety. Our research blends analytical approaches to vehicle dynamics and control together with experiments in a variety of test vehicles and a healthy appreciation for the talents and demands of human drivers.” The results of the study were published on December 6 in the first edition of the journal Science Robotics.
Holly Russell, lead author of the study and a former graduate student at the Dynamic Design Lab, says, “Many people have been doing research on paying attention and situation awareness. That’s very important. But, in addition, there is this physical change and we need to acknowledge that people’s performance might not be at its peak if they haven’t actively been participating in the driving.”
The report emphasizes that the DDL’s autonomous driving program is its own proprietary system and is not intended to mimic any particular autonomous driving system currently available from any automobile manufacturer, such as Tesla’s Autopilot.
The study found that “the handoff,” the moment when the computer returns control of a car to a human driver, can be a particularly risky period, especially if the vehicle’s speed has changed since the person last had direct control of the car. The amount of steering input required to accurately control a vehicle varies with speed: greater input is needed at slower speeds, while less movement of the wheel is required at higher speeds.
People learn over time how to steer accurately at all speeds based on experience. But when some time elapses during which the driver is not directly involved in steering the car, the researchers found that drivers require a brief period of adjustment before they can accurately steer the car again. The greater the speed change while the computer is in control, the more erratic the human drivers were in their steering inputs upon resuming control.
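The speed dependence described above can be illustrated with a simple kinematic bicycle model. This is an assumption of ours for illustration, not the model the Stanford team used, and the wheelbase and steering-ratio values are hypothetical typical figures. It shows why the same lane-change maneuver demands far less wheel movement at highway speed than in town.

```python
# Hedged sketch (not the Stanford model): a kinematic bicycle-model
# estimate of how much steering-wheel angle a fixed maneuver needs
# at different speeds.
import math

WHEELBASE_M = 2.7        # assumed typical sedan wheelbase
STEERING_RATIO = 15.0    # assumed steering-wheel-to-road-wheel ratio

def wheel_angle_deg(speed_mps: float, lateral_accel_mps2: float) -> float:
    """Steering-wheel angle needed to hold a given lateral acceleration.

    Kinematic relations: a_y = v^2 / R and road-wheel angle ~ L / R,
    so delta = L * a_y / v^2 (radians), scaled up by the steering ratio.
    """
    road_wheel_rad = WHEELBASE_M * lateral_accel_mps2 / speed_mps**2
    return math.degrees(road_wheel_rad) * STEERING_RATIO

# The same 3 m/s^2 lane-change maneuver at city vs. highway speed:
city = wheel_angle_deg(10.0, 3.0)      # ~10 m/s (36 km/h): large wheel input
highway = wheel_angle_deg(30.0, 3.0)   # ~30 m/s (108 km/h): small wheel input
```

Under these assumptions the required wheel angle drops by roughly a factor of nine when speed triples, which is the kind of recalibration a driver’s motor system must make after a handoff at a new speed.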
“Even knowing about the change, being able to make a plan and do some explicit motor planning for how to compensate, you still saw a very different steering behavior and compromised performance,” said Lene Harbott, co-author of the research and a research associate in the Revs Program at Stanford.
Handoff From Computer to Human
The testing was done on a closed course. The participants drove for 15 seconds on a course that included a straightaway and a lane change, then took their hands off the wheel and let the car drive them back to the start. After the drivers had familiarized themselves with the course over four laps, the researchers altered the car’s steering ratio at the start of the next lap. The change was designed to mimic the different steering inputs required at different speeds. The drivers then went around the course 10 more times.
Even though they were notified of the change to the steering ratio, the drivers’ steering maneuvers during those 10 laps differed significantly from their paths prior to the modification. At the end, the steering ratio was returned to its original setting and the drivers drove six more laps around the course. Again the researchers found the drivers needed a period of adjustment to steer the cars accurately.
The DDL experiment is very similar to a classic neuroscience experiment that assesses motor adaptation. In one version, participants use a hand control to move a cursor on a screen to specific points. The way the cursor moves in response to their control is adjusted during the experiment and they, in turn, change their movements to make the cursor go where they want it to go.
Just as in the driving test, people who take part in the experiment have to adjust to changes in how the controller moves the cursor, and they must adjust a second time if the original response relationship is restored. People can perform a version of this experiment themselves by adjusting the cursor speed on their personal computers.
“Even though there are really substantial differences between these classic experiments and the car trials, you can see this basic phenomena of adaptation and then after-effect of adaptation,” says Ilana Nisky, another co-author of the study and a senior lecturer at Ben-Gurion University in Israel. “What we learn in the laboratory studies of adaptation in neuroscience actually extends to real life.”
In neuroscience this is explained as a difference between explicit and implicit learning, Nisky explains. Even when a person is aware of a change, their implicit motor control is unaware of what that change means and can only figure out how to react through experience.
Federal and state regulators are currently working on guidelines that will apply to Level 5 autonomous cars. What the Stanford research shows is that until full autonomy becomes a reality, the handoff moment will represent a period of special risk, not because of any failing on the part of computers but rather because of limitations inherent in the brains of human drivers.
The best way to protect ourselves from that period of risk is to eliminate the handoff entirely by ceding total control of driving to computers as soon as possible.
Elon Musk
Elon Musk reveals date of Tesla Full Self-Driving’s next massive release
Initially planned for a January or February release, v14.3 aims to add some reasoning and logic to the decisions that Full Self-Driving makes, which could improve a lot of things, including Navigation, which is a major complaint of many owners currently.
Tesla CEO Elon Musk revealed the date of Full Self-Driving’s next massive release: v14.3.
For months, Tesla owners with Hardware 4 have been using Full Self-Driving v14.2 and its subsequent point releases. Currently, the most up-to-date FSD version is v14.2.2.5, which has drawn decidedly mixed reviews: with each release, some things get better, while others regress slightly.
For the most part, things are better in terms of overall behavior.
However, many owners have been looking forward to the next release, which is v14.3, about which Musk has said many great things. Back in November, Musk said that v14.3 “is where the last big piece of the puzzle lands.”
He added:
“We’re gonna add a lot of reasoning and RL (reinforcement learning). To get to serious scale, Tesla will probably need to build a giant chip fab. To have a few hundred gigawatts of AI chips per year, I don’t see that capability coming online fast enough, so we will probably have to build a fab.”
Tesla Full Self-Driving v14.2 is a considerable improvement from early versions of the suite, but we have written about the somewhat confusing updates that have come with recent versions.
The updates have been incredibly difficult to gauge in terms of progress: some things have improved, but there seems to be real regression in a handful of areas, especially confidence and assertiveness.
Musk confirmed today on X that Tesla is already testing v14.3 internally. It will see a wide release “in a few weeks,” so we should probably expect it by late April.
It’s in testing right now. Wide release in a few weeks.
— Elon Musk (@elonmusk) March 19, 2026
Overall, there are high hopes that v14.3 could be a true game changer for Tesla Full Self-Driving, as many believe it could be the version running on the Robotaxis in Austin, Texas, some of which operate driverless and unsupervised.
It could also include some major additions, including “Banish,” also referred to as “Reverse Summon,” a feature that would have the car find a parking spot on its own after dropping occupants off at their destination.
What Tesla will roll out, and exactly when it arrives, remains to be seen, but fans are ready for a new version, as v14.2.2.5 has run its course. Many readers have told us their biggest request is fixing Navigation errors, which seem to be among the most universal complaints from daily FSD users.
Cybertruck
Chattanooga Charge: Tesla and EV fans ready for the Southeast’s wildest Tesla party
From Cybertruck Convoys to Kid-Friendly Fun Zones: The Chattanooga Charge Has Something for Everyone
Hundreds of like-minded Tesla and EV enthusiasts are descending on Chattanooga Charge this weekend for the largest Tesla meet in the Southeast, taking place March 20–22, 2026, at the stunning Tennessee Riverpark.
If you were there last year, you’ll know it’s the ultimate experience: see the wildest Teslas in action, check out the best in EV tech, and, arguably the most fun of all, finally put names to faces and connect with your social media buddies IRL! Oh, and the epic nighttime Tesla light show transforms the Riverpark into something out of a sci-fi film; it’s unforgettable and must be seen in person.
This year’s event takes everything up a notch, with over 100 Cybertrucks expected to be on display, many sporting jaw-dropping modifications and custom wraps that push the boundaries of what these stainless steel beasts can look like.
Whether you’re a diehard Tesla fan, EV supporter, or just EV-mod-curious, the sheer spectacle is worth the drive.
The Chattanooga Charge doesn’t wait until Saturday morning to get started. The weekend technically kicks off Friday, March 20th, and the venue sets the tone immediately. Come share roadtrip stories over drinks at the W-XYZ Rooftop Bar on the top floor of the Aloft Chattanooga Hamilton Place Hotel, with sunset views over the city.
Come morning, nurse your hangover with some good coffee and convoy with hundreds of other Tesla and EV drivers through Chattanooga to the event for morning meet-and-greets before the speaker panel starts and the food trucks fire up.
Tesla owner clubs travel from across the country to be here, not just to show off their vehicles, but to connect, share, and celebrate a shared passion for the future of driving.
Sounds like a plan to me. See you there, guys. Don’t miss it. Get your tickets at ChattanoogaCharge.com and join the charge. 🔋⚡
Chattanooga Charge is a premier Tesla and EV gathering inspired by the X Takeover, known as one of the largest Tesla events. What began as a bold idea from the team at DIY Wraps/TESBROS, hosted in their hometown of Chattanooga, Tennessee, quickly became a movement across social media. The first annual Chattanooga Charge united over 16 Tesla clubs from 16 states, proof that the EV community was hungry for something big in the South. Year after year, the event has grown in scale, ambition, and heart.
News
Tesla Full Self-Driving gets latest bit of scrutiny from NHTSA
The analysis impacts roughly 3.2 million vehicles across the company’s entire lineup, and aims to identify how the suite’s degradation detection systems work and how effective they are when the cars encounter difficult visibility conditions.
The National Highway Traffic Safety Administration (NHTSA) has elevated its probe into Tesla’s Full Self-Driving (Supervised) suite to an Engineering Analysis.
The step up to an Engineering Analysis is often a prerequisite before the NHTSA will ask an automaker to issue a recall, though it is not a guarantee that a recall will follow.
🚨 The NHTSA said it was upgrading a probe into Tesla’s Full Self-Driving (Supervised) platform to an “engineering analysis”
It will examine 3.2 million vehicles and aims to determine its effectiveness in evaluating degraded road conditions pic.twitter.com/2dkrv1mR8o
— TESLARATI (@Teslarati) March 19, 2026
The NHTSA wants to examine Tesla FSD’s ability to assess road conditions with reduced visibility, as well as to detect degradation and alert the driver with sufficient time to respond.
The Office of Defects Investigation (ODI) will evaluate the performance of FSD in degraded roadway conditions and the updates or modifications Tesla makes to the degradation detection system, including the timing, purpose, and capabilities of the updates.
Tesla routinely ships software updates to improve the capabilities of the FSD suite, so it will be interesting to see if various versions of FSD are tested. Interestingly, you can find many examples from real-world users of FSD handling snow-covered roads, heavy rain, and single-lane backroads.
However, there are incidents that the NHTSA has used to determine the need for this probe, at least for now. The agency said:
“Available incident data raise concerns that Tesla’s degradation detection system, both as originally deployed and later updated, fails to detect and/or warn the driver appropriately under degraded visibility conditions such as glare and airborne obscurants. In the crashes that ODI has reviewed, the system did not detect common roadway conditions that impaired camera visibility and/or provide alerts when camera performance had deteriorated until immediately before the crash occurred.”
The report goes on to say that a review of Tesla’s responses revealed additional crashes in similar environments where FSD “did not detect a degraded state, and/or it did not present the driver with an alert with adequate time for the driver to react. In each of these crashes, FSD also lost track of or never detected a lead vehicle in its path.”
The next steps of the NHTSA Engineering Analysis require the agency to gather further information on Tesla’s attempts to upgrade the degradation detection system. It will also analyze six recent potentially related incidents.
The investigation is listed as EA26002.