
Stanford studies human impact when self-driving car returns control to driver


Researchers involved with the Stanford University Dynamic Design Lab have completed a study that examines how human drivers respond when an autonomous driving system returns control of a car to them. The Lab’s mission, according to its website, is to “study the design and control of motion, especially as it relates to cars and vehicle safety. Our research blends analytical approaches to vehicle dynamics and control together with experiments in a variety of test vehicles and a healthy appreciation for the talents and demands of human drivers.” The results of the study were published on December 6 in the first edition of the journal Science Robotics.

Holly Russell, lead author of the study and a former graduate student at the Dynamic Design Lab, says, “Many people have been doing research on paying attention and situation awareness. That’s very important. But, in addition, there is this physical change and we need to acknowledge that people’s performance might not be at its peak if they haven’t actively been participating in the driving.”

The report emphasizes that the DDL’s autonomous driving program is its own proprietary system and is not intended to mimic any particular autonomous driving system currently available from any automobile manufacturer, such as Tesla’s Autopilot.

The study found that the period of time known as “the handoff” — when the computer returns control of a car to a human driver — can be a particularly risky period, especially if the speed of the vehicle has changed since the person last had direct control of the car. The amount of steering input required to accurately control a vehicle varies with speed: greater input is needed at slower speeds, while less movement of the wheel is required at higher speeds.


People learn over time how to steer accurately at all speeds based on experience. But when some time elapses during which the driver is not directly involved in steering the car, the researchers found that drivers require a brief period of adjustment before they can accurately steer the car again. The greater the speed change while the computer is in control, the more erratic the human drivers were in their steering inputs upon resuming control.
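The pattern the researchers describe (steering errors spiking when the effective steering gain changes, decaying with practice, then spiking again when the original gain returns) can be sketched with a toy error-driven learner. Everything below is invented for illustration, including the gains, the learning rate, and the one-dimensional "curvature" target; it is not the DDL's model:

```python
# Toy model of a driver's implicit sense of how much wheel input
# produces how much turning. Purely illustrative; not from the study.

def simulate(true_gains, trials_per_phase=10, lr=0.3):
    """Return per-trial steering errors as the 'driver' adapts."""
    estimate = true_gains[0]               # driver starts well calibrated
    errors = []
    for true_gain in true_gains:
        for _ in range(trials_per_phase):
            wheel_input = 1.0 / estimate   # plan using the current internal model
            achieved = wheel_input * true_gain
            error = 1.0 - achieved         # target path curvature is 1.0
            errors.append(abs(error))
            estimate -= lr * error         # implicit, experience-driven update
    return errors

# Phase 1: familiar steering. Phase 2: effective gain doubles (as at a
# higher speed). Phase 3: original gain restored.
errors = simulate([1.0, 2.0, 1.0])
```

The errors sit near zero at first, jump when the gain changes, decay with practice, and jump again when the original gain comes back, mirroring the after-effect the study reports.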

“Even knowing about the change, being able to make a plan and do some explicit motor planning for how to compensate, you still saw a very different steering behavior and compromised performance,” said Lene Harbott, co-author of the research and a research associate in the Revs Program at Stanford.

Handoff From Computer to Human

The testing was done on a closed course. The participants drove for 15 seconds on a course that included a straightaway and a lane change, then took their hands off the wheel and the car took over, bringing them back to the start. After the drivers had familiarized themselves with the course over four laps, the researchers altered the steering ratio of the cars at the beginning of the next lap. The changes were designed to mimic the different steering inputs required at different speeds. The drivers then went around the course 10 more times.

Even though they were notified of the changes to the steering ratio, the drivers’ steering maneuvers during those 10 laps differed significantly from their paths prior to the modifications. At the end, the steering ratios were returned to the original settings and the drivers drove six more laps around the course. Again the researchers found the drivers needed a period of adjustment to steer the cars accurately.


The DDL experiment is very similar to a classic neuroscience experiment that assesses motor adaptation. In one version, participants use a hand control to move a cursor on a screen to specific points. The way the cursor moves in response to their control is adjusted during the experiment and they, in turn, change their movements to make the cursor go where they want it to go.

Just as in the driving test, people who take part in the experiment have to adjust to changes in how the controller moves the cursor. They also must adjust a second time if the original response relationship is restored. People can experience this effect themselves by adjusting the speed of the cursor on their personal computers.
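That classic paradigm can be sketched in a few lines. The parameters here, such as the 30-degree cursor rotation, the trial counts, and the fixed learning rate, are made up for illustration rather than taken from any actual study:

```python
# Minimal sketch of a visuomotor-adaptation experiment: the cursor's
# response to the hand is perturbed, and the participant's aim adapts.

def run_trials(rotations_deg, trials_per_phase=15, lr=0.2):
    """Participant aims at a target at 0 degrees; the cursor lands at
    aim + rotation, and the aim is corrected trial by trial."""
    aim = 0.0
    errors = []
    for rotation in rotations_deg:
        for _ in range(trials_per_phase):
            cursor = aim + rotation        # where the cursor actually lands
            error = cursor - 0.0           # signed miss from the target
            errors.append(abs(error))
            aim -= lr * error              # implicit correction
    return errors

# Baseline, then a 30-degree rotation, then the rotation removed.
# The final phase shows the after-effect: the learned correction now
# causes misses in the opposite direction until it washes out.
errors = run_trials([0.0, 30.0, 0.0])
```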

“Even though there are really substantial differences between these classic experiments and the car trials, you can see this basic phenomena of adaptation and then after-effect of adaptation,” says Ilana Nisky, another co-author of the study and a senior lecturer at Ben-Gurion University in Israel. “What we learn in the laboratory studies of adaptation in neuroscience actually extends to real life.”

In neuroscience this is explained as a difference between explicit and implicit learning, Nisky explains. Even when a person is aware of a change, their implicit motor control is unaware of what that change means and can only figure out how to react through experience.


Federal and state regulators are currently working on guidelines that will apply to Level 5 autonomous cars. What the Stanford research shows is that until full autonomy becomes a reality, the “handoff” moment will represent a period of special risk, not because of any failing on the part of computers but rather because of limitations inherent in the brains of human drivers.

The best way to protect ourselves from that period of risk is to eliminate the “handoff” entirely by ceding total control of driving to computers as soon as possible.


What is Digital Optimus? The new Tesla and xAI project explained

At its core, Digital Optimus operates through a dual-process architecture inspired by human cognition.


Tesla and xAI announced their groundbreaking joint project, Digital Optimus, also nicknamed “Macrohard” in a humorous jab at Microsoft, earlier this week.

This software-based AI agent is designed to automate complex office workflows by observing and replicating human interactions with computers. As the first major outcome of Tesla’s $2 billion investment in xAI, it represents a powerful fusion of hardware efficiency and advanced reasoning.


At its core, Digital Optimus operates through a dual-process architecture inspired by human cognition.


Tesla’s specialized AI acts as “System 1”—the fast, instinctive executor—processing the past five seconds of real-time computer screen video along with keyboard and mouse actions to perform immediate tasks.


xAI’s Grok model serves as “System 2,” the strategic “master conductor” or navigator, providing high-level reasoning, world understanding, and directional oversight, much like an advanced turn-by-turn navigation system.

When combined, the two form a powerful AI-based assistant that can complete everything from accounting work to HR tasks.
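As a rough illustration of the dual-process split described above (a slow planner setting goals while a fast executor acts on a short window of recent observations), here is a generic sketch. Nothing in it, including the class names, the re-planning cadence, and the five-frame window, comes from Tesla or xAI; it only shows the general pattern:

```python
# Generic dual-process agent loop: a fast, reactive "System 1" acts on
# recent observations while a slow "System 2" re-plans occasionally.
from collections import deque

class SlowPlanner:
    """Stand-in for a 'System 2' reasoner: produces high-level goals."""
    def plan(self, summary):
        return f"goal based on: {summary}"

class FastExecutor:
    """Stand-in for a 'System 1' executor: acts every tick using only
    a short window of recent observations."""
    def __init__(self, window_size=5):
        self.window = deque(maxlen=window_size)   # keep only recent frames

    def observe(self, frame):
        self.window.append(frame)

    def act(self, goal):
        return f"action for '{goal}' given {len(self.window)} recent frames"

planner, executor = SlowPlanner(), FastExecutor()
goal = planner.plan("initial screen summary")
actions = []
for t in range(12):
    executor.observe(f"frame-{t}")
    if t > 0 and t % 5 == 0:              # re-plan far less often than acting
        goal = planner.plan(f"summary at t={t}")
    actions.append(executor.act(goal))
```

The design point this illustrates is the division of labor: the executor runs every tick on cheap, bounded state, while the expensive planner is consulted only occasionally to redirect it.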


The system runs primarily on Tesla’s low-cost AI4 inference chip, minimizing reliance on xAI’s expensive Nvidia resources while delivering competitive, real-time performance.


Elon Musk described it as “the only real-time smart AI system” capable, in principle, of emulating the functions of entire companies, handling everything from accounting and HR to repetitive digital operations.

Timelines point to swift deployment. Though the project was announced just days ago, Musk expects Digital Optimus to be ready for users within about six months, targeting a rollout around September 2026.

It will integrate into all AI4-equipped Tesla vehicles, enabling parked cars to handle office work during downtime. Millions of dedicated units are also planned for deployment at Supercharger stations, tapping into roughly 7 gigawatts of available power.

Digital Optimus directly supports Tesla’s broader autonomy strategy. It leverages the same end-to-end neural networks, computer vision, and real-time decision-making tech that power Full Self-Driving (FSD) software and the physical Optimus humanoid robot.

By repurposing idle vehicle compute and extending AI4 hardware beyond driving, the project scales Tesla’s autonomy ecosystem from roads to digital workspaces.


As a virtual counterpart to physical Optimus, it divides labor: software agents manage screen-based tasks while humanoid robots tackle physical ones, accelerating Tesla’s vision of general-purpose AI for productivity, Robotaxi fleets, and beyond.

In essence, Digital Optimus bridges Tesla’s vehicle and robotics autonomy with enterprise-scale AI, promising massive efficiency gains. No other company currently matches its real-time capabilities on such accessible hardware.

It could prove to be one of the most crucial projects Tesla and xAI have undertaken together, as it could revolutionize how people work and travel.


Tesla adds awesome new driving feature to Model Y

Tesla is rolling out a new “Comfort Braking” feature with Software Update 2026.8. The feature is exclusive to the new Model Y, and is currently unavailable for any other vehicle in the Tesla lineup.


Tesla is adding an awesome new driving feature to Model Y vehicles, effective on Juniper-updated models considered model year 2026 or newer.

Tesla is rolling out a new “Comfort Braking” feature with Software Update 2026.8. The feature is exclusive to the new Model Y, and is currently unavailable for any other vehicle in the Tesla lineup.

Tesla writes in the release notes for the feature:

“Your Tesla now provides a smoother feel as you come to a complete stop during routine braking.”


Interestingly, we’re not sure what catalyzed Tesla to improve braking smoothness, as braking hasn’t seemed overly abrupt or rough in our experience. Although the brake pedal in our Model Y is rarely used thanks to Regenerative Braking, it seems Tesla wanted to make ride comfort even smoother for owners.


There is always room for improvement, though, and it seems that there is a way to make braking smoother for passengers while the vehicle is coming to a stop.

This is far from the first time Tesla has attempted to improve ride comfort through Over-the-Air updates. It has rolled out updates to improve regenerative braking performance, handling while using Full Self-Driving, and Steer-by-Wire on the Cybertruck, along with recent releases that have combated Active Road Noise.


Tesla holds a unique ability to change the functionality of its vehicles through software updates, which have come in handy for many things, including remedying certain recalls and shipping new features to the Full Self-Driving suite.


Tesla seems to have the most seamless OTA process of any automaker, even as many others have gained the ability to ship improvements through a simple software update.

We’re really excited to try out Comfort Braking when the update makes it to our Model Y.


Tesla finally brings a Robotaxi update that Android users will love

The breakdown of the software version shows that Tesla is actively developing an Android-compatible version of the Robotaxi app, and the company is developing Live Activities for Android.


Tesla is finally bringing an update to its Robotaxi platform that Android users will love — mostly because it seems they will finally be able to use the ride-hailing platform the company has operated since last June.

Based on a decompile of software version 26.2.0 of the Robotaxi app, Tesla looks to be ready to roll out access to Android users.

According to the breakdown, performed by Tesla App Updates, Tesla is actively developing an Android-compatible version of the Robotaxi app, including Live Activities support for Android:

“Strings like notification_channel_robotaxid_trip_name and android_native_alicorn_eta_text show exactly how Tesla plans to replicate the iOS Live Activities experience. Instead of standard push alerts, Android users are getting a persistent, dynamically updating notification channel.”


This is a big step forward for several reasons. First, at face value, Tesla is finally ready to offer Robotaxi to Android users.

The company has routinely prioritized Apple releases because there is a higher concentration of iPhone users in its ownership base. Additionally, the development process for Apple is simply less laborious.


Second, the Robotaxi rollout has been a typical example of “slowly, then all at once.”


Tesla initially released Robotaxi access to a handful of media members and influencers. Eventually, it was expanded to more users, so that anyone using an iOS device could download the app and hail a semi-autonomous ride in Austin or the Bay Area.

Opening the app to Android users suggests that Tesla is preparing to let even more people utilize its Robotaxi platform. Although the company seems to be a few months away from offering fully autonomous rides to anyone with app access, expanding to an entirely new group of users definitely seems like a step in the right direction.
