Stanford studies human impact when self-driving car returns control to driver

Researchers involved with the Stanford University Dynamic Design Lab have completed a study that examines how human drivers respond when an autonomous driving system returns control of a car to them. The Lab’s mission, according to its website, is to “study the design and control of motion, especially as it relates to cars and vehicle safety. Our research blends analytical approaches to vehicle dynamics and control together with experiments in a variety of test vehicles and a healthy appreciation for the talents and demands of human drivers.” The results of the study were published on December 6 in the first edition of the journal Science Robotics.

Holly Russell, lead author of the study and a former graduate student at the Dynamic Design Lab, says, “Many people have been doing research on paying attention and situation awareness. That’s very important. But, in addition, there is this physical change and we need to acknowledge that people’s performance might not be at its peak if they haven’t actively been participating in the driving.”

The report emphasizes that the DDL’s autonomous driving program is its own proprietary system and is not intended to mimic any particular autonomous driving system currently available from any automobile manufacturer, such as Tesla’s Autopilot.

The study found that “the handoff,” the moment when the computer returns control of a car to a human driver, can be a particularly risky period, especially if the speed of the vehicle has changed since the driver last had direct control of the car. The amount of steering input required to accurately control a vehicle varies with speed: greater input is needed at slower speeds, while less movement of the wheel is required at higher speeds.
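That speed dependence can be sketched with a kinematic bicycle model. This is an illustration only, not the Stanford system, and every parameter value below (lateral acceleration, wheelbase, steering ratio) is an assumed round number: for a maneuver of fixed lateral acceleration, path curvature falls with the square of speed, so the hand-wheel angle needed shrinks as the car goes faster.

```python
import math

def steering_angle_deg(speed_mps, lateral_accel=2.0, wheelbase=2.7, steering_ratio=15.0):
    """Approximate hand-wheel angle needed to hold a curve of fixed
    lateral acceleration, using a kinematic bicycle model.
    Curvature = a_y / v^2; road-wheel angle ~ atan(wheelbase * curvature)."""
    curvature = lateral_accel / speed_mps ** 2          # 1/m
    road_wheel = math.atan(wheelbase * curvature)       # radians at the tires
    return math.degrees(road_wheel) * steering_ratio    # degrees at the hand wheel

for v in (10, 20, 30):  # m/s
    print(f"{v} m/s: {steering_angle_deg(v):.1f} deg at the wheel")
```

Running the sketch shows the wheel angle for the same lateral maneuver dropping sharply as speed rises, which is exactly the relationship drivers internalize through experience.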

People learn over time how to steer accurately at all speeds based on experience. But when some time elapses during which the driver is not directly involved in steering the car, the researchers found that drivers require a brief period of adjustment before they can accurately steer the car again. The greater the speed change while the computer is in control, the more erratic the human drivers were in their steering inputs upon resuming control.


“Even knowing about the change, being able to make a plan and do some explicit motor planning for how to compensate, you still saw a very different steering behavior and compromised performance,” said Lene Harbott, co-author of the research and a research associate in the Revs Program at Stanford.

Handoff From Computer to Human

The testing was done on a closed course. The participants drove for 15 seconds on a course that included a straightaway and a lane change. Then they took their hands off the wheel and the car took over, bringing them back to the start. After the drivers had familiarized themselves with the course over four laps, the researchers altered the steering ratio of the cars at the beginning of the next lap. The changes were designed to mimic the different steering inputs required at different speeds. The drivers then went around the course 10 more times.

Even though they were notified of the changes to the steering ratio, the drivers’ steering maneuvers during those 10 laps differed significantly from their paths prior to the modifications. At the end, the steering ratios were returned to the original settings and the drivers drove six more laps around the course. Again the researchers found the drivers needed a period of adjustment to accurately steer the cars.

The DDL experiment is very similar to a classic neuroscience experiment that assesses motor adaptation. In one version, participants use a hand control to move a cursor on a screen to specific points. The way the cursor moves in response to their control is adjusted during the experiment and they, in turn, change their movements to make the cursor go where they want it to go.

Just as in the driving test, people who take part in the experiment have to adjust to changes in how the controller moves the cursor. They also must adjust a second time if the original response relationship is restored. People can perform a version of this experiment themselves by adjusting the speed of the cursor on their personal computers.
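The trial-by-trial dynamics of such experiments are often described with a simple error-driven learning model. The sketch below uses assumed retention and learning-rate values, not figures fitted to any study, but it reproduces the two signatures discussed here: gradual adaptation to a perturbation, and an after-effect once the original mapping is restored.

```python
def simulate_adaptation(perturbations, retention=0.98, learning_rate=0.2):
    """Single-state, error-driven model of motor adaptation:
    x tracks the learned compensation; e is the error on each trial."""
    x, states = 0.0, []
    for p in perturbations:
        e = p - x                        # error experienced on this trial
        x = retention * x + learning_rate * e
        states.append(x)
    return states

# 10 baseline trials, 20 perturbed trials, 10 washout trials
schedule = [0.0] * 10 + [30.0] * 20 + [0.0] * 10
states = simulate_adaptation(schedule)
print(f"end of perturbation: {states[29]:.1f}")   # partial compensation builds up
print(f"first washout trial: {states[30]:.1f}")   # after-effect persists
```

When the perturbation is switched off, the learned compensation does not vanish instantly; it decays over several trials, which is the “after-effect of adaptation” Nisky describes.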


“Even though there are really substantial differences between these classic experiments and the car trials, you can see this basic phenomena of adaptation and then after-effect of adaptation,” says Ilana Nisky, another co-author of the study and a senior lecturer at Ben-Gurion University in Israel. “What we learn in the laboratory studies of adaptation in neuroscience actually extends to real life.”

In neuroscience this is explained as a difference between explicit and implicit learning, Nisky explains. Even when a person is aware of a change, their implicit motor control is unaware of what that change means and can only figure out how to react through experience.

Federal and state regulators are currently working on guidelines that will apply to Level 5 autonomous cars. What the Stanford research shows is that until full autonomy becomes a reality, the “handoff” moment will represent a period of special risk, not because of any failing on the part of computers but rather because of limitations inherent in the brains of human drivers.

The best way to protect ourselves from that period of risk is to eliminate the “handoff” entirely by ceding total control of driving to computers as soon as possible.

Elon Musk and Tesla AI Director share insights after empty driver seat Robotaxi rides

The executives’ unoccupied tests hint at the rapid progress of Tesla’s unsupervised Robotaxi efforts.

Ashok Elluswamy

Tesla CEO Elon Musk and AI Director Ashok Elluswamy celebrated Christmas Eve by sharing personal experiences with Robotaxi vehicles that had no safety monitor or occupant in the driver’s seat. Musk described the system’s “perfect driving” around Austin, while Elluswamy posted video from the back seat, calling it “an amazing experience.”

Elon and Ashok’s firsthand Robotaxi insights

Prior to Musk and the Tesla AI Director’s posts, sightings of unmanned Teslas navigating public roads were widely shared on social media. One such vehicle was spotted in Austin, Texas, which Elon Musk acknowledged by stating that “Testing is underway with no occupants in the car.”

Based on his Christmas Eve post, Musk seemed to have tested an unmanned Tesla himself. “A Tesla with no safety monitor in the car and me sitting in the passenger seat took me all around Austin on Sunday with perfect driving,” Musk wrote in his post.

Elluswamy responded with a 2-minute video showing himself in the rear of an unmanned Tesla. The video featured the vehicle’s empty front seats, as well as its smooth handling through real-world traffic. He captioned his video with the words, “It’s an amazing experience!”


Towards Unsupervised Operations

During an xAI Hackathon earlier this month, Elon Musk mentioned that Tesla would be removing safety monitors from its Robotaxis in Austin in just three weeks. “Unsupervised is pretty much solved at this point. So there will be Tesla Robotaxis operating in Austin with no one in them. Not even anyone in the passenger seat in about three weeks,” he said. Musk echoed similar estimates at the 2025 Annual Shareholder Meeting and the Q3 2025 earnings call.

Considering the insights posted by Musk and Elluswamy, it does appear that Tesla is working hard towards operating its Robotaxis with no safety monitors. This is quite impressive considering that the service was launched just earlier this year.


Starlink passes 9 million active customers just weeks after hitting 8 million

The milestone highlights the accelerating growth of Starlink, which has now been adding over 20,000 new users per day.


Credit: Starlink/X

SpaceX’s Starlink satellite internet service has continued its rapid global expansion, surpassing 9 million active customers just weeks after crossing the 8 million mark. 

9 million customers

In a post on X, SpaceX stated that Starlink now serves over 9 million active users across 155 countries, territories, and markets. The company reached 8 million customers in early November, meaning it added roughly 1 million subscribers in under seven weeks, or an average of about 21,275 new users per day.
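The per-day figure is straightforward arithmetic. The 47-day interval below is an assumption consistent with “under seven weeks,” since the exact milestone dates are not published:

```python
def avg_daily_adds(new_users: int, days: int) -> float:
    """Average net subscriber additions per day over an interval."""
    return new_users / days

# 8M -> 9M subscribers in "under seven weeks"; 47 days is an assumed interval
rate = avg_daily_adds(1_000_000, 47)
print(f"{rate:,.0f} users/day")
```

Any interval in that range lands above the 20,000-per-day figure cited in the article.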

“Starlink is connecting more than 9M active customers with high-speed internet across 155 countries, territories, and many other markets,” Starlink wrote in a post on its official X account. SpaceX President Gwynne Shotwell also celebrated the milestone on X. “A huge thank you to all of our customers and congrats to the Starlink team for such an incredible product,” she wrote. 

That growth rate reflects both rising demand for broadband in underserved regions and Starlink’s expanding satellite constellation, which now includes more than 9,000 low-Earth-orbit satellites designed to deliver high-speed, low-latency internet worldwide.


Starlink’s momentum

Starlink’s momentum has been building steadily. SpaceX reported 4.6 million Starlink customers in December 2024, 7 million by August 2025, and 8 million in November. Independent data also suggests Starlink usage is rising sharply, with Cloudflare reporting that global web traffic from Starlink users more than doubled in 2025, as noted in an Insider report.

Starlink’s momentum is increasingly tied to SpaceX’s broader financial outlook. Elon Musk has said the satellite network is “by far” the company’s largest revenue driver, and reports suggest SpaceX may be positioning itself for an initial public offering as soon as next year, with valuations estimated as high as $1.5 trillion. Musk has also suggested in the past that Starlink could have its own IPO in the future. 


NVIDIA Director of Robotics: Tesla FSD v14 is the first AI to pass the “Physical Turing Test”

After testing FSD v14, Fan stated that his experience with FSD felt magical at first, but it soon started to feel like a routine.


Credit: Grok Imagine

NVIDIA Director of Robotics Jim Fan has praised Tesla’s Full Self-Driving (Supervised) v14 as the first AI to pass what he described as a “Physical Turing Test.”

After testing FSD v14, Fan stated that his experience with FSD felt magical at first, but it soon started to feel like a routine. And just like smartphones today, removing it now would “actively hurt.”

Jim Fan’s hands-on FSD v14 impressions

Fan, a leading researcher in embodied AI who is currently working on Physical AI at NVIDIA and spearheading the company’s Project GR00T initiative, noted that he was actually late to the Tesla game. He was, however, one of the first to try out FSD v14.

“I was very late to own a Tesla but among the earliest to try out FSD v14. It’s perhaps the first time I experience an AI that passes the Physical Turing Test: after a long day at work, you press a button, lay back, and couldn’t tell if a neural net or a human drove you home,” Fan wrote in a post on X. 

Fan added: “Despite knowing exactly how robot learning works, I still find it magical watching the steering wheel turn by itself. First it feels surreal, next it becomes routine. Then, like the smartphone, taking it away actively hurts. This is how humanity gets rewired and glued to god-like technologies.”


The Physical Turing Test

The original Turing Test was conceived by Alan Turing in 1950, and it was aimed at determining whether a machine could exhibit behavior equivalent to, or indistinguishable from, that of a human. By focusing on text-based conversations, the original Turing Test set a high bar for natural language processing and machine learning.

This test has been passed by today’s large language models. However, the capability to converse in a humanlike manner is a completely different challenge from performing real-world problem-solving or physical interactions. Thus, Fan introduced the Physical Turing Test, which challenges AI systems to demonstrate intelligence through physical actions.

Based on Fan’s comments, Tesla has demonstrated these intelligent physical actions with FSD v14. Elon Musk agreed with the NVIDIA executive, stating in a post on X that with FSD v14, “you can sense the sentience maturing.” Musk also praised Tesla AI, calling it the best “real-world AI” today.
