News
Stanford studies human impact when self-driving car returns control to driver
Researchers involved with the Stanford University Dynamic Design Lab (DDL) have completed a study that examines how human drivers respond when an autonomous driving system returns control of a car to them. The Lab’s mission, according to its website, is to “study the design and control of motion, especially as it relates to cars and vehicle safety. Our research blends analytical approaches to vehicle dynamics and control together with experiments in a variety of test vehicles and a healthy appreciation for the talents and demands of human drivers.” The results of the study were published on December 6 in the inaugural issue of the journal Science Robotics.
Holly Russell, lead author of the study and a former graduate student at the Dynamic Design Lab, says, “Many people have been doing research on paying attention and situation awareness. That’s very important. But, in addition, there is this physical change and we need to acknowledge that people’s performance might not be at its peak if they haven’t actively been participating in the driving.”
The report emphasizes that the DDL’s autonomous driving program is its own proprietary system and is not intended to mimic any particular autonomous driving system currently available from any automobile manufacturer, such as Tesla’s Autopilot.
The study found that the period known as “the handoff” — when the computer returns control of a car to a human driver — can be particularly risky, especially if the speed of the vehicle has changed since the driver last had direct control of the car. The amount of steering input required to accurately control a vehicle varies with speed: greater input is needed at slower speeds, while less movement of the wheel is required at higher speeds.
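To make that speed dependence concrete, here is a minimal back-of-the-envelope sketch (our illustration, not the Lab’s model) using a kinematic bicycle approximation. The wheelbase, steering ratio, and maneuver geometry are all assumed values; it estimates the peak steering-wheel angle needed to make the same lane change at different speeds:

```python
import math

# Illustrative sketch only: all parameters are assumptions, not figures
# from the Stanford study.
WHEELBASE_M = 2.7        # assumed passenger-car wheelbase
STEERING_RATIO = 15.0    # assumed steering-wheel-to-road-wheel ratio
LANE_OFFSET_M = 3.5      # lateral offset of one lane width
DURATION_S = 4.0         # time allotted for the lane change

def peak_wheel_angle_deg(speed_mps: float) -> float:
    """Peak steering-wheel angle for a smooth (cosine) lane-change path."""
    distance = speed_mps * DURATION_S                         # ground covered
    peak_curvature = (LANE_OFFSET_M / 2) * (math.pi / distance) ** 2
    road_wheel_rad = math.atan(WHEELBASE_M * peak_curvature)  # kinematic bicycle
    return math.degrees(road_wheel_rad) * STEERING_RATIO

for v in (10, 20, 30):   # m/s, i.e. 36, 72, and 108 km/h
    print(f"{v:>2} m/s -> ~{peak_wheel_angle_deg(v):.1f} deg of wheel input")
# Prints roughly 25, 6, and 3 degrees: the same maneuver demands far more
# wheel movement at low speed, which is the relationship drivers internalize.
```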
People learn through experience how to steer accurately at any speed. But the researchers found that when some time elapses during which the driver is not directly steering the car, the driver requires a brief period of adjustment before they can steer accurately again. The greater the speed change while the computer was in control, the more erratic the drivers’ steering inputs were upon resuming control.
“Even knowing about the change, being able to make a plan and do some explicit motor planning for how to compensate, you still saw a very different steering behavior and compromised performance,” said Lene Harbott, co-author of the research and a research associate in the Revs Program at Stanford.
Handoff From Computer to Human
The testing was done on a closed course. The participants drove for 15 seconds on a course that included a straightaway and a lane change. Then they took their hands off the wheel and the car took over, bringing them back to the start. After the drivers had familiarized themselves with the course over four laps, the researchers altered the car’s steering ratio at the beginning of the next lap. The changes were designed to mimic the different steering inputs required at different speeds. The drivers then went around the course 10 more times.
Even though they were notified of the changes to the steering ratio, the drivers’ steering maneuvers during those 10 laps differed significantly from the paths they had driven before the modifications. At the end, the steering ratios were returned to their original settings and the drivers drove six more laps around the course. Again the researchers found that the drivers needed a period of adjustment to accurately steer the cars.
The DDL experiment is very similar to a classic neuroscience experiment that assesses motor adaptation. In one version, participants use a hand control to move a cursor on a screen to specific points. The way the cursor moves in response to their control is adjusted during the experiment and they, in turn, change their movements to make the cursor go where they want it to go.
Just as in the driving test, people who take part in the experiment have to adjust to changes in how the controller moves the cursor. They also must adjust a second time if the original response relationship is restored. People can try a crude version of this themselves by changing the cursor speed on their personal computers.
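A toy simulation (our sketch, not code from the study) makes that adaptation pattern concrete. It models a participant who plans each movement from an internal estimate of the cursor gain and nudges that estimate after every trial based on the observed error:

```python
import numpy as np

# Toy model of visuomotor gain adaptation; the learning rate and gain
# values are arbitrary assumptions for illustration.
n_trials = 120
true_gain = np.ones(n_trials)
true_gain[40:80] = 2.0        # perturbation: cursor suddenly moves twice as far
gain_estimate = 1.0           # participant's internal model of the gain
learning_rate = 0.2

errors = []
for t in range(n_trials):
    hand_move = 1.0 / gain_estimate          # planned move for a unit-distance target
    cursor = true_gain[t] * hand_move        # where the cursor actually lands
    error = cursor - 1.0                     # overshoot (+) or undershoot (-)
    gain_estimate += learning_rate * error   # implicit correction from experience
    errors.append(error)

print(f"first perturbed trial: {errors[40]:+.2f}")  # large overshoot
print(f"last perturbed trial:  {errors[79]:+.2f}")  # nearly adapted
print(f"first restored trial:  {errors[80]:+.2f}")  # undershoot: the after-effect
```

The error spikes when the gain changes, decays as the internal estimate catches up, and then spikes in the opposite direction when the original gain returns; that second spike is the after-effect.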
“Even though there are really substantial differences between these classic experiments and the car trials, you can see this basic phenomena of adaptation and then after-effect of adaptation,” says Ilana Nisky, another co-author of the study and a senior lecturer at Ben-Gurion University in Israel. “What we learn in the laboratory studies of adaptation in neuroscience actually extends to real life.”
In neuroscience this is explained as a difference between explicit and implicit learning, Nisky explains. Even when a person is aware of a change, their implicit motor control is unaware of what that change means and can only figure out how to react through experience.
Federal and state regulators are currently working on guidelines that will apply to Level 5 autonomous cars. What the Stanford research shows is that until full autonomy becomes a reality, the “handoff” moment will represent a period of special risk, not because of any failing on the part of computers but rather because of limitations inherent in the brains of human drivers.
The best way to protect ourselves from that period of risk is to eliminate the “handoff” entirely by ceding total control of driving to computers as soon as possible.
News
Tesla FSD (Supervised) fleet passes 8.4 billion cumulative miles
The figure appears on Tesla’s official safety page, which tracks performance data for FSD (Supervised) and other safety technologies.
Tesla’s Full Self-Driving (Supervised) system has now surpassed 8.4 billion cumulative miles.
Tesla has long emphasized that large-scale real-world data is central to improving its neural network-based approach to autonomy. Each mile driven with FSD (Supervised) engaged contributes additional edge cases and scenario training for the system.

The milestone also brings Tesla closer to a benchmark previously outlined by CEO Elon Musk. Musk has stated that roughly 10 billion miles of training data may be needed to achieve safe unsupervised self-driving at scale, citing the “long tail” of rare but complex driving situations that must be learned through experience.
The growth curve of FSD (Supervised) cumulative miles over the past five years has been notable.
As noted in data shared by Tesla watcher Sawyer Merritt, annual FSD (Supervised) miles have increased from roughly 6 million in 2021 to 80 million in 2022, 670 million in 2023, 2.25 billion in 2024, and 4.25 billion in 2025. In just the first 50 days of 2026, Tesla owners logged another 1 billion miles.
At the current pace, the fleet is on track to pass 10 billion cumulative FSD (Supervised) miles this year. The increase has been driven by Tesla’s growing vehicle fleet, periodic free trials, and expanding Robotaxi operations, among other factors.
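A quick back-of-the-envelope extrapolation (our arithmetic, assuming the early-2026 pace simply holds) shows how close that milestone is:

```python
# Rough projection from the figures above; not Tesla guidance.
cumulative_miles = 8.4e9        # current total from Tesla's safety page
target_miles = 10e9             # Musk's cited training-data benchmark
miles_per_day = 1e9 / 50        # ~1 billion miles in the first 50 days of 2026

days_remaining = (target_miles - cumulative_miles) / miles_per_day
print(f"~{miles_per_day / 1e6:.0f}M miles/day, ~{days_remaining:.0f} days to 10B")
# -> ~20M miles/day, ~80 days to 10B
```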
With the fleet now past 8.4 billion cumulative miles, Tesla’s supervised system is approaching that threshold, even as regulatory approval for fully unsupervised deployment remains subject to further validation and oversight.
Elon Musk
Elon Musk fires back after Wikipedia co-founder claims neutrality and dubs Grokipedia “ridiculous”
Musk’s response to Wales’ comments, which were posted on social media platform X, was short and direct: “Famous last words.”
Elon Musk fired back at Wikipedia co-founder Jimmy Wales after the longtime online encyclopedia leader dismissed xAI’s new AI-powered alternative, Grokipedia, as a “ridiculous” idea that is bound to fail.
Wales made the comments while answering questions about Wikipedia’s neutrality, which he said the site prides itself on.
“One of our core values at Wikipedia is neutrality. A neutral point of view is non-negotiable. It’s in the community, unquestioned… The idea that we’ve become somehow ‘Wokepidea’ is just not true,” Wales said.
When asked about potential competition from Grokipedia, Wales downplayed the situation. “There is no competition. I don’t know if anyone uses Grokipedia. I think it is a ridiculous idea that will never work,” Wales wrote.
After Grokipedia went live, Larry Sanger, also a co-founder of Wikipedia, wrote on X that his initial impression of the AI-powered Wikipedia alternative was “very OK.”
“My initial impression, looking at my own article and poking around here and there, is that Grokipedia is very OK. The jury’s still out as to whether it’s actually better than Wikipedia. But at this point I would have to say ‘maybe!’” Sanger stated.
Musk responded to Sanger’s assessment by saying it was “accurate.” In a separate post, he added that even in its V0.1 form, Grokipedia was already better than Wikipedia.
During a past appearance on the Tucker Carlson Show, Sanger argued that Wikipedia has drifted from its original vision, citing concerns about how its “Reliable sources/Perennial sources” framework categorizes publications by perceived credibility. According to Sanger, the list leans heavily left, with conservative publications effectively blacklisted in favor of their more liberal counterparts.
As of this writing, Grokipedia has reportedly surpassed 80% of English Wikipedia’s article count.
News
Tesla Sweden appeals after grid company refuses to restore existing Supercharger due to union strike
The charging site was previously functioning before it was temporarily disconnected in April last year for electrical safety reasons.
Tesla Sweden is seeking regulatory intervention after a Swedish power grid company refused to reconnect an already operational Supercharger station in Åre due to ongoing union sympathy actions.
The charging site was previously functioning before it was temporarily disconnected in April last year for electrical safety reasons. A temporary construction power cabinet supplying the station had fallen over, which Tesla said occurred “under unclear circumstances.” The power was then cut at the request of Tesla’s installation contractor to allow safe repair work.
While the safety issue was resolved, the station has not been brought back online. Stefan Sedin, CEO of Jämtkraft elnät, told Dagens Arbete (DA) that power will not be restored to the existing Supercharger station as long as the electric vehicle maker’s union issues are ongoing.
“One of our installers noticed that the construction power cabinet had been backed into and was on the ground. We asked Tesla to fix the system, and their installation company in turn asked us to cut the power so that they could do the work safely.
“When everything was restored, the question arose: ‘Wait a minute, can we reconnect the station to the electricity grid? Or what does the notice actually say?’ We consulted with our employer organization, who were clear that as long as sympathy measures are in place, we cannot reconnect this facility,” Sedin said.
The union’s sympathy actions, which began in March 2024, apply to work involving “planning, preparation, new connections, grid expansion, service, maintenance and repairs” of Tesla’s charging infrastructure in Sweden.
Tesla Sweden has argued that reconnecting an existing facility is not equivalent to establishing a new grid connection. In a filing to the Swedish Energy Market Inspectorate, the company stated that reconnecting the installation “is therefore not covered by the sympathy measures and cannot therefore constitute a reason for not reconnecting the facility to the electricity grid.”
Sedin, for his part, noted that Tesla’s issue with the Supercharger is unique in his experience. While Jämtkraft elnät itself has no issue with Tesla, its actions are based on the unions’ sympathy measures against the electric vehicle maker.
“This is absolutely the first time that I have been involved in matters relating to union conflicts or sympathy measures. That is why we have relied entirely on the assessment of our employer organization. This is not something that we have made any decisions about ourselves at all.
“It is not that Jämtkraft elnät has a conflict with Tesla, but our actions are based on these sympathy measures. Should it turn out that we have made an incorrect assessment, we will correct ourselves. It is no more difficult than that for us,” the executive said.