
Stanford studies human impact when self-driving car returns control to driver


Researchers involved with the Stanford University Dynamic Design Lab have completed a study that examines how human drivers respond when an autonomous driving system returns control of a car to them. The Lab’s mission, according to its website, is to “study the design and control of motion, especially as it relates to cars and vehicle safety. Our research blends analytical approaches to vehicle dynamics and control together with experiments in a variety of test vehicles and a healthy appreciation for the talents and demands of human drivers.” The results of the study were published on December 6 in the first edition of the journal Science Robotics.

Holly Russell, lead author of the study and a former graduate student at the Dynamic Design Lab, says, “Many people have been doing research on paying attention and situation awareness. That’s very important. But, in addition, there is this physical change and we need to acknowledge that people’s performance might not be at its peak if they haven’t actively been participating in the driving.”

The report emphasizes that the DDL’s autonomous driving program is its own proprietary system and is not intended to mimic any particular autonomous driving system currently available from any automobile manufacturer, such as Tesla’s Autopilot.

The study found that the period of time known as “the handoff” — when the computer returns control of a car to a human driver — can be a particularly risky period, especially if the speed of the vehicle has changed since the last time the person had direct control of the car. The amount of steering input required to accurately control a vehicle varies with speed: greater input is needed at slower speeds, while less movement of the wheel is required at higher speeds.
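To see why, here is a rough back-of-the-envelope sketch, not part of the Stanford study, using a simple kinematic bicycle model in which yaw rate is roughly speed times road-wheel angle divided by wheelbase. The wheelbase, steering ratio, and target yaw rate below are assumed values chosen only for illustration; the point is that the hand-wheel angle needed to produce the same turning response shrinks as speed rises.

import math

WHEELBASE_M = 2.7                     # assumed wheelbase of a typical sedan
STEERING_RATIO = 15.0                 # assumed hand-wheel to road-wheel ratio
TARGET_YAW_RATE = math.radians(10)    # desired heading change of 10 degrees per second

def handwheel_angle_deg(speed_mps: float) -> float:
    """Hand-wheel angle (degrees) needed to hold TARGET_YAW_RATE at a given speed."""
    road_wheel_angle = TARGET_YAW_RATE * WHEELBASE_M / speed_mps  # radians
    return math.degrees(road_wheel_angle) * STEERING_RATIO

for mph in (15, 30, 60):
    mps = mph * 0.44704
    print(f"{mph:>3} mph -> about {handwheel_angle_deg(mps):5.1f} deg of hand-wheel input")

Under these assumed numbers, the same maneuver takes roughly 60 degrees of hand-wheel input at 15 mph but only about 15 degrees at 60 mph, which is the kind of relationship drivers internalize through experience.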


People learn over time how to steer accurately at all speeds based on experience. But when some time elapses during which the driver is not directly involved in steering the car, the researchers found that drivers require a brief period of adjustment before they can accurately steer the car again. The greater the speed change while the computer is in control, the more erratic the human drivers were in their steering inputs upon resuming control.

“Even knowing about the change, being able to make a plan and do some explicit motor planning for how to compensate, you still saw a very different steering behavior and compromised performance,” said Lene Harbott, co-author of the research and a research associate in the Revs Program at Stanford.

Handoff From Computer to Human

The testing was done on a closed course. The participants drove for 15 seconds on a course that included a straightaway and a lane change. Then they took their hands off the wheel and the car took over, bringing them back to the start. After the drivers had familiarized themselves with the course over four laps, the researchers altered the steering ratio of the car at the beginning of the next lap. The changes were designed to mimic the different steering inputs required at different speeds. The drivers then went around the course 10 more times.

Even though they were notified of the changes to the steering ratio, the drivers’ steering maneuvers during those 10 laps differed significantly from the paths they had driven before the modifications. At the end, the steering ratios were returned to their original settings and the drivers drove six more laps around the course. Again the researchers found the drivers needed a period of adjustment before they could steer the cars accurately.


The DDL experiment is very similar to a classic neuroscience experiment that assesses motor adaptation. In one version, participants use a hand control to move a cursor on a screen to specific points. The way the cursor moves in response to their control is adjusted during the experiment and they, in turn, change their movements to make the cursor go where they want it to go.

Just as in the driving test, people who take part in the experiment have to adjust to changes in how the controller moves the cursor. They also must adjust a second time if the original response relationship is restored. People can perform a version of this experiment themselves by adjusting the cursor speed on their own computers.

“Even though there are really substantial differences between these classic experiments and the car trials, you can see this basic phenomenon of adaptation and then after-effect of adaptation,” says Ilana Nisky, another co-author of the study and a senior lecturer at Ben-Gurion University in Israel. “What we learn in the laboratory studies of adaptation in neuroscience actually extends to real life.”

In neuroscience this is explained as a difference between explicit and implicit learning, Nisky explains. Even when a person is aware of a change, their implicit motor control is unaware of what that change means and can only figure out how to react through experience.
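The adaptation and after-effect pattern Nisky describes can be sketched with a toy model that is not drawn from the study: a learner plans each movement using an internal estimate of the control gain and nudges that estimate after every trial. The target, gains, and learning rate below are assumed values chosen only for illustration.

TARGET = 10.0          # desired cursor (or vehicle) response
LEARNING_RATE = 0.3    # how strongly each trial's feedback updates the internal model

def run_block(true_gain: float, trials: int, internal_gain: float):
    """Run one block of trials; return per-trial errors and the updated internal gain."""
    errors = []
    for _ in range(trials):
        command = TARGET / internal_gain      # plan the movement with the current internal model
        outcome = command * true_gain         # what the cursor (or car) actually does
        errors.append(outcome - TARGET)
        observed_gain = outcome / command     # feedback from this trial
        internal_gain += LEARNING_RATE * (observed_gain - internal_gain)  # gradual, implicit update
    return errors, internal_gain

gain_estimate = 1.0
baseline, gain_estimate = run_block(true_gain=1.0, trials=5, internal_gain=gain_estimate)
perturbed, gain_estimate = run_block(true_gain=1.5, trials=10, internal_gain=gain_estimate)
washout, gain_estimate = run_block(true_gain=1.0, trials=10, internal_gain=gain_estimate)

print("baseline errors :", [round(e, 2) for e in baseline])
print("perturbed errors:", [round(e, 2) for e in perturbed])  # large at first, then shrink (adaptation)
print("washout errors  :", [round(e, 2) for e in washout])    # errors flip sign at first (after-effect)

Even in this stripped-down sketch, errors spike when the gain changes, fade as the internal model adapts, and then reappear in the opposite direction when the original gain returns, which mirrors what the researchers saw on the test track.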


Federal and state regulators are currently working on guidelines that will apply to Level 5 autonomous cars. What the Stanford research shows is that until full autonomy becomes a reality, the “handoff” moment will represent a period of special risk, not because of any failing on the part of computers but rather because of limitations inherent in the brains of human drivers.

The best way to protect ourselves from that period of risk is to eliminate the “handoff” period entirely by ceding total control of driving to computers as soon as possible.


Musk forces Judge’s exit from shareholder battles over viral social media slip-up



Many Tesla fans are familiar with the name Kathaleen McCormick, especially if they are investors in the company.

McCormick is a Delaware Chancery Court judge who presided over Tesla CEO Elon Musk’s pay package lawsuit over the past few years, as well as his purchase of Twitter. However, she will no longer be sitting in on any issues related to Musk.


In a rare admission of potential optics issues in one of America’s most powerful corporate courts, Delaware Chancery Court Chancellor Kathaleen McCormick stepped aside Monday from a cluster of shareholder lawsuits targeting Elon Musk and Tesla’s board.


The move came just days after Musk’s legal team highlighted her apparent “support” on LinkedIn for a post that mocked the billionaire over his 2022 tweets about the $44 billion Twitter acquisition.

McCormick insisted in a court filing that she harbors no actual bias against Musk or the defendants. She claimed she either never clicked the “support” button, LinkedIn’s version of a “like,” or did so accidentally.

She wrote in a newly published memo from the Delaware Chancery Court:

“The motion for recusal rests on a false premise — that I support a LinkedIn post about Mr. Musk, which I do not in fact support. I am not biased against the defendants in these actions.”


Yet she granted the reassignment anyway, acknowledging that the intense media scrutiny surrounding her involvement had become “detrimental to the administration of justice.”

The consolidated cases will now be handled by three of her colleagues on the Delaware Court of Chancery, the nation’s go-to venue for high-stakes corporate disputes. The lawsuits accuse Musk and Tesla directors of breaching fiduciary duties through lavish executive compensation and lax governance oversight.

One prominent claim, filed by a Detroit pension fund, challenges massive stock awards granted to board members, alleging the payouts harmed the company. The litigation also overlaps with issues stemming from Musk’s turbulent 2022 Twitter purchase.

McCormick’s history with Musk made her a lightning rod. In 2022, she presided over the fast-tracked lawsuit that ultimately forced Musk to complete the Twitter deal after he tried to back out.


Then in 2024, she struck down his record $56 billion Tesla compensation package, ruling the approval process was flawed and overly CEO-friendly. The Delaware Supreme Court later reinstated the pay on technical grounds, but the ruling fueled Musk’s long-standing criticism of the state’s judiciary.

Musk has repeatedly urged companies to reincorporate elsewhere, arguing Delaware courts have grown hostile to visionary leaders. Monday’s recusal hands him a symbolic victory and underscores how personal social-media activity can collide with judicial impartiality standards.

Delaware law requires judges to step aside if there’s even a “reasonable basis” to question their neutrality.

Court watchers say the episode highlights growing tensions in corporate America’s legal epicenter. While McCormick maintained her impartiality, the appearance of bias proved too costly to ignore. The cases will proceed without her, but the broader debate over Delaware’s dominance in business litigation is far from over.



Elon Musk has generous TSA offer denied by the White House: here’s why



Tesla and SpaceX CEO Elon Musk made a generous offer to pay the salaries of Transportation Security Administration (TSA) employees last week, but the offer was denied by the White House.

In a striking display of private-sector initiative clashing with federal bureaucracy, the White House has turned down an offer from Elon Musk to personally cover the salaries of TSA officers amid an ongoing partial government shutdown. The rejection, reported last Wednesday by multiple outlets, highlights the legal and political hurdles facing unconventional solutions to Washington’s funding gridlock.

The impasse began weeks ago when Congress failed to pass funding for the Department of Homeland Security (DHS), leaving TSA employees, essential workers who screen millions of travelers daily, without paychecks while still required to report for duty.

Frustrated travelers have endured record-long security lines at major airports, with reports of chaos and delays rippling across the country.


Musk stepped in on March 21 via a post on X, writing: “I would like to offer to pay the salaries of TSA personnel during this funding impasse that is negatively affecting the lives of so many Americans at airports throughout the country.”

The White House’s refusal, however, was not without reason.


White House spokesperson Abigail Jackson responded on behalf of the Trump administration, expressing appreciation for Musk’s gesture.

However, she explained that insurmountable legal obstacles would prevent Musk from doing so. Jackson said:

“We greatly appreciate Elon’s generous offer. This would pose great legal challenges due to his involvement with federal government contracts.”

Musk’s companies hold significant federal contracts, including NASA launches through SpaceX and potential Defense Department work, raising concerns about conflicts of interest, ethics rules, and anti-bribery statutes that prohibit private payments to government employees. Administration officials also indicated they expect the shutdown to end soon, making external funding unnecessary.


The episode underscores deeper tensions in Washington. Musk, who has advised on government efficiency efforts and maintains a close relationship with President Trump, has frequently criticized wasteful spending and bureaucratic delays.

His offer came as airport security lines ballooned, drawing public frustration toward both parties. TSA officers, many of whom rely on paychecks to cover mortgages and family expenses, have continued working without compensation, a situation that has drawn bipartisan concern but little immediate resolution.

Critics of the rejection argue it prioritizes red tape over practical relief for frontline workers and travelers. Supporters of the White House position counter that allowing private funding sets a dangerous precedent and could undermine congressional authority over the budget.

The White House eventually came to terms with the TSA on Friday and started paying employees once again, and lines at airports quickly shrank. The Department of Homeland Security (DHS) said that TSA staff would begin receiving paychecks “as early as” today.



Tesla FSD mocks BMW human driver: Saves pedestrian from near miss

Tesla FSD anticipated a pedestrian’s intent to cross before the human behind the wheel of a BMW could react.


A video posted to r/TeslaFSD this week put a sharp spotlight on Tesla’s Full Self-Driving (FSD) software reacting to pedestrian intent faster than an actual human driver behind the wheel. In the Reddit clip, a BMW driver can be seen rolling through a neighborhood street, completely unaware of a pedestrian stepping out to cross. At the same time, a Tesla driving on FSD had already begun slowing down before the pedestrian even started to cross the street. The BMW kept moving, prompting the pedestrian to hop back, while the Tesla came to a stop and yielded the right-of-way so the pedestrian could cross safely.

That gap between what the BMW driver saw and what FSD had already processed is the story. Tesla FSD wasn’t reacting to a person already in the street; rather, it was reading the signals that a person was about to enter it, using the pedestrian’s movement and trajectory to infer intent.

Tesla’s FSD is now built on an end-to-end neural network trained on billions of real-world miles, learning to interpret subtle human behavioral cues the same way an experienced human driver does instinctively. The difference is consistency. A human driver distracted for two seconds misses what FSD does not.



Reddit commenters in the thread were blunt about the BMW driver’s failure, with several pointing out that the pedestrian was visible well before the crossing. One response put it plainly that the car on FSD saw the situation developing before the human in the other car had registered there was a situation at all.

Tesla has published data showing FSD (Supervised) is 54% safer than a human driver, based on billions of miles driven on the system. Elon Musk has said FSD v14 will outperform human drivers by a factor of two to three, and that v15 has “a shot” at a 10x improvement. Pedestrian safety is where the stakes are highest, and where intent prediction closes the gap fastest. At 30 mph, a car covers roughly 44 feet per second. An extra second of awareness from reading a person’s body language rather than waiting for them to step out is often the difference between a near miss and a fatality.
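A quick back-of-the-envelope check of that arithmetic, with the speeds below chosen only for illustration:

FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def feet_per_second(mph: float) -> float:
    """Convert miles per hour to feet per second."""
    return mph * FEET_PER_MILE / SECONDS_PER_HOUR

for mph in (25, 30, 45):
    fps = feet_per_second(mph)
    print(f"At {mph} mph a car covers about {fps:.0f} ft/s, so one extra second of warning buys roughly {fps:.0f} ft of margin")

At 30 mph that works out to the article’s 44 feet per second, which is several car lengths of extra stopping room.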

Video and community discussion: r/TeslaFSD on Reddit

“FSD saves man from becoming a pancake. BMW driver nearly flattens him.” posted by u/Qwertygolol in r/TeslaFSD
