Stanford studies human impact when self-driving car returns control to driver

Researchers involved with the Stanford University Dynamic Design Lab have completed a study that examines how human drivers respond when an autonomous driving system returns control of a car to them. The Lab’s mission, according to its website, is to “study the design and control of motion, especially as it relates to cars and vehicle safety. Our research blends analytical approaches to vehicle dynamics and control together with experiments in a variety of test vehicles and a healthy appreciation for the talents and demands of human drivers.” The results of the study were published on December 6 in the first edition of the journal Science Robotics.

Holly Russell, lead author of the study and a former graduate student at the Dynamic Design Lab, says, “Many people have been doing research on paying attention and situation awareness. That’s very important. But, in addition, there is this physical change and we need to acknowledge that people’s performance might not be at its peak if they haven’t actively been participating in the driving.”

The report emphasizes that the DDL’s autonomous driving program is its own proprietary system and is not intended to mimic any particular autonomous driving system currently available from any automobile manufacturer, such as Tesla’s Autopilot.

The study found that the period known as “the handoff,” when the computer returns control of a car to the human driver, can be particularly risky, especially if the speed of the vehicle has changed since the person last had direct control of the car. The amount of steering input required to accurately control a vehicle varies with speed: greater input is needed at slower speeds, while less movement of the wheel is required at higher speeds.
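This speed dependence can be illustrated with a kinematic bicycle model, a standard simplification in vehicle dynamics. This is only a sketch, not the Lab's model; the wheelbase and yaw-rate values below are arbitrary illustrative numbers.

```python
import math

def required_steer_angle(speed_mps, yaw_rate_rps, wheelbase_m=2.7):
    """Kinematic bicycle model: yaw_rate = speed * tan(steer) / wheelbase.
    Solving for the steer angle shows it shrinks as speed grows."""
    return math.degrees(math.atan(wheelbase_m * yaw_rate_rps / speed_mps))

# The same 0.2 rad/s turning rate at three speeds: the slower the car,
# the larger the wheel angle the driver must apply.
for v in (5.0, 15.0, 30.0):  # meters per second
    print(f"{v:4.0f} m/s -> {required_steer_angle(v, 0.2):5.2f} deg")
```

The output falls from roughly 6 degrees at 5 m/s to about 1 degree at 30 m/s, which is why a driver's learned steering feel no longer matches if the car's speed changed while the computer was driving.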

People learn over time how to steer accurately at all speeds based on experience. But when some time elapses during which the driver is not directly involved in steering the car, the researchers found that drivers require a brief period of adjustment before they can accurately steer the car again. The greater the speed change while the computer is in control, the more erratic the human drivers were in their steering inputs upon resuming control.

“Even knowing about the change, being able to make a plan and do some explicit motor planning for how to compensate, you still saw a very different steering behavior and compromised performance,” said Lene Harbott, co-author of the research and a research associate in the Revs Program at Stanford.

Handoff From Computer to Human

The testing was done on a closed course. The participants drove for 15 seconds on a course that included a straightaway and a lane change, then took their hands off the wheel and let the car drive itself back to the start. After the participants had familiarized themselves with the course over four laps, the researchers altered the car’s steering ratio at the beginning of the next lap. The changes were designed to mimic the different steering inputs required at different speeds. The drivers then went around the course 10 more times.

Even though they were notified of the changes to the steering ratio, the drivers’ steering maneuvers during those 10 laps differed significantly from their paths before the modifications. At the end, the steering ratios were returned to the original settings and the drivers completed six more laps. Again the researchers found that the drivers needed a period of adjustment before they could accurately steer the cars.

The DDL experiment is very similar to a classic neuroscience experiment that assesses motor adaptation. In one version, participants use a hand control to move a cursor on a screen to specific points. The way the cursor moves in response to their control is adjusted during the experiment and they, in turn, change their movements to make the cursor go where they want it to go.

Just as in the driving test, people who take part in the experiment have to adjust to changes in how the controller moves the cursor. They also must adjust a second time if the original response relationship is restored. People can try a version of this experiment themselves by changing the cursor speed setting on their own computers.
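The adaptation-then-after-effect pattern can be reproduced with a toy error-driven learning rule. This is purely illustrative and not the experimenters' actual paradigm; the gain values and learning rate are invented for the sketch.

```python
def adapt(trials, true_gain, estimate, lr=0.3):
    """Run one block of reaching trials toward a 1-unit target.
    The participant scales their hand movement by an internal gain
    estimate, which each trial's cursor error nudges toward the truth."""
    errors = []
    for _ in range(trials):
        movement = 1.0 / estimate            # planned hand movement
        error = movement * true_gain - 1.0   # cursor overshoot (+) or undershoot (-)
        errors.append(error)
        estimate += lr * error               # implicit, error-driven update
    return errors, estimate

baseline, est = adapt(20, true_gain=1.0, estimate=1.0)   # calibrated: no error
perturbed, est = adapt(20, true_gain=2.0, estimate=est)  # gain doubled: big early errors
washout, est = adapt(20, true_gain=1.0, estimate=est)    # gain restored: after-effect

# The first washout error has the opposite sign of the perturbation errors:
# having adapted to the new gain, the participant must now re-adapt back.
```

The same opposite-sign after-effect is what the Stanford drivers showed when the steering ratio was returned to its original setting.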

“Even though there are really substantial differences between these classic experiments and the car trials, you can see this basic phenomena of adaptation and then after-effect of adaptation,” says Ilana Nisky, another co-author of the study and a senior lecturer at Ben-Gurion University in Israel. “What we learn in the laboratory studies of adaptation in neuroscience actually extends to real life.”

In neuroscience this is explained as a difference between explicit and implicit learning, Nisky explains. Even when a person is aware of a change, their implicit motor control is unaware of what that change means and can only figure out how to react through experience.

Federal and state regulators are currently working on guidelines that will apply to Level 5 autonomous cars. What the Stanford research shows is that until full autonomy becomes a reality, the handoff moment will represent a period of special risk, not because of any failing on the part of computers but rather because of limitations inherent in the brains of human drivers.

The best way to protect ourselves from that period of risk is to eliminate the handoff entirely by ceding total control of driving to computers as soon as possible.

Tesla Model Y prices just went up for the first time in two years

Credit: Tesla Asia | X

Tesla just raised Model Y prices for the first time in two years, with the largest increase being $1,000.

The move signals shifting dynamics in the competitive electric vehicle market as the company continues to work on balancing demand, profitability, and accessibility.

The new pricing affects premium trims while leaving entry-level options unchanged. The Model Y Premium Rear-Wheel Drive (RWD) now starts at $45,990, a $1,000 increase.

The Model Y Premium All-Wheel Drive (AWD), previously listed simply as the “Model Y AWD,” rises to $49,990, also up $1,000. The top-tier Model Y Performance sees a more modest $500 bump, bringing its starting price to $57,990.

Base models remain untouched to preserve affordability. The entry-level Model Y RWD holds steady at $39,990, and the base Model Y AWD stays at $41,990. This selective approach keeps the crossover accessible for budget-conscious buyers while extracting more revenue from higher-margin configurations.

After years of aggressive price cuts to stimulate volume amid slowing EV adoption and rising competition from rivals like BYD, Ford, and GM, Tesla appears confident in underlying demand. Recent lineup refreshes for the 2026 Model Y, including refreshed styling and efficiency gains, have helped maintain its status as America’s best-selling EV.

By protecting base prices, Tesla avoids alienating price-sensitive customers while improving margins on the more popular variants.

For consumers, the changes are relatively modest—under 3% on affected trims—and still position the Model Y competitively against gas-powered SUVs in the same class. Federal tax credits and potential state incentives may further offset costs for eligible buyers.

This marks a subtle but notable shift from the deep discounting era that defined much of 2024 and 2025. As the EV market matures into 2026, Tesla’s pricing strategy will be closely watched for clues about production ramps, new variants like the rumored longer-wheelbase Model Y, and broader profitability goals.

In short, today’s adjustment reflects a company that remains dominant yet pragmatic, willing to test higher pricing where demand supports it. Increases this modest are unlikely to push buyers toward other options.

Elon Musk explains why he cannot be fired from SpaceX

Credit: SpaceX

Elon Musk cannot be fired from SpaceX, and there’s a reason for that.

In a blunt post on X on Friday, Elon Musk confirmed plans to structurally shield his leadership at SpaceX, ensuring he cannot be fired while tying a potential trillion-dollar compensation package to the company’s long-term goal of establishing a self-sustaining colony on Mars.

The revelation stems from a Financial Times report detailing SpaceX’s intention to restructure its governance and compensation framework. The moves are designed to protect Musk’s control and align his incentives with the company’s founding mission rather than short-term financial pressures. Musk’s reply left no ambiguity:

“Yes, I need to make sure SpaceX stays focused on making life multiplanetary and extending consciousness to the stars, not pandering to someone’s bullshit quarterly earnings bonus!”

He added that success in this “absurdly difficult goal” would generate value “many orders of magnitude more than the economy of Earth,” though he cautioned that the journey will not be smooth. “Don’t expect entirely smooth sailing along the way,” Musk wrote.

The strategy reflects Musk’s deep concerns about how public-market expectations could derail SpaceX’s core objective. Founded in 2002, SpaceX has repeatedly stated its purpose is to reduce the cost of space travel and ultimately make humanity a multiplanetary species.

Unlike Tesla, which went public in 2010 and has faced repeated battles over Musk’s compensation and board influence, SpaceX remains privately held. Musk has long resisted taking the rocket company public precisely to avoid the quarterly earnings treadmill that forces most CEOs to prioritize short-term stock performance over ambitious, high-risk projects.

By embedding protections against his removal and linking any outsized pay package to verifiable milestones—such as a functioning Mars colony—SpaceX aims to insulate its leadership from activist investors or board members who might demand faster profits or safer bets.

Musk has referenced past experiences, including his ouster from OpenAI and shareholder lawsuits at Tesla, as cautionary tales. In those cases, he argued, external pressures risked diluting the original vision.

Critics may view the arrangement as excessive, especially given Musk’s already substantial voting power and wealth. Supporters, however, argue it is a necessary safeguard for a company pursuing goals measured in decades rather than quarters. Achieving a Mars colony would require sustained investment in Starship development, orbital refueling, life-support systems, and in-situ resource utilization—technologies that may deliver no immediate financial return.

Musk’s post underscores a broader philosophical point: true breakthrough innovation often demands tolerance for volatility and a willingness to ignore conventional business wisdom. As SpaceX prepares for increasingly ambitious Starship test flights and eventual crewed missions, the new governance structure signals that the company’s North Star remains unchanged—humanity’s expansion beyond Earth.

Whether the trillion-dollar package materializes depends on execution, but Musk’s message is clear: SpaceX exists to reach the stars, not to chase the next earnings beat. For investors or employees who share that vision, the protections are not a perk—they are a prerequisite for success.

Tesla discloses two Robotaxi crashes to NHTSA

Newly unredacted data filed with the National Highway Traffic Safety Administration (NHTSA) reveals the two incidents. 

Tesla has disclosed information on two low-speed crashes that occurred in Austin on its Robotaxi platform. Both incidents occurred while teleoperators were steering the vehicle, and there were no passengers in the car at the time.

The first crash took place in July 2025, shortly after Tesla launched its nascent Robotaxi network in Austin. The ADS reportedly struggled to get moving again after stopping on a street. A teleoperator assumed control, gradually accelerating and turning left toward the roadside. The vehicle then mounted the curb and struck a metal fence.

In the second incident, in January 2026, the ADS was traveling straight when the safety monitor requested navigation support. The teleoperator took over from a stop, continued forward, and collided with a temporary construction barricade at approximately 9 mph, scraping the front-left fender and tire.

Tesla has previously told lawmakers that teleoperators are authorized to pilot vehicles remotely, but only at speeds below 10 mph, and only to reposition vehicles stuck in awkward spots.

“This capability enables Tesla to promptly move a vehicle that may be in a compromising position, thereby mitigating the need to wait for a first responder or Tesla field representative to manually recover the vehicle,” the company stated in filings earlier this year.

Before this week, Tesla had redacted the NHTSA reports, but the company has now revealed all 17 Robotaxi incidents recorded since the launch in Austin last summer. Most of the other crashes involved the Tesla being struck by other road users and were not caused by the self-driving suite itself.

There were other incidents, including two additional self-caused accidents involving the ADS clipping side mirrors on parked cars. In September 2025, one Robotaxi struck a dog that darted into the roadway (the dog escaped unharmed), while another made an unprotected left turn into a parking lot and hit a metal chain.

Although Waymo and Zoox have reported more total crashes, Tesla operates at a far smaller scale. The cautious pace reflects the company’s broader safety posture: it has deliberately kept the Robotaxi rollout slow to ensure the suite is ready for wider operation.

Last month, CEO Elon Musk acknowledged that “making sure things are completely safe” remains the primary bottleneck to expanding the network, describing the company’s approach as “very cautious.”

The unredacted filings arrive amid heightened regulatory scrutiny of autonomous vehicles. NHTSA recently closed a separate probe into Tesla’s Full Self-Driving software repeatedly striking parking-lot obstacles such as bollards and chains—a problem that also prompted a recall at Waymo last year.

Tesla’s Robotaxi program has been largely successful in its early days of operation, and the transparency Tesla brings here is welcome. Incidents will happen, of course, but the honesty gives customers and regulators a sense of where Tesla stands in developing its self-driving and fully autonomous ride-hailing suite.
