
News

Stanford studies human impact when self-driving car returns control to driver


Researchers involved with the Stanford University Dynamic Design Lab have completed a study that examines how human drivers respond when an autonomous driving system returns control of a car to them. The Lab’s mission, according to its website, is to “study the design and control of motion, especially as it relates to cars and vehicle safety. Our research blends analytical approaches to vehicle dynamics and control together with experiments in a variety of test vehicles and a healthy appreciation for the talents and demands of human drivers.” The results of the study were published on December 6 in the first edition of the journal Science Robotics.

Holly Russell, lead author of the study and a former graduate student in the Dynamic Design Lab, says, “Many people have been doing research on paying attention and situation awareness. That’s very important. But, in addition, there is this physical change and we need to acknowledge that people’s performance might not be at its peak if they haven’t actively been participating in the driving.”

The report emphasizes that the DDL’s autonomous driving program is its own proprietary system and is not intended to mimic any particular autonomous driving system currently available from any automobile manufacturer, such as Tesla’s Autopilot.

The study found that “the handoff,” the moment when the computer returns control of a car to the human driver, can be a particularly risky period, especially if the speed of the vehicle has changed since the last time the person had direct control of the car. The amount of steering input required to accurately control a vehicle varies with speed: greater input is needed at slower speeds, while less movement of the wheel is required at higher speeds.
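That speed dependence can be illustrated with a back-of-the-envelope kinematic bicycle model, a standard textbook approximation rather than anything from the study; the wheelbase, steering ratio, and lateral-acceleration figures below are assumed values:

```python
import math

WHEELBASE_M = 2.7      # assumed wheelbase, typical for a mid-size sedan
STEERING_RATIO = 15.0  # assumed steering-wheel-to-road-wheel ratio
LATERAL_ACCEL = 3.0    # m/s^2, a brisk but comfortable lane-change level

def wheel_angle_deg(speed_ms):
    """Steering-wheel angle needed to sustain LATERAL_ACCEL at a given speed.

    Uses the small-angle kinematic approximation delta = L / R,
    with turn radius R = v^2 / a_y.
    """
    turn_radius = speed_ms ** 2 / LATERAL_ACCEL
    road_wheel_rad = WHEELBASE_M / turn_radius
    return math.degrees(road_wheel_rad * STEERING_RATIO)

for kph in (30, 60, 120):
    v = kph / 3.6
    print(f"{kph:>3} km/h: {wheel_angle_deg(v):6.1f} deg of steering wheel")
```

Because the required angle scales with 1/v², the same maneuver takes roughly four times as much wheel movement every time speed is halved, which is exactly the kind of mapping drivers internalize and then must re-learn after a handoff.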


People learn over time how to steer accurately at all speeds based on experience. But when some time elapses during which the driver is not directly involved in steering the car, the researchers found that drivers require a brief period of adjustment before they can accurately steer the car again. The greater the speed change while the computer is in control, the more erratic the human drivers were in their steering inputs upon resuming control.

“Even knowing about the change, being able to make a plan and do some explicit motor planning for how to compensate, you still saw a very different steering behavior and compromised performance,” said Lene Harbott, co-author of the research and a research associate in the Revs Program at Stanford.

Handoff From Computer to Human

The testing was done on a closed course. Participants drove for 15 seconds on a course that included a straightaway and a lane change, then took their hands off the wheel and the car took over, bringing them back to the start. After the drivers had familiarized themselves with the course over four laps, the researchers altered the steering ratio of the car at the beginning of the next lap, a change designed to mimic the different steering inputs required at different speeds. The drivers then went around the course 10 more times.

Even though they were notified of the change to the steering ratio, the drivers’ steering maneuvers during those 10 laps differed significantly from their paths before the modification. At the end, the steering ratio was returned to its original setting and the drivers drove six more laps around the course. Again the researchers found the drivers needed a period of adjustment to accurately steer the cars.
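The transient and after-effect the researchers describe resemble what a minimal error-driven adaptation model from motor-control theory predicts. The sketch below is purely illustrative; the learning rate and gain values are assumptions, not the study's data:

```python
# Minimal error-driven adaptation model (illustrative only, not the
# study's model). The driver's internal estimate of the steering gain is
# nudged toward the true gain after every lap, so each gain change
# produces a transient error, including an "after-effect" when the
# original gain is restored.

LEARNING_RATE = 0.3  # assumed adaptation rate per lap

def run_trials(true_gains, initial_estimate=1.0):
    estimate, errors = initial_estimate, []
    for true_gain in true_gains:
        errors.append(abs(true_gain - estimate))  # error before updating
        estimate += LEARNING_RATE * (true_gain - estimate)
    return errors

# 4 familiarization laps at gain 1.0, 10 laps at an altered gain of 1.5,
# then 6 laps back at the original gain, mirroring the study's structure.
schedule = [1.0] * 4 + [1.5] * 10 + [1.0] * 6
errors = run_trials(schedule)
print([round(e, 3) for e in errors])
```

The error spikes at lap 5 (the gain change), decays with practice, then spikes again at lap 15 when the original gain returns: the after-effect.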


The DDL experiment is very similar to a classic neuroscience experiment that assesses motor adaptation. In one version, participants use a hand control to move a cursor on a screen to specific points. The way the cursor moves in response to their control is adjusted during the experiment and they, in turn, change their movements to make the cursor go where they want it to go.

Just as in the driving test, people who take part in the experiment have to adjust to changes in how the controller moves the cursor, and they must adjust a second time if the original relationship is restored. People can try a version of this experiment themselves by changing the cursor speed on their personal computers.

“Even though there are really substantial differences between these classic experiments and the car trials, you can see this basic phenomena of adaptation and then after-effect of adaptation,” says Ilana Nisky, another co-author of the study and a senior lecturer at Ben-Gurion University in Israel. “What we learn in the laboratory studies of adaptation in neuroscience actually extends to real life.”

In neuroscience this is explained as a difference between explicit and implicit learning, Nisky explains. Even when a person is aware of a change, their implicit motor control is unaware of what that change means and can only figure out how to react through experience.


Federal and state regulators are currently working on guidelines that will apply to Level 5 autonomous cars. What the Stanford research shows is that until full autonomy becomes a reality, the “handoff” moment will represent a period of special risk, not because of any failing on the part of computers but because of limitations inherent in the brains of human drivers.

The best way to protect ourselves from that risk is to eliminate the handoff entirely by ceding total control of driving to computers as soon as possible.


News

Ford is charging for a basic EV feature on the Mustang Mach-E

When ordering a new Ford Mustang Mach-E, you’ll now be hit with an additional fee for one basic EV feature: the frunk.


Credit: Ford Motor Company

Ford is charging an additional fee for a basic EV feature on its Mustang Mach-E, its most popular electric vehicle offering.

Ford has shuttered its initial Model e program and is pursuing a more controlled and refined effort, and it is abandoning the F-150 Lightning in favor of a new pickup that is currently under design but appears to have some favorable features.

However, ordering a new Mustang Mach-E now comes with an additional fee for one basic EV feature: the frunk.

The frunk is the front trunk: because an electric vehicle has no large engine up front, OEMs can offer additional storage space under the hood. The problem is that companies appear to be recognizing that they can remove it from the standard configuration and offer the feature for a fee.


Ford is charging $495 for the frunk.

Interestingly, frunk size varies by vehicle. The Mustang Mach-E’s holds 4.7 to 4.8 cubic feet and measures approximately 9 inches deep, 26 inches wide, and 14 inches high.
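As a quick arithmetic check, a rectangular box with the quoted interior dimensions holds well under the quoted cubic footage, which suggests those three measurements describe only part of the compartment; the conversion below is just that sanity check:

```python
# Sanity check on the quoted frunk figures (illustrative arithmetic only).
CUBIC_INCHES_PER_CUBIC_FOOT = 12 ** 3  # 1728

depth_in, width_in, height_in = 9, 26, 14
box_cubic_feet = depth_in * width_in * height_in / CUBIC_INCHES_PER_CUBIC_FOOT
print(f"Bounding box: {box_cubic_feet:.2f} cu ft vs. the 4.7-4.8 cu ft spec")
```

A 9 x 26 x 14 inch box works out to about 1.9 cubic feet, so the 4.7 to 4.8 cubic-foot rating evidently accounts for space beyond that single cross-section.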


When the vehicle was first released, Ford marketed the frunk as the ultimate tailgating feature, showing it off as a perfect place to store and serve cold shrimp cocktail.


It appears the decision to charge for what is a simple advantage of an EV is not going over well: even loyal Ford customers say the frunk is a “basic expectation” of an EV, and without it, fans feel the company is nickel-and-diming its customers.

It will be interesting to see the Mach-E without a frunk. While the fee alone should not be enough to turn people away from buying the vehicle, the decision to charge extra to include one will definitely annoy some customers.


News

Tesla to improve one of its best features, coding shows

According to the update, Tesla will work on improving the headlights when coming into contact with highly reflective objects, including road signs, traffic signs, and street lights. Additionally, pixel-level dimming will happen in two stages, whereas it currently performs with just one, meaning on or off.


Credit: @jojje167 on X

Tesla is looking to upgrade its Matrix Headlights, a unique and high-tech feature that is available on several of its vehicles. The headlights aim to maximize visibility for Tesla drivers while being considerate of oncoming traffic.

Tesla’s Matrix Headlights dim individual light pixels to keep visibility high for the driver while decreasing brightness in the areas where other cars are traveling.

In action, the Matrix headlight system intentionally dims the area where oncoming cars would otherwise be hit by the high beams. This keeps visibility at a maximum for everyone on the road, including those who could be hit with bright lights in their eyes.


There are still a handful of complaints from owners, however, and Tesla appears to be looking to resolve them with coming updates in a software version currently labeled 2026.2.xxx. The coding was spotted by X user BERKANT.


According to the update, Tesla will improve how the headlights handle highly reflective objects, including road signs, traffic signs, and street lights. Additionally, pixel-level dimming will happen in two stages, whereas it currently operates with just one, meaning on or off.

Finally, the new system will prevent the high beams from reflecting glare back at the driver. The system is made to dim when it recognizes oncoming cars, but not necessarily for objects that could bounce glare back toward the driver.
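That two-stage behavior can be sketched as a toy decision rule. The brightness levels and reflectivity threshold below are invented for illustration; Tesla's actual control logic is not public:

```python
# Toy sketch of two-stage matrix-headlight dimming (illustrative only).
# Each "pixel" lights one slice of the beam; pixels covering an oncoming
# car go dark, while pixels hitting a highly reflective object (a road
# sign, for instance) are only partially dimmed instead of switched off.

FULL, PARTIAL, OFF = 1.0, 0.4, 0.0  # assumed two-stage brightness levels

def pixel_brightness(has_oncoming_car: bool, reflectivity: float) -> float:
    """Return the beam brightness for a single pixel."""
    if has_oncoming_car:
        return OFF        # never shine high beams at another driver
    if reflectivity > 0.8:
        return PARTIAL    # dim for signs so glare isn't bounced back
    return FULL

print(pixel_brightness(False, 0.9))
```

The middle PARTIAL level is what distinguishes the described update from today's single-stage, on-or-off behavior.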


This upgrade is software-focused, so no physical changes or hardware upgrades will be needed on Tesla vehicles that currently use the Matrix headlights.


Elon Musk

xAI’s Grok approved for Pentagon classified systems: report

Under the agreement, Grok can be deployed in systems handling classified intelligence analysis, weapons development, and battlefield operations. 


Credit: xAI

Elon Musk’s xAI has signed an agreement with the United States Department of Defense (DoD) to allow Grok to be used in classified military systems.

Previously, Anthropic’s Claude had been the only AI system approved for the most sensitive military work, but a dispute over usage safeguards has reportedly prompted the Pentagon to broaden its options, as noted in a report from Axios.


The publication reported that xAI agreed to the Pentagon’s requirement that its technology be usable for “all lawful purposes,” a standard Anthropic has reportedly resisted due to alleged ethical restrictions tied to mass surveillance and autonomous weapons use.


Defense Secretary Pete Hegseth is scheduled to meet with Anthropic CEO Dario Amodei in what sources expect to be a tense meeting, with the publication hinting that the Pentagon could designate Anthropic a “supply chain risk” if the company does not lift its safeguards. 

Axios stated that fully replacing Claude might be technically challenging even if xAI or other alternatives step in. That said, other AI systems are already in use by the DoD.

Grok already operates in the Pentagon’s unclassified systems alongside Google’s Gemini and OpenAI’s ChatGPT. Google is reportedly close to an agreement that would clear Gemini for classified work, while OpenAI’s progress toward classified deployment is described as slower but still feasible.

The publication noted that the Pentagon continues talks with several AI companies as it prepares for potential changes in classified AI sourcing.
