
Tesla posts stern response to Washington Post’s article on alleged Autopilot dangers

(Credit: Tesla)


Tesla has posted a stern response to a recent article from The Washington Post that suggested the electric vehicle maker is putting people at risk because it allows systems like Autopilot to be used in areas they were not designed for. The publication noted that it was able to identify about 40 fatal or serious crashes since 2016, at least eight of which happened on roads where Autopilot was not designed to be used in the first place.

Overall, the Washington Post article argued that while Tesla does inform drivers that they remain responsible for their vehicles while Autopilot is engaged, the company is nonetheless also at fault because it allows its driver-assist system to be deployed irresponsibly. “Even though the company has the technical ability to limit Autopilot’s availability by geography, it has taken few definitive steps to restrict use of the software,” the article read.

In its response, which was posted through its official account on X, Tesla highlighted that it takes the safety of both its customers and pedestrians very seriously. The company noted that the data is clear: systems like Autopilot, when used safely, drastically reduce the number of accidents on the road. The company also reiterated that features like Traffic-Aware Cruise Control are SAE Level 2 systems, which require constant supervision from the driver.

Following is the pertinent section of Tesla’s response.


While there are many articles that do not accurately convey the nature of our safety systems, the recent Washington Post article is particularly egregious in its misstatements and lack of relevant context. 

We at Tesla believe that we have a moral obligation to continue improving our already best-in-class safety systems. At the same time, we also believe it is morally indefensible not to make these systems available to a wider set of consumers, given the incontrovertible data that shows it is saving lives and preventing injury. 

Regulators around the globe have a duty to protect consumers, and the Tesla team looks forward to continuing our work with them towards our common goal of eliminating as many deaths and injuries as possible on our roadways. 

Below are some important facts, context and background.


Background

1. Safety metrics are emphatically stronger when Autopilot is engaged than when not engaged.

a. In the 4th quarter of 2022, we recorded one crash for every 4.85 million miles driven in which drivers were using Autopilot technology. For drivers who were not using Autopilot technology, we recorded one crash for every 1.40 million miles driven. By comparison, the most recent data available from NHTSA and FHWA (from 2021) shows that in the United States there was an automobile crash approximately every 652,000 miles.

b. The data is clear: The more automation technology offered to support the driver, the safer the driver and other road users. Anecdotes from the WaPo article come from plaintiff attorneys—cases involving significant driver misuse—and are not a substitute for rigorous analysis and billions of miles of data.


c. Recent data continues this trend and is even more compelling. Autopilot is ~10X safer than the US average and ~5X safer than a Tesla with no AP tech enabled. More detailed information will be publicly available in the near future.
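Tesla’s headline multiples can be sanity-checked against the per-mile figures it published for Q4 2022. A minimal sketch, using only the crash-rate numbers quoted in point 1a above:

```python
# Crash-rate figures quoted in Tesla's response (Q4 2022, plus 2021 NHTSA/FHWA data).
MILES_PER_CRASH_AUTOPILOT = 4_850_000  # Tesla, Autopilot engaged
MILES_PER_CRASH_NO_AP = 1_400_000      # Tesla, Autopilot not engaged
MILES_PER_CRASH_US_AVG = 652_000       # US average (NHTSA/FHWA, 2021)

def safety_multiple(baseline_miles_per_crash: float, miles_per_crash: float) -> float:
    """How many times fewer crashes per mile, relative to the baseline."""
    return miles_per_crash / baseline_miles_per_crash

vs_no_ap = safety_multiple(MILES_PER_CRASH_NO_AP, MILES_PER_CRASH_AUTOPILOT)
vs_us_avg = safety_multiple(MILES_PER_CRASH_US_AVG, MILES_PER_CRASH_AUTOPILOT)
print(f"Autopilot vs. Tesla without AP: {vs_no_ap:.1f}x")  # ~3.5x
print(f"Autopilot vs. US average:       {vs_us_avg:.1f}x")  # ~7.4x
```

The Q4 2022 figures work out to roughly 3.5X and 7.4X; the ~5X and ~10X multiples in point 1c refer to Tesla’s more recent data, which the company says has not yet been published.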

2. Autopilot features, including Traffic-Aware Cruise Control and Autosteer, are SAE Level 2 driver-assist systems, meaning –

a. Whether the driver chooses to engage Autosteer or not, the driver is in control of the vehicle at all times. The driver is notified of this responsibility, consents, agrees to monitor the driving assistance, and can disengage anytime.

b. Despite the driver being responsible for control of the vehicle, Tesla has a number of additional safety measures designed to monitor that drivers engage in active driver supervision, including torque-based and camera-based monitoring. We have continued to make progress in improving these monitoring systems to reduce misuse.


c. Based on the above, among other factors, the data strongly indicates our customers are far safer by having the choice to decide when it is appropriate to engage Autopilot features. When used properly, it provides safety benefits on all road classes.

Tesla also provided some context about some of the crashes that were highlighted by The Washington Post. As per the electric vehicle maker, the incidents that the publication cited involved drivers who were not using Autopilot correctly. The publication, therefore, omitted several important facts when it was framing its narrative around Autopilot’s alleged risks, Tesla argued. 

Following is the pertinent section of Tesla’s response.

The Washington Post leverages instances of driver misuse of the Autopilot driver assist feature to suggest the system is the problem. The article got it wrong, misreporting what’s actually alleged in the pending lawsuit and omitting several important facts:


1. Contrary to the Post article, the Complaint doesn’t reference complacency or Operational Design Domain.

2. Instead, the Complaint acknowledges the harms of driver inattention, misuse, and negligence.

3. Mr. Angulo and the parents of Ms. Benavides, who tragically died in the crash, first sued the Tesla driver—and settled with him—before ever pursuing a claim against Tesla.

4. The Benavides lawsuit alleges the Tesla driver “carelessly and/or recklessly” “drove through the intersection…ignoring the controlling stop sign and traffic signal.”


5. The Tesla driver didn’t blame Tesla, didn’t sue Tesla, didn’t try to get Tesla to pay on his behalf. He took responsibility.

6. The Post had the driver’s statements to police and reports that he said he was “driving on cruise.” They omit that he also admitted to police “I expect to be the driver and be responsible for this.”

7. The driver later testified in the litigation that he knew Autopilot didn’t make the car self-driving and that he was the driver, contrary to the Post and Angulo claims that he was misled, over-reliant or complacent. He readily and repeatedly admitted:

a. “I was highly aware that was still my responsibility to operate the vehicle safely.”


b. He agreed it was his “responsibility as the driver of the vehicle, even with Autopilot activated, to drive safely and be in control of the vehicle at all times.”

c. “I would say specifically I was aware that the car was my responsibility. I didn’t read all these statements and passages, but I’m aware the car was my responsibility.”

8. The Post also failed to disclose that Autopilot restricted the vehicle’s speed to 45 mph (the speed limit) based on the road type, but the driver was pressing the accelerator to maintain 60 mph when he ran the stop sign and caused the crash. The car displayed an alert to the driver that, because he was overriding Autopilot with the accelerator, “Cruise control will not brake.”


Don’t hesitate to contact us with news tips. Just send a message to simon@teslarati.com to give us a heads up.

Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips--or even to just say a simple hello--send a message to his email, simon@teslarati.com or his handle on X, @ResidentSponge.


Tesla confirmed HW3 can’t do Unsupervised FSD but there’s more to the story

Tesla confirmed HW3 vehicles cannot run unsupervised FSD, replacing its free upgrade promise with a discounted trade-in.


Tesla has officially confirmed that early vehicles with its Autopilot Hardware 3 (HW3) will not be capable of unsupervised Full Self-Driving, while extending a path forward for legacy owners through a discounted trade-in program. The announcement came from Elon Musk during today’s Tesla Q1 2026 earnings call.

The history here matters. HW3 launched in April 2019, and Tesla sold Full Self-Driving packages to owners on the understanding that the hardware was sufficient for full autonomy. Some owners paid between $8,000 and $15,000 for FSD during that period. For years, as FSD’s AI models grew more demanding, HW3 vehicles fell progressively further behind, eventually landing on FSD v12.6 in January 2025 while AI4 vehicles moved to v13 and then v14. Musk acknowledged in January 2025 that HW3 simply could not reach unsupervised operation, and alluded to a difficult hardware retrofit.


The near-term offering is more concrete. Tesla’s head of Autopilot, Ashok Elluswamy, confirmed on today’s call that a V14-lite will be coming to HW3 vehicles in late June, bringing all the V14 features currently running on AI4 hardware. That is a meaningful software update for owners who have been frozen at v12.6 for over a year, and it represents a genuine effort to keep older hardware relevant. Unsupervised FSD for these vehicles is now targeted for Q4 2026 at the earliest, with Musk describing it as a gradual, geography-limited rollout.

For HW3 owners, the over-the-air V14-lite update is welcome, and the discounted trade-in path at least acknowledges an old obligation. What happens next with the trade-in pricing will define how this chapter ultimately gets written. If Tesla prices the hardware path fairly, acknowledges what early adopters are owed, and delivers V14-lite on the June timeline it committed to today, it has a real opportunity to convert one of the longest-running sore subjects among early adopters into a loyalty story.


Tesla isn’t joking about building Optimus at an industrial scale: Here we go

Tesla’s Optimus factory in Texas targets 10 million robots yearly, with 5.2 million square feet under construction.


Tesla’s Q1 2026 Update Letter, released today, confirms that first-generation Optimus production lines are now well underway at its Fremont, California factory, with a pilot line targeting one million robots per year to start. More notable is a shared aerial image of a large piece of land adjacent to Gigafactory Texas that Tesla has prominently labeled “Optimus factory site preparation.”

Permit documents show Tesla is seeking to add over 5.2 million square feet of new building space to the Giga Texas North Campus by the end of 2026, at an estimated construction investment of $5 billion to $10 billion. The longer term production target for that facility is 10 million Optimus units per year. Giga Texas already sits on 2,500 acres with over 10 million square feet of existing factory floor, and the North Campus expansion is being built to support multiple projects, including the dedicated Optimus factory, the Terafab chip fabrication facility (a joint Tesla/SpaceX/xAI venture), a Cybercab test track, road infrastructure, and supporting facilities.

Credit: Tesla

Texas makes strategic sense beyond the existing infrastructure. The state’s tax structure, lower labor costs relative to California, and the proximity to Tesla’s AI training clusters Cortex 1 and 2, both located at Giga Texas and now totaling over 230,000 H100-equivalent GPUs, mean the Optimus software stack and the factory producing the hardware will share the same campus. Tesla’s Q1 report also confirmed completion of the AI5 chip tape-out in April, the inference processor designed specifically to power Optimus units in the field.

As Teslarati reported, the Texas facility is intended to house Optimus V4 production at full scale. Musk told the World Economic Forum in January that Tesla plans to sell Optimus to the public by end of 2027 at a price between $20,000 and $30,000, stating, “I think everyone on earth is going to have one and want one.” He has previously pegged long term demand for general purpose humanoid robots at over 20 billion units globally, citing both consumer and industrial use cases.


Tesla (TSLA) Q1 2026 earnings results: beat on EPS and revenues


Credit: Tesla

Tesla (NASDAQ: TSLA) reported its earnings for the first quarter of 2026 on Wednesday afternoon. Here’s what the company reported compared to what Wall Street analysts expected.

The earnings results come after Tesla reported a miss on vehicle deliveries for the first quarter, delivering 358,023 vehicles and building 408,386 cars during the three-month span.

As Tesla transitions more toward AI and sees itself as less of a car company, delivery figures will become less central to how each quarter is perceived.

Nevertheless, Tesla is leaning on its strong foundation as a car company to carry forward its AI ambitions, and the first quarter provides a solid base for the rest of the year.


Tesla Q1 2026 Earnings Results

Tesla’s Earnings Results are as follows:

  • Non-GAAP EPS – $0.41 Reported vs. $0.36 Expected
  • Revenues – $22.387 billion vs. $22.35 billion Expected
  • Free Cash Flow – $1.444 billion
  • Profit – $4.72 billion
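The size of the beats can be computed directly from the reported and consensus figures listed above; a quick sketch:

```python
# Reported vs. consensus figures from Tesla's Q1 2026 results above.
eps_reported, eps_expected = 0.41, 0.36          # non-GAAP EPS, dollars
rev_reported, rev_expected = 22.387e9, 22.35e9   # revenues, dollars

# Beat expressed as a percentage above the consensus estimate.
eps_beat_pct = (eps_reported - eps_expected) / eps_expected * 100
rev_beat_pct = (rev_reported - rev_expected) / rev_expected * 100
print(f"EPS beat:     +{eps_beat_pct:.1f}%")   # ~+13.9%
print(f"Revenue beat: +{rev_beat_pct:.2f}%")   # ~+0.17%
```

In other words, the EPS beat is sizable at roughly 14%, while revenue came in essentially in line with estimates.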

Tesla beat analyst expectations, so it will be interesting to see how the stock responds. In the past, we’ve seen Tesla beat analyst expectations considerably, followed by a sharp drop in stock price.

By the same token, we’ve seen Tesla miss and the stock price go up the following trading session.

Tesla will hold its Q1 2026 Earnings Call in about 90 minutes at 5:30 p.m. on the East Coast. Remarks will be made by CEO Elon Musk and other executives, who will shed some light on the investor questions that we covered earlier this week.

You can stream it below. Additionally, we will be doing our Live Blog on X and Facebook.
