The Tesla Autopilot Excuse: How EV ignorance created the perfect storm for a misinformation nightmare

Credit: Tesla

It was only a few hours after the accident, and a bold statement was already making the rounds in the mainstream media. Another Tesla had crashed, and this time, it took the lives of two men in Texas. Facing inquiries from journalists eager for clarity about what happened in the tragic incident, Harris County Pct. 4 Constable Mark Herman offered a remarkably confident statement: there was no one in the ill-fated Model S’ driver’s seat when it crashed. 

“They are 100% certain that no one was in the driver seat driving that vehicle at the time of impact. They are positive. And again, the height from the back seat to the front seat, that would be almost impossible, but again our investigators are trained. They handle collisions. Several of our folks are reconstructionists, but they feel very confident just with the positioning of the bodies after the impact that there was no one driving that vehicle,” Herman said, also noting that the electric car’s fire was out of control for four hours. 

This statement, as well as the headlines that followed it, has since been proven false. Today, it stands as a remarkable case study in how misinformation spreads, and how the truth — even if it eventually emerges from legitimate sources — becomes largely ignored. This is the story of a Model S crash, rushed statements, and how general ignorance of electric vehicles could result in a massive misinformation nightmare. 

But to get a complete view of this story, one has to go back to that fateful night of April 17, 2021, when two men, a 59-year-old Tesla owner and his 69-year-old passenger, crashed after traveling only about 550 feet. The vehicle departed the road on a curve, drove over a curb, hit a drainage culvert and a raised manhole, and smashed into a tree. It was ablaze following the crash.

The location where the accident happened. (Credit: NTSB)

The Accident

As it is with other Tesla crashes, the Model S crash in Texas immediately caught the attention of national media. It did not take long before even foreign outlets were running with the story. It was during this initial wave of media attention that Constable Mark Herman noted that investigators were 100% sure that there was no one driving the car when it crashed. This statement was gold to numerous media outlets, with some like the New York Post posting a tweet noting that the ill-fated Tesla was on Autopilot. It’s pertinent to note that the Constable never mentioned Autopilot, though his statement declaring that there was no one in the driver’s seat seemed like a strong enough link to the driver-assist suite. 

Soon, even organizations such as Consumer Reports joined the fray, graciously demonstrating that Autopilot could indeed be “fooled” into operating without a human in the driver’s seat. Consumer Reports‘ walkthrough was thorough, showing audiences exactly what needs to be done to defeat Autopilot’s safety measures. This stunt caught the eye of both national and international media as well, and by this time, the narrative was set: Teslas can drive themselves without a driver, and Autopilot could kill. It’s a chilling thought, but it is one that seemed to be casually supported by Ford CEO Jim Farley, who shared Consumer Reports‘ Autopilot defeat device walkthrough on his personal Twitter page. 

This is not to say that the narrative surrounding the fatal Model S crash in Texas was ironclad, however. Just days after the initial crash, Palmer Buck, fire chief for The Woodlands Township Fire Department, told the Houston Chronicle that contrary to some reports in the media, the ill-fated Model S was not ablaze for four hours. The fire chief also stated that firefighters did not call Tesla for help, and he was unaware of any hotlines for tips on how to control a battery fire. 


The First Cracks — And A Persistent Misunderstanding

Interestingly enough, even Constable Herman himself seemed less sure about his information later on, noting in a statement to Reuters that his investigators were “almost 99.9% sure” that there was no one in the driver’s seat of the ill-fated car. This was despite Herman noting that they had executed a search warrant on Tesla to secure data about the tragic incident. Meanwhile, Elon Musk went on Twitter to state that data logs so far showed that the ill-fated vehicle was not on Autopilot when it crashed. 


Tesla’s online community took it upon themselves to make sense of the situation, which seemed to have red flags all over the place. The Constable’s statements seemed premature at best, and reports about the vehicle’s fire had been proven false by the fire chief. Couple this with Elon Musk noting that Autopilot was not involved, and it was no surprise that the crash became a topic for analysis and conversations among Tesla supporters. These efforts, however, were largely dismissed if not mocked, with media outlets such as VICE stating that the behavior of the Tesla sleuths was akin to those who believe in conspiracy theories.

“Rather than waiting for the two different federal authorities investigating the crash to publish their findings, some Tesla owners are engaging in the classic behavior of conspiracy theorists and amateur internet sleuths in an apparent attempt to cast doubt on even the most basic facts surrounding the crash,” the publication noted. 

More cracks in the initial “Autopilot crash” narrative emerged during the company’s Q1 2021 earnings call. Lars Moravy, Tesla’s vice president of vehicle engineering, stated that the company had conducted tests with investigators and determined that Autosteer could not be engaged in the area. He also stated that, judging by the distance from the owner’s home to the crash site, the Model S would only have accelerated to about 30 mph over the entire 550-foot distance if Adaptive Cruise Control had been used. This was undoubtedly a clarification of the incident, but like many things in this story, it was also misunderstood. 
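Moravy’s 30 mph figure is at least consistent with basic kinematics. A back-of-the-envelope check using the constant-acceleration relation v² = 2ad, with an assumed gentle cruise-control-style acceleration of 0.5 m/s² (an illustrative guess on my part, not a figure from Tesla or the NTSB), lands in the same ballpark:

```python
import math

# Rough sanity check: starting from rest, how fast could a car be going
# after ~550 ft under a gentle, cruise-control-like acceleration?
DISTANCE_FT = 550
FT_TO_M = 0.3048
MS_TO_MPH = 2.23694

# Assumed comfort-limited acceleration (~0.5 m/s^2). This value is an
# illustrative assumption, not data from Tesla or the NTSB report.
accel_ms2 = 0.5

distance_m = DISTANCE_FT * FT_TO_M              # ~167.6 m
v_ms = math.sqrt(2 * accel_ms2 * distance_m)    # v^2 = 2·a·d, from rest
v_mph = v_ms * MS_TO_MPH

print(f"Speed after {DISTANCE_FT} ft: {v_mph:.1f} mph")  # ~29 mph
```

Under that assumed acceleration, the car would reach roughly 29 mph over the 550-foot run, in line with the figure Moravy cited.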

Not long after Tesla’s Q1 2021 earnings call, CBS published a piece titled “At Least One Tesla Autopilot Feature Was Active During Texas Crash That Killed 2.” It’s definitely a catchy headline, and one that was sure to draw a decent amount of eyes. There was only one problem: the whole premise of the article was false. To make matters worse, Texas Rep. Kevin Brady shared the CBS piece on Twitter, noting that “Despite early claims by (Tesla and Elon Musk), Autopilot WAS engaged in (the) tragic crash in The Woodlands. We need answers.” 


A Grassroots Movement

In a world where misinformation is prevalent, and where media outlets are not always incentivized to publish completely accurate reports, citizen journalism has the potential to become a voice of reason. In the case of the Tesla Texas crash, it certainly was. After conversations with sources, some of whom have opted to remain anonymous, Teslarati could surmise that it was the efforts of regular people, from electric vehicle advocates to space enthusiasts inspired by Elon Musk’s SpaceX, that may have ultimately helped get the right information about the incident to the right place. 


Days after the incident, and a few weeks before the release of the National Transportation Safety Board (NTSB) preliminary report, @GoGundam1, a Texas-based SpaceX advocate, heard alarm bells after Constable Herman confidently declared that he was 100% sure there was no one in the driver’s seat of the ill-fated Model S. Familiar with Elon Musk’s companies, the SpaceX enthusiast was also knowledgeable about Tesla and its products, which made the Constable’s statements seem disingenuous at best. Annoyed by the noticeably false narrative that was forming, the space advocate sent out some feelers to test the waters. 

The story that emerged was quite remarkable. Information gathered by citizen informants suggested that by April 22, Constable Herman’s office was already in possession of video evidence that directly contradicted the narrative initially presented to the media. It was a disturbing thought, but informants also suggested that the Constable’s office intended to sit on the information for as long as possible. Granted, these events may seem like the plot of a semi-decent movie, but considering the Constable’s relative silence after his statements about a search warrant being served on Tesla, it does seem like the motivation for a follow-up report clarifying the incident was not really there. 

Pertinent information about the Tesla Texas crash, no matter how valuable, would be next to useless if it did not catch the attention of the right entities. And thus, with the information gathered, the SpaceX enthusiast decided to reach out to members of the Tesla community for help. It was a challenging task, but eventually, @LordPente, a longtime Tesla advocate, decided to lend a hand. After numerous messages to other members of the Tesla community, the longtime EV advocate appeared to hit a breakthrough by (seemingly) reaching someone at Tesla. The SpaceX enthusiast, for his part, failed to get in touch with Tesla but was able to send a report to the NTSB, tipping off the agency about the additional video evidence in the Constable’s office. 

During Teslarati’s conversation with the informant and the Tesla advocate, both noted that they were not really sure if their information reached the right entities. However, something happened not long after which suggested that it did. 

The remains of the ill-fated Tesla Model S (Credit: NTSB)

The Lie Unravels

On May 10, 2021, the National Transportation Safety Board (NTSB) published its preliminary report about the Tesla Model S’ fatal Texas crash. As per the NTSB’s report, “footage from the owner’s home security camera shows the owner entering the car’s driver’s seat and the passenger entering the front passenger seat.” Apart from this, the NTSB also noted that tests of a similar vehicle at the crash location showed that Autopilot could not be engaged in the area, just as Tesla and the electric vehicle community suggested amidst the initial wave of “Autopilot crash” reports. The investigation is ongoing, of course, but based on what the NTSB has published so far, it appears that Autopilot has been absolved in the incident. 

The findings presented in the NTSB’s report all but confirmed what Elon Musk and Tesla supporters were arguing online. It may be disappointing to media outlets like VICE, but as it turned out, the conspiracy theorist-like behavior exhibited by some Tesla sleuths online turned out to be justified. There really was misinformation being floated around, and if it wasn’t for the efforts of a few individuals, pertinent information about the incident might not have been submitted to Tesla or the NTSB on time. 

Interestingly enough, Harris County Pct. 4 Constable Mark Herman has remained silent. Teslarati attempted to reach his office through email but was unsuccessful. The Constable has yet to issue a correction or retraction of his initial and now-debunked statements about the incident. Individuals such as Texas Rep. Kevin Brady have not admitted to making a mistake either. 

How Misinformation Becomes Truth

Tesla, being a rather unorthodox company led by an equally unorthodox man, tends to fall victim to misinformation — lots and lots of it. The story of the Texas crash is a great example, but it is one drop in a whole bucket of inaccurate reports about the company. Tesla CEO Elon Musk has seemingly thrown in the towel on mainstream media coverage, reportedly abolishing Tesla’s PR department last year. This, of course, has pretty much opened the doors to even more misinformation — and to a point, even disinformation — which, in turn, becomes the general public’s truth. 

For professional insights on how misinformation becomes accepted, Teslarati reached out to Stephen Benning, a Professor of Psychology at the University of Nevada, Las Vegas. Professor Benning explained that humans tend to have an anchoring bias, in which the first information used to make a judgment influences that judgment. While anchoring bias is typically discussed in the context of numerical judgments (like estimates of how much something is worth), it could also play out when people hear the first reports of what happened. This is most notable if the event is memorable, like a fatal Tesla crash. The initial information would likely stick in people’s minds and create a framework that sets their beliefs about an event. 


“Because initial reports set people’s prior beliefs, additional information has to weigh against established beliefs. People might have additional biases at play, like the confirmation bias that filters out information that isn’t consistent with a previous set of beliefs. It’s as if people put up filters to help themselves maintain the consistency of their beliefs at the expense of their potential correspondence with reality. The initial crash reports were also likely more vivid than the drier details of the subsequent investigation, so the availability heuristic might make those initial reports more vivid and accessible in people’s memories when they think about the crash – even if they’ve followed the subsequent reports,” he wrote. 


Emma Frances Bloomfield, Ph.D., an Assistant Professor of Communication Studies at the University of Nevada, Las Vegas with expertise in strategies for combating misinformation, explained to Teslarati that misinformation and disinformation travel very quickly because they tend to be compelling and engaging, all while confirming an audience’s biases. This made the Texas crash a perfect storm of sorts, as it was a compelling event that catered to biases against Tesla and its Autopilot system. Unfortunately, Assistant Professor Bloomfield also highlighted that once misinformation sets in, it takes a ton of effort to overturn. 

“To address misinformation, people can create more complete stories that replace the incorrect one, provide trustworthy authority figures to deliver the message, and not repeat the false information when making the correction. You can also emphasize the importance of accurate information to make the best decisions moving forward and highlight how those changes might benefit the audience/consumer. We also say, ‘correct early and correct often’ to try and get ahead of the temporal advantage misinformation has and to counter the repetition of the false information,” she wrote. 


A Battle That Tesla Doesn’t Need To Lose

If there is one thing highlighted by Professor Benning and Assistant Professor Bloomfield, it is that misinformation is hard to battle once it has settled in. And for a lie to settle in, it has to be repeated. The Texas crash demonstrated this. It didn’t start with a lie; it started with a premature, careless statement that could easily be twisted into one.

The Constable’s certainty that there was no one in the driver’s seat was premature, and reports framing the incident as an Autopilot crash were premature at best and a lie at worst. Reports about an uncontrollable blaze burning for four hours were false as well. Yet the narrative was so entrenched and unchallenged that even when the NTSB preliminary report came out, the needle barely moved. 

Elon Musk’s reservations about maintaining a relationship with the media are understandable. Years of inaccurate reports tend to do that to a person. However, Tesla could also adopt a much more assertive anti-misinformation strategy. Tesla China has been doing this as of late, to great effect. Anyone following the Tesla China story would know that the company was embroiled in a PR storm involving alleged “brake failure” incidents in the company’s vehicles. But after an assertive legal campaign from Tesla China, media outlets have issued apologies for misreporting on the company, and social media personalities have admitted to making up alleged incidents that painted the company’s vehicles in a negative light. Granted, such strategies may not be as effective in the United States, but something has to be done. What that something is remains an open question. 

Do you have anything to share with the Teslarati Team? We’d love to hear from you, email us at tips@teslarati.com.


Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips--or even to just say a simple hello--send a message to his email, simon@teslarati.com or his handle on X, @ResidentSponge.



Tesla Summon got insanely good in FSD v14.3.2 — Navigation? Not so much

There were two new lines of improvements in the release notes: one addressing Actually Smart Summon (ASS), and another that now allows drivers to choose a reason for an intervention via a small menu during disengagement.

(Photo: Hector Perez/YouTube)

Tesla Full Self-Driving v14.3.2 began rolling out to some owners earlier this week, and there are some notable improvements that came with this update.

Overall operation saw a handful of slight improvements, especially with parking performance, which has been the most notable difference with the arrival of FSD v14.3. However, there are still some very notable shortcomings, most notably with region-specific signage and navigation.

Tesla Actually Smart Summon (ASS) improvements

There are noticeable improvements to ASS operation, which has definitely been inconsistent in the past. Tesla wrote in the release notes for v14.3.2:


“Unified the model between Actually Smart Summon, FSD, and Robotaxi for more capable and reliable behavior.”

As recently as this month, I used Summon with no success. The car navigated the parking lot incorrectly, left the range at which Summon can be operated, and lost signal in the middle of the lot.

This caused me to sprint across the lot to retrieve the vehicle.

Unfortunately, Summon was not dependable or accurate enough to use regularly. It appears Tesla might have bridged the gap needed to make it an effective feature, as two tests in parking lots showed that Summon was more responsive and faster to navigate to the chosen location.


It also did so without hesitation, confidently, and at a comfortable speed. I was able to test it twice at different distances.


I plan to test this more thoroughly and regularly over the next few weeks. I initially avoided using it in a congested parking lot because I have not had overwhelming success with Summon in the past; I wanted to set a low baseline to see if it could simply pull up to the spot I pinned in the Tesla app.

It was two for two, which is a big improvement because I don’t think I ever had successful Summon attempts back-to-back. It just seems more confident than ever before.


New Disengagement Categories

This is a really good idea from Tesla, but there are some issues with it. The categories you can select are Critical, Comfort, Preference, and Other.

I think prompting drivers with the actual reasons people take over, such as “Traveling Too Fast,” “Incorrect Maneuver,” or “Navigation Error,” would be more beneficial.

I say this because how we each categorize things might differ. For example, I shared a video of an intervention where the car had navigated to a parking lot exit and put its left blinker on, despite left turns not being allowed there.

I disengaged and chose Critical as the reason; it’s not a comfort issue, it’s not a preference, it’s quite literally an illegal turn, and it’s also dangerous because it cuts across several lanes of traffic and requires a roughly 180-degree maneuver.


Some said I should not have labeled this as Critical, but that’s the category that best characterized the disengagement.


Categorizing interventions is a good thing, but it’s kind of hard to determine how to label them correctly.

Inconsistency with Regional Traffic Patterns

Tesla Full Self-Driving is pretty inconsistent with how it handles regional or local traffic patterns and road rules. The most frequent example I like to use is that of the “Except Right Turn” stop sign, which has become a notorious sighting on our social media platforms.

In the initial rollout of v14.3, my Model Y successfully navigated through one of these stop signs with no issues. However, testing at two of these stop signs yesterday showed that it is still unsure how to read the signs and navigate through them properly.

Off camera, I approached another one of these signs and felt the car coming to a stop, so I nudged it forward with the accelerator pedal.

This helped the car go through the sign without stopping, but I could feel the bucking of the vehicle as the car really wanted to stop.

Musk said on the earnings call earlier this week that unsupervised FSD would probably be available in some regions before others, including on a state-by-state basis in the U.S.


“It’s difficult to release this like to everyone everywhere all at once because we do want to make sure that they’re not unique situations in a city that particularly complex intersection or — actually, they tend to be places where people get into accidents a lot because they’re just — perhaps there’s — and like I said, an unsafe intersection or bad road markings or a lot of weather challenges. So I think we would release unsupervised gradually to the customer fleet as we feel like a particular geography is confirmed to be safe.”

This could be one of those examples that Tesla just has to figure out.

Highway Operation

Full Self-Driving is already pretty good at routine roadway navigation, so I don’t have too much to report here.

However, I was happy with FSD’s decision-making at several points, including its choice not to pass a slightly slower car and to remain in the right lane as we approached the off-ramp.


Better Maneuvering at Stop Signs

Many FSD users report some strange operations at stop signs, especially at four-way intersections where the stop sign and the line on the road are not aligned with one another.


I experienced this quite frequently and found that FSD would actually double stop: once at the stop sign and again at the line.

This created some interesting scenarios, and many cars honked at me when the second stop happened. Other drivers who had waved me on to proceed through the intersection would become frustrated at the second stop.

FSD seems to have worked through this particular maneuver.

FSD should know to go to the more appropriate location (whichever provides better visibility), and proceed when it is the car’s turn to move. The double stop really ruined the flow of traffic at times and generally caused some frustration from other drivers.



Tesla plans to resolve its angriest bunch of owners: here’s how

Since the rollout of the AI4 chip in Tesla vehicles, owners with the last generation self-driving chip, known as Hardware 3, have been persistent in their quest for a solution to their issue: they were told their cars were capable of unsupervised Full Self-Driving. It turns out the cars are not.

Credit: Tesla Asia/Twitter

Tesla has a plan to make Hardware 3 owners whole after CEO Elon Musk admitted that those with that self-driving chip in their cars will not have access to unsupervised Full Self-Driving.

The company’s strategy is so crazy that it is sort of hard to believe.

During the Tesla Q1 earnings call on Wednesday, Musk finally clarified what the company’s plans are for Hardware 3 owners, what they will be offered, and what Tesla will have to do internally to prepare for it.

The answer was somewhat mind-boggling.

Musk said:


“Unfortunately, Hardware 3 — I wish it were otherwise, but Hardware 3 simply does not have the capability to achieve unsupervised FSD. We did think at one point it would have that, but relative to Hardware 4, it has only 1/8 of the memory bandwidth of Hardware 4. And memory bandwidth is one of the key elements needed for unsupervised FSD.”

He continued, stating that HW3 owners would have the opportunity to trade their cars in at a discounted rate in order to get the AI4 chip:

“So for customers that have bought FSD, what we’re offering is essentially a trade-in — like a discounted trade-in for cars that have AI4 hardware, and we’ll also be offering the ability to upgrade the car, to replace the computer. And you also need to replace the cameras, unfortunately, to go to Hardware 4.”

Obviously, Tesla has a lot of owners to work with to make this whole thing right. Musk was adamant that HW3 would be capable of FSD, and now that the company has finally admitted that it is not, some things could come of this.


There has been open talk about some sort of class action lawsuit against Tesla. The promises that Tesla made previously could be considered a breach of contract or even false advertising, and that’s according to Grok, Musk’s own AI program.

Musk went on to say that Tesla would likely have to establish new microfactories to effectively and efficiently replace HW3 computers and cameras:

“…So to do this efficiently, we’re going to have to set up, like kind of micro factories or small factories in major metropolitan areas in order to do it efficiently. Because if it’s done just at the service center, it is extremely slow to do so and inefficient. So we basically need like many production lines to make the change.”

This is going to be an extremely costly process, especially if Tesla has to buy real estate and equipment to complete this work. Additionally, there was no word on pricing, but Musk never said it would be free. It will likely come with some kind of price tag, and HW3 owners, after being left hanging for so long, will have something to say about that.



SpaceX just got pulled into the biggest weapons program in U.S. history

SpaceX joins the Golden Dome software group, deepening its role in America’s most expensive defense program.

US Golden Dome space defense system (Concept render by Grok)

SpaceX has joined a nine-company group developing the core operating software for the Golden Dome, America’s next-generation missile defense system. According to a Bloomberg report, SpaceX is focused on integrating satellite communications for military operations and is working alongside eight other defense and artificial intelligence companies, including Anduril Industries, Palantir Technologies, and Aalyria Technologies, to build software connecting missile defense capabilities.

The Golden Dome concept dates back to President Trump’s 2024 campaign, and on January 27, 2025, he signed an executive order directing the U.S. Armed Forces to construct the system before the end of his term. The system is planned to employ a constellation of thousands of satellites equipped with interceptors, with data centers in space providing automated control through an AI network.

Space Force Gen. Michael Guetlein, director of the Golden Dome initiative, has described the software layer as a “glue layer” that would enable officers to manage and control radars, sensors, and missile batteries across services. The consortium is aiming to test the platform this summer.


Trump selected a design in May 2025 with a $175 billion price tag, expected to be operational by the end of his term in 2029, though the Congressional Budget Office projected the cost could reach $831 billion over two decades.

The Golden Dome role is only the latest in a string of military wins for SpaceX. As Teslarati reported, the U.S. Space Force awarded SpaceX a $178.5 million task order on April 1, 2026 to launch missile tracking satellites for the Space Development Agency, covering two Falcon 9 launches beginning in Q3 2027. That came on top of more than $22 billion in government contracts held by SpaceX as of 2024, per President Gwynne Shotwell, spanning NASA resupply missions, classified intelligence satellites through its Starshield program, and military broadband.

The accumulation of defense contracts, now including a seat at the table on the most expensive weapons program in U.S. history, positions SpaceX as the dominant infrastructure provider for American national security in space. With a SpaceX IPO still on the horizon, each new contract adds weight to what is already one of the most consequential companies in aerospace history, raising real questions about how much of America’s defense architecture will depend on a single private operator before it ever trades publicly.
