News
SpaceX set to launch massive satellite on July 2nd: 3 flights in 9 days
SpaceX’s Next Launch is Still Nearly on Time in Spite of BulgariaSat-1 Delays
As first reported earlier this morning by James Dean of Florida Today and now officially confirmed by the launch customer Intelsat, SpaceX’s launch of Intelsat 35e has been scheduled for July 2nd at 4:36 p.m. PDT.
A several-day delay of BulgariaSat-1’s launch from Monday to Friday of last week was logically assumed to mean that the launch of Intelsat 35e, previously scheduled for July 1st, would slip at least several days to allow for the pad checks and repairs that follow every launch. In 2017, this pad flow has generally taken at least a full week, with a static fire occurring once the pad is ready and a launch several days after that. Two weeks has so far been a fairly consistent minimum between launches from the same pad.
A launch from LC-39A on July 2nd would give SpaceX at most nine days from the launch of BulgariaSat-1 to ready the pad once more. Further, Intelsat 35e has a static fire scheduled as early as Thursday this week, six days after the pad’s previous successful launch. I previously wrote about SpaceX potentially conducting three separate missions within the course of two weeks and noted that pulling it off would be a major accomplishment and a proof of concept for some of SpaceX’s loftier goals. Now it appears that SpaceX could launch three separate missions in as few as nine days.
Nine days is, of course, not far off a single week, and successfully pulling off the schedule now officially on the books would lend considerable credence to SpaceX’s stated goal of a regular weekly launch cadence by 2019. In fact, three launches in nine days from two separate pads makes regular weekly launches from two pads appear well within reach for the company, possibly even earlier than 2019.
Intelsat 35e will become the largest communications satellite SpaceX has ever sent to orbit, weighing in at ~6,000 kilograms. Designed to last at least 15 years in geostationary orbit, the satellite is expected to be placed into a higher-energy geostationary transfer orbit in order to reduce the time it takes the commsat to reach its final planned orbit. This translates to an expendable Falcon 9 Full Thrust that will be pushed close to its payload and orbit limits. While it is now somewhat sad to see a Falcon 9 first stage unable to attempt recovery, this will still be a thoroughly exciting launch, especially given the impressive mass of the satellite.
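The article does not specify the mission’s actual orbital targets, so the sketch below is purely illustrative: it assumes a 185 km perigee, a hypothetical 90,000 km supersynchronous apogee, and a 27° inclination, and uses the vis-viva equation to show why a higher-energy transfer orbit leaves the satellite less delta-v to supply on its own, which means less propellant spent and, typically, less time raising its orbit.

```python
import math

# Illustrative sketch only: the perigee, apogee, and inclination figures below
# are assumptions for demonstration, not the actual Intelsat 35e mission values.

MU = 398_600.4418        # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_378.137      # Earth's equatorial radius, km
R_GEO = 42_164.0         # geostationary orbital radius, km

def vis_viva(r, a):
    """Orbital speed (km/s) at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU * (2.0 / r - 1.0 / a))

def combined_burn(v1, v2, delta_i_deg):
    """Delta-v for a burn that changes speed from v1 to v2 while rotating the
    orbital plane by delta_i degrees at the same time (law of cosines)."""
    di = math.radians(delta_i_deg)
    return math.sqrt(v1**2 + v2**2 - 2.0 * v1 * v2 * math.cos(di))

def satellite_dv_standard_gto(perigee_alt=185.0, inclination=27.0):
    """Satellite delta-v from a standard GTO (apogee at GEO altitude):
    one combined circularization + plane-change burn at apogee."""
    r_p = R_EARTH + perigee_alt
    a = (r_p + R_GEO) / 2.0
    v_apogee = vis_viva(R_GEO, a)
    v_geo = vis_viva(R_GEO, R_GEO)           # circular GEO speed
    return combined_burn(v_apogee, v_geo, inclination)

def satellite_dv_supersync(perigee_alt=185.0, apogee_alt=90_000.0, inclination=27.0):
    """Satellite delta-v from an assumed supersynchronous GTO:
    burn 1 at the high apogee raises perigee to GEO radius and removes the
    inclination (cheap, because the spacecraft is moving slowly up there);
    burn 2 at GEO radius lowers the apogee back down to GEO."""
    r_p = R_EARTH + perigee_alt
    r_a = R_EARTH + apogee_alt
    a1 = (r_p + r_a) / 2.0                   # initial transfer orbit
    a2 = (R_GEO + r_a) / 2.0                 # intermediate orbit after burn 1
    burn1 = combined_burn(vis_viva(r_a, a1), vis_viva(r_a, a2), inclination)
    burn2 = vis_viva(R_GEO, a2) - vis_viva(R_GEO, R_GEO)
    return burn1 + burn2

print(f"standard GTO:         {satellite_dv_standard_gto():.2f} km/s")   # ~1.80 km/s
print(f"supersynchronous GTO: {satellite_dv_supersync():.2f} km/s")      # ~1.55 km/s
```

Under these assumed numbers, the higher-energy transfer saves the satellite roughly a quarter of a kilometer per second, which is why the extra launcher performance of an expendable Falcon 9 is worth spending on the heaviest commsats.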

Another successful recovery for 1029 on June 23, 2017. Note the dramatic lean and differing angles of the legs on the left, courtesy of a very hard landing. (SpaceX)
SpaceX’s constant iteration of the Falcon 9 means that Intelsat 35e did not have to wait for Falcon Heavy, as the current default version of the vehicle (v1.2) has begun to overlap the original performance estimates for the first Falcon Heavy concept. Of note, the vehicles that launched last weekend have approximately double the lifting capacity of the original Falcon 9, which last flew in 2013.
The static fire for the launch of Intelsat 35e is currently scheduled for this Thursday. Check back at Teslarati for confirmation of that test as we find ourselves once more just a handful of days away from yet another SpaceX launch.
News
NVIDIA Director of Robotics: Tesla FSD v14 is the first AI to pass the “Physical Turing Test”
After testing FSD v14, Fan stated that his experience with FSD felt magical at first, but it soon started to feel routine.
NVIDIA Director of Robotics Jim Fan has praised Tesla’s Full Self-Driving (Supervised) v14 as the first AI to pass what he described as a “Physical Turing Test.”
After testing FSD v14, Fan stated that his experience with FSD felt magical at first, but it soon started to feel routine. And just like smartphones today, removing it now would “actively hurt.”
Jim Fan’s hands-on FSD v14 impressions
Fan, a leading researcher in embodied AI who is currently working on Physical AI at NVIDIA and spearheading the company’s Project GR00T initiative, noted that he was actually late to the Tesla game. He was, however, one of the first to try out FSD v14.
“I was very late to own a Tesla but among the earliest to try out FSD v14. It’s perhaps the first time I experience an AI that passes the Physical Turing Test: after a long day at work, you press a button, lay back, and couldn’t tell if a neural net or a human drove you home,” Fan wrote in a post on X.
Fan added: “Despite knowing exactly how robot learning works, I still find it magical watching the steering wheel turn by itself. First it feels surreal, next it becomes routine. Then, like the smartphone, taking it away actively hurts. This is how humanity gets rewired and glued to god-like technologies.”
The Physical Turing Test
The original Turing Test was conceived by Alan Turing in 1950 and was aimed at determining whether a machine could exhibit behavior indistinguishable from that of a human. By focusing on text-based conversation, the original Turing Test set a high bar for natural language processing and machine learning.
This test has been passed by today’s large language models. However, the capability to converse in a humanlike manner is a completely different challenge from performing real-world problem-solving or physical interactions. Thus, Fan introduced the Physical Turing Test, which challenges AI systems to demonstrate intelligence through physical actions.
Based on Fan’s comments, Tesla has demonstrated these intelligent physical actions with FSD v14. Elon Musk agreed with the NVIDIA executive, stating in a post on X that with FSD v14, “you can sense the sentience maturing.” Musk also praised Tesla AI, calling it the best “real-world AI” today.
News
Tesla AI team burns the Christmas midnight oil by releasing FSD v14.2.2.1
The update was released just a day after FSD v14.2.2 started rolling out to customers.
Tesla is burning the midnight oil this Christmas, with the Tesla AI team quietly rolling out Full Self-Driving (Supervised) v14.2.2.1 just a day after FSD v14.2.2 started rolling out to customers.
Tesla owner shares insights on FSD v14.2.2.1
Longtime Tesla owner and FSD tester @BLKMDL3 shared some insights following several drives with FSD v14.2.2.1 in rainy Los Angeles conditions with standing water and faded lane lines. He reported zero steering hesitation or stutter, confident lane changes, and maneuvers executed with precision that evoked the performance of Tesla’s driverless Robotaxis in Austin.
Parking performance also impressed, with most spots, including ones requiring tight, sharp turns, nailed in a single attempt without shaky steering. The one minor offset occurred only because another vehicle was parked over the line, which FSD accommodated by shifting over a few extra inches. In rain that typically erases road markings, FSD visualized lanes and turn lines better than humans, positioning itself flawlessly when entering new streets as well.
“Took it up a dark, wet, and twisty canyon road up and down the hill tonight and it went very well as to be expected. Stayed centered in the lane, kept speed well and gives a confidence inspiring steering feel where it handles these curvy roads better than the majority of human drivers,” the Tesla owner wrote in a post on X.
Tesla’s FSD v14.2.2 update
Just a day before FSD v14.2.2.1’s release, Tesla rolled out FSD v14.2.2, which was focused on smoother real-world performance, better obstacle awareness, and precise end-of-trip routing. According to the update’s release notes, FSD v14.2.2 upgrades the vision encoder neural network with higher resolution features, enhancing detection of emergency vehicles, road obstacles, and human gestures.
New Arrival Options also allow users to select preferred drop-off styles, such as Parking Lot, Street, Driveway, Parking Garage, or Curbside, with the navigation pin automatically adjusting to the ideal spot. Other refinements include pulling over for emergency vehicles, real-time vision-based detours for blocked roads, improved gate and debris handling, and Speed Profiles for customized driving styles.
Elon Musk
Elon Musk’s Grok records lowest hallucination rate in AI reliability study
Grok achieved an 8% hallucination rate, 4.5 customer rating, 3.5 consistency, and 0.07% downtime, resulting in an overall risk score of just 6.
A December 2025 study by casino games aggregator Relum has identified Elon Musk’s Grok as one of the most reliable AI chatbots for workplace use, boasting the lowest hallucination rate at just 8% among the 10 major models tested.
In comparison, market leader ChatGPT registered one of the highest hallucination rates at 35%, just behind Google’s Gemini at 38%. The findings highlight Grok’s factual accuracy despite the model’s lower market visibility.
Grok tops hallucination metric
The research evaluated chatbots on hallucination rate, customer ratings, response consistency, and downtime rate. The chatbots were then assigned a reliability risk score from 0 to 99, with higher scores indicating bigger problems.
Grok achieved an 8% hallucination rate, a 4.5 customer rating, 3.5 consistency, and 0.07% downtime, resulting in an overall risk score of just 6. DeepSeek followed closely with 14% hallucinations and zero downtime for a stellar risk score of 4. ChatGPT’s high hallucination and downtime rates gave it the worst risk score of 99, followed by Claude and Meta AI, which earned reliability risk scores of 75 and 70, respectively.
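Relum’s exact scoring methodology is not spelled out in the article, so the sketch below is only a plausible reconstruction of how four such metrics could be folded into a single 0–99 risk score. The weights, the normalization, and the assumption that the customer-rating and consistency figures are on a 5-point scale are all illustrative; plugging in Grok’s reported numbers will not reproduce the published score of 6 exactly.

```python
# Illustrative sketch only: Relum has not published its formula, so the
# weights and normalization below are assumptions for demonstration.

def reliability_risk_score(hallucination_rate, customer_rating, consistency,
                           downtime_rate, weights=(0.5, 0.2, 0.2, 0.1)):
    """Fold four reliability metrics into a single 0-99 risk score.

    hallucination_rate: fraction of answers containing fabricated content (0-1)
    customer_rating:    average user rating, assumed to be on a 0-5 scale
    consistency:        response-consistency score, assumed to be on a 0-5 scale
    downtime_rate:      fraction of time the service was unavailable (0-1)
    """
    # Convert every metric into a 0-1 "risk" value where higher is worse.
    risks = (
        hallucination_rate,           # already a 0-1 risk
        1 - customer_rating / 5,      # low rating -> high risk
        1 - consistency / 5,          # low consistency -> high risk
        downtime_rate,                # already a 0-1 risk
    )
    weighted = sum(w * r for w, r in zip(weights, risks))
    return round(weighted * 99)       # scale to the study's 0-99 range


# Grok's reported figures: 8% hallucinations, 4.5 rating, 3.5 consistency,
# 0.07% downtime. With these made-up weights the result lands near, but not
# exactly on, the published score of 6.
print(reliability_risk_score(0.08, 4.5, 3.5, 0.0007))
```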

Why low hallucinations matter
Relum Chief Product Officer Razvan-Lucian Haiduc shared his thoughts about the study’s findings. “About 65% of US companies now use AI chatbots in their daily work, and nearly 45% of employees admit they’ve shared sensitive company information with these tools. These numbers show well how important chatbots have become in everyday work.
“Dependence on AI tools will likely increase even more, so companies should choose their chatbots based on how reliable and fit they are for their specific business needs. A chatbot that everyone uses isn’t necessarily the one that works best for your industry or gives accurate answers for your tasks.”
In a way, the study reveals a notable gap between AI chatbots’ popularity and performance, with Grok’s low hallucination rate positioning it as a strong choice for accuracy-critical applications, despite the fact that Grok sees far less use than more mainstream AI applications such as ChatGPT.