
SpaceX customer reaffirms third Falcon Heavy mission’s Q2 2019 launch target

Falcon Heavy prepares for its inaugural February 2018 launch. (SpaceX)


Taiwan’s National Space Organization (NSO) has reaffirmed a Q2 2019 launch target for SpaceX’s third-ever Falcon Heavy mission, a US Air Force-sponsored test launch opportunity known as Space Test Program 2 (STP-2).

STP-2 is set to host approximately two dozen customer spacecraft, and one of the largest and most monetarily significant co-passengers is Formosat-7, a six-satellite Earth-sensing constellation built through a roughly $105M partnership between Taiwan’s NSO and the United States’ NOAA (National Oceanic and Atmospheric Administration). If successfully launched, Formosat-7 will dramatically expand Taiwan’s domestic Earth observation and weather forecasting capabilities, important for a nation at high risk of typhoons and flooding rains.

Although Taiwanese officials were unable to offer a target more specific than Q2 2019 (April to June), it’s understood from NASA comments and sources inside SpaceX that STP-2’s tentative launch target currently stands in April. For a number of reasons, chances are high that the ambitious target will slip into May or June. Notably, the simple fact that Falcon Heavy’s next two launches (Arabsat 6A and STP-2) are scheduled within just a few months of each other all but rules out the possibility that both missions will feature all-new side and center boosters, strongly implying that whichever mission flies second will launch on three flight-proven boosters.

Falcon Heavy’s first static fire, Feb. 2018. (SpaceX)

To further ramp up the difficulty (and improbability), those three flight-proven Block 5 boosters would have to launch as an integrated Falcon Heavy, land safely (two at landing zones, one on a drone ship), be transported back to SpaceX facilities, and finally be refurbished and reintegrated for their second launch in no more than 30 to 120 days from start to finish. SpaceX’s record for Falcon 9 booster turnaround (the time between two launches of the same booster) currently stands at 72 days for Block 4 hardware and 74 days for Block 5, meaning the company could effectively need to break its booster turnaround record three times simultaneously in order to preserve a number of possible launch dates for both missions.
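To put that schedule math in rough numbers, here is a back-of-the-envelope sketch in Python. The dates are placeholders chosen only to illustrate a Q2 2019 window, not any actual SpaceX manifest, and the record figure is the Block 5 turnaround cited above:

```python
from datetime import date

# Fastest booster turnaround demonstrated to date, per the figures above
BLOCK_5_RECORD_DAYS = 74  # Block 4's record stands at 72 days

def turnaround_days(first_launch: date, second_launch: date) -> int:
    """Days available to land, refurbish, and re-integrate the boosters."""
    return (second_launch - first_launch).days

# Placeholder dates: Arabsat 6A early in Q2, STP-2 at the very end of Q2 2019
gap = turnaround_days(date(2019, 4, 1), date(2019, 6, 30))
print(f"{gap} days between launches vs. a {BLOCK_5_RECORD_DAYS}-day record")
if gap < BLOCK_5_RECORD_DAYS:
    print("All three boosters would need record-breaking turnarounds")
else:
    print("The gap exceeds SpaceX's demonstrated turnaround record")
```

The tighter the two launches sit within Q2, the smaller that gap becomes, which is exactly why an April Arabsat 6A launch paired with a May or June STP-2 would demand record-setting turnarounds on all three boosters at once.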

If it turns out that the USAF is unwilling to fly its first Falcon Heavy mission on all flight-proven boosters (a strong possibility), or if that was never the plan, STP-2’s claimed Q2 2019 target would likely have to slip several months deeper into 2019. This would afford SpaceX more time and resources to build an additional three new Falcon Heavy boosters (two sides, one center), each of which requires a bare minimum of several weeks of dedicated production time and months of lead time (at least for the center core), all while delaying or significantly slowing the production of other new Falcon boosters.

The exact state of SpaceX’s Falcon 9 and Falcon Heavy production is currently unknown, with indications that the company may be building or may have already finished core B1055 or higher, but it’s safe to say that there is not much slack in the production lines in the first half of 2019. Most importantly, SpaceX likely needs to begin production of the human-rated Falcon 9 boosters that will ultimately launch the company’s first two crewed Crew Dragon missions, tentatively targeted for as early as June and August, respectively.


If the first Falcon 9 set to launch an uncrewed Crew Dragon (B1051) is anything to go by, each human-rated Falcon 9 is put through an exceptionally time-consuming and strenuous range of tests to satisfy NASA’s requirements, demanding a considerable amount of extra resources (infrastructure, staff, time) to be produced and readied for launch. B1051 likely spent 3+ months in McGregor, Texas, performing checks and one or several static fire tests, whereas a typical Falcon booster spends no more than 3-6 weeks at SpaceX’s test facilities before shipping to its launch pad.

Ultimately, time will tell which hurdle the company’s executives (and hopefully engineers) have selected for its next two Falcon Heavy launches: an extraordinary feat of Falcon reusability or a Tesla-reminiscent stretch of Falcon production hell.



Eric Ralph is Teslarati's senior spaceflight reporter and has been covering the industry in some capacity for almost half a decade, largely spurred in 2016 by a trip to Mexico to watch Elon Musk reveal SpaceX's plans for Mars in person. Aside from spreading interest and excitement about spaceflight far and wide, his primary goal is to cover humanity's ongoing efforts to expand beyond Earth to the Moon, Mars, and elsewhere.


NVIDIA Director of Robotics: Tesla FSD v14 is the first AI to pass the “Physical Turing Test”

After testing FSD v14, Fan stated that his experience with FSD felt magical at first but soon started to feel routine.


NVIDIA Director of Robotics Jim Fan has praised Tesla’s Full Self-Driving (Supervised) v14 as the first AI to pass what he described as a “Physical Turing Test.”

After testing FSD v14, Fan stated that his experience with the system felt magical at first but soon started to feel routine. And just like smartphones today, removing it now would “actively hurt.”

Jim Fan’s hands-on FSD v14 impressions

Fan, a leading researcher in embodied AI who currently works on Physical AI at NVIDIA and spearheads the company’s Project GR00T initiative, noted that he was actually late to the Tesla game. He was, however, one of the first to try out FSD v14.

“I was very late to own a Tesla but among the earliest to try out FSD v14. It’s perhaps the first time I experience an AI that passes the Physical Turing Test: after a long day at work, you press a button, lay back, and couldn’t tell if a neural net or a human drove you home,” Fan wrote in a post on X. 

Fan added: “Despite knowing exactly how robot learning works, I still find it magical watching the steering wheel turn by itself. First it feels surreal, next it becomes routine. Then, like the smartphone, taking it away actively hurts. This is how humanity gets rewired and glued to god-like technologies.”


The Physical Turing Test

The original Turing Test was conceived by Alan Turing in 1950, and it was aimed at determining whether a machine could exhibit behavior equivalent to, or indistinguishable from, that of a human. By focusing on text-based conversations, the original Turing Test set a high bar for natural language processing and machine learning.

This test has been passed by today’s large language models. However, the capability to converse in a humanlike manner is a completely different challenge from performing real-world problem-solving or physical interactions. Thus, Fan introduced the Physical Turing Test, which challenges AI systems to demonstrate intelligence through physical actions.

Based on Fan’s comments, Tesla has demonstrated these intelligent physical actions with FSD v14. Elon Musk agreed with the NVIDIA executive, stating in a post on X that with FSD v14, “you can sense the sentience maturing.” Musk also praised Tesla AI, calling it the best “real-world AI” today.


Tesla AI team burns the Christmas midnight oil by releasing FSD v14.2.2.1

The update was released just a day after FSD v14.2.2 started rolling out to customers. 


Tesla is burning the midnight oil this Christmas, with the Tesla AI team quietly rolling out Full Self-Driving (Supervised) v14.2.2.1 just a day after FSD v14.2.2 started rolling out to customers. 

Tesla owner shares insights on FSD v14.2.2.1

Longtime Tesla owner and FSD tester @BLKMDL3 shared some insights following several drives with FSD v14.2.2.1 in rainy Los Angeles conditions with standing water and faded lane lines. He reported zero steering hesitation or stutter, confident lane changes, and maneuvers executed with precision that evoked the performance of Tesla’s driverless Robotaxis in Austin.

Parking performance was impressive, with most spots nailed perfectly, including tight, sharp turns, in single attempts without shaky steering. The only minor offset came when another vehicle was parked over the line, which FSD accommodated by shifting over a few extra inches. In rain that typically erases road markings, FSD visualized lanes and turn lines better than most human drivers could, positioning itself flawlessly when entering new streets as well.

“Took it up a dark, wet, and twisty canyon road up and down the hill tonight and it went very well as to be expected. Stayed centered in the lane, kept speed well and gives a confidence inspiring steering feel where it handles these curvy roads better than the majority of human drivers,” the Tesla owner wrote in a post on X.

Tesla’s FSD v14.2.2 update

Just a day before FSD v14.2.2.1’s release, Tesla rolled out FSD v14.2.2, which was focused on smoother real-world performance, better obstacle awareness, and precise end-of-trip routing. According to the update’s release notes, FSD v14.2.2 upgrades the vision encoder neural network with higher resolution features, enhancing detection of emergency vehicles, road obstacles, and human gestures.


New Arrival Options also allow users to select preferred drop-off styles, such as Parking Lot, Street, Driveway, Parking Garage, or Curbside, with the navigation pin automatically adjusting to the ideal spot. Other refinements include pulling over for emergency vehicles, real-time vision-based detours around blocked roads, improved gate and debris handling, and Speed Profiles for customized driving styles.


Elon Musk’s Grok records lowest hallucination rate in AI reliability study

Grok achieved an 8% hallucination rate, 4.5 customer rating, 3.5 consistency, and 0.07% downtime, resulting in an overall risk score of just 6.


A December 2025 study by casino games aggregator Relum has identified Elon Musk’s Grok as one of the most reliable AI chatbots for workplace use, boasting the lowest hallucination rate at just 8% among the 10 major models tested. 

In comparison, market leader ChatGPT registered one of the highest hallucination rates at 35%, just behind Google’s Gemini at 38%. The findings highlight Grok’s factual prowess despite the model’s lower market visibility.

Grok tops hallucination metric

The research evaluated chatbots on hallucination rate, customer ratings, response consistency, and downtime rate. The chatbots were then assigned a reliability risk score from 0 to 99, with higher scores indicating bigger problems.

Grok achieved an 8% hallucination rate, a 4.5 customer rating, 3.5 consistency, and 0.07% downtime, resulting in an overall risk score of just 6. DeepSeek followed closely with 14% hallucinations and zero downtime, earning an even lower risk score of 4. ChatGPT’s high hallucination and downtime rates gave it the worst risk score of 99, followed by Claude and Meta AI, which earned reliability risk scores of 75 and 70, respectively.
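For quick reference, the figures quoted above can be collected into a small Python snippet. The field names and ordering are ours rather than Relum’s, and any value the article does not quote is left as None:

```python
# Figures as reported above; a lower risk score (0-99) means a more reliable chatbot
study_results = [
    # (model, hallucination rate %, reliability risk score)
    ("DeepSeek", 14, 4),
    ("Grok", 8, 6),
    ("Meta AI", None, 70),
    ("Claude", None, 75),
    ("ChatGPT", 35, 99),
    # Gemini's hallucination rate is quoted at 38%, but no risk score is given above
]

for model, hallucination, risk in sorted(study_results, key=lambda row: row[2]):
    rate = f"{hallucination}%" if hallucination is not None else "n/a"
    print(f"{model:<9} hallucination: {rate:>4}  risk score: {risk}")
```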

Why low hallucinations matter

Relum Chief Product Officer Razvan-Lucian Haiduc shared his thoughts about the study’s findings. “About 65% of US companies now use AI chatbots in their daily work, and nearly 45% of employees admit they’ve shared sensitive company information with these tools. These numbers show well how important chatbots have become in everyday work. 

“Dependence on AI tools will likely increase even more, so companies should choose their chatbots based on how reliable and fit they are for their specific business needs. A chatbot that everyone uses isn’t necessarily the one that works best for your industry or gives accurate answers for your tasks.”


In a way, the study reveals a notable gap between AI chatbots’ popularity and performance, with Grok’s low hallucination rate positioning it as a strong choice for accuracy-critical applications, despite Grok seeing far less use than more mainstream AI applications such as ChatGPT.
