
SpaceX CEO Elon Musk says major Starship engine bug is fixed as Raptor testing continues

Starhopper awaits its first truly flightworthy Raptor as CEO Elon Musk says SpaceX may have solved the technical bug delaying hop tests. (NASASpaceflight - bocachicagal, SpaceX)


SpaceX CEO Elon Musk has revealed the latest official photo of the company’s Raptor engine in action and indicated that a major technical issue with vibration appears to have been solved, hopefully paving the way for Starhopper’s first untethered flights.

Partly due to Musk’s own involvement in the program, SpaceX’s propulsion development team has struggled to get any single Raptor engine to survive more than 50-100 seconds of cumulative test fires. According to sources familiar with the program, Musk has enforced an exceptionally hardware-rich development program for the first full-scale Raptor engines, to the extent that several have been destroyed so completely that they could barely be used to inform design optimization work. Although likely more strenuous and inefficient than it needed to be, that hardware-rich test program appears to have begun to bear fruit: the sixth engine built (SN06) passed its first tests without exhibiting signs of a problem that plagued most of the five Raptors that came before it.

Resonance: not even once

In his tweet, Musk cryptically noted that a “600 Hz Raptor vibration problem” appears to have been fixed as of SN06’s first few static fire tests since arriving in McGregor, Texas. More likely than not, the self-taught SpaceX executive is referring to the hell that is mechanical resonance in complex machines and structures. Shown below, the Tacoma Narrows Bridge’s 1940 collapse – quite possibly the single most famous civil engineering failure of all time – is an iconic example of the unintuitive power of resonance in complex systems.

An excellent overview of the challenges and fairly young history of mechanical resonance in modern engineering.

When it was inaugurated, the first Tacoma Narrows Bridge was one of the longest suspension bridges ever built, implementing new techniques and technologies that had never been tried at such a large scale. As Grady of Practical Engineering aptly notes, mechanical resonance – in this case triggered by consistent winds running through the Puget Sound – simply wasn’t something that period engineers knew they had to worry about. When rapidly pushing the envelope of engineering and construction, the chances of discovering entirely novel failure modes also increase – it’s simply one of the costs of extreme innovation.
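Why a narrow vibration mode can be so destructive is easy to see with the textbook driven, damped oscillator: amplitude spikes sharply when the driving frequency nears the structure's natural frequency. The sketch below is purely illustrative – the 600 Hz natural frequency echoes Musk's tweet, but the damping ratio and force values are made-up numbers, not Raptor data.

```python
import math

def steady_state_amplitude(f_drive, f_nat=600.0, zeta=0.02, force_per_mass=1.0):
    """Steady-state amplitude of a driven, damped harmonic oscillator.

    f_drive and f_nat are in Hz; zeta is the damping ratio and
    force_per_mass the drive force per unit mass. All values here are
    illustrative assumptions, not engine measurements.
    """
    w = 2 * math.pi * f_drive   # driving angular frequency
    w0 = 2 * math.pi * f_nat    # natural angular frequency
    # Classic magnitude response of a second-order system
    return force_per_mass / math.sqrt((w0**2 - w**2)**2 + (2 * zeta * w0 * w)**2)

# Response at the 600 Hz natural frequency dwarfs the off-resonance response
ratio = steady_state_amplitude(600.0) / steady_state_amplitude(300.0)
print(f"amplitude gain at resonance vs. 300 Hz: {ratio:.1f}x")
```

With light damping, a drive at exactly 600 Hz produces an order of magnitude more displacement than the same force applied well off resonance – which is why even modest, persistent excitation at the wrong frequency can wreck hardware.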

The first finalized Raptor engine (SN01) completed a successful static fire debut on the evening of February 3rd. (SpaceX)
Just five days after its first ignition, SpaceX successfully tested Raptor SN01 at more than twice the thrust of Merlin 1D. (SpaceX)
The latest official photo of Raptor testing in McGregor. This engine is likely SN06, the sixth Raptor produced in 2019. (SpaceX/Elon Musk)

Luckily, SpaceX doesn’t have to contend with a challenge on the scale of testing something as large, complex, and expensive as a suspension bridge. Raptor, Starship, and Super Heavy need not be perfect on the first try, whereas a civil bridge – despite being one of a kind – must be essentially flawless from day one. This is why SpaceX has been chewing through an average of one Raptor engine per month since February 2019: by testing engines to destruction and aggressively comparing engineering expectations with observed behavior and post-test hardware conditions, rapid progress can (theoretically) be made.

Instead of spending another year or more analyzing models and testing subscale engines and components, SpaceX dove into integrated testing of a sort of minimum-viable-product Raptor design, accepting that the path to a flightworthy, finalized design would likely be paved with one or several dozen destroyed engines. According to Musk, the most pressing design deficiency involved a mode of mechanical resonance that may or may not have been predicted over the course of the design process. Dealing with unprecedented conditions, it’s not particularly surprising that some sort of new resonance mode was discovered in Raptor.

For the time being, SpaceX continues to work around the clock to build its first two orbital Starship prototypes (one in Texas, one in Florida), while also outfitting Starhopper and completing any possible engine-less tests in anticipation of the first flightworthy Raptor’s arrival. If Musk’s early analysis proves correct and Raptor SN06 makes it through lengthier static fire tests unscathed over the next week or so, the engine could potentially be delivered to Boca Chica as early as mid-July.



Eric Ralph is Teslarati's senior spaceflight reporter and has been covering the industry in some capacity for almost half a decade, largely spurred in 2016 by a trip to Mexico to watch Elon Musk reveal SpaceX's plans for Mars in person. Aside from spreading interest and excitement about spaceflight far and wide, his primary goal is to cover humanity's ongoing efforts to expand beyond Earth to the Moon, Mars, and elsewhere.



Nvidia CEO Jensen Huang explains difference between Tesla FSD and Alpamayo

“Tesla’s FSD stack is completely world-class,” the Nvidia CEO said.


Credit: Grok Imagine

Nvidia CEO Jensen Huang offered high praise for Tesla’s Full Self-Driving (FSD) system during a Q&A at CES 2026, calling it “world-class” and “state-of-the-art” in design, training, and performance.

More importantly, he also shared some insights about the key differences between FSD and Nvidia’s recently announced Alpamayo system. 

Jensen Huang’s praise for Tesla FSD

Nvidia made headlines at CES following its announcement of Alpamayo, which uses artificial intelligence to accelerate the development of autonomous driving solutions. Due to its focus on AI, many started speculating that Alpamayo would be a direct rival to FSD. This was somewhat addressed by Elon Musk, who predicted that “they will find that it’s easy to get to 99% and then super hard to solve the long tail of the distribution.”

During his Q&A, Nvidia CEO Jensen Huang was asked about the difference between FSD and Alpamayo. His response was extensive:

“Tesla’s FSD stack is completely world-class. They’ve been working on it for quite some time. It’s world-class not only in the number of miles it’s accumulated, but in the way it’s designed, the way they do training, data collection, curation, synthetic data generation, and all of their simulation technologies. 


“Of course, the latest generation is end-to-end Full Self-Driving—meaning it’s one large model trained end to end. And so… Elon’s AD system is, in every way, 100% state-of-the-art. I’m really quite impressed by the technology. I have it, and I drive it in our house, and it works incredibly well,” the Nvidia CEO said. 

Nvidia’s platform approach vs Tesla’s integration

Huang also stated that Nvidia’s Alpamayo system was built around a fundamentally different philosophy from Tesla’s. Rather than developing self-driving cars itself, Nvidia supplies the full autonomous technology stack for other companies to use.

“Nvidia doesn’t build self-driving cars. We build the full stack so others can,” Huang said, explaining that Nvidia provides separate systems for training, simulation, and in-vehicle computing, all supported by shared software.

He added that customers can adopt as much or as little of the platform as they need, noting that Nvidia works across the industry, including with Tesla on training systems and companies like Waymo, XPeng, and Nuro on vehicle computing.

“So our system is really quite pervasive because we’re a technology platform provider. That’s the primary difference. There’s no question in our mind that, of the billion cars on the road today, in another 10 years’ time, hundreds of millions of them will have great autonomous capability. This is likely one of the largest, fastest-growing technology industries over the next decade.”


He also emphasized Nvidia’s open approach, saying the company open-sources its models and helps partners train their own systems. “We’re not a self-driving car company. We’re enabling the autonomous industry,” Huang said.


Elon Musk confirms xAI’s purchase of five 380 MW natural gas turbines

The deal, which was confirmed by Musk on X, highlights xAI’s effort to aggressively scale its operations.


Credit: xAI/X

xAI, Elon Musk’s artificial intelligence startup, has purchased five additional 380 MW natural gas turbines from South Korea’s Doosan Enerbility to power its growing supercomputer clusters. 

The deal, which was confirmed by Musk on X, highlights xAI’s effort to aggressively scale its operations.

xAI’s turbine deal details

News of xAI’s new turbines was shared on social media platform X, with user @SemiAnalysis_ stating that the turbines were produced by South Korea’s Doosan Enerbility. As noted in an Asian Business Daily report, Doosan Enerbility announced last October that it signed a contract to supply two 380 MW gas turbines for a major U.S. tech company. Doosan later noted in December that it secured an order for three more 380 MW gas turbines.

As per the X user, the gas turbines would power an additional cluster equivalent in size to more than 600,000 GB200 NVL72-class GPUs. This should make xAI’s facilities among the largest in the world. In a reply, Elon Musk confirmed that xAI did purchase the turbines. “True,” Musk wrote in a post on X.
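The scale of the purchase is easier to grasp with some back-of-the-envelope arithmetic. Only the 5 × 380 MW figure comes from the report above; the per-GPU power draw is a ballpark assumption for a GB200-class accelerator (including a share of rack and cooling overhead), not a published spec.

```python
# Rough power math for xAI's turbine purchase (illustrative, not official).
turbines = 5
mw_per_turbine = 380                       # per the Doosan Enerbility report
total_capacity_mw = turbines * mw_per_turbine

# Assumed draw per GB200-class GPU, with overhead baked in -- a guess.
gpus = 600_000
kw_per_gpu = 1.4
it_load_mw = gpus * kw_per_gpu / 1000

print(f"turbine capacity: {total_capacity_mw} MW, rough cluster load: {it_load_mw:.0f} MW")
```

Under these assumptions, the five turbines (1,900 MW combined) would comfortably cover a 600,000-GPU cluster's load with headroom for cooling, networking, and utilization swings.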

xAI’s ambitions 

Recent reports have indicated that xAI closed an upsized $20 billion Series E funding round, exceeding the initial $15 billion target to fuel rapid infrastructure scaling and AI product development. The funding, as per the AI startup, “will accelerate our world-leading infrastructure buildout, enable the rapid development and deployment of transformative AI products.”


The company also teased the rollout of its upcoming frontier AI model. “Looking ahead, Grok 5 is currently in training, and we are focused on launching innovative new consumer and enterprise products that harness the power of Grok, Colossus, and 𝕏 to transform how we live, work, and play,” xAI wrote in a post on its website. 


Elon Musk’s xAI closes upsized $20B Series E funding round

xAI announced the investment round in a post on its official website. 


Credit: xAI

xAI has closed an upsized $20 billion Series E funding round, exceeding the initial $15 billion target to fuel rapid infrastructure scaling and AI product development. 

xAI announced the investment round in a post on its official website. 

A $20 billion Series E round

As noted by the artificial intelligence startup in its post, the Series E funding round attracted a diverse group of investors, including Valor Equity Partners, Stepstone Group, Fidelity Management & Research Company, Qatar Investment Authority, MGX, and Baron Capital Group, among others. 

Strategic partners NVIDIA and Cisco Investments also continued support for building the world’s largest GPU clusters.

As xAI stated, “This financing will accelerate our world-leading infrastructure buildout, enable the rapid development and deployment of transformative AI products reaching billions of users, and fuel groundbreaking research advancing xAI’s core mission: Understanding the Universe.”


xAI’s core mission

The Series E funding builds on xAI’s previous rounds, powering Grok advancements and massive compute expansions like the Memphis supercluster. The upsized demand reflects growing recognition of xAI’s potential in frontier AI.

xAI also highlighted several of its breakthroughs in 2025, from the buildout of Colossus I and II, which ended the year with over 1 million H100 GPU equivalents, to the rollout of the Grok 4 series, Grok Voice, and Grok Imagine, among others. The company also confirmed that work is already underway to train the flagship large language model’s next iteration, Grok 5.

“Looking ahead, Grok 5 is currently in training, and we are focused on launching innovative new consumer and enterprise products that harness the power of Grok, Colossus, and 𝕏 to transform how we live, work, and play,” xAI wrote. 
