SpaceX Starship’s Raptor engine test facilities are about to get a big upgrade, says Elon Musk
According to CEO Elon Musk, SpaceX’s Starship and Super Heavy rockets are about to get a new test stand that will enable additional and more useful static fire tests of their Raptor engines.
These modifications could reportedly lead to a simplified engine design and will generally expand SpaceX’s ability to rapidly acceptance-test a huge number of Raptors – a necessity given that each Starship/Super Heavy pair will need up to 43 engines.
Musk’s additional insight came by way of a tweet response to an article published today on NASASpaceflight.com, discussing SpaceX’s recently unearthed plans to reactivate a test stand that hasn’t seen use in almost half a decade. Known as the tripod stand, the large concrete structure was originally built in the 1990s by Beal Aerospace, a now-defunct spaceflight startup, and came under SpaceX ownership when the company bought the McGregor, Texas facilities in 2003.
SpaceX repurposed the stand to static fire Falcon 9 boosters for a number of years, eventually replacing it with a ground-level installation in 2015 that has since been used to test more than 60 Falcon 9 (and Heavy) boosters. It’s not a huge surprise that SpaceX decided to make the change, given that the tripod stand necessarily placed Falcon boosters several hundred feet off the ground, making the already challenging work of installing and servicing boosters even more arduous (and dangerous) for workers.
NASASpaceflight.com also notes that the stand produced far more noise pollution, encouraging SpaceX to move the replacement stand partially underground.

After four years of inactivity, NASASpaceflight.com photos show that SpaceX is well into the process of refurbishing McGregor’s tripod stand. This time, Musk says it will be modified to support vertical Raptor engine testing, likely requiring a new custom mount and new liquid methane and oxygen propellant farms.
By far the most interesting detail to come out of this development is Musk’s indication that moving Raptor static fires to a vertical stand could actually allow SpaceX to simplify the engine’s design by creating more flight-like test conditions (and thus better data). At the moment, all Raptor acceptance testing is done on separate test stands located elsewhere at SpaceX’s McGregor facilities. Those stands are horizontal, an engineering decision likely motivated by their relatively cheap and fast construction thanks to sidestepping the need for large, water-cooled thrust diverters.

SpaceX does all of its Merlin Vacuum, Merlin 1D, Falcon 9 booster, and upper stage static fire testing on vertical stands at its McGregor facilities, with Raptor’s horizontal stands being the only exception to the rule. As such, it was likely just a matter of time before SpaceX replaced the horizontal Raptor facilities with vertical stands. Given that SpaceX plans to modify an entirely separate stand for vertical testing, the company will likely convert or retire the existing horizontal stands as soon as the tripod stand is up and running.

For Falcon 9 and Heavy, SpaceX has relied on a total of five main engine/vehicle test stands: two for Merlin 1D, one for MVac, one for boosters, and one for upper stages. SpaceX builds engines and rockets in Hawthorne, tests every engine separately in Texas, returns them to Hawthorne, installs them on their respective booster/upper stage, and tests those stages in McGregor before they are shipped to their launch site.
Although that sounds undeniably arduous, the four stands pictured above (plus the F9 booster stand further up) have managed to support the entirety of SpaceX’s 82 launches. A new upper stage test stand is being built, but it has yet to be completed and is only necessary because Falcon 9 upper stages are expendable. According to SpaceX planning documents, Starship and Super Heavy will only perform static fire testing at the launch site. As such, something like the cluster of four Merlin stands above could very likely support the production and testing of 100-200+ Raptor engines annually, enough to build numerous boosters and ships.
SpaceX moves fast, so stay tuned for updates as work continues on the tripod stand and paves the way for even more significant changes at SpaceX’s McGregor, Texas test facilities.
Tesla confirms that work on Dojo 3 has officially resumed
“Now that the AI5 chip design is in good shape, Tesla will restart work on Dojo 3,” Elon Musk wrote in a post on X.
Tesla has restarted work on Dojo 3, its in-house AI training supercomputer initiative, now that the AI5 chip design has reached a stable stage.
Tesla CEO Elon Musk confirmed the update in a recent post on X.
Tesla’s Dojo 3 initiative restarted
In a post on X, Musk said that with the AI5 chip design now “in good shape,” Tesla will resume work on Dojo 3. He added that Tesla is hiring engineers interested in working on what he expects will become the highest-volume AI chips in the world.
“Now that the AI5 chip design is in good shape, Tesla will restart work on Dojo3. If you’re interested in working on what will be the highest volume chips in the world, send a note to AI_Chips@Tesla.com with 3 bullet points on the toughest technical problems you’ve solved,” Musk wrote in his post on X.
Musk’s comment followed a series of recent posts outlining Tesla’s broader AI chip roadmap. In another update, he stated that Tesla’s AI4 chip alone would achieve self-driving safety levels well above human drivers, AI5 would make vehicles “almost perfect” while significantly enhancing Optimus, and AI6 would be focused on Optimus and data center applications.
Musk then highlighted that AI7/Dojo 3 will be designed to support space-based AI compute.
Tesla’s AI roadmap
Musk’s latest comments helped resolve some confusion that emerged last year about Project Dojo’s future. At the time, Musk stated on X that Tesla was stepping back from Dojo because it did not make sense to split resources across multiple AI chip architectures.
He suggested that clustering large numbers of Tesla AI5 and AI6 chips for training could effectively serve the same purpose as a dedicated Dojo successor. “In a supercomputer cluster, it would make sense to put many AI5/AI6 chips on a board, whether for inference or training, simply to reduce network cabling complexity & cost by a few orders of magnitude,” Musk wrote at the time.
Musk later reinforced that idea by responding positively to an X post stating that Tesla’s AI6 chip would effectively be the new Dojo. Considering his recent updates on X, however, it appears that Tesla will be using AI7, not AI6, as its dedicated Dojo successor. The CEO did state that Tesla’s AI7, AI8, and AI9 chips will be developed in short, nine-month cycles, so Dojo’s deployment might actually be sooner than expected.
Elon Musk’s xAI brings 1GW Colossus 2 AI training cluster online
Elon Musk shared his update in a recent post on social media platform X.
xAI has brought its Colossus 2 supercomputer online, making it the first gigawatt-scale AI training cluster in the world, and it’s about to get even bigger in a few months.
Colossus 2 goes live
xAI uses the Colossus 2 supercomputer, together with its predecessor Colossus 1, primarily to train and refine the company’s Grok large language model. In a post on X, Musk stated that Colossus 2 is already operational, making it the first gigawatt training cluster in the world.
Even more remarkable, Musk said the cluster will be upgraded to 1.5 GW of power in April. Even in its current iteration, however, the Colossus 2 supercomputer already exceeds the peak power demand of San Francisco.
Commentary from users of the social media platform highlighted the speed of execution behind the project. Colossus 1 went from site preparation to full operation in 122 days, while Colossus 2 crossed the 1 GW barrier at launch and is targeting a total capacity of roughly 2 GW, a pace that far exceeds that of xAI’s primary rivals.
Funding fuels rapid expansion
The Colossus 2 launch follows xAI’s recently closed, upsized $20 billion Series E funding round, which exceeded its initial $15 billion target. The company said the capital will be used to accelerate infrastructure scaling and AI product development.
The round attracted a broad group of investors, including Valor Equity Partners, Stepstone Group, Fidelity Management & Research Company, Qatar Investment Authority, MGX, and Baron Capital Group. Strategic partners NVIDIA and Cisco also continued their support, helping xAI build what it describes as the world’s largest GPU clusters.
xAI said the funding will accelerate its infrastructure buildout, enable rapid deployment of AI products to billions of users, and support research tied to its mission of understanding the universe. The company noted that its Colossus 1 and 2 systems now represent more than one million H100 GPU equivalents, alongside recent releases including the Grok 4 series, Grok Voice, and Grok Imagine. Training is also already underway for its next flagship model, Grok 5.
Tesla AI5 chip nears completion, Elon Musk teases 9-month development cadence
The Tesla CEO shared his recent insights in a post on social media platform X.
Tesla’s next-generation AI5 chip is nearly complete, and work on its successor is already underway, as per a recent update from Elon Musk.
Musk details AI chip roadmap
In his post, Elon Musk stated that Tesla’s AI5 chip design is “almost done,” while AI6 has already entered early development. Musk added that Tesla plans to continue iterating rapidly, with AI7, AI8, AI9, and future generations targeting a nine-month design cycle.
He also noted that Tesla’s in-house chips could become the highest-volume AI processors in the world. Musk framed his update as a recruiting message, encouraging engineers to join Tesla’s AI and chip development teams.
Tesla community member Herbert Ong highlighted the strategic importance of the timeline, noting that faster chip cycles enable quicker learning, faster iteration, and a compounding advantage in AI and autonomy that becomes increasingly difficult for competitors to overcome.
AI5 manufacturing takes shape
Musk’s comments align with earlier reporting on AI5’s production plans. In December, it was reported that Samsung is preparing to manufacture Tesla’s AI5 chip and is accelerating its hiring of experienced engineers to support U.S. production and address complex foundry challenges.
Samsung is one of two suppliers selected for AI5, alongside TSMC. The companies are expected to produce different versions of the AI5 chip, with TSMC reportedly using a 3nm process and Samsung using a 2nm process.
Musk has previously stated that while different foundries translate chip designs into physical silicon in different ways, the goal is for both versions of the Tesla AI5 chip to operate identically. AI5 will succeed Tesla’s current AI4 hardware, formerly known as Hardware 4, and is expected to support the company’s Full Self-Driving system as well as other AI-driven efforts, including Optimus.