News
SpaceX installs Raptor Vacuum engine on first orbital-class Starship
Update: Providing the best views yet of the Raptor Vacuum installation process, SpaceX began installing one of Starship S20’s six engines (one of at least two recently trucked to the launch site) on Monday morning.
It remains to be seen exactly how many engines will be installed on Ship 20 or how many will be ignited during its first static fire test, but barring the delivery of more Raptors, signs currently point to an initial test of two engines – one sea-level-optimized Raptor Center (RC) and one Raptor Vacuum (RVac) with a much larger nozzle. Whenever Ship 20 does fire up those engines, it will be the first static fire of an RVac engine installed on a Starship and the first simultaneous, side-by-side static fire of two different Raptor variants. Since publication, SpaceX has cancelled a Tuesday road closure, pushing Starship S20’s first static fire attempt to no earlier than (NET) Wednesday evening.
For the third time in two months, SpaceX has begun installing Raptor engines on its first orbital-class Starship prototype – hopefully for good.
In no uncertain terms, Starship 20’s (S20) path to what could be its last Raptor installations has been about as winding and mysterious as they come. The ship left the Starbase factory floor for the first time in early August – all six Raptors installed, another program first – for a brief fit check and photo op. After spending about an hour installed on top of Super Heavy Booster 4 (B4), Ship 20 was removed and returned to the build site, where teams removed all six engines and finished wiring and plumbing the vehicle.
Days before the ship’s long-anticipated trip to Starbase’s suborbital launch site for qualification testing, the hydraulic rams SpaceX had fitted to the Pad B mount for the process – used to safely simulate Raptor thrust – were abruptly removed. Starship S20 was then installed on the mount, where SpaceX proceeded to reinstall six Raptors. Weeks later, after slow heat shield repairs neared completion, SpaceX again removed Ship 20’s Raptors and reinstalled the hydraulic rams it had removed – unused – the month prior. Finally, on September 30th, some seven weeks after the prototype arrived at the suborbital launch site, SpaceX put Starship S20 through its first major test – a lengthy ‘cryoproof’.
Now, ten days after completing a seemingly flawless cryoproof test on its first try, SpaceX has once again trucked multiple Raptors – at least one sea level and one vacuum engine – from the Starbase build site to Starship S20’s suborbital test stand. From the outside looking in, it’s hard not to view the contradictory path S20 took to its first tests – and is still taking to its first static fire(s) – as an unusually visible sign of some kind of internal tug of war or major communication failure between different SpaceX groups or executives.
It’s impossible to determine anything specific beyond the apparent fact that several of the steps taken between Ship 20’s first factory departure and its first cryoproof and static fire tests could likely have been skipped entirely with no harm done, saving many dozens of hours of work. At the end of the day, Starship S20 completed cryoproof testing without issue on the first try and is now seemingly on track to begin its first static fire test campaign later this month.
At the moment, SpaceX has three possible static fire test windows scheduled from 5pm to midnight CDT on Tuesday, Wednesday, and Thursday (Oct 12-14). A similar Monday window was canceled several days ago, on October 7th, suggesting that more cancellations are probably on the horizon. For now, there’s a chance that Starship S20 – with anywhere from two to all six Raptor engines installed – will fire up for the first time before next weekend. It’s hard to say exactly how SpaceX will proceed. It’s not inconceivable that SpaceX will install all six engines and gradually ramp up to a full six-engine static fire over several tests.

Given that SpaceX has already static fired three Raptor Center (RC) engines on multiple Starship and Super Heavy prototypes, odds are good that Starship S20’s test campaign will be similar – beginning with a three-Raptor static fire, in other words. SpaceX could then add one, two, or all three Raptor Vacuum engines into the fray for one or more additional tests with 4-6 engines total. It’s also possible that suborbital launch mount and pad limitations will prevent more than three engines from firing at once, in which case SpaceX would presumably perform two separate tests of Ship 20’s Raptor Center and Raptor Vacuum engines.
Given that two Raptor variants have never been static fired simultaneously on the same vehicle, it’s hard to imagine that SpaceX won’t also want to perform one or several combined static fires with Raptor Vacuum and Raptor Center engines on Ship 20.
Elon Musk
Elon Musk teases crazy outlook for xAI against its competitors
Musk’s response was vintage hyperbole, designed to rally supporters and dismiss doubters, something his responses on social media often do.
Elon Musk has never been one to shy away from crazy timelines, massive expectations, and outrageous outlooks. However, his recent plans for xAI and where he believes it will end up compared to its competitors are sure to stimulate conversation.
In a bold and characteristic response on X, Elon Musk fired back at a recent analysis that positioned his AI venture, xAI, as lagging behind industry frontrunners.
The post, from March 14, came as a direct reply to forecaster Peter Wildeford’s assessment, which drew from benchmarks and reporting to rank AI developers.
xAI will catch up this year and then exceed them all by such a long distance in 3 years that you will need the James Webb telescope to see who is in second place
— Elon Musk (@elonmusk) March 14, 2026
Wildeford placed Anthropic, Google, and OpenAI in a virtual tie at the top, with xAI and Meta trailing by about seven months. Chinese players like Moonshot, Deepseek, zAI, and Alibaba were estimated to be nine months behind, while France’s Mistral lagged by about a year and a half.
Musk claimed xAI would “catch up this year,” meaning by the end of 2026, erasing that seven-month deficit against the leaders. But he didn’t stop there.
Musk escalated his vision to 2029, predicting xAI would “exceed them all by such a long distance” that observers would need the James Webb Space Telescope, NASA’s orbiting observatory stationed about 930,000 miles from Earth, to spot whoever lands in second place. This analogy underscores Musk’s confidence in xAI’s trajectory, implying an astronomical lead that could redefine the AI landscape.
Breaking down these claims reveals Musk’s strategic optimism. First, the short-term catch-up: xAI, launched in 2023, has already released models like Grok, but recent benchmarks, including those for Grok 4.2, have shown it falling short in capabilities compared to rivals.
Anthropic’s Claude series, Google’s Gemini, and OpenAI’s GPT models dominate in areas like reasoning, coding, and multimodal tasks. Musk’s assertion suggests aggressive scaling in compute, talent, or architecture, perhaps leveraging xAI’s ties to Tesla’s Dojo supercomputers or Musk’s vast resources, to close the gap swiftly.
The longer-term dominance by 2029 paints an even more audacious picture. Musk envisions xAI not just parity but supremacy, outpacing competitors in innovation speed and model sophistication.
This could involve breakthroughs in energy-efficient training, real-world integration such as Tesla’s robotics, or AI alignment, consistent with Musk’s stated goal of “understanding the universe.”
Critics, however, point to parallels with Tesla’s Full Self-Driving delays; one reply highlighted Musk’s 2023 promise of FSD readiness. Musk has made that promise for many years, and although the system has been steadily improving, it remains a ways off from the fully autonomous operation that was expected by now.
Musk’s comment highlights the intensifying U.S.-centric AI race, with xAI challenging the “three-way” dominance noted by Wharton professor Ethan Mollick, whom Wildeford quoted. As geopolitical tensions rise—evident in the Chinese firms’ lag—Musk’s tease could spur investment and talent wars.
Yet, it also invites scrutiny: Will xAI deliver, or is this another telescope-needed mirage? In an industry where timelines slip but stakes soar, Musk’s words keep the spotlight on xAI’s ambitious path forward.
Elon Musk
Tesla Terafab set for launch: Inside the $20B AI chip factory that will reshape the auto industry
Tesla is set to launch the “Terafab Project”: a vertically integrated chip fabrication effort combining logic processing, memory, and advanced packaging.
Tesla is making one of the boldest bets in its history. On March 14, Elon Musk posted on X that the “Terafab Project launches in 7 days,” pointing to March 21, 2026 as the start date for what he has described as a vertically integrated chip fabrication effort combining logic processing, memory, and advanced packaging.
Tesla first confirmed Terafab on its January 28, 2026 earnings call, where Musk told investors the company needs to build a chip fabrication facility to avoid a supply constraint projected to materialize within three to four years. But the seeds were planted even earlier. At Tesla’s annual general meeting last year, Musk warned that even in the best-case scenario for chip production from their suppliers, it still wouldn’t be enough, and declared that building a “gigantic chip fab” simply had to be done.
While there has been no official announcement on where Tesla plans to break ground on the massive Terafab, all signs point to the North Campus of Giga Texas in Austin.
Months of speculation have surrounded Tesla’s North Campus expansion at Giga Texas, where drone footage captured by observer Joe Tegtmeyer revealed massive construction site preparation just north of the existing factory on a scale that rivals the original Giga Texas footprint itself.
The project is projected to produce 100–200 billion AI and memory chips annually, targeting 100,000 wafer starts per month, at an estimated cost of $20 billion. Tesla is targeting 2-nanometre process technology, anticipated to be the most advanced node in commercial production. Among the first products Terafab is set to produce is the Tesla AI5 chip, which will pack 40x–50x more compute performance and 9x more memory than AI4. This highly optimized, massively powerful inference chip is designed to make Full Self-Driving (FSD) and Tesla’s Optimus robots faster, safer, and more capable of full autonomy.
This is where Terafab becomes a genuine game-changer. If Tesla successfully builds a 2nm chip fab at scale, it becomes one of only a handful of entities capable of producing AI silicon in-house, with competitive implications that extend far beyond Tesla’s own vehicles, potentially positioning Tesla as a chip supplier or licensor to other industries.

The next-gen Tesla AI chips will power advancements in Full Self-Driving software, the Cybercab Robotaxi program, and the Optimus humanoid robot line. Musk’s projections for Optimus require chip volumes that no existing external supplier can commit to on Tesla’s timeline.

Competitors like Waymo and GM’s Cruise remain dependent on third-party silicon, leaving them exposed to the same supply chain vulnerabilities Tesla is now working to eliminate entirely.
The Terafab launch this week may not mean a factory opens its doors overnight, but it signals Tesla is serious about owning the entire AI stack, from software to silicon.
Elon Musk
What is Digital Optimus? The new Tesla and xAI project explained
At its core, Digital Optimus operates through a dual-process architecture inspired by human cognition.
Tesla and xAI announced their groundbreaking joint project, Digital Optimus, also nicknamed “Macrohard” in a humorous jab at Microsoft, earlier this week.
This software-based AI agent is designed to automate complex office workflows by observing and replicating human interactions with computers. As the first major outcome of Tesla’s $2 billion investment in xAI, it represents a powerful fusion of hardware efficiency and advanced reasoning.
Macrohard or Digital Optimus is a joint xAI-Tesla project, coming as part of Tesla’s investment agreement with xAI.
Grok is the master conductor/navigator with deep understanding of the world to direct digital Optimus, which is processing and actioning the past 5 secs of…
— Elon Musk (@elonmusk) March 11, 2026
Tesla’s specialized AI acts as “System 1”—the fast, instinctive executor—processing the past five seconds of real-time computer screen video along with keyboard and mouse actions to perform immediate tasks.
xAI’s Grok model serves as “System 2,” the strategic “master conductor” or navigator, providing high-level reasoning, world understanding, and directional oversight, much like an advanced turn-by-turn navigation system.
When combined, the two can create a powerful AI-based assistant that can complete everything from accounting work to HR tasks.
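Based only on the description above and Musk’s tweet, the reported split can be sketched as a simple loop: a fast executor holding a rolling five-second window of recent screen and input events, directed by a slower, deliberate planner. Every class, method, and parameter name below is a hypothetical illustration, not a Tesla or xAI API:

```python
from collections import deque

# Hypothetical sketch of the reported dual-process design.
# All names and numbers here are illustrative assumptions.
FRAME_RATE = 30      # assumed screen-capture rate (frames/sec)
BUFFER_SECONDS = 5   # per the tweet: System 1 sees the past 5 seconds

class System2Planner:
    """Slow 'master conductor' role, attributed to Grok in the tweet."""
    def plan(self, goal, context_summary):
        # In the real system this would be a large-model call;
        # here we return a canned high-level directive for the demo.
        return f"step toward '{goal}' given {context_summary}"

class System1Executor:
    """Fast executor over a rolling window of screen/input events."""
    def __init__(self):
        self.window = deque(maxlen=FRAME_RATE * BUFFER_SECONDS)

    def observe(self, event):
        self.window.append(event)  # older events fall off automatically

    def act(self, directive):
        # Map the planner's directive plus recent context to an action.
        return {"directive": directive, "context_frames": len(self.window)}

planner = System2Planner()
executor = System1Executor()

# Simulate a few seconds of observed screen/input events.
for t in range(200):
    executor.observe({"t": t, "kind": "frame"})

directive = planner.plan("file expense report",
                         f"{len(executor.window)} recent events")
action = executor.act(directive)
print(action["context_frames"])  # only the last 5 seconds are retained
```

The bounded `deque` captures the key property Musk described: the fast subsystem never reasons over the full history, only the most recent window, while long-horizon context lives in the planner.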
The system runs primarily on Tesla’s low-cost AI4 inference chip, minimizing reliance on xAI’s expensive Nvidia hardware while delivering competitive, real-time performance.
Elon Musk described it as “the only real-time smart AI system” capable, in principle, of emulating the functions of entire companies, handling everything from accounting and HR to repetitive digital operations.
Timelines point to swift deployment. Announced just days ago, Musk expects Digital Optimus to be ready for user experience within about six months, targeting rollout around September 2026.
It will integrate into all AI4-equipped Tesla vehicles, enabling parked cars to handle office work during downtime. Millions of dedicated units are also planned for deployment at Supercharger stations, tapping into roughly 7 gigawatts of available power.
Oh and it works in all AI4-equipped cars, so your car can do office work for you when not driving.
We’re also deploying millions of dedicated Digital Optimus units in the field at Superchargers where we have ~7 gigawatts of available power.
— Elon Musk (@elonmusk) March 12, 2026
Digital Optimus directly supports Tesla’s broader autonomy strategy. It leverages the same end-to-end neural networks, computer vision, and real-time decision-making tech that power Full Self-Driving (FSD) software and the physical Optimus humanoid robot.
By repurposing idle vehicle compute and extending AI4 hardware beyond driving, the project scales Tesla’s autonomy ecosystem from roads to digital workspaces.
As a virtual counterpart to physical Optimus, it divides labor: software agents manage screen-based tasks while humanoid robots tackle physical ones, accelerating Tesla’s vision of general-purpose AI for productivity, Robotaxi fleets, and beyond.
In essence, Digital Optimus bridges Tesla’s vehicle and robotics autonomy with enterprise-scale AI, promising massive efficiency gains. No other company currently matches its real-time capabilities on such accessible hardware.
It could prove to be one of the most consequential products of the Tesla-xAI partnership, with the potential to revolutionize how people work and travel.
