SpaceX wiggles Starhopper’s Raptor engine, tests parts ahead of hover test debut

Repeating a test conducted in June with Raptor SN04, SpaceX tested Starhopper and Raptor SN06's thrust vectoring capabilities on July 12th. (NASASpaceflight - bocachicagal)


On the evening of July 12th, SpaceX technicians put Starhopper’s freshly-installed Raptor – serial number 06 (SN06) – through a simple but decidedly entertaining test, effectively wiggling the engine in circles.

Designed to verify that Raptor’s thrust vectoring capabilities are in order and to ensure that Starhopper and the engine are communicating properly, the wiggle test is a small but critical part of pre-flight acceptance and a good indicator that the low-fidelity Starship prototype is nearing its first hover test(s). Roughly 48 hours after a successful series of wiggles, Starhopper and Raptor proceeded into the next stage of pre-flight acceptance, likely the final step before a tethered static fire.

Routine for all Falcon rockets, SpaceX’s exceptionally rigorous practice of static firing all hardware at least once (and often several times) before launch has unsurprisingly held firm as the company proceeds towards integrated Starhopper and Starship flight tests. Despite the fact that Raptor SN06 completed a static fire as recently as July 10th, SpaceX will very likely put Starhopper and its newly-installed Raptor through yet another pre-flight static fire, perhaps its fourth or fifth test this month.

Although it would undoubtedly be easier, cheaper, and faster to skip that post-delivery static fire, performing it will lower the risk of Raptor failing mid-flight and verify that Starhopper itself is healthy and ready for untethered hovering. Although SpaceX could likely live without Starhopper in the event that it’s lost during flight-testing, any failure capable of destroying the vehicle itself is at least as capable of severely damaging or completely destroying the spartan but still expansive test and launch facilities the company built over the course of several months.

SpaceX has been hard at work gradually building, expanding, and upgrading its South Texas launch facilities since December 2018. (NASASpaceflight – bocachicagal, 04/27/2019)

Would you like some testing with your testing?

Following July 12th’s nighttime Raptor wiggle test, July 13th was mainly quiet, filled with inspections of Starhopper and Raptor and various other work. The day after, however, SpaceX proceeded through several hours of propellant loading, ending with what looked like less energetic versions of the Raptor preburner ignition tests Starhopper previously performed with Raptor SN02.

In a staged-combustion engine like Raptor, getting from supercooled liquid oxygen and methane to 200+ tons of thrust is quite literally staged, meaning that ignition doesn’t happen all at once. Rather, the preburners – essentially their own, separate combustion chambers – ignite an oxygen- or methane-rich mixture, and that burning produces the hot gas and pressure that power the turbines feeding propellant into the main combustion chamber. That propellant then ignites, producing thrust as the exhaust exits the engine’s bell-shaped nozzle.

The first obvious test occurred around 7:30pm CT, July 14th. (LabPadre)
The second obvious test followed around 8:50 pm CT. (LabPadre)

Although the fireworks are so subtle that they are easily missed, the conditions inside the preburner – hidden away from view – are actually far more intense than the iconic blue, purple, and pink flame that exits Raptor’s nozzle. This is because the preburners have to sustain the conditions necessary for the pumps they power to feed the main combustion chamber. Much like hot water cools while traveling through pipes, the superheated gaseous propellant that Raptor ignites to produce thrust also cools (and thus loses pressure) as it travels from the preburners to the main combustion chamber.

Thus, if the head pressure produced in the preburners is too low, Raptor’s thrust will be (roughly speaking) proportionally limited at best. At worst, low pressure in the preburners can completely prevent Raptor from starting and running stably and can even trigger a “hard start” or shutdown that could damage or destroy the engine. As such, the preburners fundamentally have to operate at higher chamber pressures (and thus higher temperatures) than the main combustion chamber (the big fiery bit at the end). According to Elon Musk, Raptor’s oxygen preburner has the worst of it, operating at pressures as high as or higher than 800 bar (11,600 psi, 80 megapascals).

Coincidentally, this is roughly equivalent to the pressure at the bottom of the Pacific Ocean.
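Those unit conversions and the ocean comparison are easy to sanity-check. Here is a minimal Python sketch using standard conversion factors and the hydrostatic pressure formula p = ρgh; the seawater density and gravity values are back-of-the-envelope assumptions, not SpaceX figures:

```python
# Sanity-check the pressure figures quoted for Raptor's oxygen preburner.

BAR_TO_PSI = 14.5038   # 1 bar in pounds per square inch
BAR_TO_MPA = 0.1       # 1 bar in megapascals

preburner_bar = 800
psi = preburner_bar * BAR_TO_PSI
mpa = preburner_bar * BAR_TO_MPA
print(f"{preburner_bar} bar = {psi:,.0f} psi = {mpa:.0f} MPa")

# Equivalent seawater depth via hydrostatic pressure p = rho * g * h,
# so h = p / (rho * g). Assumed values: typical seawater density and
# standard gravity.
RHO_SEAWATER = 1025    # kg/m^3
G = 9.81               # m/s^2
pascals = preburner_bar * 1e5
depth_m = pascals / (RHO_SEAWATER * G)
print(f"Equivalent ocean depth: roughly {depth_m:,.0f} m")
```

Running this confirms 800 bar is about 11,600 psi (80 MPa), and corresponds to roughly 8 km of seawater, i.e. the abyssal depths of the Pacific, though shy of the ~11 km Challenger Deep.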

Starhopper and Raptor seen on the afternoon of July 14th, preparing for an evening of testing. (NASASpaceflight – bocachicagal)

In short, with an engine like Raptor, preburner testing is no less critical than full-on static fire testing. July 14th’s test was also doubly efficient: preburner testing requires liquid propellant, which effectively makes the whole test a wet dress rehearsal (WDR) even before any engine ignition or partial ignition is involved. Given that SpaceX moved from propellant loading to preburner/turbine testing, Starhopper is almost certainly healthy and operating as expected, an excellent sign that the ungainly vessel may be ready for a static fire of Raptor as early as 2pm CT, July 15th.




Eric Ralph is Teslarati's senior spaceflight reporter and has been covering the industry in some capacity for almost half a decade, largely spurred in 2016 by a trip to Mexico to watch Elon Musk reveal SpaceX's plans for Mars in person. Aside from spreading interest and excitement about spaceflight far and wide, his primary goal is to cover humanity's ongoing efforts to expand beyond Earth to the Moon, Mars, and elsewhere.



Elon Musk teases crazy outlook for xAI against its competitors

Musk’s response was vintage hyperbole, designed to rally supporters and dismiss doubters, something his responses on social media often do.


Credit: NVIDIA

Elon Musk has never been one to shy away from crazy timelines, massive expectations, and outrageous outlooks. However, his recent plans for xAI and where he believes it will end up compared to its competitors are sure to stimulate conversation.

In a bold and characteristic response on X, Elon Musk fired back at a recent analysis that positioned his AI venture, xAI, as lagging behind industry frontrunners.

The post, from March 14, came as a direct reply to forecaster Peter Wildeford’s assessment, which drew from benchmarks and reporting to rank AI developers.

Wildeford placed Anthropic, Google, and OpenAI in a virtual tie at the top, with xAI and Meta trailing by about seven months. Chinese players like Moonshot, DeepSeek, zAI, and Alibaba were estimated to be nine months behind, while France’s Mistral lagged by about a year and a half.

Musk’s response was vintage hyperbole, designed to rally supporters and dismiss doubters, something his responses on social media often do.

He claimed xAI would “catch up this year,” meaning by the end of 2026, erasing that seven-month deficit against the leaders. But he didn’t stop there.

Musk escalated his vision to 2029, predicting xAI would “exceed them all by such a long distance” that observers would need the James Webb Space Telescope, NASA’s orbiting observatory stationed about 930,000 miles from Earth, to spot whoever lands in second place. This analogy underscores Musk’s confidence in xAI’s trajectory, implying an astronomical lead that could redefine the AI landscape.

Breaking down these claims reveals Musk’s strategic optimism. First, the short-term catch-up: xAI, launched in 2023, has already released models like Grok, but recent benchmarks, including those for Grok 4.2, have shown it falling short in capabilities compared to rivals.

Anthropic’s Claude series, Google’s Gemini, and OpenAI’s GPT models dominate in areas like reasoning, coding, and multimodal tasks. Musk’s assertion suggests aggressive scaling in compute, talent, or architecture, perhaps leveraging xAI’s ties to Tesla’s Dojo supercomputers or Musk’s vast resources, to close the gap swiftly.

The longer-term dominance by 2029 paints an even more audacious picture. Musk envisions xAI not just parity but supremacy, outpacing competitors in innovation speed and model sophistication.

This could involve breakthroughs in energy-efficient training, real-world integration like Tesla’s robotics, or ethical AI alignment, consistent with Musk’s stated goal of “understanding the universe.”

Critics, however, point to parallels with Tesla’s Full Self-Driving delays; one reply highlighted Musk’s 2023 promise of FSD readiness. Musk has made that promise for years, and although the system has improved steadily, it remains well short of the fully autonomous operation that was expected by now.


Musk’s comment highlights the intensifying U.S.-centric AI race, with xAI challenging the “three-way” dominance noted by Wharton professor Ethan Mollick, whom Wildeford quoted. As geopolitical tensions rise—evident in the Chinese firms’ lag—Musk’s tease could spur investment and talent wars.

Yet, it also invites scrutiny: Will xAI deliver, or is this another telescope-needed mirage? In an industry where timelines slip but stakes soar, Musk’s words keep the spotlight on xAI’s ambitious path forward.


Tesla Terafab set for launch: Inside the $20B AI chip factory that will reshape the auto industry

Tesla is set to launch the “Terafab Project,” a vertically integrated chip fabrication effort combining logic processing, memory, and advanced packaging.


Tesla is making one of the boldest bets in its history. On March 14, Elon Musk posted on X that the “Terafab Project launches in 7 days,” pointing to March 21, 2026 as the start date for what he has described as a vertically integrated chip fabrication effort combining logic processing, memory, and advanced packaging.

Tesla first confirmed Terafab on its January 28, 2026 earnings call, where Musk told investors the company needs to build a chip fabrication facility to avoid a supply constraint projected to materialize within three to four years. But the seeds were planted even earlier. At Tesla’s annual general meeting last year, Musk warned that even in the best-case scenario for chip production from their suppliers, it still wouldn’t be enough, and declared that building a “gigantic chip fab” simply had to be done.

While there has been no official announcement on where Tesla plans to break ground on the massive Terafab, all signs point to the North Campus of Giga Texas in Austin.

Months of speculation have surrounded Tesla’s North Campus expansion at Giga Texas, where drone footage captured by observer Joe Tegtmeyer revealed massive construction site preparation just north of the existing factory, on a scale that rivals the original Giga Texas footprint itself.


The project is projected to produce 100–200 billion AI and memory chips annually, targeting 100,000 wafer starts per month, at an estimated cost of $20 billion. Tesla is targeting 2-nanometer process technology, anticipated to be the most advanced node in commercial production. The Tesla AI5 chip, which will pack 40x–50x more compute performance and 9x more memory than AI4, will be among the first products the Terafab is set to produce. This highly optimized, massively powerful inference chip is designed to make Full Self-Driving (FSD) and Tesla’s Optimus robots faster, safer, and fully autonomous.


(Credit: Tesla)

This is where Terafab becomes a genuine game-changer. If Tesla successfully builds a 2nm chip fab at scale, it becomes one of only a handful of entities that’s capable of producing AI silicon in-house, with competitive implications that extend far beyond Tesla’s own vehicles, and potentially positioning Tesla as a chip supplier or licensor to other industries.

The next-gen Tesla AI chips will power advancements in Full Self-Driving software, the Cybercab Robotaxi program, and the Optimus humanoid robot line. Musk’s projections for Optimus require chip volumes that no existing external supplier can commit to on Tesla’s timeline. Competitors like Waymo and GM’s Cruise remain dependent on third-party silicon, leaving them exposed to the same supply chain vulnerabilities Tesla is now working to eliminate entirely.

The Terafab launch this week may not mean a factory opens its doors overnight, but it signals Tesla is serious about owning the entire AI stack, from software to silicon.


What is Digital Optimus? The new Tesla and xAI project explained

At its core, Digital Optimus operates through a dual-process architecture inspired by human cognition.


Credit: Grok

Tesla and xAI announced their groundbreaking joint project, Digital Optimus, also nicknamed “Macrohard” in a humorous jab at Microsoft, earlier this week.

This software-based AI agent is designed to automate complex office workflows by observing and replicating human interactions with computers. As the first major outcome of Tesla’s $2 billion investment in xAI, it represents a powerful fusion of hardware efficiency and advanced reasoning.


At its core, Digital Optimus operates through a dual-process architecture inspired by human cognition.

Tesla’s specialized AI acts as “System 1”—the fast, instinctive executor—processing the past five seconds of real-time computer screen video along with keyboard and mouse actions to perform immediate tasks.

xAI’s Grok model serves as “System 2,” the strategic “master conductor” or navigator, providing high-level reasoning, world understanding, and directional oversight, much like an advanced turn-by-turn navigation system.

When combined, the two can create a powerful AI-based assistant that can complete everything from accounting work to HR tasks.


The system runs primarily on Tesla’s low-cost AI4 inference chip, minimizing the need for expensive Nvidia hardware from xAI while delivering competitive, real-time performance.

Elon Musk described it as “the only real-time smart AI system” capable, in principle, of emulating the functions of entire companies, handling everything from accounting and HR to repetitive digital operations.

Timelines point to swift deployment. The project was announced just days ago, and Musk expects Digital Optimus to be ready for users within about six months, targeting a rollout around September 2026.

It will integrate into all AI4-equipped Tesla vehicles, enabling parked cars to handle office work during downtime. Millions of dedicated units are also planned for deployment at Supercharger stations, tapping into roughly 7 gigawatts of available power.

Digital Optimus directly supports Tesla’s broader autonomy strategy. It leverages the same end-to-end neural networks, computer vision, and real-time decision-making tech that power Full Self-Driving (FSD) software and the physical Optimus humanoid robot.

By repurposing idle vehicle compute and extending AI4 hardware beyond driving, the project scales Tesla’s autonomy ecosystem from roads to digital workspaces.

As a virtual counterpart to physical Optimus, it divides labor: software agents manage screen-based tasks while humanoid robots tackle physical ones, accelerating Tesla’s vision of general-purpose AI for productivity, Robotaxi fleets, and beyond.

In essence, Digital Optimus bridges Tesla’s vehicle and robotics autonomy with enterprise-scale AI, promising massive efficiency gains. No other company currently matches its real-time capabilities on such accessible hardware.

It could prove to be one of the most crucial developments of the Tesla and xAI partnership, as it could revolutionize how people work and travel.
