SpaceX granted additional $40.7M by U.S. Air Force for “BFR engine” development

The BFR spaceship pictured landing on Mars. (SpaceX)

SpaceX has been granted an additional $40.7 million in funding by the U.S. Air Force for development of the company’s Raptor engine.

According to a U.S. Department of Defense contract published October 19, 2017, on defense.gov, the new funding will go toward development of a new liquid oxygen and liquid methane engine for the department’s Evolved Expendable Launch Vehicle (EELV) program. Each Raptor engine is expected to produce three times the thrust of the Merlin 1D engine currently used on the Falcon 9. The heavy-lift-capable engine will not only support the launch of heavier payloads, including large military satellites, into orbit, but will also serve as the foundation for SpaceX’s Interplanetary Transport System, more recently referred to as the BFR.
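For a rough sense of scale, the sketch below works out what “three times the thrust of Merlin 1D” implies. The Merlin 1D figure is an assumption drawn from SpaceX’s published specifications of the period, not a number from the contract.

```python
# Implied Raptor thrust from the "3x Merlin 1D" expectation.
MERLIN_1D_SL_KN = 845  # assumed Merlin 1D sea-level thrust in kN (~2017 public spec)

raptor_kn = 3 * MERLIN_1D_SL_KN
print(f"Implied Raptor thrust: ~{raptor_kn:,} kN (~{raptor_kn / 1_000:.1f} MN)")
# -> Implied Raptor thrust: ~2,535 kN (~2.5 MN)
```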

The Department of Defense’s contract announcement reads:

Space Exploration Technologies Corp., Hawthorne, California, has been awarded a $40,766,512 modification (P00007) for the development of the Raptor rocket propulsion system prototype for the Evolved Expendable Launch Vehicle program.  Work will be performed at NASA Stennis Space Center, Mississippi; Hawthorne, California; McGregor, Texas; and Los Angeles Air Force Base, California; and is expected to be complete by April 30, 2018.  Fiscal 2017 research, development, test and evaluation funds in the amount of $40,766,512 are being obligated at the time of award.  The Launch Systems Enterprise Directorate, Space and Missile Systems Center, Los Angeles AFB, California, is the contracting activity (FA8811-16-9-0001).

It’s not clear how the Air Force will utilize the powerful Raptor engine after the prototype’s completion, expected by the end of April 2018, but we do know that development will take place at several locations, including SpaceX’s headquarters in Hawthorne, California; NASA’s Stennis Space Center in south Mississippi; and Los Angeles Air Force Base, home to the Space and Missile Systems Center.


The powerful methane-oxygen Raptor engine is intended to be the workhorse for SpaceX’s larger next-generation launch vehicle. The company has conducted several dozen successful hot fires at its McGregor, Texas test facilities, with burns ranging from just a few seconds to 100 seconds in duration; the only factor limiting test duration has been the size of the test stand’s propellant tanks.

Serial tech entrepreneur and SpaceX CEO Elon Musk recently spoke about Raptor engine development in his Mars-focused address at the International Astronautical Congress (IAC) in Australia.

“We already have now 1,200 seconds of firing across 42 main engine tests,” said Musk. “We’ve fired it for 100 seconds. It could fire for much longer than 100 seconds. That’s just the size of the test tanks.”
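Those figures imply fairly short individual burns. A quick back-of-the-envelope pass, using only the numbers from Musk’s quote:

```python
# Averages from Musk's IAC remarks (all inputs come from the quote above).
total_firing_s = 1_200  # cumulative firing time across all Raptor tests
tests = 42              # number of main engine tests
longest_burn_s = 100    # longest single burn, limited by test-tank size

print(f"Average burn: {total_firing_s / tests:.1f} s")  # -> Average burn: 28.6 s
print(f"Longest burn: {longest_burn_s / total_firing_s:.0%} of all firing time")  # -> 8%
```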


Musk also provided insight into why SpaceX engineers decided to reduce the originally intended size and thrust of the Raptor engine. During a recent Reddit “Ask Me Anything” session, he explained that the SpaceX team simply “chickened out” for a variety of reasons.

 


Nvidia CEO Jensen Huang explains difference between Tesla FSD and Alpamayo

“Tesla’s FSD stack is completely world-class,” the Nvidia CEO said.


Credit: Grok Imagine

Nvidia CEO Jensen Huang offered high praise for Tesla’s Full Self-Driving (FSD) system during a Q&A at CES 2026, calling it “world-class” and “state-of-the-art” in design, training, and performance.

More importantly, he also shared some insights about the key differences between FSD and Nvidia’s recently announced Alpamayo system. 

Jensen Huang’s praise for Tesla FSD

Nvidia made headlines at CES following its announcement of Alpamayo, an AI system aimed at accelerating the development of autonomous driving solutions. Many soon began speculating that Alpamayo would be a direct rival to FSD, a notion Elon Musk somewhat addressed when he predicted that “they will find that it’s easy to get to 99% and then super hard to solve the long tail of the distribution.”

During his Q&A, Huang was asked about the difference between FSD and Alpamayo. His response was extensive:

“Tesla’s FSD stack is completely world-class. They’ve been working on it for quite some time. It’s world-class not only in the number of miles it’s accumulated, but in the way it’s designed, the way they do training, data collection, curation, synthetic data generation, and all of their simulation technologies. 


“Of course, the latest generation is end-to-end Full Self-Driving—meaning it’s one large model trained end to end. And so… Elon’s AD system is, in every way, 100% state-of-the-art. I’m really quite impressed by the technology. I have it, and I drive it in our house, and it works incredibly well,” the Nvidia CEO said. 

Nvidia’s platform approach vs Tesla’s integration

Huang also stated that Nvidia’s Alpamayo system was built around a fundamentally different philosophy from Tesla’s. Rather than developing self-driving cars itself, Nvidia supplies the full autonomous technology stack for other companies to use.

“Nvidia doesn’t build self-driving cars. We build the full stack so others can,” Huang said, explaining that Nvidia provides separate systems for training, simulation, and in-vehicle computing, all supported by shared software.

He added that customers can adopt as much or as little of the platform as they need, noting that Nvidia works across the industry, including with Tesla on training systems and companies like Waymo, XPeng, and Nuro on vehicle computing.

“So our system is really quite pervasive because we’re a technology platform provider. That’s the primary difference. There’s no question in our mind that, of the billion cars on the road today, in another 10 years’ time, hundreds of millions of them will have great autonomous capability. This is likely one of the largest, fastest-growing technology industries over the next decade.”


He also emphasized Nvidia’s open approach, saying the company open-sources its models and helps partners train their own systems. “We’re not a self-driving car company. We’re enabling the autonomous industry,” Huang said.


Elon Musk confirms xAI’s purchase of five 380 MW natural gas turbines

The deal, which was confirmed by Musk on X, highlights xAI’s effort to aggressively scale its operations.


Credit: xAI/X

xAI, Elon Musk’s artificial intelligence startup, has purchased five additional 380 MW natural gas turbines from South Korea’s Doosan Enerbility to power its growing supercomputer clusters. 


xAI’s turbine deal details

News of xAI’s new turbines was shared on social media platform X by user @SemiAnalysis_, who stated that the turbines were produced by South Korea’s Doosan Enerbility. As noted in an Asian Business Daily report, Doosan Enerbility announced last October that it had signed a contract to supply two 380 MW gas turbines to a major U.S. tech company, and the firm noted in December that it had secured an order for three more 380 MW gas turbines.

As per the X user, the gas turbines would power an additional cluster equivalent in size to 600,000+ GB200 NVL72 GPUs, which would make xAI’s facilities among the largest in the world. Elon Musk confirmed the purchase in a reply on X, writing simply, “True.”
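As a rough sanity check on how the turbine order and the cluster size fit together, the sketch below compares the turbines’ combined nameplate capacity against an estimated cluster draw. The per-rack power figure and the overhead multiplier are illustrative assumptions, not numbers from xAI or the report.

```python
# Rough sanity check: turbine capacity vs. estimated cluster power draw.
# Per-rack draw and overhead multiplier are assumptions, not xAI figures.
TURBINES = 5
MW_PER_TURBINE = 380   # nameplate capacity per Doosan turbine (from the report)
GPUS = 600_000         # "600,000+ GB200 NVL72 equivalent" per the X post
GPUS_PER_RACK = 72     # a GB200 NVL72 rack links 72 Blackwell GPUs
KW_PER_RACK = 120      # assumed draw per NVL72 rack (~120 kW, commonly cited)
OVERHEAD = 1.5         # assumed multiplier for cooling, networking, and losses

generation_mw = TURBINES * MW_PER_TURBINE
racks = GPUS / GPUS_PER_RACK
it_load_mw = racks * KW_PER_RACK / 1_000
total_mw = it_load_mw * OVERHEAD

print(f"Turbine capacity: {generation_mw:,} MW")                        # -> 1,900 MW
print(f"Estimated IT load ({racks:,.0f} racks): {it_load_mw:,.0f} MW")  # -> 1,000 MW
print(f"With overhead: {total_mw:,.0f} MW")                             # -> 1,500 MW
```

Under those assumptions, the five turbines’ roughly 1.9 GW would cover a cluster of that size with headroom to spare.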

xAI’s ambitions 

Recent reports have indicated that xAI closed an upsized $20 billion Series E funding round, exceeding the initial $15 billion target to fuel rapid infrastructure scaling and AI product development. The funding, as per the AI startup, “will accelerate our world-leading infrastructure buildout” and “enable the rapid development and deployment of transformative AI products.”


The company also teased the rollout of its upcoming frontier AI model. “Looking ahead, Grok 5 is currently in training, and we are focused on launching innovative new consumer and enterprise products that harness the power of Grok, Colossus, and 𝕏 to transform how we live, work, and play,” xAI wrote in a post on its website. 


Elon Musk’s xAI closes upsized $20B Series E funding round

xAI announced the investment round in a post on its official website. 


Credit: xAI

xAI has closed an upsized $20 billion Series E funding round, exceeding the initial $15 billion target to fuel rapid infrastructure scaling and AI product development. 


A $20 billion Series E round

As noted by the artificial intelligence startup in its post, the Series E funding round attracted a diverse group of investors, including Valor Equity Partners, StepStone Group, Fidelity Management & Research Company, Qatar Investment Authority, MGX, and Baron Capital Group, among others.

Strategic partners Nvidia and Cisco Investments also continued their support for the buildout of the world’s largest GPU clusters.

As xAI stated, “This financing will accelerate our world-leading infrastructure buildout, enable the rapid development and deployment of transformative AI products reaching billions of users, and fuel groundbreaking research advancing xAI’s core mission: Understanding the Universe.”

Advertisement
-->

xAI’s core mission

The Series E funding builds on xAI’s previous rounds, which powered Grok advancements and massive compute expansions like the Memphis supercluster. The upsized round reflects growing recognition of xAI’s potential in frontier AI.

xAI also highlighted several of its 2025 breakthroughs, from the buildout of Colossus I and II, which ended the year with over 1 million H100 GPU equivalents, to the rollout of the Grok 4 series, Grok Voice, and Grok Imagine, among others. The company also confirmed that work is already underway to train the flagship large language model’s next iteration, Grok 5.

“Looking ahead, Grok 5 is currently in training, and we are focused on launching innovative new consumer and enterprise products that harness the power of Grok, Colossus, and 𝕏 to transform how we live, work, and play,” xAI wrote. 
