SpaceX urges Congress to expedite commercial spaceflight regulation reforms
Speaking in a Congressional hearing on the morning of June 26th, SpaceX Director of Government Affairs Caryn Schenewerk reaffirmed the company’s commitment to conducting “more than 25 [launches]” in 2018, a feat that will require a ~50% increase in launch frequency over the second half of the year.
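For a rough sense of that math, the sketch below works through the cadence arithmetic with illustrative numbers; the mid-year launch count and full-year target are assumptions chosen for the example, not figures quoted at the hearing.

```python
# Back-of-the-envelope check of the cadence arithmetic. The mid-year launch
# count and the full-year target below are illustrative assumptions, not
# figures taken from the hearing.
def required_h2_increase(target_launches: int, h1_launches: int) -> float:
    """Percentage increase in monthly launch rate needed in the second half
    of the year to reach `target_launches`, given `h1_launches` already flown."""
    h1_rate = h1_launches / 6.0                      # launches/month, Jan-Jun
    h2_rate = (target_launches - h1_launches) / 6.0  # launches/month, Jul-Dec
    return (h2_rate / h1_rate - 1.0) * 100.0

# e.g. ~11 launches flown by late June against a 27-28 launch target works out
# to roughly a 45-55% higher monthly rate in the second half of the year.
print(round(required_h2_increase(28, 11)))  # -> 55
```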
On the hearing's central focus of regulatory reform, Representative Rick Larsen (WA-2) appeared to speak for everyone in the room. Mirroring the four panelists' sense of urgency about reforming federal space launch regulations, he asked to continue the conversation in an informal meeting outside the chamber as soon as the session concluded, stating that "it's that urgent." For companies like SpaceX (and eventually Blue Origin) to sustainably and reliably reach cadences of one launch per week in the near future, the cumbersome and dated launch licensing apparatus will almost certainly require significant reform.
Pressure to remove artificial bottlenecks growing
The officials from the Air Line Pilots Association (ALPA), ULA, Blue Origin, and SpaceX appearing before the Congressional committee identified two primary problems: the extreme sluggishness of launch licensing and the similarly blunt, brute-force way launch vehicle operations are integrated with the federal air traffic control system tasked with safely orchestrating tens of thousands of aircraft flights daily.
Whereas a nominal orbital launch sees a vehicle like SpaceX's Falcon 9 spend less than 90 seconds inside controlled airspace, the massive and disruptive "keep-out zones" the FAA currently requires for rocket launches frequently disrupt air traffic for more than 100 times as long. According to Ms. Schenewerk, SpaceX believes it already possesses the capability to feed live Falcon 9 and Falcon Heavy telemetry to air traffic control, allowing those keep-out zones to be dramatically compressed and made responsive to actual launch operations, much as aircraft traffic is managed today.
- Falcon 9 1046’s Block 5 upper stage shown on its May 11 debut launch with Bangabandhu-1. SpaceX’s rockets already provide rich telemetry live to the company’s launch controllers. (SpaceX)
- After CRS-15, all orbital launches will use Block 5 boosters and upper stages. The upgraded rocket’s next launch is NET July 20. (Tom Cross)
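Neither SpaceX nor the FAA has published how such telemetry-driven airspace integration would actually be implemented, so the following is only a minimal sketch of the idea: a hypothetical live-telemetry feed is used to release airspace shortly after the vehicle clears controlled altitudes, rather than holding a fixed, hours-long closure. All names, thresholds, and numbers are assumptions for illustration.

```python
from dataclasses import dataclass

# Purely illustrative model of the difference between today's static
# "keep-out zone" and a hypothetical telemetry-driven closure. The class,
# thresholds, and decision rule are assumptions, not SpaceX or FAA designs.

@dataclass
class TelemetrySample:
    t_seconds: float    # time since liftoff
    altitude_m: float   # vehicle altitude

CONTROLLED_AIRSPACE_CEILING_M = 18_000   # illustrative ceiling (~FL600)

def static_closure_minutes() -> float:
    """Today's approach: a fixed, hours-long airspace closure around launch."""
    return 180.0  # illustrative placeholder, not an FAA figure

def dynamic_closure_minutes(ascent: list[TelemetrySample],
                            buffer_minutes: float = 5.0) -> float:
    """Hypothetical approach: release the airspace shortly after live
    telemetry shows the vehicle has climbed above controlled airspace."""
    for sample in ascent:
        if sample.altitude_m > CONTROLLED_AIRSPACE_CEILING_M:
            return sample.t_seconds / 60.0 + buffer_minutes
    return static_closure_minutes()  # fall back if the vehicle never clears

# A nominal ascent clears controlled airspace well inside two minutes, so the
# dynamic closure collapses from hours to a handful of minutes.
ascent = [TelemetrySample(t, 250.0 * t ** 1.35) for t in range(0, 120, 10)]
print(dynamic_closure_minutes(ascent))   # -> 5.5
```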
On the launch licensing side of this regulatory coin, SpaceX, Blue Origin, and ULA all expressed distaste for current standards, under which a worst-case scenario could see a launch provider wait more than 200 days (up to eight full months) from filing to the grant of a launch license. Worse, even slight adjustments to a granted license force providers to restart that 200+ day process, effectively making timely modifications undependable exceptions to the rule.
Old rules, new rockets
The real barrier to these common-sense reforms is, quite simply, the extraordinary sluggishness of the FAA and of those tasked with updating its guidelines and regulatory structures. Rep. Larsen was not exaggerating when he warned that Congress, given the opportunity, could end up delaying those reforms by another five or more years, and it was thus likely a relief for the panel of witnesses to hear him agree that the effort must be pursued with the utmost urgency. In its current state, the FAA's launch licensing apparatus is liable to be utterly swamped by the imminent arrival of multiple new smallsat launch providers on top of the already lofty cadence ambitions of SpaceX, ULA, and Blue Origin, as well as Orbital ATK to a lesser extent.
With SpaceX leading the charge, American spaceflight is already a year or more into a true renaissance, and the FAA is simply not equipped to handle it. If reforms can be completed with a haste rarely seen in Congress, the federal government can at a minimum ensure that it does not become a wholly artificial and preventable bottleneck for that explosion of domestic launch activity.
- SpaceX’s Demo Mission-1 Crew Dragon seen preparing for vacuum tests at a NASA-run facility, June 2018. (SpaceX)
- A Falcon 9 fairing during encapsulation, when a launch payload is sealed inside the fairing’s two halves. This small satellite is NASA’s TESS, launched in April 2018. (NASA)
- A combination of scientific satellites and five Iridium NEXT communications satellites preparing for launch in May 2018. (NASA)
- Telesat’s SSL-built Telstar 19V conducts testing in an anechoic chamber before launch, currently NET July 19. (SSL)
Speaking of that activity, SpaceX is scheduled to begin its H2 2018 manifest push with as many as six Falcon 9 launches (five with Block 5 boosters) over the next ~60 days. Barring an abrupt increase in booster production speed, sources have confirmed that those 2-3 summer months will likely also feature one of the first rapid Falcon 9 Block 5 reuses, potentially seeing one of SpaceX's highly reusable rockets complete two orbital launches within roughly 30-50 days of each other. That will, of course, depend on both customer willingness and the availability of rockets and launch facilities, but the goal of a rapid Block 5 reuse before summer's end still stands, at least for now.
Up next is CRS-15, which will see the last orbital Block 4 Falcon 9 launch a flight-proven Cargo Dragon to the ISS with several thousand pounds of supplies in tow, with liftoff scheduled for NET 5:42 am EDT, June 29.
Nvidia CEO Jensen Huang explains difference between Tesla FSD and Alpamayo
“Tesla’s FSD stack is completely world-class,” the Nvidia CEO said.
NVIDIA CEO Jensen Huang has offered high praise for Tesla’s Full Self-Driving (FSD) system during a Q&A at CES 2026, calling it “world-class” and “state-of-the-art” in design, training, and performance.
More importantly, he also shared some insights about the key differences between FSD and Nvidia’s recently announced Alpamayo system.
Jensen Huang’s praise for Tesla FSD
Nvidia made headlines at CES following its announcement of Alpamayo, a system that uses artificial intelligence to accelerate the development of autonomous driving solutions. Given its focus on AI, many speculated that Alpamayo would be a direct rival to FSD, a notion that Elon Musk somewhat addressed when he predicted that "they will find that it's easy to get to 99% and then super hard to solve the long tail of the distribution."
During his Q&A, Nvidia CEO Jensen Huang was asked about the difference between FSD and Alpamayo. His response was extensive:
“Tesla’s FSD stack is completely world-class. They’ve been working on it for quite some time. It’s world-class not only in the number of miles it’s accumulated, but in the way it’s designed, the way they do training, data collection, curation, synthetic data generation, and all of their simulation technologies.
“Of course, the latest generation is end-to-end Full Self-Driving—meaning it’s one large model trained end to end. And so… Elon’s AD system is, in every way, 100% state-of-the-art. I’m really quite impressed by the technology. I have it, and I drive it in our house, and it works incredibly well,” the Nvidia CEO said.
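Neither Tesla nor Nvidia has published the internals of FSD or Alpamayo, so the sketch below only illustrates the general distinction Huang draws between "one large model trained end to end" and a traditional modular pipeline; every function name, shape, and value here is a hypothetical stand-in.

```python
import numpy as np

# Conceptual illustration only: the internals of Tesla's FSD and Nvidia's
# Alpamayo are not public. Function names, shapes, and values are hypothetical.

def end_to_end_policy(camera_frames: np.ndarray) -> tuple[float, float]:
    """'One large model trained end to end': raw sensor input in, driving
    controls out, with no hand-engineered stages in between. A trivial
    computation stands in here for the learned network."""
    features = camera_frames.reshape(-1)
    steering = float(np.tanh(features.mean()))        # placeholder "network"
    throttle = float(np.clip(features.std(), 0.0, 1.0))
    return steering, throttle

def modular_pipeline(camera_frames: np.ndarray) -> tuple[float, float]:
    """The traditional alternative: separate perception, planning, and control
    stages that can be developed, validated, or swapped independently."""
    detections = perceive(camera_frames)   # e.g. objects and lane geometry
    trajectory = plan(detections)          # e.g. a target path and speed
    return control(trajectory)             # e.g. steering/throttle commands

def perceive(frames):      return {"lead_vehicle_distance_m": 42.0}
def plan(detections):      return {"target_speed_mps": 20.0}
def control(trajectory):   return (0.0, 0.3)

frames = np.random.rand(2, 64, 64, 3)      # tiny stand-in for camera input
print(end_to_end_policy(frames))
print(modular_pipeline(frames))
```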
Nvidia’s platform approach vs Tesla’s integration
Huang also stated that Nvidia’s Alpamayo system was built around a fundamentally different philosophy from Tesla’s. Rather than developing self-driving cars itself, Nvidia supplies the full autonomous technology stack for other companies to use.
“Nvidia doesn’t build self-driving cars. We build the full stack so others can,” Huang said, explaining that Nvidia provides separate systems for training, simulation, and in-vehicle computing, all supported by shared software.
He added that customers can adopt as much or as little of the platform as they need, noting that Nvidia works across the industry, including with Tesla on training systems and companies like Waymo, XPeng, and Nuro on vehicle computing.
“So our system is really quite pervasive because we’re a technology platform provider. That’s the primary difference. There’s no question in our mind that, of the billion cars on the road today, in another 10 years’ time, hundreds of millions of them will have great autonomous capability. This is likely one of the largest, fastest-growing technology industries over the next decade.”
He also emphasized Nvidia’s open approach, saying the company open-sources its models and helps partners train their own systems. “We’re not a self-driving car company. We’re enabling the autonomous industry,” Huang said.
Elon Musk confirms xAI’s purchase of five 380 MW natural gas turbines
The deal, which was confirmed by Musk on X, highlights xAI’s effort to aggressively scale its operations.
xAI, Elon Musk’s artificial intelligence startup, has purchased five additional 380 MW natural gas turbines from South Korea’s Doosan Enerbility to power its growing supercomputer clusters.
xAI’s turbine deal details
News of xAI’s new turbines was shared on social media platform X, with user @SemiAnalysis_ stating that the turbines were produced by South Korea’s Doosan Enerbility. As noted in an Asian Business Daily report, Doosan Enerbility announced last October that it signed a contract to supply two 380 MW gas turbines for a major U.S. tech company. Doosan later noted in December that it secured an order for three more 380 MW gas turbines.
As per the X user, the gas turbines would power an additional 600,000+ GB200 NVL72-equivalent cluster, which should make xAI's facilities among the largest in the world. In a reply, Elon Musk confirmed that xAI did purchase the turbines. "True," Musk wrote in a post on X.
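Some rough arithmetic puts that figure in context. The total generation comes from the article (five 380 MW turbines); the per-rack power draw below is an assumed illustrative value, not a number from the report.

```python
# Rough scale check. The total generation figure comes from the article
# (5 x 380 MW); the per-rack power draw is an illustrative assumption.
TURBINES = 5
MW_PER_TURBINE = 380
total_mw = TURBINES * MW_PER_TURBINE            # 1,900 MW of generation

ASSUMED_KW_PER_NVL72_RACK = 130                 # assumption, not a reported figure
racks = total_mw * 1_000 / ASSUMED_KW_PER_NVL72_RACK
gpus = racks * 72                               # a GB200 NVL72 rack holds 72 GPUs

print(f"{total_mw} MW -> ~{racks:,.0f} racks, ~{gpus:,.0f} GPU equivalents")
# ~14,600 racks and ~1.05 million GPU equivalents before cooling and facility
# overhead, i.e. the same order of magnitude as the 600,000+ figure once that
# overhead and real-world utilization are factored in.
```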
xAI’s ambitions
Recent reports have indicated that xAI closed an upsized $20 billion Series E funding round, exceeding the initial $15 billion target to fuel rapid infrastructure scaling and AI product development. The funding, as per the AI startup, “will accelerate our world-leading infrastructure buildout, enable the rapid development and deployment of transformative AI products.”
The company also teased the rollout of its upcoming frontier AI model. “Looking ahead, Grok 5 is currently in training, and we are focused on launching innovative new consumer and enterprise products that harness the power of Grok, Colossus, and 𝕏 to transform how we live, work, and play,” xAI wrote in a post on its website.
Elon Musk’s xAI closes upsized $20B Series E funding round
xAI announced the investment round in a post on its official website.
xAI has closed an upsized $20 billion Series E funding round, exceeding the initial $15 billion target to fuel rapid infrastructure scaling and AI product development.
A $20 billion Series E round
As noted by the artificial intelligence startup in its post, the Series E funding round attracted a diverse group of investors, including Valor Equity Partners, Stepstone Group, Fidelity Management & Research Company, Qatar Investment Authority, MGX, and Baron Capital Group, among others.
Strategic partners NVIDIA and Cisco Investments also continued their support for the buildout of the world's largest GPU clusters.
As xAI stated, “This financing will accelerate our world-leading infrastructure buildout, enable the rapid development and deployment of transformative AI products reaching billions of users, and fuel groundbreaking research advancing xAI’s core mission: Understanding the Universe.”
xAI’s core mission
The Series E funding builds on xAI's previous rounds, powering Grok advancements and massive compute expansions like the Memphis supercluster. The upsized round reflects growing recognition of xAI's potential in frontier AI.
xAI also highlighted several of its breakthroughs in 2025, from the buildout of Colossus I and II, which culminated in over 1 million H100 GPU equivalents, to the rollout of the Grok 4 Series, Grok Voice, and Grok Imagine, among others. The company also confirmed that work is already underway to train the flagship large language model's next iteration, Grok 5.
“Looking ahead, Grok 5 is currently in training, and we are focused on launching innovative new consumer and enterprise products that harness the power of Grok, Colossus, and 𝕏 to transform how we live, work, and play,” xAI wrote.