News
SpaceX Starship wraps up nosecone ‘cryo proof’ and first of several Raptor static fires
SpaceX has successfully ‘cryoproofed’ the first fully assembled Starship prototype’s nose-based propellant tank and used that same tank to fire up a Raptor engine, crossing off one of the last major tests before the rocket’s 15-kilometer (~9.3 mi) launch debut.
On November 4th, after a few false starts, Starship Serial Number 8 (SN8) kicked off its first round of testing after becoming the first prototype to have a nose section permanently installed. On that Wednesday evening, SpaceX most likely put the rocket through a partial cryogenic proof test explicitly focused on SN8’s new nosecone and the small secondary propellant tank situated in its tip. Designed to act as reservoirs for the relatively small amount of propellant Starships need to land, SN8’s two header tanks were likely loaded with cryogenic liquid nitrogen – a safe, nonreactive stand-in for liquid oxygen and methane.
Having proven that Starship SN8’s newly installed liquid oxygen header tank and associated plumbing are capable of loading, managing, and offloading dozens of tons of cryogenic fluid while navigating a 40-meter-tall (~130 ft) vertical pipe, SpaceX was ready to move on to the next step: a wet dress rehearsal (WDR) and Raptor static fire.
While SpaceX has technically completed eight successful Raptor static fires on four separate prototypes, including the first three-Raptor static fire ever attempted with Starship SN8, the company had never attempted a static fire while drawing propellant solely from the header (landing) tanks. Those tanks are all but essential for Starships to reliably reignite their Raptor engines in flight and to keep cryogenic landing propellant liquid for hours, days, weeks, or even months: their much smaller volume makes it easier to keep propellant highly pressurized and in the right place to supply the Raptors.
After several days of test windows that came and went and an aborted attempt on November 9th, Starship SN8 finally ignited one of its three Raptor engines, feeding the engine with liquid methane and oxygen stored in its two separate header tanks. Oddly, a second or two after ignition, Raptor’s usual exhaust plume was joined by a burst of shiny, firework-like debris. A relatively normal five seconds later, the Raptor cut off, though the engine appeared to remain partially on fire for another ten or so seconds – also somewhat unusual.
Ultimately, the observed anomaly could be as simple as debris accidentally left in the vicinity of Raptor’s plume or, less likely, concrete erosion. There’s also a chance that it was pieces of Raptor’s complex turbopumps or preburners, though it’s unlikely the engine would have continued running (as it did) if it had lost that much internal hardware.
(Update: Thankfully, NASASpaceflight.com reporter Michael Baylor says that the cloud of debris observed on November 10th “is not a [Raptor performance] concern,” making pad debris the likely source.)
SpaceX canceled another static fire window on November 11th, leaving the next opportunity for the second of three expected static fires between 9am and 9pm CST (UTC-6) on Thursday, November 12th.
Cybertruck
Tesla reveals its Cybertruck light bar installation fix
Tesla has revealed its Cybertruck light bar installation fix after a recall exposed a serious issue with the accessory.
Tesla and the National Highway Traffic Safety Administration (NHTSA) initiated a recall of 6,197 Cybertrucks back in October to resolve an issue with the Cybertruck light bar accessory. The problem stemmed from an adhesive supplied by a Romanian company called Hella Romania S.R.L.
The issue came down to primer quality: according to the NHTSA recall report, the light bar was “inadvertently attached to the windshield using the incorrect surface primer.”
Rather than adhering the light bar to the Cybertruck with adhesive strips or glue, Tesla will now attach it with a bracketing system that physically mounts the accessory to the vehicle.
Tesla outlines the fix in a new Service Bulletin, labeled SB-25-90-001 (spotted by Not a Tesla App), which shows how the light bar will be remounted more securely.
The entire process will take a few hours, but it can be completed by Mobile Service technicians, so a Cybertruck that needs the light bar fix won’t have to be taken to a Service Center for repair.
However, the repair will only be performed if there is no delamination or damage present; in that case, Tesla will “retrofit the service-installed optional off-road light bar accessory with a positive mechanical attachment.”
The company said it would repair the light bar at no charge to customers. According to the NHTSA report, the issue did not result in any accidents or injuries.
This was the third Cybertruck recall this year: one in March covered exterior trim panels detaching during operation, and another, for front parking lights that were too bright, was fixed with an over-the-air update last month.
News
Tesla is already expanding its Rental program aggressively
The program has already launched in a handful of locations, all confined to California for now. However, it does not seem like Tesla has any interest in keeping it restricted to the Golden State.
Tesla is looking to expand its Rental Program aggressively, just weeks after the program was first spotted on its Careers website.
Earlier this month, we reported on Tesla’s intention to launch a crazy new Rental program with cheap daily rates, which would give people in various locations the opportunity to borrow a vehicle in the company’s lineup with some outrageous perks.
Along with the cheap rates that start at about $60 per day, Tesla also provides free Full Self-Driving operation and free Supercharging for the duration of the rental. There are also no limits on mileage or charging, but the terms do not allow the renter to leave the state from which they are renting.
🚨🚨 If you look up details on the Tesla Rental program on Google, you’ll see a bunch of sites saying it’s because of decreasing demand 🤣 pic.twitter.com/WlSQrDJhMg
— TESLARATI (@Teslarati) November 10, 2025
Job postings from Tesla now show it is planning to launch the Rental program in at least three new states: Texas, Tennessee, and Massachusetts.
The jobs are specifically listed as “Rental Readiness Specialist,” with the following job description:
“The Tesla Rental Program is looking for a Rental Readiness Specialist to work on one of the most progressive vehicle brands in the world. The Rental Readiness Specialist is a key contributor to the Tesla experience by coordinating the receipt of incoming new and used vehicle inventory. This position is responsible for fleet/lot management, movement of vehicles, vehicle readiness, rental invoicing, and customer hand-off. Candidates must have a high level of accountability, and personal satisfaction in doing a great job.”
It also says that those who take the position will have to charge and clean the cars, work with clients on scheduling pickups and drop-offs, and prepare the paperwork necessary to initiate the rental.
The establishment of a Rental program is big for Tesla because it not only gives people the opportunity to experience its vehicles, but also offers a new way to rent a car.
Just as Tesla’s purchasing process is more streamlined and efficient than the traditional car-buying experience, renting from Tesla could be a less painful way to borrow a car for a trip instead of using your own.
Elon Musk
Elon Musk’s xAI gains first access to Saudi supercluster with 600k Nvidia GPUs
The facility will deploy roughly 600,000 Nvidia GPUs, making it one of the world’s most notable superclusters.
A Saudi-backed developer is moving forward with one of the world’s largest AI data centers, and Elon Musk’s xAI will be its first customer. The project, unveiled at the U.S.–Saudi Investment Forum in Washington, D.C., is being built by Humain, a company supported by Saudi Arabia’s Public Investment Fund.
xAI secures priority access
Nvidia CEO Jensen Huang stated that the planned data center marks a major leap not just for the region but for the global AI ecosystem as a whole. Huang joked about the sheer capacity of the build, emphasizing how unusual it is for a startup to receive infrastructure of such magnitude. The facility is designed to deliver 500 megawatts of Nvidia GPU power, placing it among the world’s largest AI-focused installations, as noted in a Benzinga report.
“We worked together to get this company started and off the ground and just got an incredible customer with Elon. Could you imagine a startup company, approximately $0 billion in revenues, now going to build a data center for Elon? 500 megawatts is gigantic. This company is off the charts right away,” Huang said.
Global chipmakers join multi-vendor buildout to enhance compute diversity
While Nvidia GPUs serve as the backbone of the first phase, Humain is preparing a diversified hardware stack. AMD will supply its Instinct MI450 accelerators, which could draw up to 1 gigawatt of power by 2030 as deployments ramp. Qualcomm will also contribute AI200 and AI250 data center processors, accounting for an additional 200 megawatts of compute capacity. Cisco will support the networking and infrastructure layer, helping knit the multi-chip architecture together.
Apart from confirming that xAI will be the upcoming supercluster’s first customer, Musk joked about the rapid scaling needed to train increasingly large AI models, quipping that a theoretical expansion one thousand times larger “would be 8 bazillion, trillion dollars,” the kind of playful exaggeration he often brings to discussions of extreme compute demand.