SpaceX reveals new Starlink satellite details 24 hours from launch
Less than 24 hours before SpaceX’s first dedicated Starlink mission is scheduled to lift off, the company revealed a handful of new details about the design of the 60 satellites cocooned inside Falcon 9’s fairing.
The Falcon 9 booster assigned to launch the Starlink v0.9 mission – B1049 – has already flown twice before, in September 2018 and January 2019, and will likely take part in many additional launches prior to retirement. In support of B1049’s hopeful future, drone ship Of Course I Still Love You (OCISLY) arrived at its recovery location on May 13th, an impressive 620 km (385 mi) downrange – unusually far, given the launch’s low target orbit of 440 km (270 mi).
(Extra) smallsats
The combination of a distant booster recovery and a low target orbit can only mean one thing: Starlink v0.9’s satellite payload is extremely heavy. As it just so happens, that is exactly the case, per details included in SpaceX’s official press kit (PDF).
“With a flat-panel design featuring multiple high-throughput antennas and a single solar array, each Starlink satellite weighs approximately 227kg, allowing SpaceX to maximize mass production and take full advantage of Falcon 9’s launch capabilities. To adjust position on orbit, maintain intended altitude, and deorbit, Starlink satellites feature Hall thrusters powered by krypton. Designed and built upon the heritage of Dragon, each spacecraft is equipped with a Startracker navigation system that allows SpaceX to point the satellites with precision. Importantly, Starlink satellites are capable of tracking on-orbit debris and autonomously avoiding collisions. Additionally, 95 percent of all components of this design will quickly burn [up] in Earth’s atmosphere at the end of each satellite’s lifecycle—exceeding all current safety standards—with future iterative designs moving to complete disintegration.”

First and foremost, an individual satellite mass of around 227 kg (500 lb) is an impressive achievement, nearly halving the mass of the Tintin A/B prototypes SpaceX launched back in February 2018. For context, OneWeb’s essentially finalized satellite design weighs ~150 kg (330 lb) each and relies on a ~1050 kg (2310 lb) adapter capable of carrying ~30 satellites. Accounting for the adapter, that translates to ~180 kg (400 lb) per OneWeb satellite, around 20% lighter than Starlink v0.9 spacecraft.
However, assuming SpaceX has effectively achieved its desired per-satellite throughput of ~20 gigabits per second (Gbps), Starlink v0.9 could provide more than twice the performance of OneWeb’s satellites (PDF). These are still development satellites, however, and don’t carry the laser interlinks that will be standard on all future spacecraft, likely increasing their mass by an additional ~10%.

Despite the technical unknowns, it’s safe to conclude that SpaceX’s Starlink satellite form factor and packing efficiency are far ahead of anything comparable. Relative to the rockets it competes with, Falcon 9’s fairing is actually on the smaller side, but SpaceX has still managed to fit an incredible 60 fairly high-performance spacecraft inside it with plenty of room to spare. Additionally, SpaceX CEO Elon Musk says that these “flat-panel” Starlink satellites have no real adapter or dispenser, relying instead on their own structure to support the full stack. How each satellite will deploy on orbit is to be determined, but it will likely be no less unorthodox than the stack’s integrated, Borg cube-esque appearance.
That efficiency also means that the Starlink v0.9 payload is massive. At ~227 kg per satellite, the minimum mass is about 13,620 kg (30,000 lb), easily making it the heaviest payload SpaceX has ever attempted to launch. It’s difficult to exaggerate how ambitious a start this is for the company’s internal satellite development program – Starlink has gone from two rough prototypes to 60 satellites and one of the heaviest communications satellite payloads ever in less than a year and a half.
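As a sanity check, the stack mass follows directly from the per-satellite figure. The sketch below is a back-of-envelope calculation only; the ~227 kg number comes from SpaceX’s press kit, and the true stack mass including any integration hardware would be somewhat higher.

```python
# Back-of-envelope mass of the Starlink v0.9 stack, using the
# ~227 kg per-satellite figure from SpaceX's press kit.
SAT_MASS_KG = 227
SAT_COUNT = 60
LB_PER_KG = 2.20462

stack_kg = SAT_MASS_KG * SAT_COUNT
stack_lb = stack_kg * LB_PER_KG
print(f"Minimum stack mass: {stack_kg:,} kg ({stack_lb:,.0f} lb)")
# → Minimum stack mass: 13,620 kg (30,027 lb)
```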
[Insert Kryptonite joke here]
Beyond their lightweight and space-efficient flat-panel design, the next most notable feature of SpaceX’s Starlink v0.9 satellites is their propulsion system of choice. Not only has SpaceX designed, built, tested, and qualified its own Hall Effect thrusters (HETs) for Starlink, but it has based those thrusters on krypton instead of industry-standard xenon gas propellant.
Based on a cursory review of academic and industry research into the technology, krypton-based Hall effect thrusters can beat xenon’s specific impulse (Isp, a measure of propellant efficiency) by 10-15% but produce 15-25% less thrust for a given power input, meaning that krypton generally requires significantly more power to match xenon’s thrust. However, the likeliest explanation for SpaceX’s choice of krypton over less exotic options is simple: firm prices are hard to come by for such rare noble gases, but krypton costs at least 5-10 times less than xenon for a given mass.
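The thrust-versus-power trade can be seen in the idealized electric-thruster relation T = 2ηP / (g₀ · Isp). The Isp and efficiency numbers below are illustrative assumptions chosen to match the rough percentages above, not SpaceX figures.

```python
# Idealized thruster relation: thrust = 2 * efficiency * power / (g0 * Isp).
# The Isp and efficiency values are illustrative assumptions, not SpaceX data.
G0 = 9.81  # standard gravity, m/s^2

def thrust_mN(power_W: float, isp_s: float, efficiency: float) -> float:
    """Thrust (millinewtons) for a given input power, Isp, and total efficiency."""
    return 2 * efficiency * power_W / (G0 * isp_s) * 1e3

# Hypothetical small Hall thrusters at 1 kW of input power:
xe = thrust_mN(1000, isp_s=1500, efficiency=0.50)  # xenon baseline
kr = thrust_mN(1000, isp_s=1700, efficiency=0.40)  # ~13% higher Isp, lower efficiency

print(f"Xenon:   {xe:.1f} mN")  # ~68 mN
print(f"Krypton: {kr:.1f} mN")  # ~48 mN, i.e. ~30% less thrust at the same power
```

Even with these made-up inputs, the shape of the trade is clear: the krypton thruster’s higher exhaust velocity buys Isp, but its lower efficiency costs thrust per watt.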

At the costs SpaceX is targeting ($500k-$1M per satellite), the price of propellant alone (say 25-50 kg) could be a major barrier to satellite affordability – 50 kg of xenon costs at least $100,000, while 50 kg of krypton is more like $10,000-25,000. The more propellant each Starlink satellite can carry, the longer each spacecraft can safely operate, another way to lower the lifetime cost of a satellite megaconstellation.
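Those ballparks are easy to restate as a share of per-satellite cost. The per-kilogram prices and the $750k satellite figure below are assumptions derived from the ranges quoted above.

```python
# Rough propellant-cost comparison per satellite, using the article's
# price ballparks. Per-kg figures are assumptions, not quoted market prices.
PROP_MASS_KG = 50
XENON_USD_PER_KG = 2000       # "at least $100,000" for 50 kg
KRYPTON_USD_PER_KG = 350      # midpoint of "$10,000-25,000" for 50 kg
SATELLITE_COST_USD = 750_000  # midpoint of the $500k-$1M target

for name, price_per_kg in [("xenon", XENON_USD_PER_KG), ("krypton", KRYPTON_USD_PER_KG)]:
    load_cost = PROP_MASS_KG * price_per_kg
    share = 100 * load_cost / SATELLITE_COST_USD
    print(f"{name}: ${load_cost:,} (~{share:.0f}% of satellite cost)")
```

Under these assumptions a full xenon load would eat roughly an eighth of the satellite’s target budget, while krypton stays in the low single digits.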
SpaceX’s dedicated Starlink launch debut is set to lift off no earlier than 10:30pm EDT (02:30 UTC), May 15th. This is not a webcast you want to miss!
Elon Musk teases crazy outlook for xAI against its competitors
Musk’s response was vintage hyperbole, designed to rally supporters and dismiss doubters, something his responses on social media often do.
Elon Musk has never been one to shy away from crazy timelines, massive expectations, and outrageous outlooks. However, his recent plans for xAI and where he believes it will end up compared to its competitors are sure to stimulate conversation.
In a bold and characteristic response on X, Elon Musk fired back at a recent analysis that positioned his AI venture, xAI, as lagging behind industry frontrunners.
The post, from March 14, came as a direct reply to forecaster Peter Wildeford’s assessment, which drew from benchmarks and reporting to rank AI developers.
xAI will catch up this year and then exceed them all by such a long distance in 3 years that you will need the James Webb telescope to see who is in second place
— Elon Musk (@elonmusk) March 14, 2026
Wildeford placed Anthropic, Google, and OpenAI in a virtual tie at the top, with xAI and Meta trailing by about seven months. Chinese players like Moonshot, Deepseek, zAI, and Alibaba were estimated to be nine months behind, while France’s Mistral lagged by about a year and a half.
Musk claimed xAI would “catch up this year,” meaning by the end of 2026, erasing that seven-month deficit against the leaders. But he didn’t stop there.
Musk escalated his vision to 2029, predicting xAI would “exceed them all by such a long distance” that observers would need the James Webb Space Telescope, NASA’s orbiting observatory stationed about 930,000 miles from Earth, to spot whoever lands in second place. This analogy underscores Musk’s confidence in xAI’s trajectory, implying an astronomical lead that could redefine the AI landscape.
Breaking down these claims reveals Musk’s strategic optimism. First, the short-term catch-up: xAI, launched in 2023, has already released models like Grok, but recent benchmarks, including those for Grok 4.2, have shown it falling short in capabilities compared to rivals.
Anthropic’s Claude series, Google’s Gemini, and OpenAI’s GPT models dominate in areas like reasoning, coding, and multimodal tasks. Musk’s assertion suggests aggressive scaling in compute, talent, or architecture, perhaps leveraging xAI’s ties to Tesla’s Dojo supercomputers or Musk’s vast resources, to close the gap swiftly.
The longer-term dominance by 2029 paints an even more audacious picture. Musk envisions xAI not just parity but supremacy, outpacing competitors in innovation speed and model sophistication.
This could involve breakthroughs in energy-efficient training, real-world integration such as Tesla’s robotics, or ethical AI alignment, in keeping with Musk’s stated goal of “understanding the universe.”
Critics, however, point to parallels with Tesla’s Full Self-Driving delays; one reply highlighted Musk’s 2023 promise of FSD readiness. Musk has made similar promises for years, and while the system has steadily improved, it remains well short of the fully autonomous operation that was expected by now.
Musk’s comment highlights the intensifying U.S.-centric AI race, with xAI challenging the “three-way” dominance noted by Wharton professor Ethan Mollick, whom Wildeford quoted. As geopolitical tensions rise—evident in the Chinese firms’ lag—Musk’s tease could spur investment and talent wars.
Yet, it also invites scrutiny: Will xAI deliver, or is this another telescope-needed mirage? In an industry where timelines slip but stakes soar, Musk’s words keep the spotlight on xAI’s ambitious path forward.
Tesla Terafab set for launch: Inside the $20B AI chip factory that will reshape the auto industry
Tesla is set to launch the “Terafab Project,” a vertically integrated chip fabrication effort combining logic processing, memory, and advanced packaging.
Tesla is making one of the boldest bets in its history. On March 14, Elon Musk posted on X that the “Terafab Project launches in 7 days,” pointing to March 21, 2026 as the start date for what he has described as a vertically integrated chip fabrication effort combining logic processing, memory, and advanced packaging.
Tesla first confirmed Terafab on its January 28, 2026 earnings call, where Musk told investors the company needs to build a chip fabrication facility to avoid a supply constraint projected to materialize within three to four years. But the seeds were planted even earlier. At Tesla’s annual general meeting last year, Musk warned that even in the best-case scenario for chip production from their suppliers, it still wouldn’t be enough, and declared that building a “gigantic chip fab” simply had to be done.
While there has been no official announcement on where Tesla plans to break ground on the massive Terafab, all signs point to the North Campus of Giga Texas in Austin.
Months of speculation have surrounded Tesla’s North Campus expansion at Giga Texas, where drone footage captured by observer Joe Tegtmeyer revealed massive construction site preparation just north of the existing factory, on a scale that rivals the original Giga Texas footprint itself.
The project is projected to produce 100–200 billion AI and memory chips annually, targeting 100,000 wafer starts per month, at an estimated cost of $20 billion. Tesla is targeting 2-nanometer process technology, anticipated to be the most advanced node in commercial production. The Tesla AI5 chip, which will pack 40x–50x more compute performance and 9x more memory than AI4, will be among the first products the Terafab is set to produce. This highly optimized, massively powerful inference chip is designed to make Full Self-Driving (FSD) and Tesla’s Optimus robots faster, safer, and fully autonomous.
This is where Terafab becomes a genuine game-changer. If Tesla successfully builds a 2nm chip fab at scale, it becomes one of only a handful of entities capable of producing AI silicon in-house, with competitive implications that extend far beyond Tesla’s own vehicles, potentially positioning Tesla as a chip supplier or licensor to other industries.
The next-gen Tesla AI chips will power advancements in Full Self-Driving software, the Cybercab Robotaxi program, and the Optimus humanoid robot line. Musk’s projections for Optimus require chip volumes that no existing external supplier can commit to on Tesla’s timeline. Competitors like Waymo and GM’s Cruise remain dependent on third-party silicon, leaving them exposed to the same supply chain vulnerabilities Tesla is now working to eliminate entirely.
The Terafab launch this week may not mean a factory opens its doors overnight, but it signals Tesla is serious about owning the entire AI stack, from software to silicon.
What is Digital Optimus? The new Tesla and xAI project explained
At its core, Digital Optimus operates through a dual-process architecture inspired by human cognition.
Tesla and xAI announced their groundbreaking joint project, Digital Optimus, also nicknamed “Macrohard” in a humorous jab at Microsoft, earlier this week.
This software-based AI agent is designed to automate complex office workflows by observing and replicating human interactions with computers. As the first major outcome of Tesla’s $2 billion investment in xAI, it represents a powerful fusion of hardware efficiency and advanced reasoning.
Macrohard or Digital Optimus is a joint xAI-Tesla project, coming as part of Tesla’s investment agreement with xAI.
Grok is the master conductor/navigator with deep understanding of the world to direct digital Optimus, which is processing and actioning the past 5 secs of…
— Elon Musk (@elonmusk) March 11, 2026
Tesla’s specialized AI acts as “System 1”—the fast, instinctive executor—processing the past five seconds of real-time computer screen video along with keyboard and mouse actions to perform immediate tasks.
xAI’s Grok model serves as “System 2,” the strategic “master conductor” or navigator, providing high-level reasoning, world understanding, and directional oversight, much like an advanced turn-by-turn navigation system.
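The division of labor described above maps onto a simple control loop. The sketch below is purely illustrative: every class and function name in it is hypothetical (neither Tesla nor xAI has published an API), but it captures the stated architecture, with a slow planner setting the goal while a fast executor reacts to a rolling five-second window of screen input.

```python
# Hypothetical sketch of the dual-process loop described above.
# All names here are invented for illustration; this is not a real Tesla/xAI API.
from collections import deque

class DigitalOptimusSketch:
    def __init__(self, window_secs: int = 5, fps: int = 10):
        # System 1 only ever sees the most recent ~5 seconds of screen frames.
        self.frames = deque(maxlen=window_secs * fps)
        self.goal = None

    def system2_plan(self, task: str) -> str:
        """'Master conductor': slow, high-level reasoning (Grok's role)."""
        self.goal = f"plan for: {task}"  # placeholder for an LLM planning call
        return self.goal

    def system1_act(self, frame: str) -> dict:
        """Fast executor: react to the rolling frame window (the AI4 chip's role)."""
        self.frames.append(frame)
        return {"goal": self.goal, "context_frames": len(self.frames)}

agent = DigitalOptimusSketch()
agent.system2_plan("reconcile accounts")
action = agent.system1_act(frame="screen_capture_0")
```

The key structural point is the bounded `deque`: the executor’s context is capped at a few seconds of input, so all longer-horizon memory and intent has to live in the planner.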
When combined, the two can create a powerful AI-based assistant that can complete everything from accounting work to HR tasks.
The system runs primarily on Tesla’s low-cost AI4 inference chip, minimizing reliance on xAI’s expensive Nvidia hardware while maintaining competitive, real-time performance.
Elon Musk described it as “the only real-time smart AI system” capable, in principle, of emulating the functions of entire companies, handling everything from accounting and HR to repetitive digital operations.
Timelines point to swift deployment. Announced just days ago, Digital Optimus is expected by Musk to be ready for users within about six months, targeting a rollout around September 2026.
It will integrate into all AI4-equipped Tesla vehicles, enabling parked cars to handle office work during downtime. Millions of dedicated units are also planned for deployment at Supercharger stations, tapping into roughly 7 gigawatts of available power.
Oh and it works in all AI4-equipped cars, so your car can do office work for you when not driving.
We’re also deploying millions of dedicated Digital Optimus units in the field at Superchargers where we have ~7 gigawatts of available power.
— Elon Musk (@elonmusk) March 12, 2026
Digital Optimus directly supports Tesla’s broader autonomy strategy. It leverages the same end-to-end neural networks, computer vision, and real-time decision-making tech that power Full Self-Driving (FSD) software and the physical Optimus humanoid robot.
By repurposing idle vehicle compute and extending AI4 hardware beyond driving, the project scales Tesla’s autonomy ecosystem from roads to digital workspaces.
As a virtual counterpart to physical Optimus, it divides labor: software agents manage screen-based tasks while humanoid robots tackle physical ones, accelerating Tesla’s vision of general-purpose AI for productivity, Robotaxi fleets, and beyond.
In essence, Digital Optimus bridges Tesla’s vehicle and robotics autonomy with enterprise-scale AI, promising massive efficiency gains. No other company currently matches its real-time capabilities on such accessible hardware.
It could prove to be one of the most crucial developments to come from Tesla and xAI’s integration, as it could revolutionize how people work and travel.
