Tesla Roadster and ‘friends’ make history in newly-published log of 57k+ human objects in space

When the Tesla Roadster and its Starman occupant entered space aboard Falcon Heavy’s maiden voyage in 2018, it joined the ranks of one astronomer’s impressive database of human-made objects that have left Earth: The General Catalog of Artificial Space Objects (GCAT). It’s the most comprehensive collection of space object data available to the public, and its author recently published it in full for open-source use.

Jonathan McDowell, currently with the Harvard-Smithsonian Center for Astrophysics, created GCAT as an endeavor that began about 40 years ago during his Apollo-inspired childhood.

“It was hard for me growing up in England to get details about space because the media there weren’t as interested in it as the U.S. media, so in a slightly obsessive way I started making a list of rocket launches… Now I have the best list,” McDowell told VICE in recently published comments. Lack of information in his younger days seems to have only been the beginning of the challenges the astronomer was willing to take on for his project. As detailed to VICE, McDowell also traveled to international space agency locations to obtain their old rocket lists and even learned Russian to translate that country’s space object data.

Although McDowell has been collecting his Catalog data for decades, the push to finally put all of his work online was inspired by more recent events. The risks of COVID-19 and “imminent death” threatened the database’s purpose. “There’s no point if it dies with me,” he told VICE. Publishing the GCAT had long been in his plans; the pandemic, however, pushed it to the top of McDowell’s personal bucket list.

So, what exactly might one use the GCAT for? McDowell had his own suggestions, including determining how many working satellites are currently in space. Since the data is easy to export into software that can sort tab-delimited files, one could also look at the amount of debris produced over the years to get a general picture of how active spaceflight operations were in the past or how they may be progressing. Plenty of information about each object’s origin and owner is included for this kind of research.
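Because the catalog is plain tab-separated text, loading and sorting it takes only a few lines of Python. Here is a minimal sketch using made-up rows and assumed column names (a real GCAT export carries many more fields, and the actual headers may differ):

```python
import csv
import io

# Illustrative stand-in for a tab-delimited GCAT export.
# Column names here ("Name", "Owner", "LDate") are assumptions for the sketch.
sample_tsv = """Name\tOwner\tLDate
Sputnik 1\tUSSR\t1957
Eagle\tUS\t1969
Tesla Roadster\tUS\t2018
"""

# Parse the tab-delimited text into a list of dictionaries, one per object.
rows = list(csv.DictReader(io.StringIO(sample_tsv), delimiter="\t"))

# Sort objects by launch year, newest first.
rows.sort(key=lambda r: int(r["LDate"]), reverse=True)
for r in rows:
    print(r["LDate"], r["Name"])
```

The same pattern works on the full catalog by swapping the inline string for an open file handle; a spreadsheet application reading tab-delimited files does the equivalent with a sort-by-column click.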


One of the GCAT data sets tracks failed objects that would have otherwise made it to orbit. As an example, the number of items from failed launch attempts in 1958 (52) hints at how intense the space race between the US and the Soviet Union was at the time. Data browsing could be used for general historical inquiry as well. For instance, Sputnik 1, launched by the Soviet Union on October 4, 1957, is object #00001; the Eagle lander still on the Moon from Apollo 11’s mission is object #04041; and the Tesla Roadster is object #43205.

Some of the data can inspire historical awareness, such as the listing of tools lost during on-orbit construction of the Soviet Mir space station in 1986. Of course, reminders of significant spaceflight misfortunes are also included, like the Challenger Space Shuttle explosion in 1986 and SpaceX’s CRS-7 ISS resupply mission failure in 2015.

Since GCAT is inclusive of both functional items and notorious bits of space junk logged from decades of data digging, the Tesla Roadster and its 57,000+ “friends” are poised to help with some serious research now and in the far future.

“My audience is the historian 1,000 years from now,” McDowell explained. “I’m imagining that 1,000 years from now there will be more people living off Earth than on, and that they will look back to this moment in history as critically important.” For fans of Star Trek, this type of record keeping certainly seems relevant to future humans more often than not (away mission, anyone?). Perhaps that type of science fiction storyline will become reality, just as so many of SpaceX’s achievements already have.

Interestingly enough, McDowell is working on another project to track deep space objects beyond Earth’s orbit. Will space debris take center stage around Mars and beyond like it does around our own planet? Seeing the progress in one comprehensive database will certainly be an interesting way to show just how far humans have come since object #00001.


Accidental computer geek, fascinated by most history and the multiplanetary future on its way. Quite keen on the democratization of space. | It's pronounced day-sha, but I answer to almost any variation thereof.



Tesla confirms that work on Dojo 3 has officially resumed

“Now that the AI5 chip design is in good shape, Tesla will restart work on Dojo 3,” Elon Musk wrote in a post on X.



Tesla has restarted work on Dojo 3, its in-house AI training supercomputer initiative, now that its AI5 chip design has reached a stable stage.

Tesla CEO Elon Musk confirmed the update in a recent post on X.

Tesla’s Dojo 3 initiative restarted

In a post on X, Musk said that with the AI5 chip design now “in good shape,” Tesla will resume work on Dojo 3. He added that Tesla is hiring engineers interested in working on what he expects will become the highest-volume AI chips in the world.

“Now that the AI5 chip design is in good shape, Tesla will restart work on Dojo3. If you’re interested in working on what will be the highest volume chips in the world, send a note to AI_Chips@Tesla.com with 3 bullet points on the toughest technical problems you’ve solved,” Musk wrote in his post on X. 

Musk’s comment followed a series of recent posts outlining Tesla’s broader AI chip roadmap. In another update, he stated that Tesla’s AI4 chip alone would achieve self-driving safety levels well above human drivers, AI5 would make vehicles “almost perfect” while significantly enhancing Optimus, and AI6 would be focused on Optimus and data center applications. 


Musk then highlighted that AI7/Dojo 3 will be designed to support space-based AI compute.

Tesla’s AI roadmap

Musk’s latest comments helped resolve some confusion that emerged last year about Project Dojo’s future. At the time, Musk stated on X that Tesla was stepping back from Dojo because it did not make sense to split resources across multiple AI chip architectures. 

He suggested that clustering large numbers of Tesla AI5 and AI6 chips for training could effectively serve the same purpose as a dedicated Dojo successor. “In a supercomputer cluster, it would make sense to put many AI5/AI6 chips on a board, whether for inference or training, simply to reduce network cabling complexity & cost by a few orders of magnitude,” Musk wrote at the time.

Musk later reinforced that idea by responding positively to an X post stating that Tesla’s AI6 chip would effectively be the new Dojo. Considering his recent updates on X, however, it appears that Tesla will be using AI7, not AI6, as its dedicated Dojo successor. The CEO did state that Tesla’s AI7, AI8, and AI9 chips will be developed in short, nine-month cycles, so Dojo’s deployment might actually be sooner than expected. 



Elon Musk’s xAI brings 1GW Colossus 2 AI training cluster online

Elon Musk shared his update in a recent post on social media platform X.



xAI has brought its Colossus 2 supercomputer online, making it the first gigawatt-scale AI training cluster in the world, and it’s about to get even bigger in a few months.

Elon Musk shared his update in a recent post on social media platform X.

Colossus 2 goes live

xAI uses the Colossus 2 supercomputer, together with its predecessor, Colossus 1, primarily to train and refine the company’s Grok large language model. In a post on X, Musk stated that Colossus 2 is already operational, making it the first gigawatt training cluster in the world.

Even more remarkable, the cluster is expected to be upgraded to 1.5 GW of power in April. Even in its current iteration, however, Colossus 2 already exceeds the peak power demand of San Francisco.

Commentary from users of the social media platform highlighted the speed of execution behind the project: Colossus 1 went from site preparation to full operation in 122 days, and Colossus 2 has now crossed the 1 GW barrier while targeting a total capacity of roughly 2 GW, a pace that far exceeds xAI’s primary rivals.


Funding fuels rapid expansion

xAI’s Colossus 2 launch follows the company’s recently closed, upsized $20 billion Series E funding round, which exceeded its initial $15 billion target. The company said the capital will be used to accelerate infrastructure scaling and AI product development.

The round attracted a broad group of investors, including Valor Equity Partners, Stepstone Group, Fidelity Management & Research Company, Qatar Investment Authority, MGX, and Baron Capital Group. Strategic partners NVIDIA and Cisco also continued their support, helping xAI build what it describes as the world’s largest GPU clusters.

xAI said the funding will accelerate its infrastructure buildout, enable rapid deployment of AI products to billions of users, and support research tied to its mission of understanding the universe. The company noted that its Colossus 1 and 2 systems now represent more than one million H100 GPU equivalents, alongside recent releases including the Grok 4 series, Grok Voice, and Grok Imagine. Training is also already underway for its next flagship model, Grok 5.



Tesla AI5 chip nears completion, Elon Musk teases 9-month development cadence

The Tesla CEO shared his recent insights in a post on social media platform X.



Tesla’s next-generation AI5 chip is nearly complete, and work on its successor is already underway, as per a recent update from Elon Musk. 

The Tesla CEO shared his recent insights in a post on social media platform X.

Musk details AI chip roadmap

In his post, Elon Musk stated that Tesla’s AI5 chip design is “almost done,” while AI6 has already entered early development. Musk added that Tesla plans to continue iterating rapidly, with AI7, AI8, AI9, and future generations targeting a nine-month design cycle. 

He also noted that Tesla’s in-house chips could become the highest-volume AI processors in the world. Musk framed his update as a recruiting message, encouraging engineers to join Tesla’s AI and chip development teams.

Tesla community member Herbert Ong highlighted the strategic importance of the timeline, noting that faster chip cycles enable quicker learning, faster iteration, and a compounding advantage in AI and autonomy that becomes increasingly difficult for competitors to close.

Advertisement
-->

AI5 manufacturing takes shape

Musk’s comments align with earlier reporting on AI5’s production plans. In December, it was reported that Samsung is preparing to manufacture Tesla’s AI5 chip and is accelerating the hiring of experienced engineers to support U.S. production and address complex foundry challenges.

Samsung is one of two suppliers selected for AI5, alongside TSMC. The companies are expected to produce different versions of the AI5 chip, with TSMC reportedly using a 3nm process and Samsung using a 2nm process.

Musk has previously stated that while different foundries translate chip designs into physical silicon in different ways, the goal is for both versions of the Tesla AI5 chip to operate identically. AI5 will succeed Tesla’s current AI4 hardware, formerly known as Hardware 4, and is expected to support the company’s Full Self-Driving system as well as other AI-driven efforts, including Optimus.
