Musk’s OpenAI will train artificial intelligence through video game platform ‘Universe’
Elon Musk’s OpenAI has introduced Universe, a virtual training ground aimed at teaching AI to play video games, use apps and even interact with websites. OpenAI, the artificial intelligence research company backed by the Tesla CEO and billionaire entrepreneur, defines Universe in a blog post as “a software platform for measuring and training an AI’s general intelligence across the world’s supply of games, websites and other applications.”
Put simply, Universe provides a gym that allows AI agents to go beyond their specialized knowledge of an individual environment toward something approaching common sense: the ability to handle, in OpenAI’s words, “any task a human can complete with a computer.” Using a VNC (Virtual Network Computing) remote desktop, it allows the AI to control a game or app with a virtual keyboard and mouse, and to see the program’s output by analyzing the pixels displayed on the screen. It is essentially an interface to the company’s Gym toolkit for developing reinforcement learning algorithms, a type of machine learning system.
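In practice, that means an agent interacts with a Universe environment much as it would with any standard Gym environment. The snippet below is a minimal sketch based on the interface OpenAI describes (a Gym-style Python environment driven over VNC); the environment name and the single hard-coded key press are illustrative placeholders rather than a real learning agent.

import gym
import universe  # importing universe registers its environments with Gym

# Create a VNC-backed Flash game environment (name is illustrative).
env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)  # starts one local Docker container serving the game over VNC
observation_n = env.reset()

while True:
    # The agent observes raw screen pixels and replies with keyboard/mouse events.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()

Because the only channels are screen pixels in and keyboard/mouse events out, the same loop works, in principle, for any program that can be run behind a remote desktop.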
“Our goal is to develop a single AI agent that can flexibly apply its past experience on Universe environments to quickly master unfamiliar, difficult environments, which would be a major step towards general intelligence,” OpenAI says. As an example, it points to the success of Google DeepMind’s AlphaGo initiative, which defeated the world champion human Go player earlier this year. While that success was impressive, when faced with a different challenge, the agent would have to go back to square one and learn the new environment through millions of trial-and-error steps.
OpenAI hopes to expand the reinforcement learning (RL) lessons learned in one environment so that an AI agent can build upon past experience to succeed in unfamiliar environments.
We're releasing Universe, a platform for measuring and training AI agents: https://t.co/bx7OjMDaJK
— OpenAI (@OpenAI) December 5, 2016
OpenAI says in its blog post, “Systems with general problem solving ability — something akin to human common sense, allowing an agent to rapidly solve a new hard task — remain out of reach. One apparent challenge is that our agents don’t carry their experience along with them to new tasks. In a standard training regime, we initialize agents from scratch and let them twitch randomly through tens of millions of trials as they learn to repeat actions that happen to lead to rewarding outcomes. If we are to make progress towards generally intelligent agents, we must allow them to experience a wide repertoire of tasks so they can develop world knowledge and problem solving strategies that can be efficiently reused in a new task.”
Prior to Universe, the largest RL resource was the Arcade Learning Environment, a collection of 55 Atari games, says The Register. Universe, by contrast, launches with the largest library of games and resources ever assembled. “Out of the box, Universe comprises thousands of games (e.g. Flash games, slither.io, Starcraft), browser-based tasks (e.g. form filling), and applications (e.g. fold.it),” the OpenAI blog claims. Companies cooperating with OpenAI include Microsoft (with which OpenAI announced a strategic partnership), EA, Valve, Nvidia, Zachtronics, Wolfram, and others.
Universe is about more than gaming. Its main focus is on training AI agents to complete common online tasks with speed and accuracy. “Today, our agents are mostly learning to interact with common user interface elements like buttons, lists and sliders, but in the future they could complete complex tasks, such as looking up things they don’t know on the internet, managing your email or calendar, completing Khan Academy lessons, or working on Amazon Mechanical Turk and CrowdFlower tasks.”
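These browser-based tasks use the same pixels-in, events-out loop as the games, with clicks expressed as pointer events rather than key presses. The sketch below assumes a hypothetical “click the dialog” style mini task; the environment name and coordinates are illustrative, not taken from OpenAI’s documentation.

import gym
import universe  # registers the browser-based mini-task environments

env = gym.make('wob.mini.ClickDialog-v0')  # hypothetical web mini-task
env.configure(remotes=1)
observation_n = env.reset()

while True:
    # A click is three pointer events: move the cursor, press the left button, release it.
    action_n = [[('PointerEvent', 80, 120, 0),
                 ('PointerEvent', 80, 120, 1),
                 ('PointerEvent', 80, 120, 0)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)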
The OpenAI blog post introducing Universe gives a long and detailed account of how Universe was created and what it hopes to accomplish. At the end, it lists a number of ways that companies and individuals can contribute to the project. It’s fascinating reading for anyone interested in what the future of computing is likely to be.
There is also a darker side to artificial intelligence, which Elon Musk has likened to “summoning the demon.” As The Register suggests, “While making software smarter may appeal to researchers, society as a whole appears to be increasingly unnerved by the prospect. Beyond the speculative fears about malevolent AI and more realistic concerns about the automation of military weaponry, companies and individuals already have trouble dealing with automated forms of interaction.”
One area of concern is that AI agents may one day be able to reactivate themselves after being shut down by human controllers. What was once the stuff of science fiction such as Minority Report and I, Robot could one day become all too real.
OpenAI Universe has been open-sourced on GitHub for those who may be interested in testing their own video game bots. We’ve included a video below showing OpenAI Universe in action.
Elon Musk’s xAI brings 1GW Colossus 2 AI training cluster online
Elon Musk shared his update in a recent post on social media platform X.
xAI has brought its Colossus 2 supercomputer online, making it the first gigawatt-scale AI training cluster in the world, and it’s about to get even bigger in a few months.
Colossus 2 goes live
xAI uses the Colossus 2 supercomputer, together with its predecessor, Colossus 1, primarily to train and refine the company’s Grok large language model. In a post on X, Musk stated that Colossus 2 is already operational, making it the first gigawatt-scale training cluster in the world.
What is even more remarkable is that the cluster is expected to be upgraded to 1.5 GW of power in April. Even in its current iteration, however, the Colossus 2 supercomputer already exceeds the peak electricity demand of San Francisco.
Commentary from users of the social media platform highlighted the speed of execution behind the project. Colossus 1 went from site preparation to full operation in 122 days, while Colossus 2 has now gone live after crossing the 1 GW barrier and is targeting a total capacity of roughly 2 GW. This pace far exceeds that of xAI’s primary rivals.
Funding fuels rapid expansion
The Colossus 2 launch follows xAI’s recently closed, upsized $20 billion Series E funding round, which exceeded its initial $15 billion target. The company said the capital will be used to accelerate infrastructure scaling and AI product development.
The round attracted a broad group of investors, including Valor Equity Partners, Stepstone Group, Fidelity Management & Research Company, Qatar Investment Authority, MGX, and Baron Capital Group. Strategic partners NVIDIA and Cisco also continued their support, helping xAI build what it describes as the world’s largest GPU clusters.
xAI said the funding will accelerate its infrastructure buildout, enable rapid deployment of AI products to billions of users, and support research tied to its mission of understanding the universe. The company noted that its Colossus 1 and 2 systems now represent more than one million H100 GPU equivalents, alongside recent releases including the Grok 4 series, Grok Voice, and Grok Imagine. Training is also already underway for its next flagship model, Grok 5.
Tesla AI5 chip nears completion, Elon Musk teases 9-month development cadence
The Tesla CEO shared his recent insights in a post on social media platform X.
Tesla’s next-generation AI5 chip is nearly complete, and work on its successor is already underway, as per a recent update from Elon Musk.
Musk details AI chip roadmap
In his post, Elon Musk stated that Tesla’s AI5 chip design is “almost done,” while AI6 has already entered early development. Musk added that Tesla plans to continue iterating rapidly, with AI7, AI8, AI9, and future generations targeting a nine-month design cycle.
He also noted that Tesla’s in-house chips could become the highest-volume AI processors in the world. Musk framed his update as a recruiting message, encouraging engineers to join Tesla’s AI and chip development teams.
Tesla community member Herbert Ong highlighted the strategic importance of the timeline, noting that faster chip cycles enable quicker learning, faster iteration, and a compounding lead in AI and autonomy that becomes increasingly difficult for competitors to close.
AI5 manufacturing takes shape
Musk’s comments align with earlier reporting on AI5’s production plans. In December, it was reported that Samsung is preparing to manufacture Tesla’s AI5 chip, accelerating hiring for experienced engineers to support U.S. production and address complex foundry challenges.
Samsung is one of two suppliers selected for AI5, alongside TSMC. The companies are expected to produce different versions of the AI5 chip, with TSMC reportedly using a 3nm process and Samsung using a 2nm process.
Musk has previously stated that while different foundries translate chip designs into physical silicon in different ways, the goal is for both versions of the Tesla AI5 chip to operate identically. AI5 will succeed Tesla’s current AI4 hardware, formerly known as Hardware 4, and is expected to support the company’s Full Self-Driving system as well as other AI-driven efforts, including Optimus.
Tesla Model Y and Model 3 named safest vehicles tested by ANCAP in 2025
According to ANCAP in a press release, the Tesla Model Y achieved the highest overall weighted score of any vehicle assessed in 2025.
The Tesla Model Y recorded the highest overall safety score of any vehicle tested by ANCAP in 2025. The Tesla Model 3 also delivered strong results, reinforcing the automaker’s safety leadership in Australia and New Zealand.
ANCAP’s 2025 tests, detailed in a press release, evaluated vehicles across four key pillars: Adult Occupant Protection, Child Occupant Protection, Vulnerable Road User Protection, and Safety Assist technologies.
The Model Y posted consistently strong results in all four categories, distinguishing itself through a system-based safety approach that combines structural crash protection with advanced driver-assistance features such as autonomous emergency braking, lane support, and driver monitoring.

This marked the second time the Model Y has topped ANCAP’s annual safety rankings. The Model Y’s previous version was also ANCAP’s top performer in 2022.
The Tesla Model 3 also delivered a strong performance in ANCAP’s 2025 tests, contributing to Tesla’s broader safety presence across segments. Like the Model Y, the Model 3 earned impressive scores across ANCAP’s four pillars, making it the top performer in the Medium Car category.
ANCAP Chief Executive Officer Carla Hoorweg stated that the results highlight a growing industry shift toward integrated safety design, with improvements in technologies such as autonomous emergency braking and lane support translating into meaningful real-world protection.
“ANCAP’s testing continues to reinforce a clear message: the safest vehicles are those designed with safety as a system, not a checklist. The top performers this year delivered consistent results across physical crash protection, crash avoidance and vulnerable road user safety, rather than relying on strength in a single area.
“We are also seeing increasing alignment between ANCAP’s test requirements and the safety technologies that genuinely matter on Australian and New Zealand roads. Improvements in autonomous emergency braking, lane support, and driver monitoring systems are translating into more robust protection,” Hoorweg said.