Microsoft’s interest in expanding its Azure cloud computing service to include artificial intelligence (AI) supercomputing technologies has led to a new partnership with the Elon Musk-backed company OpenAI. Microsoft recently invested $1 billion in the venture to develop an Azure-based hardware and software platform that will scale to artificial general intelligence (AGI). In turn, OpenAI will use Microsoft as its exclusive cloud provider.
OpenAI is a nonprofit AI research organization co-founded by Musk, serial entrepreneur Peter Thiel, and Y Combinator’s Sam Altman with the goal of developing beneficial, open source AI to combat any future rise of harmful AI. Musk stepped down from the Board of Directors in early 2018 to avoid any conflicts with Tesla’s Autopilot program; however, he remains a benefactor and advisor. Tesla’s Director of AI and Autopilot Vision, Andrej Karpathy, previously worked as a neural network researcher for OpenAI.
While the venture is backed by significant private investment, OpenAI’s long-term goals require even greater resources. The company was motivated to form the new partnership with Microsoft partly by the financial strain of its computing hardware needs. The cost of retaining top talent is also significant: OpenAI’s 2016 tax filings revealed its top researcher was paid a $1.9 million salary, with others receiving substantial compensation as well.

“OpenAI is producing a sequence of increasingly powerful AI technologies, which requires a lot of capital for computational power. The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them,” OpenAI’s press release announcing the new partnership explained.
The connection between Microsoft and OpenAI is not new. In 2016, the companies jointly announced they were working together to run most of OpenAI’s large-scale experiments on Azure, making it their primary cloud platform for deep learning and AI. Azure had hardware configurations optimized for AI computing needs and a roadmap to expand those capabilities even further. One of the stated joint goals between Microsoft and OpenAI is the democratization of AI, and cloud computing is a large part of making that a reality as hardware and software resources are no longer required to be local to the user.
OpenAI has already demonstrated some impressive AI capabilities. In August 2018, its bots for the video game Dota 2 defeated a team of highly skilled human players in two games out of three. Accomplishing the task took substantial hardware and training: the nonprofit research lab ran a scaled-up version of Proximal Policy Optimization on 256 GPUs and 128,000 CPU cores, churning through roughly 180 years’ worth of gameplay every day via reinforcement learning, which allowed the bots to develop advanced skills for the game. The company also released OpenAI Gym, an open source toolkit for training AI agents in games and other simulated environments.
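The trial-and-error loop behind this kind of training follows a simple pattern: the agent observes the environment, picks an action, and receives a reward. The sketch below illustrates the reset/step interface popularized by OpenAI Gym using a hypothetical toy environment (the `ToyEnv` class and its number-line task are inventions for illustration, not OpenAI’s code):

```python
import random

class ToyEnv:
    """Hypothetical toy environment with a Gym-style reset/step interface.

    The agent starts at position 0 on a number line; the episode ends when
    it reaches +5 (success, reward 1.0) or -5 (failure, reward 0.0).
    """

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        # action is +1 (move right) or -1 (move left)
        self.pos += action
        done = abs(self.pos) >= 5
        reward = 1.0 if self.pos >= 5 else 0.0
        return self.pos, reward, done, {}  # observation, reward, done, info

env = ToyEnv()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    # Trial and error: a random policy stands in here for a learned one,
    # such as the Proximal Policy Optimization agents OpenAI used for Dota 2.
    action = random.choice([1, -1])
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)  # 1.0 if the agent reached +5, 0.0 if it hit -5
```

In a real training setup, the random choice would be replaced by a neural network policy, and the rewards collected over many such episodes would be used to update that policy.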
In 2017, OpenAI announced that it had successfully trained its AI-powered robots to perform a task after watching it performed once in virtual reality. After being shown how to stack a series of colored blocks in a virtual reality simulation, a robot was able to successfully mimic the actions. To accomplish this, OpenAI trained the robot in a simulated virtual environment with varied nuances like lighting, shadows, and background noise so that in the real environment, it knew to filter out distractions and focus only on the important elements, much as a human brain would.
OpenAI also successfully taught AI bots to create their own language for communicating with each other in 2017. A paper was published on the topic which explained how the bots used reinforcement learning to accomplish simple goals through trial and error. After being given clues such as “Go to” or “Look at” by the researchers, the bots were then required to create their own machine language to communicate with each other.
The company’s latest partnership with Microsoft will now expand its access to the resources needed to achieve even more impressive artificial intelligence feats.
Tesla pulls back the curtain on Cybercab mass production
Tesla’s Cybercab drives itself off the Gigafactory Texas line in a striking new production video.
Tesla has provided a first look from inside a production Cybercab as it drove itself off the assembly line at Gigafactory Texas. The video footage, posted on X, opens on the factory floor with robotic arms and assembly equipment visible through the Cybercab windshield, follows the car through a branded tunnel marked “Cybercab,” and ends with the vehicle autonomously navigating to a holding lot.
The first Cybercab rolled off the Giga Texas production line on February 17, 2026, with Musk writing on X, “Congratulations to the Tesla team on making the first production Cybercab.” April marked the official shift to volume production. The Giga Texas line is being prepared to produce hundreds of units per week, with 60 units already spotted on the Gigafactory campus earlier this month.
Purpose-built for autonomy
Cybercab in production now at Giga Texas pic.twitter.com/Y9qG3KyWBa
— Tesla (@Tesla) April 23, 2026
The Cybercab was first revealed publicly at Tesla’s “We, Robot” event in October 2024 at Warner Bros. Studios in Burbank, California, where 20 pre-production units gave attendees rides around the studio lot. Musk said he believed the average operating cost would be around $0.20 per mile, and that buyers would be able to purchase one for under $30,000. The two-seat design is deliberate. Musk noted that 90 percent of miles driven involve one or two people, making a compact two-passenger vehicle the most efficient configuration for a fleet-scale robotaxi. Eliminating rear seats also removes complexity and cost, supporting that sub-$30,000 target.
Tesla’s annual production goal is 2 million Cybercabs per year once several factories reach full design capacity. The Cybercab has no steering wheel, no pedals, and relies entirely on Tesla’s vision-based FSD system. What the video shows is the first evidence of that system working not as a demo, but as a production reality, driving itself off the line and into the world.
🚗 Our first ride in Tesla Cybercab last October: pic.twitter.com/kGqIqgJPRn https://t.co/BITCXFhbVd
— TESLARATI (@Teslarati) April 22, 2025
Elon Musk talks Tesla Roadster’s future
Elon Musk confirmed the Roadster as Tesla’s last manually driven car, with a debut coming soon.
During Tesla’s Q1 2026 earnings call on April 22, Elon Musk made a brief but notable comment about the long-awaited next generation Roadster while describing Tesla’s future vehicle lineup. “Long term, the only manually driven car will be the new Tesla Roadster,” he said. “Speaking of which, we may be able to debut that in a month or so. It requires a lot of testing and validation before we can actually have a demo and not have something go wrong with the demo.”
That single statement is the entire Roadster update from yesterday’s call, and while it represents another timeline shift, it comes as no surprise with Tesla heads-down on the mass rollout of its Robotaxi service across US cities and the industrial-scale production of its humanoid robot, Optimus.
The fact that Musk specifically framed the Roadster as the last manually driven Tesla is significant on its own. As the rest of the lineup moves toward full autonomy, the Roadster becomes something rare in the Tesla-sphere by keeping the driver in control. Driving enthusiasts who buy a $200,000 supercar are not doing so to be passengers. They want the physical connection to the road, the feel of acceleration under their own input, and the experience of controlling something with that level of performance. FSD, however capable it becomes, removes that entirely. The Roadster signals that Tesla understands this distinction and is building a car specifically for the people who consider driving itself the point.
The specs for the Roadster Musk has teased over the years are genuinely unlike anything in production. The base model targets 0 to 60 mph in 1.9 seconds, a top speed above 250 mph, and up to 620 miles of range from a 200 kWh battery. The optional SpaceX package takes it further, rumored to add roughly ten cold gas thrusters operating at 10,000 psi, borrowed directly from Falcon 9 rocket technology. With thrusters, Musk has claimed 0 to 60 mph in as little as 1.1 seconds. In a 2021 Joe Rogan interview he went further, stating “I want it to hover. We got to figure out how to make it hover without killing people.” Tesla filed a patent for ground effect technology in August 2025, suggesting the hover concept has not been abandoned. The starting price remains $200,000, with the Founders Series requiring a $250,000 full deposit. Some reservation holders placed those deposits in 2017 and are approaching a full decade of waiting.
With production now targeted for 2027 or 2028 at the earliest, the Roadster remains Tesla’s most audacious promise and its longest-running delay. But if what Musk is testing lives up to even half of what he has described, the demo alone should be worth waiting for.
Elon Musk says the Tesla Roadster unveiling could be done “maybe in a month or so.”
He said it should be an extraordinary unveiling event. pic.twitter.com/6V9P7zmvEm
— TESLARATI (@Teslarati) April 22, 2026
Tesla confirmed HW3 can’t do Unsupervised FSD but there’s more to the story
Tesla confirmed HW3 vehicles cannot run unsupervised FSD, replacing its free upgrade promise with a discounted trade-in.
Tesla has officially confirmed that early vehicles with its Autopilot Hardware 3 (HW3) will not be capable of unsupervised Full Self-Driving, while extending a path forward for legacy owners through a discounted trade-in program. The announcement came by way of Elon Musk in today’s Tesla Q1 2026 earnings call.
🚨 Our LIVE updates on the Tesla Earnings Call will take place here in a thread 🧵
Follow along below: pic.twitter.com/hzJeBitzJU
— TESLARATI (@Teslarati) April 22, 2026
The history here matters. HW3 launched in April 2019, and Tesla sold Full Self-Driving packages to owners on the understanding that the hardware was sufficient for full autonomy. Some owners paid between $8,000 and $15,000 for FSD during that period. For years, as FSD’s AI models grew more demanding, HW3 vehicles fell progressively further behind, eventually landing on FSD v12.6 in January 2025 while AI4 vehicles moved to v13 and then v14. Musk acknowledged in January 2025 that HW3 simply could not reach unsupervised operation, and alluded to a difficult hardware retrofit.
The near-term offering is more concrete. Tesla’s head of Autopilot, Ashok Elluswamy, confirmed on today’s call that a V14-lite build will come to HW3 vehicles in late June, bringing all the V14 features currently running on AI4 hardware. That is a meaningful software update for owners who have been frozen at v12.6 for over a year, and it represents genuine effort to keep older hardware relevant. Unsupervised FSD is now targeted for Q4 2026 at the earliest, with Musk describing it as a gradual, geography-limited rollout.
For HW3 owners, the over-the-air V14-lite update is welcome, and the discounted trade-in path at least acknowledges an old obligation. What happens next with the trade-in pricing will define how this chapter ultimately gets written. If Tesla prices the hardware path fairly, acknowledges what early adopters are owed, and delivers V14-lite on the June timeline it committed to today, it has a real opportunity to convert one of the longest-running sore subjects among early adopters into a loyalty story.