Microsoft’s interest in expanding its Azure cloud computing service to include artificial intelligence (AI) supercomputing technologies has led to a new partnership with OpenAI, the research company co-founded by Elon Musk. Microsoft recently invested $1 billion in the venture to develop an Azure-based hardware and software platform that will scale to artificial general intelligence (AGI). In turn, OpenAI will use Microsoft as its exclusive cloud provider.
OpenAI was founded as a nonprofit AI research organization by Musk, Y Combinator’s Sam Altman, and others, with early backing from investor Peter Thiel. Its goal is to develop beneficial, open source AI to counter any future rise of harmful AI. Musk stepped down from the Board of Directors in early 2018 to avoid any conflicts with Tesla’s Autopilot program; however, he remains a donor and advisor. Tesla’s Director of AI and Autopilot Vision, Andrej Karpathy, previously worked as a neural network researcher at OpenAI.
While the venture is backed by significant private investment, the long-term goals of OpenAI require even greater resources. The company’s motivation to create the new investment partnership with Microsoft was partially due to financial constraints caused by its computing hardware needs. The financial requirements to retain top talent are also significant – OpenAI’s tax filings from 2016 revealed its top researcher was paid a $1.9 million salary, with other researchers also earning substantial sums.

“OpenAI is producing a sequence of increasingly powerful AI technologies, which requires a lot of capital for computational power. The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them,” OpenAI’s press release announcing the new partnership explained.
The connection between Microsoft and OpenAI is not new. In 2016, the companies jointly announced they were working together to run most of OpenAI’s large-scale experiments on Azure, making it their primary cloud platform for deep learning and AI. Azure had hardware configurations optimized for AI computing needs and a roadmap to expand those capabilities even further. One of the stated joint goals between Microsoft and OpenAI is the democratization of AI, and cloud computing is a large part of making that a reality as hardware and software resources are no longer required to be local to the user.
OpenAI has already created some impressive AI capabilities. In August 2018, company bots created for the video game Dota 2 defeated a team of highly skilled human players in two games out of three. Accomplishing the task required serious amounts of hardware and training. The nonprofit research lab employed a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores to complete roughly 180 years’ worth of gameplay every day through reinforcement learning, which allowed the bots to develop advanced skills for the game. The company also released OpenAI Gym, an open source toolkit for training AI agents in games and other environments.
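OpenAI’s Dota 2 bots relied on a massively scaled-up version of Proximal Policy Optimization, which is far too heavy to reproduce here. As a purely illustrative stand-in, the trial-and-error loop at the heart of reinforcement learning can be sketched with tabular Q-learning (a different, much simpler algorithm) on a toy corridor environment:

```python
import random

# Purely illustrative: a minimal reinforcement learning loop.
# (OpenAI's bots used a much larger-scale method, Proximal Policy
# Optimization; this tabular Q-learning toy only demonstrates the
# basic idea of learning behavior from reward signals.)

N_STATES = 5        # corridor cells 0..4; reaching cell 4 pays reward 1
ACTIONS = [-1, +1]  # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    # Pick the best-valued action, breaking ties randomly.
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def run_episode():
    s = 0
    for _ in range(50):  # cap episode length
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)       # walls at both ends
        r = 1.0 if s2 == N_STATES - 1 else 0.0      # reward only at the goal
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        if r:
            break
        s = s2

random.seed(0)
for _ in range(500):
    run_episode()

# The learned policy should favor moving right in every non-goal cell.
policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)
```

The bots’ 180 years of daily gameplay is this same loop, scaled up: many simulated games generate reward signals that gradually shift the policy toward higher-scoring behavior.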
In 2017, OpenAI announced that it had successfully trained its AI-powered robots to perform a task after watching it once in virtual reality. After being shown how to stack a series of colored blocks in a virtual reality simulation, a robot was able to successfully mimic the actions. To accomplish this, OpenAI trained the robot in a simulated virtual environment with nuances like lighting, shadows, and background noise, so that in the real environment it knew to filter out noise and focus only on important elements, as a human brain would.
OpenAI also successfully taught AI bots to create their own language for communicating with each other in 2017. The company published a paper on the topic explaining how the bots used reinforcement learning to accomplish simple goals through trial and error. After being given clues such as “Go to” or “Look at” by the researchers, the bots were then required to create their own machine language to communicate with each other.
The company’s latest commitment to Microsoft will expand its access to the resources needed to achieve even more impressive artificial intelligence feats.
Ford is charging for a basic EV feature on the Mustang Mach-E
When ordering a new Ford Mustang Mach-E, you’ll now be hit with an additional fee for one basic EV feature: the frunk.
Ford is charging an additional fee for a basic EV feature on its Mustang Mach-E, its most popular electric vehicle offering.
Ford has shuttered its initial Model e program in favor of a more controlled and refined effort, and it is replacing the F-150 Lightning with a new pickup that is currently under design but appears to have some favorable features. Meanwhile, ordering a new Mustang Mach-E now carries an added cost for one basic EV feature: the frunk.
The frunk is the front trunk: because an electric vehicle has no large engine up front, OEMs can offer additional storage space under the hood. The problem is that automakers appear to be recognizing they can make this space optional, offering the function for a fee rather than including it as standard.
Ford is now charging $495 on the Mustang Mach-E frunk (front trunk). What are your thoughts on that? pic.twitter.com/EOzZe3z9ZQ
— Alan of TesCalendar 📆⚡️ (@TesCalendar1) February 24, 2026
Ford is charging $495 for the frunk.
Frunk size varies by vehicle; the Mustang Mach-E’s measures 4.7 to 4.8 cubic feet, approximately 9 inches deep, 26 inches wide, and 14 inches high.
When the vehicle was first released, Ford marketed the frunk as the ultimate tailgating feature, showing it off as a perfect place to store and serve cold shrimp cocktail.
The decision to charge for what is a simple advantage of an EV is not going over well: even loyal Ford customers say the frunk is a “basic expectation” of an EV, and without it, fans feel the company is nickel-and-diming its customers.
It will be interesting to see the Mach-E without a frunk. While the change alone should not turn buyers away from the vehicle, the decision to charge extra for the feature will certainly annoy some customers.
Tesla to improve one of its best features, coding shows
According to the update, Tesla will improve how the headlights respond to highly reflective objects, including road signs, traffic signs, and street lights. Additionally, pixel-level dimming will happen in two stages, rather than the current single stage, which is simply on or off.
Tesla is looking to upgrade its Matrix Headlights, a unique and high-tech feature that is available on several of its vehicles. The headlights aim to maximize visibility for Tesla drivers while being considerate of oncoming traffic.
The Matrix Headlights Tesla offers utilize dimming of individual light pixels to ensure that visibility stays high for those behind the wheel, while also being considerate of other cars by decreasing the brightness in areas where other cars are traveling.
Here’s what they look like in action:
- Credit: u/ObjectiveScratch | Reddit
As you can see, the Matrix headlight system intentionally dims the area where oncoming cars would be impacted by high beams. This keeps visibility at a maximum for everyone on the road, including those who could be hit with bright lights in their eyes.
There are still a handful of complaints from owners, however, and Tesla appears to be looking to resolve these with coming updates in a software version currently labeled 2026.2.xxx. The coding was spotted by X user BERKANT:
🚨 Tesla is quietly upgrading Matrix headlights.
Software https://t.co/pXEklQiXSq reveals a hidden feature:
matrix_two_stage_reflection_dip
This is a major step beyond current adaptive high beams.
What it means:
• The car detects highly reflective objects
Road signs,… pic.twitter.com/m5UpQJFA2n— BERKANT (@Tesla_NL_TR) February 24, 2026
According to the update, Tesla will improve how the headlights respond to highly reflective objects, including road signs, traffic signs, and street lights. Additionally, pixel-level dimming will happen in two stages, rather than the current single stage, which is simply on or off.
Finally, the new system will prevent the high beams from glaring back at the driver. Currently, the system dims when it recognizes oncoming cars, but not necessarily for objects that reflect glare back at the driver.
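For illustration only, the difference between the current single-stage dip and the two-stage dip suggested by the coding can be sketched as follows. The function names, reflectivity scores, and thresholds here are hypothetical, not Tesla’s actual firmware logic:

```python
# Illustrative sketch only: how a two-stage pixel dip differs from a
# single on/off stage. All names and thresholds are hypothetical, not
# Tesla's actual implementation.

def single_stage_dip(reflectivity: float) -> float:
    """Current behavior: a pixel is either fully on or fully off."""
    return 0.0 if reflectivity > 0.5 else 1.0

def two_stage_dip(reflectivity: float) -> float:
    """Two-stage behavior: partially dim for moderately reflective
    objects (e.g. road signs), fully dim for the most reflective ones."""
    if reflectivity > 0.8:
        return 0.0   # stage 2: pixel fully off
    if reflectivity > 0.5:
        return 0.4   # stage 1: partial dim, preserving some visibility
    return 1.0       # full brightness

# A moderately reflective road sign: fully dark today, partially lit
# under a two-stage scheme.
print(single_stage_dip(0.6), two_stage_dip(0.6))
```

The practical upshot of the intermediate stage is that a reflective sign no longer forces the affected pixels all the way off, so the driver keeps some illumination in that part of the scene without the glare.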
This upgrade is software-focused, so no physical changes or hardware upgrades will be needed on Tesla vehicles that currently utilize the Matrix headlights.
xAI’s Grok approved for Pentagon classified systems: report
Under the agreement, Grok can be deployed in systems handling classified intelligence analysis, weapons development, and battlefield operations.
Elon Musk’s xAI has signed an agreement with the United States Department of Defense (DoD) to allow Grok to be used in classified military systems.
Previously, Anthropic’s Claude had been the only AI system approved for the most sensitive military work, but a dispute over usage safeguards has reportedly prompted the Pentagon to broaden its options, as noted in a report from Axios.
Under the agreement, Grok can be deployed in systems handling classified intelligence analysis, weapons development, and battlefield operations.
The publication reported that xAI agreed to the Pentagon’s requirement that its technology be usable for “all lawful purposes,” a standard Anthropic has reportedly resisted due to alleged ethical restrictions tied to mass surveillance and autonomous weapons use.
Defense Secretary Pete Hegseth is scheduled to meet with Anthropic CEO Dario Amodei in what sources expect to be a tense meeting, with the publication hinting that the Pentagon could designate Anthropic a “supply chain risk” if the company does not lift its safeguards.
Axios stated that fully replacing Claude could be technically challenging, even if xAI or other AI systems step in. That said, other AI systems are already in use at the DoD.
Grok already operates in the Pentagon’s unclassified systems alongside Google’s Gemini and OpenAI’s ChatGPT. Google is reportedly close to an agreement that would clear Gemini for classified use, while OpenAI’s progress toward classified deployment is described as slower but still feasible.
The publication noted that the Pentagon continues talks with several AI companies as it prepares for potential changes in classified AI sourcing.

