
Google’s DeepMind unit develops AI that predicts 3D layouts from partial images

[Credit: Google DeepMind]

Google’s DeepMind unit, the same division that created AlphaGo, an AI that outplayed the best Go player in the world, has created a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with an AI form of perceptual intuition.

According to Google’s official DeepMind blog, the goal of its recent AI project is to make neural networks simpler to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images. This makes training a tedious, lengthy, and expensive process, as every aspect of every object in each scene has to be labeled by a person.

The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), is designed to remove this dependency on human-annotated data: it can infer a space’s three-dimensional layout and features despite being provided with only partial images of that space.

Similar to babies and animals, DeepMind’s GQN learns by making observations of the world around it. By doing so, it learns about plausible scenes and their geometrical properties even without human labeling. The GQN is made up of two parts — a representation network that produces a vector describing a scene and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms based on just a single image.
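The two-part split described above can be sketched as a toy in Python. This is not DeepMind’s actual code: the array sizes, and the linear maps standing in for the real convolutional networks, are all illustrative. The key structural idea survives, though — per-view encodings are summed into one order-independent scene vector, which a second network then queries from a new viewpoint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "representation network": maps each (image, viewpoint) pair to a
# vector, then sums the per-view vectors into one scene representation.
# The real GQN uses convolutional encoders; this linear stand-in only
# illustrates the aggregation structure.
W_repr = rng.standard_normal((8, 12))

def represent(views):
    """views: list of (image_vec, viewpoint_vec) pairs."""
    r = np.zeros(8)
    for image, viewpoint in views:
        r += W_repr @ np.concatenate([image, viewpoint])
    return r

# Toy "generation network": predicts the scene as seen from a query
# viewpoint, conditioned only on the aggregated representation.
W_gen = rng.standard_normal((10, 10))

def generate(r, query_viewpoint):
    return W_gen @ np.concatenate([r, query_viewpoint])

# A single observed view (10-dim "image", 2-dim camera pose) is enough
# to query an unseen viewpoint, mirroring the single-image results above.
views = [(rng.standard_normal(10), np.array([0.0, 1.0]))]
scene = represent(views)
prediction = generate(scene, np.array([1.0, 0.0]))
print(prediction.shape)  # (10,)
```

Because the representation is a sum, the order in which views are observed does not change the scene vector — a property that makes adding extra observations cheap.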


As noted by the DeepMind team, however, the training methods used to develop the GQN are still limited compared to traditional computer vision techniques. The AI’s creators remain optimistic that as new sources of data become available and improvements in hardware are introduced, the GQN framework could be applied to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes that the GQN could be a useful system in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition – extremely desirable for companies focused on autonomy, like Tesla.

Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]

In a talk at Train AI 2018 last May, Tesla’s head of AI, Andrej Karpathy, discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot by feeding the system massive data sets from the company’s fleet of vehicles. This data is collected through means such as Shadow Mode, which lets the company gather statistics on the false positives and false negatives of Autopilot’s software.
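The Shadow Mode idea can be illustrated with a toy tally. All names below are hypothetical and not Tesla’s actual telemetry schema: a candidate model runs silently alongside the driver, its proposed action is compared with what the human actually did, and disagreements are counted as candidate false positives and false negatives for later review.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    model_brake: bool   # the shadow model wanted to brake
    human_brake: bool   # the driver actually braked

def shadow_stats(frames):
    # Model acted but the human didn't: candidate false positive.
    fp = sum(f.model_brake and not f.human_brake for f in frames)
    # Human acted but the model didn't: candidate false negative.
    fn = sum(f.human_brake and not f.model_brake for f in frames)
    return {"false_positives": fp, "false_negatives": fn}

frames = [Frame(True, True), Frame(True, False),
          Frame(False, True), Frame(False, False)]
print(shadow_stats(frames))  # {'false_positives': 1, 'false_negatives': 1}
```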

During his talk, Karpathy discussed how features such as blinker detection become challenging for Tesla’s neural network to learn, considering that vehicles on the road have their turn signals off most of the time and blinkers have a high variability from one car brand to another. Karpathy also discussed how Tesla has transitioned a huge portion of its AI team to labeling roles, doing the human annotation that Google DeepMind explicitly wants to avoid with the GQN. 

Musk has also mentioned that Tesla’s upcoming all-electric supercar — the next-generation Tesla Roadster — would feature an “Augmented Mode” that would enhance drivers’ capability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set on embracing AR technology, the emergence of new techniques for training AI such as Google DeepMind’s GQN would be a perfect fit for the next generation of vehicles about to enter the automotive market.


Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips--or even to just say a simple hello--send a message to his email, simon@teslarati.com or his handle on X, @ResidentSponge.



What is Digital Optimus? The new Tesla and xAI project explained



Credit: Grok

Tesla and xAI announced their groundbreaking joint project, Digital Optimus, also nicknamed “Macrohard” in a humorous jab at Microsoft, earlier this week.

This software-based AI agent is designed to automate complex office workflows by observing and replicating human interactions with computers. As the first major outcome of Tesla’s $2 billion investment in xAI, it represents a powerful fusion of hardware efficiency and advanced reasoning.


At its core, Digital Optimus operates through a dual-process architecture inspired by human cognition.


Tesla’s specialized AI acts as “System 1”—the fast, instinctive executor—processing the past five seconds of real-time computer screen video along with keyboard and mouse actions to perform immediate tasks.


xAI’s Grok model serves as “System 2,” the strategic “master conductor” or navigator, providing high-level reasoning, world understanding, and directional oversight, much like an advanced turn-by-turn navigation system.

When combined, the two can create a powerful AI-based assistant that can complete everything from accounting work to HR tasks.
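The System 1/System 2 split above can be sketched as a simple orchestration loop in Python. Every class, function, and plan step here is made up for illustration and is not the actual Tesla or xAI architecture: a slow planner is consulted once for a high-level plan, while a fast executor turns each step plus a window of recent screen frames into immediate actions.

```python
class System2:
    """Strategic navigator: consulted rarely, returns a high-level plan."""
    def plan(self, task):
        # A real System 2 would do open-ended reasoning; here the plan is fixed.
        return ["open_spreadsheet", "enter_totals", "save_file"]

class System1:
    """Fast executor: maps the current step plus recent frames to an action."""
    def act(self, step, recent_frames):
        return f"execute:{step} (context={len(recent_frames)} frames)"

def run(task, screen_frames, window=5):
    plan = System2().plan(task)   # slow reasoning, done once up front
    executor = System1()
    actions = []
    for i, step in enumerate(plan):
        # The executor only ever sees the last few frames, mirroring the
        # "past five seconds of screen video" described above.
        recent = screen_frames[max(0, i - window):i + 1]
        actions.append(executor.act(step, recent))
    return actions

print(run("monthly accounting", ["frame"] * 10))
```

The design choice the article describes maps cleanly onto this split: the expensive reasoning call happens once per task, while the cheap per-step executor runs continuously in real time.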


The system runs primarily on Tesla’s low-cost AI4 inference chip, minimizing its reliance on xAI’s expensive Nvidia hardware while keeping performance competitive and real-time.


Elon Musk described it as “the only real-time smart AI system” capable, in principle, of emulating the functions of entire companies, handling everything from accounting and HR to repetitive digital operations.

Timelines point to swift deployment. Though the project was announced just days ago, Musk expects Digital Optimus to be ready for users within about six months, targeting a rollout around September 2026.

It will integrate into all AI4-equipped Tesla vehicles, enabling parked cars to handle office work during downtime. Millions of dedicated units are also planned for deployment at Supercharger stations, tapping into roughly 7 gigawatts of available power.

Digital Optimus directly supports Tesla’s broader autonomy strategy. It leverages the same end-to-end neural networks, computer vision, and real-time decision-making tech that power Full Self-Driving (FSD) software and the physical Optimus humanoid robot.

By repurposing idle vehicle compute and extending AI4 hardware beyond driving, the project scales Tesla’s autonomy ecosystem from roads to digital workspaces.


As a virtual counterpart to physical Optimus, it divides labor: software agents manage screen-based tasks while humanoid robots tackle physical ones, accelerating Tesla’s vision of general-purpose AI for productivity, Robotaxi fleets, and beyond.

In essence, Digital Optimus bridges Tesla’s vehicle and robotics autonomy with enterprise-scale AI, promising massive efficiency gains. No other company currently matches its real-time capabilities on such accessible hardware.

In the end, Digital Optimus could prove one of the most crucial projects Tesla and xAI have undertaken together, as it could revolutionize how people work and travel.


Tesla adds awesome new driving feature to Model Y



Credit: Tesla

Tesla is adding an awesome new driving feature to Model Y vehicles, effective on Juniper-updated models considered model year 2026 or newer.

Tesla is rolling out a new “Comfort Braking” feature with Software Update 2026.8. The feature is exclusive to the new Model Y, and is currently unavailable for any other vehicle in the Tesla lineup.

Tesla writes in the release notes for the feature:

“Your Tesla now provides a smoother feel as you come to a complete stop during routine braking.”


Interestingly, it’s not clear what catalyzed Tesla to improve braking smoothness, because braking hasn’t seemed overly abrupt or rough in my experience. The brake pedal in my Model Y is rarely used thanks to Regenerative Braking, but it seems Tesla wanted to make the ride even smoother for owners.


There is always room for improvement, though, and it seems that there is a way to make braking smoother for passengers while the vehicle is coming to a stop.
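One generic way software can smooth a stop is jerk limiting: capping how quickly deceleration changes so that braking force tapers off instead of cutting out abruptly at a standstill. The sketch below is purely illustrative and is not Tesla’s actual Comfort Braking algorithm.

```python
def smooth_decel_profile(d0, steps, max_jerk):
    """Ramp deceleration d0 (m/s^2) down to zero, changing it by at
    most max_jerk per time step, so the final moments of a stop are
    gentle rather than abrupt."""
    profile, d = [], d0
    for _ in range(steps):
        d = max(0.0, d - max_jerk)   # ease off the brakes gradually
        profile.append(d)
    return profile

print(smooth_decel_profile(3.0, 5, 0.75))
# [2.25, 1.5, 0.75, 0.0, 0.0] -- deceleration tapers instead of cutting out
```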

This is far from the first time Tesla has improved ride comfort through Over-the-Air updates; it has rolled out updates to improve regenerative braking performance, handling while using Full Self-Driving, and the Cybertruck’s Steer-by-Wire system, along with recent releases that have combated active road noise.


Tesla holds a unique ability to change the functionality of its vehicles through software updates, which have come in handy for many things, including remedying certain recalls and shipping new features to the Full Self-Driving suite.


Tesla also seems to have the most seamless OTA process, even as many automakers now have the ability to ship improvements through a simple software update.

We’re really excited to test Comfort Braking when the update makes it to our Model Y.


Tesla finally brings a Robotaxi update that Android users will love



Credit: Grok

Tesla is finally bringing an update to its Robotaxi platform that Android users will love — mostly because it seems they will finally be able to use the ride-hailing platform, which the company has had active since last June.

Based on a decompile of software version 26.2.0 of the Robotaxi app, Tesla looks to be ready to roll out access to Android users.

According to the breakdown, performed by Tesla App Updates, the company is preparing to roll out an Android version of the app as it is developing several features for that operating system.

The breakdown of the software version shows that Tesla is actively developing an Android-compatible version of the Robotaxi app, and the company is developing Live Activities for Android:

“Strings like notification_channel_robotaxid_trip_name and android_native_alicorn_eta_text show exactly how Tesla plans to replicate the iOS Live Activities experience. Instead of standard push alerts, Android users are getting a persistent, dynamically updating notification channel.”


This is a big step forward for several reasons. First, at face value, Tesla is finally ready to offer Robotaxi to Android users.

The company has routinely prioritized Apple releases because there is a higher concentration of iPhone users in its ownership base. Additionally, the development process for Apple is simply less laborious.


Second, the Robotaxi rollout has been a typical example of “slowly, then all at once.”


Tesla initially released Robotaxi access to a handful of media members and influencers. Eventually, it was expanded to more users, so that anyone using an iOS device could download the app and hail a semi-autonomous ride in Austin or the Bay Area.

Opening the app up to Android users may show that Tesla is preparing to let even more people use its Robotaxi platform. Although the company still appears to be a few months away from offering fully autonomous rides to anyone with app access, expanding to an entirely new user base definitely seems like a step in the right direction.
