Google’s DeepMind unit develops AI that predicts 3D layouts from partial images

[Credit: Google DeepMind]

Google’s DeepMind unit, the same division behind AlphaGo, the AI that outplayed the world’s best Go player, has developed a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with an AI form of perceptual intuition.

According to Google’s official DeepMind blog, the goal of the project is to make neural networks easier to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images. This makes training a tedious, lengthy, and expensive process, as every aspect of every object in every scene in the dataset has to be labeled by a person.

The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), is designed to remove this dependency on human-annotated data: it infers a space’s three-dimensional layout and features despite being given only partial images of that space.

Much like babies and animals, DeepMind’s GQN learns by observing the world around it, picking up plausible scenes and their geometric properties without any human labeling. The GQN consists of two parts: a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
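The DeepMind blog describes the architecture only at a high level, but a minimal structural sketch helps make the two-part design concrete. The Python below is not DeepMind’s implementation: the networks are stubbed with random linear maps, and every class and variable name is illustrative. What it does preserve is the data flow, with each observation encoded into a representation, the representations aggregated into a single scene vector, and the generator combining that vector with a query viewpoint to predict an image.

```python
import numpy as np

rng = np.random.default_rng(0)

class RepresentationNetwork:
    """Encodes one (image, camera pose) observation into a scene vector.
    Stubbed with a random linear map purely for illustration."""
    def __init__(self, image_dim, pose_dim, repr_dim):
        self.W = rng.normal(0.0, 0.1, (repr_dim, image_dim + pose_dim))

    def encode(self, image, pose):
        return np.tanh(self.W @ np.concatenate([image, pose]))

class GenerationNetwork:
    """'Imagines' an image of the scene from an unobserved query viewpoint."""
    def __init__(self, repr_dim, pose_dim, image_dim):
        self.W = rng.normal(0.0, 0.1, (image_dim, repr_dim + pose_dim))

    def render(self, scene_repr, query_pose):
        return self.W @ np.concatenate([scene_repr, query_pose])

# Toy dimensions: flattened 8x8 grayscale images, 7-D camera poses.
IMAGE_DIM, POSE_DIM, REPR_DIM = 64, 7, 32
f = RepresentationNetwork(IMAGE_DIM, POSE_DIM, REPR_DIM)
g = GenerationNetwork(REPR_DIM, POSE_DIM, IMAGE_DIM)

# A handful of partial observations of the same scene...
observations = [(rng.normal(size=IMAGE_DIM), rng.normal(size=POSE_DIM))
                for _ in range(3)]

# ...are each encoded, then summed into one order-invariant scene vector.
scene_repr = sum(f.encode(img, pose) for img, pose in observations)

# The generator predicts what the scene looks like from a new viewpoint.
predicted_image = g.render(scene_repr, rng.normal(size=POSE_DIM))
print(predicted_image.shape)  # (64,)
```

Summing the per-observation encodings keeps the scene vector the same size no matter how many views are available, which is how the trained GQN can work from anywhere between a single image and many.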

As the DeepMind team notes, however, the training methods used to develop the GQN are still limited compared to traditional computer vision techniques. Its creators remain optimistic that as new sources of data become available and hardware improves, the GQN framework could extend to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes the GQN could prove useful in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition, something extremely desirable for companies focused on autonomy, like Tesla.

Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]

In a talk at Train AI 2018 last May, Tesla’s head of AI Andrej Karpathy discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot by feeding the system massive datasets collected from the company’s fleet of vehicles, including through Shadow Mode, which lets the company gather statistics on the false positives and false negatives of Autopilot’s software.
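As a rough illustration of the kind of statistic Shadow Mode can surface, and not Tesla’s actual pipeline, the sketch below compares what the software would have done against what the driver actually did, then tallies false positives and false negatives. The log format and every field name are invented for the example.

```python
# Hypothetical shadow-mode log: what the software *would* have decided
# versus what the human driver actually did. Field names are made up.
events = [
    {"would_brake": True,  "driver_braked": True},   # agreement
    {"would_brake": True,  "driver_braked": False},  # false positive
    {"would_brake": False, "driver_braked": True},   # false negative
    {"would_brake": False, "driver_braked": False},  # agreement
]

false_positives = sum(e["would_brake"] and not e["driver_braked"] for e in events)
false_negatives = sum(not e["would_brake"] and e["driver_braked"] for e in events)

print(f"false positive rate: {false_positives / len(events):.2f}")  # 0.25
print(f"false negative rate: {false_negatives / len(events):.2f}")  # 0.25
```

Aggregated over millions of fleet miles, tallies like these show the Autopilot team where the software disagrees with human drivers without the system ever taking control.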

During his talk, Karpathy explained why features such as blinker detection are challenging for Tesla’s neural network to learn: vehicles on the road have their turn signals off most of the time, and blinkers vary widely from one car brand to another. Karpathy also noted that Tesla has shifted a large portion of its AI team to labeling roles, doing exactly the kind of human annotation that Google DeepMind wants to avoid with the GQN.

Tesla CEO Elon Musk has also mentioned that the company’s upcoming all-electric supercar, the next-generation Tesla Roadster, would feature an “Augmented Mode” that would enhance drivers’ capability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set on embracing AR technology, new AI training techniques such as Google DeepMind’s GQN would be a perfect fit for the next generation of vehicles about to enter the automotive market.

Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips, or even just to say hello, send a message to his email, simon@teslarati.com, or his handle on X, @ResidentSponge.

Elon Musk’s last manually driven Tesla will do something no other production car will do

Elon Musk confirmed the Roadster as Tesla’s last manually driven car, with a debut coming soon.

Tesla Roadster driving along sunset cliff (Credit: Grok)

During Tesla’s Q1 2026 earnings call on April 22, Elon Musk made a brief but notable comment about the long-awaited next-generation Roadster while describing Tesla’s future vehicle lineup. “Long term, the only manually driven car will be the new Tesla Roadster,” he said. “Speaking of which, we may be able to debut that in a month or so. It requires a lot of testing and validation before we can actually have a demo and not have something go wrong with the demo.”

That single statement is the entire Roadster update from yesterday’s call, and while it represents another timeline shift, it comes as no surprise, with Tesla heads-down on the mass rollout of its Robotaxi service across US cities and on industrial-scale production of its humanoid robot, Optimus.

The fact that Musk specifically framed the Roadster as the last manually driven Tesla is significant on its own. As the rest of the lineup moves toward full autonomy, the Roadster becomes something rare in the Tesla-sphere by keeping the driver in control. Driving enthusiasts who buy a $200,000 supercar are not doing so to be passengers. They want the physical connection to the road, the feel of acceleration under their own input, and the experience of controlling something with that level of performance. FSD, however capable it becomes, removes that entirely. The Roadster signals that Tesla understands this distinction and is building a car specifically for the people who consider driving itself the point.

The specs Musk has teased for the Roadster over the years are genuinely unlike anything in production. The base model targets 0 to 60 mph in 1.9 seconds, a top speed above 250 mph, and up to 620 miles of range from a 200 kWh battery. The optional SpaceX package takes it further, rumored to add roughly ten cold-gas thrusters operating at 10,000 psi, borrowed directly from Falcon 9 rocket technology. With thrusters, Musk has claimed 0 to 60 mph in as little as 1.1 seconds. In a 2021 Joe Rogan interview, he went further, stating, “I want it to hover. We got to figure out how to make it hover without killing people.” Tesla filed a patent for ground effect technology in August 2025, suggesting the hover concept has not been abandoned. The starting price remains $200,000, with the Founders Series requiring a $250,000 full deposit. Some reservation holders placed those deposits in 2017 and are approaching a full decade of waiting.
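Taking those headline figures at face value, a quick back-of-envelope check shows just how far outside the norm they sit; the arithmetic below simply assumes constant acceleration and divides the stated range by the stated pack size.

```python
# Back-of-envelope checks on the stated base-model Roadster specs.
MPH_TO_MS = 0.44704   # meters per second per mph
G = 9.81              # standard gravity, m/s^2

accel = (60 * MPH_TO_MS) / 1.9            # average acceleration, m/s^2
print(f"0-60 mph in 1.9 s ≈ {accel / G:.2f} g average")  # ≈ 1.44 g

efficiency = 620 / 200                    # miles per kWh
print(f"620 mi from 200 kWh ≈ {efficiency:.1f} mi/kWh")  # ≈ 3.1 mi/kWh
```

Sustaining nearly one and a half g from a standstill sits at the edge of what road tires can transmit, which helps explain why the even quicker thruster-assisted figure relies on cold-gas thrust rather than traction alone.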

With production now targeted for 2027 or 2028 at the earliest, the Roadster remains Tesla’s most audacious promise and its longest-running delay. But if what Musk is testing lives up to even half of what he has described, the demo alone should be worth waiting for.

Tesla confirmed HW3 can’t do Unsupervised FSD, but there’s more to the story

Tesla confirmed HW3 vehicles cannot run unsupervised FSD, replacing its free upgrade promise with a discounted trade-in.

Tesla Autopilot

Tesla has officially confirmed that early vehicles with its Autopilot Hardware 3 (HW3) will not be capable of unsupervised Full Self-Driving, while extending a path forward for legacy owners through a discounted trade-in program. The announcement came from Elon Musk during today’s Tesla Q1 2026 earnings call.

The history here matters. HW3 launched in April 2019, and Tesla sold Full Self-Driving packages to owners on the understanding that the hardware was sufficient for full autonomy. Some owners paid between $8,000 and $15,000 for FSD during that period. For years, as FSD’s AI models grew more demanding, HW3 vehicles fell progressively further behind, eventually landing on FSD v12.6 in January 2025 while AI4 vehicles moved to v13 and then v14. Musk acknowledged in January 2025 that HW3 simply could not reach unsupervised operation, and he alluded to a difficult hardware retrofit.

The near-term offering is more concrete. Tesla’s head of Autopilot, Ashok Elluswamy, confirmed on today’s call that a V14-lite build will come to HW3 vehicles in late June, bringing all the V14 features currently running on AI4 hardware. That is a meaningful software update for owners who have been frozen at v12.6 for over a year, and it represents a genuine effort to keep older hardware relevant. Unsupervised FSD itself is now targeted for Q4 2026 at the earliest, with Musk describing it as a gradual, geography-limited rollout.

For HW3 owners, the over-the-air V14-lite update is welcome, and the discounted trade-in path at least acknowledges an old obligation. What happens next with trade-in pricing will define how this chapter ultimately gets written. If Tesla prices the hardware path fairly, acknowledges what early adopters are owed, and delivers V14-lite on the June timeline it committed to today, it has a real opportunity to turn one of its longest-running sore subjects with early adopters into a loyalty story.

Tesla isn’t joking about building Optimus at an industrial scale: Here we go

Tesla’s Optimus factory in Texas targets 10 million robots yearly, with 5.2 million square feet under construction.

Tesla’s Q1 2026 Update Letter, released today, confirms that first-generation Optimus production lines are now well underway at its Fremont, California factory, with a pilot line targeting one million robots per year to start. Of bigger note is a shared aerial image of a large piece of land adjacent to Gigafactory Texas, which Tesla has prominently labeled “Optimus factory site preparation.”

Permit documents show Tesla is seeking to add over 5.2 million square feet of new building space to the Giga Texas North Campus by the end of 2026, at an estimated construction investment of $5 billion to $10 billion. The longer-term production target for that facility is 10 million Optimus units per year. Giga Texas already sits on 2,500 acres with over 10 million square feet of existing factory floor, and the North Campus expansion is being built to support multiple projects, including the dedicated Optimus factory, the Terafab chip fabrication facility (a joint Tesla/SpaceX/xAI venture), a Cybercab test track, road infrastructure, and supporting facilities.

(Credit: Tesla)

Texas makes strategic sense beyond the existing infrastructure. The state’s tax structure, lower labor costs relative to California, and proximity to Tesla’s AI training clusters Cortex 1 and 2, both located at Giga Texas and now totaling over 230,000 H100-equivalent GPUs, mean the Optimus software stack and the factory producing the hardware will share the same campus. Tesla’s Q1 report also confirmed the April tape-out of the AI5 chip, the inference processor designed specifically to power Optimus units in the field.

As Teslarati reported, the Texas facility is intended to house Optimus V4 production at full scale. Musk told the World Economic Forum in January that Tesla plans to sell Optimus to the public by the end of 2027 at a price between $20,000 and $30,000, stating, “I think everyone on earth is going to have one and want one.” He has previously pegged long-term demand for general-purpose humanoid robots at over 20 billion units globally, citing both consumer and industrial use cases.
