News

Google’s DeepMind unit develops AI that predicts 3D layouts from partial images

[Credit: Google DeepMind]


Google’s DeepMind unit, the same division that created AlphaGo, an AI that outplayed the best Go player in the world, has created a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with an AI form of perceptual intuition.

According to Google’s official DeepMind blog, the goal of the project is to make neural networks simpler to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images. This makes training a tedious, lengthy, and expensive process, as every aspect of every object in each scene has to be labeled by a person.

The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), is designed to remove this dependency on human-annotated data: it infers a space’s three-dimensional layout and features even when provided with only partial images of that space.

Similar to babies and animals, DeepMind’s GQN learns by making observations of the world around it. In doing so, it learns about plausible scenes and their geometric properties without any human labeling. The GQN consists of two parts: a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
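The two-part design described above can be sketched in a few lines of code. The sketch below is a toy illustration, not DeepMind's implementation: the real GQN uses convolutional encoders and a recurrent generator trained end to end, and the linear maps, dimensions, and pose format here are invented for demonstration. The one property carried over faithfully is that per-view codes are summed, which makes the scene representation independent of the order and number of observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two networks: simple linear maps.
W_repr = rng.normal(size=(16, 64 + 7))  # (image features + camera pose) -> per-view code
W_gen = rng.normal(size=(64, 16 + 7))   # (scene code + query pose) -> predicted view

def representation_network(images, poses):
    """Embed each (image, viewpoint) pair, then sum the per-view codes.

    Summation makes the scene representation order-invariant in the
    number of observations, as in the GQN design described above.
    """
    codes = [W_repr @ np.concatenate([img, pose]) for img, pose in zip(images, poses)]
    return np.sum(codes, axis=0)

def generation_network(scene_code, query_pose):
    """'Imagine' the scene from a previously unobserved viewpoint."""
    return W_gen @ np.concatenate([scene_code, query_pose])

# Two observed views of a scene: flattened 8x8 "images" and 7-D camera poses.
images = [rng.normal(size=64) for _ in range(2)]
poses = [rng.normal(size=7) for _ in range(2)]

r = representation_network(images, poses)            # scene vector, shape (16,)
prediction = generation_network(r, rng.normal(size=7))  # predicted view, shape (64,)
```

Training would adjust the weights so the generated view matches a held-out image of the same scene; here the weights are random, so the output is meaningless but shows the data flow from partial observations to an imagined viewpoint.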


As the DeepMind team notes, however, the training methods used to develop the GQN are still limited compared to traditional computer vision techniques. The researchers remain optimistic that as new sources of data become available and hardware improves, the GQN framework could be applied to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes the GQN could prove useful in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition, something extremely desirable for companies focused on autonomy, like Tesla.

Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]

In a talk at Train AI 2018 last May, Tesla’s head of AI Andrej Karpathy discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot by feeding the system massive datasets collected from the company’s fleet of vehicles. This data is gathered through means such as Shadow Mode, which runs software passively so the company can collect statistics on the false positives and false negatives of Autopilot features.
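Shadow Mode's core idea, comparing the software's would-be decision against the driver's actual action without ever actuating, can be illustrated with a toy tally. The event schema and the braking decision used here are hypothetical, chosen only to show how false-positive and false-negative counts fall out of such a comparison.

```python
from collections import Counter

def shadow_mode_tally(events):
    """Tally agreement and disagreement between a passive model and the driver.

    Each event is a pair (model_would_brake, driver_braked); this schema is
    hypothetical, for illustration only.
    """
    tally = Counter()
    for model_would_brake, driver_braked in events:
        if model_would_brake and not driver_braked:
            tally["false_positive"] += 1  # model would have acted, driver did not
        elif not model_would_brake and driver_braked:
            tally["false_negative"] += 1  # driver acted, model would not have
        else:
            tally["agreement"] += 1
    return tally

# Four logged events from a hypothetical drive.
events = [(True, True), (True, False), (False, True), (False, False)]
tally = shadow_mode_tally(events)
print(tally["false_positive"], tally["false_negative"], tally["agreement"])  # 1 1 2
```

Aggregated over a large fleet, counts like these indicate where the software disagrees with human drivers, flagging the scenarios most worth labeling and retraining on.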

During his talk, Karpathy discussed how features such as blinker detection are challenging for Tesla’s neural network to learn, considering that vehicles on the road have their turn signals off most of the time and blinker designs vary widely from one car brand to another. Karpathy also noted that Tesla has transitioned a huge portion of its AI team to labeling roles, doing exactly the kind of human annotation that Google DeepMind wants to avoid with the GQN.

Musk has also mentioned that Tesla’s upcoming all-electric supercar, the next-generation Tesla Roadster, would feature an “Augmented Mode” designed to enhance drivers’ ability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set to embrace AR technology, new AI training techniques such as Google DeepMind’s GQN could be a perfect fit for the next generation of vehicles about to enter the automotive market.


Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips, or even just to say hello, send a message to his email, simon@teslarati.com, or his handle on X, @ResidentSponge.


Elon Musk

Tesla confirmed HW3 can’t do Unsupervised FSD but there’s more to the story

Tesla confirmed HW3 vehicles cannot run unsupervised FSD, replacing its free upgrade promise with a discounted trade-in.


Tesla has officially confirmed that early vehicles with its Autopilot Hardware 3 (HW3) will not be capable of unsupervised Full Self-Driving, while extending a path forward for legacy owners through a discounted trade-in program. The announcement came by way of Elon Musk in today’s Tesla Q1 2026 earnings call.

The history here matters. HW3 launched in April 2019, and Tesla sold Full Self-Driving packages to owners on the understanding that the hardware was sufficient for full autonomy; some owners paid between $8,000 and $15,000 for FSD during that period. For years, as FSD’s AI models grew more demanding, HW3 vehicles fell progressively further behind, eventually landing on FSD v12.6 in January 2025 while AI4 vehicles moved to v13 and then v14. Musk acknowledged in January 2025 that HW3 simply could not reach unsupervised operation, alluding to a difficult hardware retrofit.


The near-term offering is more concrete. Tesla’s head of Autopilot, Ashok Elluswamy, confirmed on today’s call that a “V14-lite” build will come to HW3 vehicles in late June, bringing all the V14 features currently running on AI4 hardware. That is a meaningful software update for owners who have been frozen at v12.6 for over a year, and it represents a genuine effort to keep older hardware relevant. Unsupervised FSD is now targeted for Q4 2026 at the earliest, with Musk describing it as a gradual, geography-limited rollout.

For HW3 owners, the over-the-air V14-lite update is welcome, and the discounted trade-in path at least acknowledges an old obligation. What happens next with the trade-in pricing will define how this chapter ultimately gets written. If Tesla prices the hardware path fairly, acknowledges what early adopters are owed, and delivers V14-lite on the June timeline it committed to today, it has a real opportunity to turn one of the longest-running sore subjects among early adopters into a loyalty story.


Elon Musk

Tesla isn’t joking about building Optimus at an industrial scale: Here we go

Tesla’s Optimus factory in Texas targets 10 million robots yearly, with 5.2 million square feet under construction.


Tesla’s Q1 2026 Update Letter, released today, confirms that first-generation Optimus production lines are now well underway at its Fremont, California factory, with a pilot line targeting one million robots per year to start. More notable is a shared aerial image of a large plot of land adjacent to Gigafactory Texas that Tesla has prominently labeled “Optimus factory site preparation.”

Permit documents show Tesla is seeking to add over 5.2 million square feet of new building space to the Giga Texas North Campus by the end of 2026, at an estimated construction investment of $5 billion to $10 billion. The longer-term production target for that facility is 10 million Optimus units per year. Giga Texas already sits on 2,500 acres with over 10 million square feet of existing factory floor, and the North Campus expansion is being built to support multiple projects, including the dedicated Optimus factory, the Terafab chip fabrication facility (a joint Tesla/SpaceX/xAI venture), a Cybercab test track, road infrastructure, and supporting facilities.

Credit: Tesla

Texas makes strategic sense beyond the existing infrastructure. The state’s tax structure, lower labor costs relative to California, and proximity to Tesla’s AI training clusters Cortex 1 and 2, both located at Giga Texas and now totaling over 230,000 H100-equivalent GPUs, mean that the Optimus software stack and the factory producing the hardware will share the same campus. Tesla’s Q1 report also confirmed completion of the AI5 chip tape-out in April; the chip is the inference processor designed specifically to power Optimus units in the field.

As Teslarati reported, the Texas facility is intended to house Optimus V4 production at full scale. Musk told the World Economic Forum in January that Tesla plans to sell Optimus to the public by the end of 2027 at a price between $20,000 and $30,000, stating, “I think everyone on earth is going to have one and want one.” He has previously pegged long-term demand for general-purpose humanoid robots at over 20 billion units globally, citing both consumer and industrial use cases.


Investor's Corner

Tesla (TSLA) Q1 2026 earnings results: beat on EPS and revenues


Credit: Tesla

Tesla (NASDAQ: TSLA) reported its earnings for the first quarter of 2026 on Wednesday afternoon. Here’s what the company reported compared to what Wall Street analysts expected.

The earnings results come after Tesla reported a miss on vehicle deliveries for the first quarter, delivering 358,023 vehicles and building 408,386 cars during the three-month span.

As Tesla transitions toward AI and positions itself as less of a car company, delivery figures will become less central to how each quarter is perceived.

Nevertheless, Tesla is leaning on its strong foundation as a car company to carry forward its AI ambitions. The first quarter is a good ground layer for the rest of the year.


Tesla Q1 2026 Earnings Results

Tesla’s Earnings Results are as follows:

  • Non-GAAP EPS – $0.41 Reported vs. $0.36 Expected
  • Revenues – $22.387 billion vs. $22.35 billion Expected
  • Free Cash Flow – $1.444 billion
  • Profit – $4.72 billion
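For context, the size of each beat follows directly from the figures above. A quick calculation, using only the numbers reported in this article:

```python
# Percentage surprise vs. consensus, from the figures listed above.
eps_actual, eps_expected = 0.41, 0.36     # non-GAAP EPS, dollars
rev_actual, rev_expected = 22.387, 22.35  # revenue, billions of dollars

eps_surprise = (eps_actual - eps_expected) / eps_expected * 100
rev_surprise = (rev_actual - rev_expected) / rev_expected * 100

print(f"EPS beat consensus by {eps_surprise:.1f}%")      # ~13.9%
print(f"Revenue beat consensus by {rev_surprise:.2f}%")  # ~0.17%
```

In other words, the EPS beat was substantial in relative terms, while the revenue beat was essentially in line with consensus.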

Tesla beat analyst expectations, so it will be interesting to see how the stock responds. In the past, we’ve seen Tesla beat expectations considerably, only for the stock to drop sharply.

By the same token, we’ve seen Tesla miss and the stock price rise the following trading session.

Tesla will hold its Q1 2026 earnings call in about 90 minutes, at 5:30 p.m. Eastern. CEO Elon Musk and other executives will make remarks and shed light on the investor questions we covered earlier this week.

We will also be covering the call with a live blog on X and Facebook.
