Google’s DeepMind unit develops AI that predicts 3D layouts from partial images
Google’s DeepMind unit, the same division behind AlphaGo, the AI that outplayed the world’s best Go player, has created a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with a machine form of perceptual intuition.
According to Google’s official DeepMind blog, the goal of its recent AI project is to make neural networks simpler to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images, which makes training a tedious, lengthy, and expensive process, as every aspect of every object in each scene has to be labeled by a person.
The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), is designed to remove this dependency on human-annotated data by inferring a space’s three-dimensional layout and features from only partial images of that space.
Similar to babies and animals, DeepMind’s GQN learns by making observations of the world around it, picking up plausible scenes and their geometric properties without any human labeling. The GQN consists of two parts: a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
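To make the two-part design more concrete, here is a minimal, hypothetical sketch in PyTorch. It is not DeepMind’s published implementation (the actual GQN uses a far more elaborate recurrent generator and a variational training objective), but it illustrates the basic flow: each observed image and its camera pose are encoded into a vector, the vectors are summed into a scene representation, and a decoder renders the scene from a new query viewpoint. The layer sizes and the five-number pose format are illustrative assumptions.

```python
# Illustrative sketch of the GQN idea only; architecture details are assumptions.
import torch
import torch.nn as nn

class RepresentationNetwork(nn.Module):
    """Encodes one (image, camera pose) observation into a scene vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Hypothetical 5-number camera pose: x, y, z, yaw, pitch.
        self.fc = nn.Linear(64 + 5, repr_dim)

    def forward(self, image, pose):
        feat = self.conv(image).flatten(1)            # (B, 64)
        return self.fc(torch.cat([feat, pose], 1))    # (B, repr_dim)

class GenerationNetwork(nn.Module):
    """Renders ("imagines") the scene from a query camera pose."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.fc = nn.Linear(repr_dim + 5, 64 * 16 * 16)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, scene_repr, query_pose):
        x = self.fc(torch.cat([scene_repr, query_pose], 1)).view(-1, 64, 16, 16)
        return self.deconv(x)                         # (B, 3, 64, 64) predicted view

rep_net, gen_net = RepresentationNetwork(), GenerationNetwork()
images = torch.rand(3, 1, 3, 64, 64)   # three observed 64x64 views of one scene
poses = torch.rand(3, 1, 5)            # camera pose for each observation

# Summing per-observation encodings makes the scene representation order-invariant.
scene = sum(rep_net(img, p) for img, p in zip(images, poses))
predicted_view = gen_net(scene, torch.rand(1, 5))  # render an unseen viewpoint
```

In training, the predicted view would be compared against the image actually captured from the query viewpoint, so the scene itself supplies the supervision and no human labels are needed.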
As noted by the DeepMind team, however, the training methods used to develop the GQN are still limited compared to traditional computer vision techniques. Its creators remain optimistic that, as new sources of data become available and hardware improves, the GQN framework could be applied to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes the GQN could be useful in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition, something extremely desirable for companies focused on autonomy, like Tesla.

Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]
In a talk at Train AI 2018 last May, Tesla’s head of AI Andrej Karpathy discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot by feeding the system massive datasets collected from its fleet of vehicles, including through Shadow Mode, which lets the company gather statistics on false positives and false negatives in the Autopilot software.
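Tesla has not published how Shadow Mode works internally, but the statistical idea Karpathy describes reduces to a simple comparison: log what the inactive model would have done alongside what the human driver actually did, then count the disagreements. The sketch below is purely hypothetical; the LogEntry fields and the braking example are invented for illustration.

```python
# Hypothetical illustration of shadow-mode-style evaluation; not Tesla code.
from dataclasses import dataclass

@dataclass
class LogEntry:
    model_would_brake: bool   # what the shadow model proposed (never executed)
    driver_braked: bool       # what the human driver actually did

def tally_disagreements(log):
    """Count candidate false positives (model acts, driver doesn't)
    and false negatives (driver acts, model doesn't)."""
    false_positives = sum(e.model_would_brake and not e.driver_braked for e in log)
    false_negatives = sum(e.driver_braked and not e.model_would_brake for e in log)
    return false_positives, false_negatives

log = [LogEntry(True, False), LogEntry(False, True), LogEntry(True, True)]
print(tally_disagreements(log))  # -> (1, 1)
```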
During his talk, Karpathy discussed how features such as blinker detection become challenging for Tesla’s neural network to learn, considering that vehicles on the road have their turn signals off most of the time and blinkers have a high variability from one car brand to another. Karpathy also discussed how Tesla has transitioned a huge portion of its AI team to labeling roles, doing the human annotation that Google DeepMind explicitly wants to avoid with the GQN.
Musk has also mentioned that Tesla’s upcoming all-electric supercar, the next-generation Roadster, will feature an “Augmented Mode” that would enhance drivers’ ability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set to embrace AR technology, new techniques for training AI such as Google DeepMind’s GQN could be a natural fit for the next generation of vehicles about to enter the automotive market.
Tesla Robotaxi’s biggest rival sends latest statement with big expansion
Tesla Robotaxi’s biggest rival made its latest statement earlier this month with a big expansion of its geofence, growing its service area by more than 50 percent and nearing the size of Tesla’s.
Waymo announced earlier this month that it was expanding its geofence in Austin by slightly over 50 percent, to 140 square miles from the 90 square miles it had been operating in since July 2025.
The new expanded geofence now covers a broader region of Austin and its metropolitan areas, extended south to Manchaca and north beyond US-183.
These rides are fully driverless, which sets them apart from Tesla’s. Tesla operates its Robotaxi program in Austin with a Safety Monitor in the passenger’s seat on local roads and in the driver’s seat for highway routes.
Tesla has also tested fully driverless Robotaxi service internally in recent weeks and hopes to remove Safety Monitors in the near future, after initially aiming to do so by the end of 2025.
Tesla Robotaxi service area vs. Waymo’s new expansion in Austin, TX. pic.twitter.com/7cnaeiduKY
— Nic Cruz Patane (@niccruzpatane) January 13, 2026
Although Waymo’s geofence has expanded considerably, it still falls short of Tesla’s by roughly 31 square miles, as Tesla’s expansion in late 2025 brought its service area to roughly 171 square miles.
There are several differences between the two operations beyond the size of the geofence and the fact that Waymo operates without safety personnel on board.
Waymo emphasizes mature, fully autonomous operations in a denser but smaller area, while Tesla focuses on more extensive coverage and fleet-scaling potential, especially with the planned release of the Cybercab and a recently reached milestone of 200 Robotaxis in its fleet across Austin and the Bay Area.
However, the two companies are striving to achieve the same goal, which is expanding the availability of driverless ride-sharing options across the United States, starting with large cities like Austin and the San Francisco Bay Area. Waymo also operates in other cities, like Las Vegas, Los Angeles, Orlando, Phoenix, and Atlanta, among others.
Tesla is working to expand to more cities as well, and is hoping to launch in Miami, Houston, Phoenix, Las Vegas, and Dallas.
Tesla automotive will be forgotten, but not in a bad way: investor
Entrepreneur and angel investor Jason Calacanis believes that Tesla will one day be only a shade of how it is recognized now, as its automotive side will essentially be forgotten, but not in a bad way.
It’s no secret that Tesla’s automotive division has been its shining star for some time. For years, analysts and investors have focused on the next big project or vehicle release, quarterly delivery figures, and progress in self-driving cars. These have been the big categories of focus, but that will all change soon.
Increasingly, the focus has shifted to real-world AI and robotics, both through the Full Self-Driving and autonomy projects Tesla has been working on and through the Optimus program, which Calacanis believes will be the big disruptor of the company’s automotive division.
On the All-In podcast, Calacanis revealed he had visited Tesla’s Optimus lab earlier this month, where he was able to review the Optimus Gen 3 prototype and watch teams of engineers chip away at developing what CEO Elon Musk has said will be the big product driving the company through the next few decades.
Calacanis said:
“Nobody will remember that Tesla ever made a car. They will only remember the Optimus.”
He added that Musk “is going to make a billion of those.”
Musk has made this point himself, at one point predicting that “Optimus will be the biggest product of all-time by far. Nothing will even be close. I think it’ll be 10 times bigger than the next biggest product ever made.”
He has also indicated that he believes 80 percent of Tesla’s value will be Optimus.
Optimus aims to fundamentally change the way people live, and Musk has said that working will become optional because of it. Tesla’s hopes for Optimus paint a clear picture of what the company believes humanoid robots and AI could make possible.
Tesla Robotaxi fleet reaches new milestone that should dispel common complaint
Tesla Robotaxi is active in both the Bay Area of California and Austin, Texas, and the fleet has reached a new milestone that should dispel a common complaint: lack of availability.
Robotaxi Tracker has confirmed that Tesla’s fleet of ride-sharing vehicles has reached 200, with 158 of those in the Bay Area and the other 42 in Austin. Despite the program first launching in Texas, the company now has more vehicles available in California.
Tesla’s area of operation in California is much larger than in Texas, and the vehicle fleet there is larger as well, partly because the program is operated differently: Safety Monitors sit in the driver’s seat in California while FSD navigates, whereas in Texas they sit in the passenger’s seat and switch seats only when a route takes them onto the highway.
Tesla has also started testing rides without any Safety Monitors internally.
This new milestone addresses a common complaint of Robotaxi riders in Austin and the Bay Area: vehicle availability.
In the eight months the Robotaxi program has been active, many riders have complained about availability, saying they were confronted with excessive wait times because the fleet was very small at the beginning of its operation.
I attempted to take a @robotaxi ride today from multiple different locations and time of day (from 9:00 AM to about 3:00 PM in Austin but never could do so.
I always got a “High Service Demand” message … I really hope @Tesla is about to go unsupervised and greatly plus up the… pic.twitter.com/IOUQlaqPU2
— Joe Tegtmeyer 🚀 🤠🛸😎 (@JoeTegtmeyer) November 26, 2025
That said, some riders report that wait times have improved significantly, especially in the Bay Area, where the fleet is much larger.
Robotaxi wait times here in Silicon Valley used to be around 15 minutes for me.
Over the past few days, they’ve been consistently under five minutes, and with scaling through the end of this year, they should drop to under two minutes. pic.twitter.com/Kbskt6lUiR
— Alternate Jones (@AlternateJones) January 6, 2026
Tesla’s approach to the Robotaxi fleet has been to prioritize safety while finding its footing as a ride-hailing platform.
Of course, there have been, and will continue to be, growing pains, but overall things have gone smoothly, with no major incidents that would derail the company’s ability to keep developing an effective mode of transportation for people in various U.S. cities.
Tesla plans to expand Robotaxi to more cities this year, including Miami, Las Vegas, and Houston, among several others.