Google’s DeepMind unit develops AI that predicts 3D layouts from partial images
Google’s DeepMind unit, the same division behind AlphaGo, the AI that beat the world’s best Go player, has created a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with an AI form of perceptual intuition.
According to Google’s official DeepMind blog, the goal of the project is to make neural networks easier and simpler to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images, which makes training tedious, lengthy, and expensive: every aspect of every object in each scene has to be labeled by a person.
The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), is designed to remove this dependency on human-annotated data: it infers a space’s three-dimensional layout and features from only partial images of that space.
Much like babies and animals, DeepMind’s GQN learns by observing the world around it, picking up on plausible scenes and their geometric properties without any human labeling. The GQN consists of two parts: a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
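For a concrete sense of that two-part structure, here is a minimal PyTorch sketch. The layer sizes, the five-number viewpoint encoding, and the simple feed-forward generator are illustrative assumptions; DeepMind’s actual generation network is a much more sophisticated recurrent latent-variable model.

```python
# Minimal sketch of the two-part GQN structure described above.
# Architecture details here are illustrative, not DeepMind's exact design.
import torch
import torch.nn as nn

class RepresentationNetwork(nn.Module):
    """Encodes one (image, viewpoint) observation into a scene vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),   # 64x64 -> 31x31
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),  # 31x31 -> 14x14
            nn.AdaptiveAvgPool2d(1),
        )
        # Viewpoint assumed to be 5 numbers, e.g. (x, y, z, yaw, pitch).
        self.fc = nn.Linear(64 + 5, repr_dim)

    def forward(self, image, viewpoint):
        feats = self.conv(image).flatten(1)
        return self.fc(torch.cat([feats, viewpoint], dim=1))

class GenerationNetwork(nn.Module):
    """'Imagines' the scene from a previously unobserved query viewpoint."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.fc = nn.Linear(repr_dim + 5, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=4), nn.Sigmoid(),  # -> 64x64
        )

    def forward(self, scene_repr, query_viewpoint):
        h = self.fc(torch.cat([scene_repr, query_viewpoint], dim=1))
        return self.deconv(h.view(-1, 64, 8, 8))

# Representations from multiple partial observations are summed into one
# scene vector, so the model can aggregate however many views are available.
repr_net, gen_net = RepresentationNetwork(), GenerationNetwork()
images = torch.rand(3, 3, 64, 64)       # three partial views of one scene
viewpoints = torch.rand(3, 5)
scene = repr_net(images, viewpoints).sum(dim=0, keepdim=True)
rendered = gen_net(scene, torch.rand(1, 5))  # render a novel viewpoint
print(rendered.shape)  # torch.Size([1, 3, 64, 64])
```

The one design choice mirrored from DeepMind’s description is the aggregation step: each partial observation contributes a vector, and the sum becomes the scene representation the generator queries.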
As the DeepMind team notes, however, the training methods used to develop the GQN are still limited compared to traditional computer vision techniques. Its creators remain optimistic that, as new sources of data become available and hardware improves, the GQN framework could be applied to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes the GQN could prove useful in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition, something extremely desirable for companies focused on autonomy, like Tesla.

Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]
In a talk at Train AI 2018 last May, Tesla’s head of AI Andrej Karpathy discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot on massive data sets collected from its fleet of vehicles, in part through Shadow Mode, which lets the company gather statistics on the false positives and false negatives its Autopilot software would have produced.
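As a rough illustration of the kind of statistic Shadow Mode can surface, consider comparing what the software would have done against what the driver actually did. The log structure and field names below are hypothetical, invented for this sketch; Tesla’s actual telemetry schema is not public.

```python
# Hypothetical shadow-mode tally: software's would-be action vs. driver's
# actual action. Field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class ShadowLogEntry:
    software_would_act: bool  # e.g., the software would have braked
    driver_acted: bool        # the human driver actually braked

def shadow_mode_stats(log: list[ShadowLogEntry]) -> dict[str, int]:
    """Count agreements and disagreements between software and driver."""
    stats = {"false_positive": 0, "false_negative": 0, "agreement": 0}
    for entry in log:
        if entry.software_would_act and not entry.driver_acted:
            stats["false_positive"] += 1   # software acts, human didn't
        elif entry.driver_acted and not entry.software_would_act:
            stats["false_negative"] += 1   # human acts, software missed it
        else:
            stats["agreement"] += 1
    return stats

log = [ShadowLogEntry(True, True), ShadowLogEntry(True, False),
       ShadowLogEntry(False, True)]
print(shadow_mode_stats(log))
# {'false_positive': 1, 'false_negative': 1, 'agreement': 1}
```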
During his talk, Karpathy explained why features such as blinker detection are challenging for Tesla’s neural network to learn: vehicles on the road have their turn signals off most of the time, and blinkers vary widely in appearance from one car brand to another. He also noted that Tesla has moved a huge portion of its AI team into labeling roles, doing exactly the kind of human annotation that Google DeepMind wants to avoid with the GQN.
Musk has also mentioned that Tesla’s upcoming all-electric supercar, the next-generation Roadster, would feature an “Augmented Mode” designed to enhance drivers’ ability to handle the high-performance vehicle. With Tesla’s flagship supercar seemingly set to embrace AR technology, new AI training techniques such as Google DeepMind’s GQN could be a natural fit for the next generation of vehicles entering the automotive market.
BREAKING: Tesla launches public Robotaxi rides in Austin with no Safety Monitor
Tesla has officially launched public Robotaxi rides in Austin, Texas, without a Safety Monitor in the vehicle, marking the first time the company has offered rides with no one on board other than the rider.
Safety Monitors have ridden in the front passenger seat of Tesla’s Austin Robotaxis since the service launched last June, there to safeguard passengers and other road users.
Tesla had planned to remove the Safety Monitor by the end of 2025 but was not quite ready to do so. Now, in January, riders are reporting that they can hail a Model Y Robotaxi with no one else in the vehicle:
I am in a robotaxi without safety monitor pic.twitter.com/fzHu385oIb
— TSLA99T (@Tsla99T) January 22, 2026
Tesla started testing this internally late last year, with several employees showing that they were riding in the vehicle with no one else present to intervene in case of an emergency.
Tesla has now expanded that program to the public. It is not active across the entire fleet; for now, there are a “few unsupervised vehicles mixed in with the broader robotaxi fleet with safety monitors,” Ashok Elluswamy said:
Robotaxi rides without any safety monitors are now publicly available in Austin.
Starting with a few unsupervised vehicles mixed in with the broader robotaxi fleet with safety monitors, and the ratio will increase over time. https://t.co/ShMpZjefwB
— Ashok Elluswamy (@aelluswamy) January 22, 2026
The Robotaxi program also operates in the California Bay Area, where the fleet is much larger but Safety Monitors sit in the driver’s seat and use Full Self-Driving, making it essentially the same as an Uber driver using a Tesla with FSD.
In Austin, the removal of Safety Monitors marks a substantial milestone for Tesla. Now that the company is confident enough to remove Safety Monitors from Robotaxis altogether, its options for expansion open up considerably.
While Tesla hopes to launch the ride-hailing service in more cities across the U.S. this year, this development is, at least for now, bigger than any expansion: it is the first time anywhere in the world that the company has offered fully driverless Robotaxi rides to the public.
Tesla Earnings Call: Top 5 questions investors are asking
Tesla has scheduled its Q4 and Full Year 2025 Earnings Call for next Wednesday, January 28, at 5:30 p.m. EST, and investors are already preparing questions for executives on a wide variety of topics.
The company accepts questions from retail investors through the platform Say, where shareholders vote on which ones should be asked.
Tesla does not answer anything regarding future product releases, but executives are willing to shed light on current timelines, the progress of certain projects, and other plans.
Five questions, ranging across topics that include SpaceX, Full Self-Driving, Robotaxi, and Optimus, are currently in the lead to be asked and potentially answered by Elon Musk and other Tesla executives:
- You once said: Loyalty deserves loyalty. Will long-term Tesla shareholders still be prioritized if SpaceX does an IPO?
- Our Take – With plenty of speculation about an upcoming SpaceX IPO, Tesla investors, especially long-term ones, should be able to benefit from an early opportunity to purchase shares. This has been discussed endlessly over the past year, and we must be getting close.
- When is FSD going to be 100% unsupervised?
- Our Take – Musk said today that this is essentially a solved problem, and it could be available in the U.S. by the end of this year.
- What is the current bottleneck to increasing Robotaxi deployment and personal-use unsupervised FSD? Is it the safety/performance of the most recent models, the people needed to monitor robots and robotaxis (in-car or remotely), or something else?
- Our Take – The bottleneck appears to be data: Musk has said Tesla needs 10 billion miles of driving data to achieve unsupervised FSD. Once that threshold is reached, regulatory approval will be what holds things up.
- Regarding Optimus, could you share the current number of units deployed in Tesla factories and actively performing production tasks? What specific roles or operations are they handling, and how has their integration impacted factory efficiency or output?
- Our Take – Optimus is set to take on a larger role in Tesla’s factories moving forward, with expanded responsibilities arriving later this year.
- Can you please tie purchased FSD to our owner accounts vs. locked to the car? This will help us enjoy it in any Tesla we drive/buy and reward us for hanging in so long, some of us since 2017.
- Our Take – This is a good one and should surface additional details on FSD transfer plans and the subscription-only model Tesla is expected to adopt soon.
Tesla will have its Earnings Call on Wednesday, January 28.
Elon Musk shares incredible detail about Tesla Cybercab efficiency
Elon Musk shared an incredible detail about the Tesla Cybercab’s potential efficiency, as the company has hinted in the past that it could be one of the most affordable vehicles to operate on a per-mile basis.
ARK Invest recently released a report shedding light on the potential incremental cost per mile of the various Robotaxis expected on the market in the coming years.
In the report’s 2030 projections, the Cybercab has an exceptionally low cost of operation, something Tesla first signaled when it unveiled the vehicle a year and a half ago at the “We, Robot” event in Los Angeles.
Musk has said on numerous occasions that Tesla plans to hit the $0.20-per-mile mark with the Cybercab, describing a “clear path” to achieving that figure and emphasizing it is the “fully considered” cost, including energy, maintenance, cleaning, depreciation, and insurance.
Probably true
— Elon Musk (@elonmusk) January 22, 2026
ARK’s report showed the Cybercab coming in at roughly half the cost of Waymo’s 6th Gen Robotaxi in 2030: Waymo at around $0.40 per mile all-in, and the Cybercab, at scale, at $0.20.

Credit: ARK Invest
This would be a dramatic decrease in operating cost for Tesla, with the savings passed on to customers who use the ride-hailing service for their own transportation needs.
The U.S. average cost of new vehicle ownership is about $0.77 per mile, according to AAA. Meanwhile, Uber and Lyft rideshares often cost between $1 and $4 per mile, while Waymo can cost between $0.60 and $1 or more per mile, according to some estimates.
Tesla’s engineering has been the true driver of these cost efficiencies, and its focus on making the vehicle as cheap to operate as possible should pay off as production scales. Tesla is targeting about 5.5-6 miles per kWh for the Cybercab, a figure it has discussed in connection with prototypes; a rough sense of what that efficiency means for per-mile energy cost is sketched below.
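Here is a back-of-envelope check of how that efficiency target feeds into the $0.20 figure. The electricity price is our assumption for fleet charging, not a Tesla number; the efficiency range comes from the article.

```python
# Back-of-envelope check of the energy slice of the $0.20/mile target.
ELECTRICITY_USD_PER_KWH = 0.12      # assumed fleet charging cost
TARGET_ALL_IN_USD_PER_MILE = 0.20   # Musk's stated target

for miles_per_kwh in (5.5, 6.0):
    energy_cost = ELECTRICITY_USD_PER_KWH / miles_per_kwh
    remainder = TARGET_ALL_IN_USD_PER_MILE - energy_cost
    print(f"{miles_per_kwh:.1f} mi/kWh -> energy ${energy_cost:.3f}/mi, "
          f"leaving ${remainder:.3f}/mi for maintenance, cleaning, "
          f"depreciation, and insurance")
```

At those efficiencies, energy comes out to roughly $0.02 per mile, only about a tenth of the all-in target, which suggests the bulk of the $0.20 figure hinges on depreciation and labor-free operation rather than electricity.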
Additionally, fewer parts thanks to the unboxed manufacturing process, a lower initial cost, and the elimination of paid human labor would all contribute to cheaper operation overall. While aspirational, all the ingredients are there for this to be a realistic goal.
It may take some time, as Tesla still needs to hammer out its manufacturing processes, and Musk has said there will be growing pains early on. This week, he said of the early production effort:
“…initial production is always very slow and follows an S-curve. The speed of production ramp is inversely proportionate to how many new parts and steps there are. For Cybercab and Optimus, almost everything is new, so the early production rate will be agonizingly slow, but eventually end up being insanely fast.”