
News

Google’s DeepMind unit develops AI that predicts 3D layouts from partial images

[Credit: Google DeepMind]

Google’s DeepMind unit, the same division behind AlphaGo, the AI that outplayed the best Go player in the world, has developed a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with an AI form of perceptual intuition.

According to Google’s official DeepMind blog, the goal of its recent AI project is to make neural networks easier and simpler to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images. This makes training a tedious, lengthy, and expensive process, as every aspect of every object in each scene has to be labeled by a person.

The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), is designed to remove this dependency on human-annotated data: the GQN infers a space’s three-dimensional layout and features despite being provided with only partial images of it.

Like babies and animals, DeepMind’s GQN learns by observing the world around it. In doing so, it learns about plausible scenes and their geometrical properties without any human labeling. The GQN consists of two parts: a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
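The two-network split described above can be sketched in a few lines. The dimensions, linear layers, and weights below are placeholders, not DeepMind's actual architecture, but the flow matches the GQN design: encode each (image, viewpoint) observation, aggregate the codes into one scene vector, then decode that vector from a new query viewpoint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes, purely illustrative (the real GQN uses convolutional networks).
IMAGE_DIM, VIEW_DIM, REPR_DIM = 64, 7, 16

# Hypothetical untrained weights standing in for the two networks.
W_repr = rng.standard_normal((REPR_DIM, IMAGE_DIM + VIEW_DIM)) * 0.1
W_gen = rng.standard_normal((IMAGE_DIM, REPR_DIM + VIEW_DIM)) * 0.1

def represent(images, viewpoints):
    """Representation network: encode each (image, viewpoint) pair,
    then sum the codes into a single vector describing the scene."""
    codes = [np.tanh(W_repr @ np.concatenate([img, vp]))
             for img, vp in zip(images, viewpoints)]
    return np.sum(codes, axis=0)

def generate(scene_repr, query_viewpoint):
    """Generation network: 'imagine' the scene from a previously
    unobserved viewpoint, conditioned on the scene representation."""
    return np.tanh(W_gen @ np.concatenate([scene_repr, query_viewpoint]))

# Observe a scene from two viewpoints, then render it from a third.
observations = [rng.standard_normal(IMAGE_DIM) for _ in range(2)]
viewpoints = [rng.standard_normal(VIEW_DIM) for _ in range(2)]
scene = represent(observations, viewpoints)
predicted_view = generate(scene, rng.standard_normal(VIEW_DIM))
```

Because the scene vector is a sum over observations, more partial views simply refine the same representation, which is what lets the system work from only a handful of images.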

As noted by the DeepMind team, however, the training methods used to develop the GQN are still limited compared to traditional computer vision techniques. Its creators remain optimistic that as new sources of data become available and hardware improves, the applications of the GQN framework could extend to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes the GQN could prove useful in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition, something extremely desirable for companies focused on autonomy, like Tesla.


Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]

In a talk at Train AI 2018 last May, Tesla’s head of AI Andrej Karpathy discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot by feeding the system massive datasets gathered from the company’s fleet of vehicles. Part of this data is collected through means such as Shadow Mode, which allows the company to gather statistics on the false positives and false negatives of the Autopilot software.
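In principle, the statistics Karpathy describes reduce to comparing what the software would have done against what the human driver actually did, without the software ever acting. A minimal sketch, with invented field names and made-up log entries for illustration:

```python
# Hypothetical shadow-mode log: the model's would-be action vs. the
# driver's actual action (in shadow mode the model never actuates).
shadow_log = [
    {"model_braked": True,  "driver_braked": False},  # candidate false positive
    {"model_braked": False, "driver_braked": True},   # candidate false negative
    {"model_braked": True,  "driver_braked": True},   # agreement
    {"model_braked": False, "driver_braked": False},  # agreement
]

# Tally disagreements: these are the cases worth labeling and retraining on.
false_positives = sum(e["model_braked"] and not e["driver_braked"] for e in shadow_log)
false_negatives = sum(e["driver_braked"] and not e["model_braked"] for e in shadow_log)

print(f"FP: {false_positives}, FN: {false_negatives}")  # prints "FP: 1, FN: 1"
```

Aggregated across a fleet, counts like these indicate where the model disagrees with human behavior, which is exactly the data a labeling team would prioritize.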

During his talk, Karpathy explained why features such as blinker detection are challenging for Tesla’s neural network to learn: vehicles on the road have their turn signals off most of the time, and blinkers vary widely from one car brand to another. Karpathy also noted that Tesla has transitioned a huge portion of its AI team to labeling roles, performing exactly the kind of human annotation that Google DeepMind wants to avoid with the GQN.

Musk has also mentioned that Tesla’s upcoming all-electric supercar, the next-generation Tesla Roadster, will feature an “Augmented Mode” designed to enhance drivers’ ability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set to embrace AR technology, the emergence of new AI training techniques such as Google DeepMind’s GQN could be a perfect fit for the next generation of vehicles entering the automotive market.

Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips--or even to just say a simple hello--send a message to his email, simon@teslarati.com or his handle on X, @ResidentSponge.


Cybertruck

Tesla begins Cybertruck deliveries in a new region for the first time


Credit: @derek1ee | X

Tesla has initiated Cybertruck deliveries in a new region for the first time, as the all-electric pickup has officially made its way to the United Arab Emirates, the newest territory to receive the polarizing truck.

Tesla opened Cybertruck orders in the Middle East back in September 2025, just months after the company confirmed in April that it planned to launch the pickup in the region.


By early October, Tesla launched the Cybertruck configurator in the United Arab Emirates, Qatar, and Saudi Arabia, with pricing starting at around AED 404,900, or about $110,000 for the Dual Motor configuration.

This decision positioned the Gulf states as key early international markets, giving Tesla its first opportunity to get the Cybertruck outside of North America, as launches in other popular EV markets, like Europe and Asia, have remained difficult.


By late 2025, Tesla had pushed delivery timelines back slightly, aiming for an early 2026 delivery launch in the Middle East. The first official customer deliveries started this month, with a notable handover event in Dubai’s Al Marmoom desert area featuring a light and fire show.

Around 63 Cybertrucks made their way to customers during the event.

As of this month, the Cybertruck remains available for configuration on Tesla’s websites for the UAE, Saudi Arabia, Qatar, and other Middle Eastern countries such as Jordan and Israel. Deliveries are rolling out progressively, with the UAE the first to see hands-on customer events.


In other markets, most notably Europe, Tesla still faces plenty of regulatory hurdles it hopes to work through, though some may never be resolved. The issues stem from unique design features that conflict with the European Union’s (EU) stringent safety standards.

These standards include pedestrian protection regulations, which require vehicles to minimize injury risks in collisions. The Cybertruck’s sharp edges, ultra-hard stainless steel exoskeleton, and rigid structure are widely seen as incompatible with those requirements.

The vehicle’s gross weight also exceeds the EU’s 3.5-tonne threshold for standard vehicles, which has prompted Tesla to consider a more compact design. However, the company’s focus on autonomy and Robotaxi has likely pushed that out of the realm of possibility.

For now, Tesla will work with the governments that want it to succeed in their region, and the Middle East has been a great partner to the company with the launch of the Cybertruck.


News

BREAKING: Tesla launches public Robotaxi rides in Austin with no Safety Monitor


Tesla has officially launched public Robotaxi rides in Austin, Texas, without a Safety Monitor in the vehicle, marking the first time the company has operated the service with no one on board other than the rider.

A Safety Monitor had been present in Tesla’s Austin Robotaxis since the service launched last June, seated in the front passenger seat to watch over the safety of passengers and other road users.

Tesla planned to remove the Safety Monitor by the end of 2025 but was not quite ready to do so. Now, in January, riders are officially reporting that they are able to hail a ride in a Model Y Robotaxi without anyone else in the vehicle.


Tesla started testing this internally late last year, with several employees showing that they were riding in the vehicles without anyone else present to intervene in case of an emergency.

Tesla has now expanded that program to the public. It is not active across the entire fleet; rather, there are a “few unsupervised vehicles mixed in with the broader robotaxi fleet with safety monitors,” Ashok Elluswamy said.


The Robotaxi program also operates in the California Bay Area, where the fleet is much larger, but there the Safety Monitors sit in the driver’s seat and use Full Self-Driving, making the service essentially the same as an Uber driver using a Tesla with FSD.

In Austin, the removal of Safety Monitors marks a substantial milestone for Tesla. Now that the company is confident enough to remove Safety Monitors from Robotaxis altogether, its options for expansion are nearly unlimited.

While Tesla hopes to launch the ride-hailing service in more cities across the U.S. this year, this is, at least for now, a bigger development than any expansion: it is the first time anywhere in the world that the company is offering driverless Robotaxi rides to the public.


Investor's Corner

Tesla Earnings Call: Top 5 questions investors are asking


(Credit: Tesla)

Tesla has scheduled its Earnings Call for Q4 and Full Year 2025 for next Wednesday, January 28, at 5:30 p.m. EST, and investors are already preparing to get some answers from executives regarding a wide variety of topics.

The company accepts several questions from retail investors through the platform Say, which then allows shareholders to vote on the best questions.

Tesla does not answer questions about future product releases, but it is willing to shed light on current timelines, the progress of certain projects, and other plans.

Five questions, covering topics that include SpaceX, Full Self-Driving, Robotaxi, and Optimus, are currently in the lead to be asked and potentially answered by Elon Musk and other Tesla executives:


  1. You once said: “Loyalty deserves loyalty.” Will long-term Tesla shareholders still be prioritized if SpaceX does an IPO?
    - Our Take: With plenty of speculation about an incoming SpaceX IPO, Tesla investors, especially long-term ones, should be able to benefit from an early opportunity to purchase shares. This has been discussed endlessly over the past year, and we must be getting close.
  2. When is FSD going to be 100% unsupervised?
    - Our Take: Musk said today that this is essentially a solved problem and that it could be available in the U.S. by the end of this year.
  3. What is the current bottleneck to increasing Robotaxi deployment and personal-use unsupervised FSD? Is it the safety and performance of the most recent models, the people needed to monitor robots and robotaxis (in-car or remotely), or something else?
    - Our Take: The bottleneck seems to be data; Musk has said Tesla needs 10 billion miles of driving data to achieve unsupervised FSD. Once that happens, regulatory approval will be what holds things up.
  4. Regarding Optimus, could you share the current number of units deployed in Tesla factories and actively performing production tasks? What specific roles or operations are they handling, and how has their integration impacted factory efficiency or output?
    - Our Take: Optimus is set to take on a larger role in Tesla’s factories, with greater responsibilities expected later this year.
  5. Can you please tie purchased FSD to our owner accounts instead of locking it to the car? This would let us enjoy it in any Tesla we drive or buy and reward those of us who have hung in for so long, some since 2017.
    - Our Take: This is a good one and should get us additional information on FSD transfer plans and the subscription-only model Tesla is expected to adopt.

Tesla will have its Earnings Call on Wednesday, January 28.
