
Google’s DeepMind unit develops AI that predicts 3D layouts from partial images



Google’s DeepMind unit, the same division behind AlphaGo, the AI that outplayed the world’s best Go players, has developed a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with a machine form of perceptual intuition.

According to Google’s official DeepMind blog, the goal of its recent AI project is to make neural networks easier to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images. This makes training a tedious, lengthy, and expensive process, as every aspect of every object in each scene has to be labeled by a person.

The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), is designed to remove this dependency on human-annotated data: the network can infer a space’s three-dimensional layout and features despite being given only partial images of that space.

Much like babies and animals, DeepMind’s GQN learns by observing the world around it. In doing so, the AI learns about plausible scenes and their geometrical properties without any human labeling. The GQN consists of two parts: a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
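To make that two-part structure concrete, here is a minimal, heavily simplified sketch in PyTorch of how a representation network and a generation network could fit together. The layer sizes, the 7-number camera pose encoding, and the simple summation of observations are illustrative assumptions; DeepMind’s actual GQN uses a more sophisticated recurrent, probabilistic generator.

```python
# Illustrative sketch of the GQN's two-part structure, not DeepMind's code.
import torch
import torch.nn as nn

class RepresentationNetwork(nn.Module):
    """Encodes an (image, camera viewpoint) pair into a scene representation vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 7 = assumed camera pose encoding (position + orientation)
        self.fc = nn.Linear(64 + 7, repr_dim)

    def forward(self, image, viewpoint):
        feats = self.encoder(image)
        return self.fc(torch.cat([feats, viewpoint], dim=-1))

class GenerationNetwork(nn.Module):
    """'Imagines' the scene as an image from a new, unobserved viewpoint."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(repr_dim + 7, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, scene_repr, query_viewpoint):
        return self.decoder(torch.cat([scene_repr, query_viewpoint], dim=-1))

# Observations from several viewpoints are encoded and summed into a single
# scene vector, which the generator then queries from an unseen camera pose.
repr_net, gen_net = RepresentationNetwork(), GenerationNetwork()
images = torch.rand(3, 3, 32, 32)   # 3 observed views of one scene
viewpoints = torch.rand(3, 7)       # matching camera poses
scene = sum(repr_net(img[None], vp[None]) for img, vp in zip(images, viewpoints))
predicted_view = gen_net(scene, torch.rand(1, 7))  # render a new viewpoint
```

The key idea the sketch preserves is that observations from any number of viewpoints are aggregated into one scene representation, which the generator can then query from an arbitrary new camera angle.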

As the DeepMind team notes, however, the training methods used to develop the GQN are still limited compared to traditional computer vision techniques. Its creators remain optimistic that as new sources of data become available and hardware improves, the GQN framework could be applied to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes the GQN could prove useful in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition, something extremely desirable for companies focused on autonomy, like Tesla.

Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]

In a talk at Train AI 2018 last May, Tesla’s head of AI, Andrej Karpathy, discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot by feeding it massive datasets collected from the company’s fleet of vehicles, including through Shadow Mode, which lets the company gather statistics on the Autopilot software’s false positives and false negatives.
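As a rough illustration of the kind of statistics Shadow Mode can yield, the sketch below tallies false positives and false negatives by comparing what the software would have done against what the driver actually did. The event format and field names here are hypothetical assumptions, not Tesla’s actual telemetry schema.

```python
# Hypothetical shadow-mode tally: the system's silent predictions are
# compared with the human driver's actual behavior for the same moments.
events = [
    {"autopilot_would_brake": True,  "driver_braked": True},   # true positive
    {"autopilot_would_brake": True,  "driver_braked": False},  # false positive
    {"autopilot_would_brake": False, "driver_braked": True},   # false negative
    {"autopilot_would_brake": False, "driver_braked": False},  # true negative
]

false_positives = sum(e["autopilot_would_brake"] and not e["driver_braked"] for e in events)
false_negatives = sum(not e["autopilot_would_brake"] and e["driver_braked"] for e in events)

print(f"False positives: {false_positives}, false negatives: {false_negatives}")
```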

During his talk, Karpathy discussed how features such as blinker detection are challenging for Tesla’s neural network to learn, since vehicles on the road have their turn signals off most of the time and blinker designs vary widely from one car brand to another. Karpathy also noted that Tesla has moved a huge portion of its AI team into labeling roles, doing exactly the kind of human annotation that Google DeepMind wants to avoid with the GQN.

Tesla CEO Elon Musk, for his part, has mentioned that the company’s upcoming all-electric supercar, the next-generation Tesla Roadster, would feature an “Augmented Mode” designed to enhance drivers’ ability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set to embrace AR technology, new AI training techniques such as Google DeepMind’s GQN could be a perfect fit for the next generation of vehicles about to enter the automotive market.



Tesla Robotaxi has a highly requested hardware feature not available on typical Model Ys




Tesla’s Robotaxi has a highly requested hardware feature that is not available on the typical Model Ys everyday buyers take home. It is something many owners have wanted for years, especially after the company adopted a vision-only approach to self-driving.

After Tesla launched driverless Robotaxi rides to the public earlier this week in Austin, people have been traveling to the Lone Star State hoping to snag a ride in one of the few vehicles in the fleet that are no longer required to have Safety Monitors present.


Although only a few of those completely driverless rides are available, these cars have been spotted with some additions over regular Model Ys, including one new feature: camera washers.

The Model Y has long had a washer for its front camera, but the other exterior “eyes” have had no such provision, leaving owners to clean them manually.

In Austin, Tesla is doing things differently. It is now fitting camera washers to the side repeater and rear bumper cameras, which will keep those cameras clean and operation as smooth and uninterrupted as possible.


These camera washers are crucial for keeping the operation going, as the cameras are the sole sensors Tesla vehicles use to operate autonomously. They act as the car’s eyes, letting it drive, recognize speed limit and traffic signs, and travel safely.

This is the first time we are seeing them, so it seems as if Safety Monitors might have been responsible for keeping the lenses clean and unobstructed previously.

However, as Tesla transitions to a fully autonomous self-driving suite and adds more vehicles to the Robotaxi fleet, it needs a way to keep the cameras clean without manual intervention, at least until each vehicle can return for interior and exterior washing.


Tesla makes big Full Self-Driving change to reflect future plans



Tesla has made a dramatic change to its Online Design Studio to reflect its plans for Full Self-Driving, a major part of the company’s strategy moving forward, and one that CEO Elon Musk has been extremely clear about.

With Tesla set to remove the ability to purchase Full Self-Driving outright next month, it is already preparing owners and potential buyers for that shift.

On Thursday night, the company updated the Online Design Studio to list the three purchase options currently available: Monthly Subscription, One-Time Purchase, or Add Later.

This change replaces the former configurator option, a single checkbox for buying the suite outright at the time of purchase; until now, subscriptions were activated exclusively through the vehicle.

However, with Musk announcing that Tesla would soon remove the outright purchase option, it is clearer than ever that the Subscription plan is where the company is headed.

The removal of the outright purchase option has been a polarizing topic in the Tesla community, especially since many people are worried about potential price increases or have been saving to buy the suite outright for $8,000.


This would bring an end to the ability to pay once and never pay again. Under the subscription strategy, owners financing their cars would effectively see about $100 per month added to their payment, which could price some people out; at that rate, it takes roughly 80 months, or about six and a half years, for the subscription to match the old $8,000 one-time price. The price will also rise as FSD improves in functionality, as Musk said on Thursday.

Skeptics have grown concerned that the change will actually lower Full Self-Driving’s take rate. While it is understandable that FSD would increase in price as its capabilities improve, there are arguments for a tiered system that would let owners pay for the features they value and can afford, which would also help the company accumulate data.

Musk’s new compensation package would also require Tesla to reach 10 million active FSD subscriptions, but it is unclear whether removing the outright purchase will move that number in the right direction. If Tesla could offer a cheaper tier that is not quite unsupervised, the number of owners who pay for FSD could improve.


Tesla Model S completes first ever FSD Cannonball Run with zero interventions



A Tesla Model S has completed the first-ever full Cannonball Run using Full Self-Driving (FSD), traveling from Los Angeles to New York with zero interventions. The coast-to-coast drive marked the first time Tesla’s FSD system completed the iconic, 3,000-mile route end to end, fulfilling a long-discussed benchmark for autonomy.

A full FSD Cannonball Run

As per a report from The Drive, a 2024 Tesla Model S with AI4 hardware and FSD v14.2.2.3 completed the 3,081-mile trip from Redondo Beach in Los Angeles to midtown Manhattan in New York City. The drive was carried out by Alex Roy, a former automotive journalist and investor, along with a small team of autonomy experts.

Roy said FSD handled all driving tasks for the entirety of the route, including highway cruising, lane changes, navigation, and adverse weather conditions. The trip took a total of 58 hours and 22 minutes, about 10 of which were spent charging, working out to an average moving speed of 64 mph. In later comments, Roy noted that he and his team cleaned the Model S’s cameras during their stops to keep FSD’s performance optimal.

History made

The historic trip was all the more impressive considering that it took place in the middle of winter. FSD did not just have to deal with other cars on the road; the vehicle also had to handle extreme cold, snow, ice, slush, and rain.

FSD performed so well during the trip that, according to Roy, the journey would have been completed faster if the Model S did not have people onboard. “Elon Musk was right. Once an autonomous vehicle is mature, most human input is error. A comedy of human errors added hours and hundreds of miles, but FSD stunned us with its consistent and comfortable behavior,” Roy wrote in a post on X.

Roy’s comments are quite notable, as he previously attempted Cannonball Runs using FSD in December 2024 and February 2025. Neither was a zero-intervention drive.
