
Google’s DeepMind unit develops AI that predicts 3D layouts from partial images

[Credit: Google DeepMind]


Google’s DeepMind unit, the same division behind AlphaGo, the AI that defeated the world’s best Go player, has developed a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with an AI form of perceptual intuition.

According to Google’s official DeepMind blog, the goal of its recent AI project is to make neural networks simpler to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images. This makes training a tedious, lengthy, and expensive process, as every aspect of every object in each scene in the dataset has to be labeled by a person.

The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), is designed to remove this dependency on human-annotated data: it can infer a space’s three-dimensional layout and features despite being provided with only partial images of that space.

Similar to babies and animals, DeepMind’s GQN learns by observing the world around it, picking up on plausible scenes and their geometrical properties without any human labeling. The GQN consists of two parts: a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
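The data flow of that two-part structure can be illustrated with a short, hypothetical sketch. This is not DeepMind’s implementation: the module names, the 7-number camera-pose encoding, and the layer sizes below are illustrative assumptions chosen only to show how scene vectors from a few observed views can be aggregated and then decoded into an unseen viewpoint.

```python
# Illustrative sketch only (assumed module names and sizes), not DeepMind's GQN code.
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    """Encodes one observation (image + camera pose) into a scene vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # 7-dim pose (position + orientation) is an assumption for this sketch.
        self.fc = nn.Linear(64 + 7, repr_dim)

    def forward(self, image, pose):
        feat = self.conv(image).flatten(1)
        return self.fc(torch.cat([feat, pose], dim=1))

class GenerationNet(nn.Module):
    """Renders an image for an unobserved query pose from the scene vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.fc = nn.Linear(repr_dim + 7, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, scene_repr, query_pose):
        x = self.fc(torch.cat([scene_repr, query_pose], dim=1)).view(-1, 64, 8, 8)
        return self.deconv(x)

# Scene vectors from several partial views are summed into one representation,
# then the generator "imagines" the view from a new camera pose.
repr_net, gen_net = RepresentationNet(), GenerationNet()
images = torch.rand(3, 3, 64, 64)   # three partial views of one scene
poses = torch.rand(3, 7)            # their camera poses
scene = repr_net(images, poses).sum(0, keepdim=True)
predicted_view = gen_net(scene, torch.rand(1, 7))  # render an unseen viewpoint
print(predicted_view.shape)  # torch.Size([1, 3, 32, 32])
```

In DeepMind’s published description, the generation network is a latent-variable model trained end to end with the representation network, so the sketch above only captures the data flow between the two parts, not the training procedure.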

As noted by the DeepMind team, however, the training methods used to develop the GQN are still limited compared to traditional computer vision techniques. Its creators remain optimistic that as new sources of data become available and hardware improves, the GQN framework could be applied to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes that the GQN could be a useful system in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition, something extremely desirable for companies focused on autonomy, like Tesla.


Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]

In a talk at Train AI 2018 last May, Tesla’s head of AI Andrej Karpathy discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot by feeding the system massive datasets gathered from the company’s fleet of vehicles. This data is collected through means such as Shadow Mode, which allows the company to gather statistics on the false positives and false negatives of Autopilot’s software.

During his talk, Karpathy discussed how features such as blinker detection are challenging for Tesla’s neural network to learn, since vehicles on the road have their turn signals off most of the time and blinkers vary widely from one car brand to another. Karpathy also noted that Tesla has transitioned a huge portion of its AI team to labeling roles, doing exactly the kind of human annotation that Google DeepMind wants to avoid with the GQN.

Musk has also mentioned that Tesla’s upcoming all-electric supercar, the next-generation Roadster, would feature an “Augmented Mode” that would enhance drivers’ ability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set on embracing AR technology, new techniques for training AI such as Google DeepMind’s GQN would be a natural fit for the next generation of vehicles about to enter the automotive market.

Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips, or even to just say a simple hello, send a message to his email, simon@teslarati.com, or his handle on X, @ResidentSponge.


Tesla hosts Rome Mayor for first Italian FSD Supervised road demo

The event marked the first time an Italian mayor tested the advanced driver-assistance system in person in Rome’s urban streets.


Credit: @andst7/X

Tesla is actively engaging European officials on FSD’s capabilities, with the company hosting Rome Mayor Roberto Gualtieri and Mobility Assessor Eugenio Patanè for a hands-on road demonstration.

The event marked the first time an Italian mayor tested the advanced driver-assistance system in person in Rome’s urban streets. This comes amid Tesla’s push for FSD’s EU regulatory approvals in the coming year.

Rome officials experience FSD Supervised

Tesla conducted the demo using a Model 3 equipped with Full Self-Driving (Supervised), tackling typical Roman traffic, including complex intersections, roundabouts, pedestrian crossings, and mixed road users such as cars, bikes, and scooters.

The system showcased AI-based assisted driving, prioritizing safety while maintaining flow. FSD also handled overtakes and lane decisions, though with constant driver supervision.

Investor Andrea Stroppa detailed the event on X, noting the system’s potential to cut the risk of severe collisions by a factor of up to seven compared to traditional driving, based on Tesla’s data from billions of global fleet miles. The session highlighted FSD’s role in its Supervised form as an assistance tool rather than a replacement for the driver, who remains fully responsible at all times.


Path to European rollout

Tesla has logged over 1 million kilometers of testing across 17 European countries, including Italy, to refine FSD for local conditions. The fact that Rome officials personally tested FSD Supervised bodes well for the program’s approval, as it suggests that key individuals are closely watching Tesla’s efforts and innovations.

Assessor Patanè also highlighted the administration’s interest in technologies that boost road safety and urban travel quality, viewing them as aids for both private and public transport while respecting rules.

Replies on X urged involving Italy’s Transport Ministry to speed approvals, with one user noting, “Great idea to involve the mayor! It would be necessary to involve components of the Ministry of Transport and the government as soon as possible: it’s they who can accelerate the approval of FSD in Italy.”


Tesla FSD (Supervised) blows away French journalist after test ride

Cadot described FSD as “mind-blowing,” both for the safety of the vehicle’s driving and the “humanity” of its driving behaviors.


Credit: Grok Imagine

Tesla’s Full Self-Driving (Supervised) seems to be making waves in Europe, with French tech journalist Julien Cadot recently sharing a positive first-hand experience from a supervised test drive in France. 

Cadot, who tested the system for Numerama after eight years of anticipation since early Autopilot trials, described FSD as “mind-blowing,” both for the safety of the vehicle’s driving and the “humanity” of its driving behaviors.

 

Julien Cadot’s FSD test in France

Cadot announced his upcoming test on X, writing in French: “I’m going to test Tesla’s FSD for Numerama in France. I’ve been waiting 8 years to relive the sensations of our very first contact with the unbridled Autopilot of 2016.” He followed up shortly after with an initial reaction: “I don’t want to spoil too much, because as media we were allowed to film everything and I have a huge video coming… But: it’s mind-blowing! Both for safety and for the ‘humanity’ of the choices.”

His later posts detailed the specific FSD maneuvers he found particularly compelling. These included the vehicle safely passing a delivery truck within inches, something Cadot said he personally would avoid doing to protect his rims, but which FSD handled flawlessly. He also praised FSD’s cyclist overtakes, as the system always maintained the required 1.5-meter distance by encroaching on the opposite lane when it was clear. Ultimately, Cadot noted that FSD’s decision-making prioritized both safety and forward progress.


FSD’s ‘human’ edge over Autopilot

When asked if FSD felt light-years ahead of standard Autopilot, Cadot replied: “It’s incomparable, it’s not the same language.” He elaborated on scenarios such as crossing a solid white line to bypass a parked delivery truck, where FSD assessed the situation and proceeded just as a human driver might, rather than halting indefinitely. This “humanity” impressed Cadot the most, as it allowed FSD to fluidly navigate real-world chaos like urban Paris traffic.

Tesla is currently hard at work pushing for the rollout of FSD to several European countries. Recent reports have revealed that Tesla has received approval to operate 19 FSD test vehicles on Spain’s roads, though this number could increase as the program develops. As per the Dirección General de Tráfico (DGT), Tesla would be able to operate its FSD fleet on any national route across Spain. Recent job openings also hint at Tesla starting FSD tests in Austria. Apart from this, the company is also holding FSD demonstrations in Germany, France, and Italy.


Tesla Optimus shows off its newest capability as progress accelerates


Credit: Tesla

Tesla Optimus showed off its newest capability as progress on the project continues to accelerate toward an ultimate goal of mass production in the coming years.

Tesla is still developing Optimus and preparing for the first stages of mass production, when units would be sold and shipped to customers. CEO Elon Musk has long marketed the humanoid robot as the biggest product of all time, not just for Tesla but for any company in history.

He believes it will eliminate the need to manually perform monotonous tasks, like cleaning, mowing the lawn, and folding laundry.

More recently, however, Musk has revealed even bigger plans for Optimus, including the possibility of relieving humans of work entirely within the next 20 years.

Development by Tesla’s Artificial Intelligence and Robotics teams has continued to progress, and a new video showed the robot taking a light jog with what appeared to be fairly natural form.

Optimus has also made several public appearances lately, including one at the Neural Information Processing Systems (NeurIPS) conference. Some attendees shared videos of Optimus’s charging rig, as well as its movements and capabilities, most notably the hand.

The hand, forearm, and fingers have been among the most evident challenges for Tesla in recent times, especially as it continues work on the third-generation iteration of Optimus.

Musk said during the Q3 Earnings Call:

“I don’t want to downplay the difficulty, but it’s an incredibly difficult thing, especially to create a hand that is as dexterous and capable as the human hand, which is incredible. The human hand is an incredible thing. The more you study the human hand, the more incredible you realize it is, and why you need four fingers and a thumb, why the fingers have certain degrees of freedom, why the various muscles are of different strengths, and fingers are of different lengths. It turns out that those are all there for a reason.”

The interesting part of the Optimus program so far is that Tesla has made considerable progress on other portions of the project, such as movement, which appears to have come a long way.

However, without a functional hand and fingers, Optimus could be rendered relatively useless, so it is evident that Tesla has to figure out this crucial part first.
