News

Google’s DeepMind unit develops AI that predicts 3D layouts from partial images

[Credit: Google DeepMind]

Google’s DeepMind unit, the same division that created AlphaGo, an AI that outplayed the best Go player in the world, has created a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with an AI form of perceptual intuition.

According to Google’s official DeepMind blog, the goal of its recent AI project is to make neural networks simpler to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images. This makes training a tedious, lengthy, and expensive process, as every aspect of every object in each scene has to be labeled by a person.

The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), removes this dependency on human-annotated data: it is designed to infer a space’s three-dimensional layout and features even when provided with only partial images of that space.

Much like babies and animals, DeepMind’s GQN learns by observing the world around it. In doing so, it learns about plausible scenes and their geometrical properties without any human labeling. The GQN is composed of two parts: a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
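
The two-part structure described above can be sketched in a few lines of NumPy. This is a heavily simplified, untrained illustration: the array sizes, the 2-D viewpoint encoding, and the random weights are all invented for demonstration, not DeepMind’s actual architecture (the real GQN uses convolutional networks and a probabilistic generator).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two learned networks (random weights, untrained).
W_repr = rng.normal(size=(66, 16))  # representation net: 8x8 image + 2-D viewpoint -> 16-D vector
W_gen = rng.normal(size=(18, 64))   # generation net: 16-D scene + 2-D query viewpoint -> 8x8 image

def represent(image, viewpoint):
    """Encode one (image, viewpoint) observation into a 16-D vector."""
    x = np.concatenate([image.ravel(), viewpoint])  # 64 + 2 = 66 dims
    return np.tanh(x @ W_repr)

def aggregate(observations):
    """Sum the per-view vectors into one order-independent scene representation."""
    return sum(represent(img, vp) for img, vp in observations)

def generate(scene_vector, query_viewpoint):
    """'Imagine' the 8x8 view from a previously unobserved viewpoint."""
    x = np.concatenate([scene_vector, query_viewpoint])  # 16 + 2 = 18 dims
    return np.tanh(x @ W_gen).reshape(8, 8)

# A single observed view is enough to query a brand-new viewpoint.
obs = [(rng.normal(size=(8, 8)), np.array([0.0, 1.0]))]
predicted = generate(aggregate(obs), np.array([1.0, 0.0]))
print(predicted.shape)  # (8, 8)
```

Because the aggregation is a sum, the scene representation does not depend on the order in which views are observed, which is one reason the GQN can work from however many partial images it happens to receive.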

As the DeepMind team notes, however, the training methods used to develop the GQN are still limited compared to traditional computer vision techniques. The AI’s creators remain optimistic that as new sources of data become available and hardware improves, the GQN framework could be applied to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes that the GQN could be a useful system in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition, something extremely desirable for companies focused on autonomy, like Tesla.

Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]

In a talk at Train AI 2018 last May, Tesla’s head of AI Andrej Karpathy discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot by feeding the system massive data sets from the company’s fleet of vehicles. This data is collected through means such as Shadow Mode, which allows the company to gather statistics on false positives and false negatives in the Autopilot software.

During his talk, Karpathy discussed how features such as blinker detection are challenging for Tesla’s neural network to learn, since vehicles on the road have their turn signals off most of the time and blinker designs vary widely from one car brand to another. He also noted that Tesla has transitioned a huge portion of its AI team to labeling roles, doing exactly the kind of human annotation that Google DeepMind wants to avoid with the GQN.

Elon Musk has also mentioned that Tesla’s upcoming all-electric supercar — the next-generation Tesla Roadster — would feature an “Augmented Mode” that would enhance a driver’s ability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set on embracing AR technology, the emergence of new techniques for training AI such as Google DeepMind’s GQN would be a perfect fit for the next generation of vehicles about to enter the automotive market.

Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips, or even just to say hello, send a message to his email, simon@teslarati.com, or his handle on X, @ResidentSponge.

Elon Musk

The Starship V3 static fire everyone was waiting for just happened

SpaceX fired all 33 Raptor 3 engines on Starship V3 today, clearing the path for Flight 12.

SpaceX Starship V3 from Starbase, Texas on April 14, 2026

SpaceX is that much closer to launching its next-gen Starship after completing today’s full-duration static fire of all 33 Raptor 3 engines at Starbase, Texas. The test marks the most powerful rocket engine firing ever conducted and a direct signal that Flight 12, the maiden voyage of Starship V3, is imminent. SpaceX confirmed the test on X, posting that the full-duration firing was completed ahead of the vehicle’s next flight test.

The road to today started on March 16, when Booster 19 completed a shorter 10-engine static fire, also at the newly constructed Pad 2. That test ended early due to a ground systems issue but confirmed all installed Raptor 3 engines started cleanly. Booster 19 returned to the Mega Bay, received its remaining 23 engines for a full complement of 33, and rolled back out this week for the complete test campaign. Musk confirmed earlier this month that Flight 12 is now 4 to 6 weeks away.

The numbers behind today’s test are genuinely hard to put in context. Each Raptor 3 engine produces roughly 280 tons of thrust, and with all 33 firing simultaneously, the booster generates approximately 9,240 tons of combined thrust, more than any rocket in history. For context, that is well over twice the roughly 3,500 tons of thrust the Saturn V produced at liftoff. V3 stands 408 feet tall and can carry over 100 tons to low Earth orbit in a fully reusable configuration; the V2 generation topped out at around 35 tons.
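
The arithmetic behind those headline figures is simple enough to check directly (the per-engine thrust and payload numbers are the approximate figures quoted above):

```python
# Back-of-the-envelope check of the combined thrust figure.
engines = 33
thrust_per_engine_t = 280            # metric tons of thrust per Raptor 3, approximate
total_thrust_t = engines * thrust_per_engine_t
print(total_thrust_t)                # 9240

# Payload comparison between Starship generations.
v3_payload_t, v2_payload_t = 100, 35
print(round(v3_payload_t / v2_payload_t, 1))  # 2.9 -- nearly triple V2's lift to LEO
```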

Historically, a successful full-duration static fire is the last major ground milestone before launch. SpaceX has followed this pattern with every Starship iteration since the program began in 2023. Musk has been direct about the ambition behind all of it. “I am highly confident that the V3 design will achieve full reusability,” he wrote on X earlier this year. Full reusability of both stages is the foundation of SpaceX’s plan to make regular flights to the Moon and Mars economically viable. Today’s test brings that goal one significant step closer.


Starship V3 rests on two critical promises: full reusability and in-orbit refueling. The reusability case is straightforward, the same model proven by Falcon 9, in which a rocket flies again after a quick turnaround rather than being built new for every mission. It’s the only economic model that makes frequent lunar cargo runs viable.

The in-orbit refueling piece is less obvious but equally essential. To reach the Moon with enough payload, Starship requires roughly ten dedicated tanker flights to fill a propellant depot in low Earth orbit before it can even begin its journey to the lunar surface. That capability has never been demonstrated at scale, and Flight 12 is the first step toward proving it works. As Teslarati reported, NASA’s Artemis II crew completed a historic lunar flyby earlier this month, the first humans to travel beyond low Earth orbit since 1972. But getting astronauts to actually land, and eventually supplying a permanent Moon base, requires a cargo pipeline that only a fully reusable, refuelable Starship V3 can deliver at the volume and cost NASA’s plans demand.
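
The “roughly ten tanker flights” figure follows from simple division. The specific numbers below are illustrative assumptions chosen to match that estimate, not official SpaceX specifications:

```python
import math

# Illustrative assumptions only -- not official SpaceX figures.
depot_fill_t = 1000    # assumed propellant a lunar-bound Starship loads in LEO
per_tanker_t = 100     # assumed usable propellant delivered per tanker flight

tanker_flights = math.ceil(depot_fill_t / per_tanker_t)
print(tanker_flights)  # 10
```

The economics follow directly: if every one of those tanker flights required building a new rocket, the campaign would be ruinously expensive, which is why full reusability and in-orbit refueling are two halves of the same plan.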

SpaceX Starship full duration static fire on April 14, 2026 from Starbase, Texas (Credit: SpaceX)


Tesla Full Self-Driving shows stunning maneuver in Europe to silence skeptics

Credit: Tesla

Tesla Full Self-Driving, fresh on the heels of its approval for operation on European roads for the first time, showed off a stunning maneuver that will certainly silence any skeptics on the continent.

Fresh off its approval in the Netherlands, Full Self-Driving is working toward a significant expansion into more parts of Europe.

In a striking demonstration of autonomous driving prowess, Tesla’s Full Self-Driving (FSD) system recently showcased its capabilities on the narrow rural roads of the Netherlands. Captured in two in-car videos, the system encountered scenarios that would challenge even the most experienced human drivers.

In the first clip, a wide tractor occupied more than half the lane on a tight two-way road. Rather than braking abruptly or risking a collision, FSD smoothly edged the vehicle onto the adjacent bike path, using the extra space with precision, before seamlessly returning to the lane once clear.

The second clip was equally demanding: while FSD was overtaking a group of cyclists, an oncoming car approached at speed.

FSD maintained a safe, minimal buffer to the cyclists while timing the pass perfectly, avoiding any swerve or hesitation that could unsettle passengers or other road users.

This maneuver highlights FSD’s advanced spatial reasoning and predictive planning. On roads often under three meters wide, with no room for error, the system calculated available clearance in real time, incorporated shoulder and path geometry, and executed a controlled deviation without compromising safety.

It treated the bike path as a legitimate extension of navigable space, something many drivers might hesitate to do, while respecting Dutch road norms and cyclist priority.
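
The geometric core of the tractor maneuver can be reduced to a clearance check. This is purely an illustrative sketch, not Tesla’s planner: all widths, the margin value, and the function itself are hypothetical.

```python
def can_edge_onto_path(lane_width_m, obstacle_width_m, car_width_m,
                       path_width_m, margin_m=0.3):
    """Decide whether borrowing the adjacent bike path leaves enough clearance.

    Illustrative geometry only: compare the space the car needs (its width
    plus a safety margin on each side) against the free lane width plus the
    usable bike path.
    """
    free_lane_m = lane_width_m - obstacle_width_m  # lane left by the tractor
    available_m = free_lane_m + path_width_m       # plus the bike path
    required_m = car_width_m + 2 * margin_m        # car plus both margins
    return available_m >= required_m

# Tractor taking 1.8 m of a 3.0 m lane, 1.9 m wide car, 1.5 m bike path:
print(can_edge_onto_path(3.0, 1.8, 1.9, 1.5))  # True -- the path makes the pass work
print(can_edge_onto_path(3.0, 1.8, 1.9, 0.0))  # False -- without it, wait instead
```

The real system works with continuously updated perception rather than fixed widths, but the decision it reaches, treating the bike path as borrowable space when and only when the clearance math works out, is the same in spirit.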

Such feats align closely with a growing library of impressive FSD maneuvers documented on camera worldwide.

In urban Amsterdam, for instance, FSD has navigated some of the world’s densest cyclist environments, weaving through hundreds of unpredictable bike movements on canal-side streets with tram tracks and pedestrians.

One uncut drive showed it yielding smoothly at crossings, overtaking where needed, and even handling a near-perfect auto-park in a tight residential spot, demonstrating the same low-speed precision seen in the rural clips.

Teslas using FSD have tackled turbo roundabouts in the Netherlands, complex multi-lane circles notorious for their challenging geometry, merging confidently while yielding to traffic. Similar clips show smooth handling of construction zones, emergency vehicle pull-overs, and gated parking barriers, where the car stops precisely, waits for clearance, and proceeds without driver input.

Collectively, these examples illustrate FSD’s evolution toward handling the unpredictable.

The rural Netherlands maneuvers aren’t isolated. Instead, they reflect a pattern of spatial awareness, cyclist deference, and traffic anticipation seen from city streets to highways.

As FSD continues refining through real-world data, videos like this one are certainly building a compelling case for its readiness on Europe’s varied roads.

Tesla utilizes its ‘Rave Cave’ for awesome new safety feature

Credit: Tesla | X

Tesla is utilizing its ‘Rave Cave’ for an awesome new safety feature that will arrive with the upcoming Spring Update for 2026.

Part of the massive interior overhaul of both the Model 3 “Highland” and Model Y “Juniper” was the addition of interior accent lighting to help bring out the mood of the vehicle, increase the customization of the interior, and to create a unique listening experience.

Tesla added a Sync Lights feature that strobes the accent strips to the beat of the music.

It is one of the coolest non-functional features in a Tesla: it does not improve the driving of the vehicle, but it makes for a fun, personal addition to the interior.

However, Tesla is going to take it one step further, as the Rave Cave lights will now be used for blind spot recognition. This feature will be added as the Spring 2026 Update starts to roll out.

Tesla writes:

“Accent lights now turn red when an object is in your blind spot and your turn signal is engaged, or when an approaching object is detected while parked.”

This neat new safety feature increases the likelihood that a driver operating their Tesla manually will notice the blind spot warnings currently shown on the A-pillar and the center touchscreen.

These new alerts will also warn drivers of cross traffic as they back out of a parking space with little to no visibility of what is coming. It is a great addition that increases vehicle safety while making use of hardware already installed in these Model 3 and Model Y units.
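
The trigger logic Tesla describes reduces to two conditions. The function below is an illustrative sketch of that described behavior, not Tesla firmware; the function name and the “ambient” default state are invented for the example.

```python
def accent_light_color(object_in_blind_spot, turn_signal_on,
                       parked, approaching_object):
    """Sketch of the described accent-light trigger logic (illustrative only).

    Lights turn red when an object sits in the blind spot while the turn
    signal is engaged, or when an approaching object is detected while parked.
    """
    if (object_in_blind_spot and turn_signal_on) or (parked and approaching_object):
        return "red"
    return "ambient"

print(accent_light_color(True, True, False, False))   # red -- signaling into occupied blind spot
print(accent_light_color(True, False, False, False))  # ambient -- no signal, no warning
print(accent_light_color(False, False, True, True))   # red -- cross traffic while parked
```

Note that a blind-spot object alone does not trigger the lights; the turn signal acts as the gate, which keeps the warning tied to the driver’s actual intent to change lanes.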

The Model 3 and Model Y were the central focus of the Spring 2026 Update, especially since the Model S and Model X are nearly phased out, with only a few hundred units left. Additionally, Tesla included new Immersive Sound and Car Visualization features for the Model 3 and Model Y specifically in this new update.
