

News
Google’s DeepMind unit develops AI that predicts 3D layouts from partial images
Google’s DeepMind unit, the same division behind AlphaGo, the AI that outplayed the best Go player in the world, has developed a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with a machine form of perceptual intuition.
According to Google’s official DeepMind blog, the goal of its recent AI project is to make neural networks easier to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images, which makes training a tedious, lengthy, and expensive process: every aspect of every object in each scene has to be labeled by a person.
The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), is designed to remove this dependency on human-annotated data: the GQN infers a space’s three-dimensional layout and features even when provided with only partial images of that space.
Similar to babies and animals, DeepMind’s GQN learns by making observations of the world around it, picking up plausible scenes and their geometrical properties without any human labeling. The GQN consists of two parts: a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
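The two-part design maps naturally onto code. Below is a minimal sketch of that structure, assuming PyTorch; the layer sizes, the 7-number viewpoint encoding, and the simple sum-pooling aggregator are illustrative assumptions rather than DeepMind’s published architecture, whose generator is a more sophisticated recurrent latent-variable model.

```python
# A minimal sketch of the GQN's two-part structure, assuming PyTorch.
# Layer sizes, the 7-number viewpoint encoding, and the sum-pooling
# aggregator are illustrative assumptions, not DeepMind's published design.
import torch
import torch.nn as nn

class RepresentationNetwork(nn.Module):
    """Encodes one (image, camera viewpoint) pair into a scene vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Assumed viewpoint format: xyz position plus yaw/pitch as sin/cos.
        self.fc = nn.Linear(64 + 7, repr_dim)

    def forward(self, image, viewpoint):
        return self.fc(torch.cat([self.conv(image), viewpoint], dim=-1))

class GenerationNetwork(nn.Module):
    """'Imagines' the scene from a previously unobserved query viewpoint."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.fc = nn.Linear(repr_dim + 7, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, scene_repr, query_viewpoint):
        x = self.fc(torch.cat([scene_repr, query_viewpoint], dim=-1))
        return self.deconv(x.view(-1, 64, 8, 8))

# Summing per-observation vectors gives a fixed-size scene description
# for any number of context views, including a single image.
encoder, decoder = RepresentationNetwork(), GenerationNetwork()
images, viewpoints = torch.rand(3, 3, 64, 64), torch.rand(3, 7)
scene = encoder(images, viewpoints).sum(dim=0, keepdim=True)
predicted = decoder(scene, torch.rand(1, 7))  # 64x64 render of unseen view
```

The key property this illustrates is that no human labels appear anywhere: the training signal comes from comparing the rendered image against a held-out view of the same scene.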
As the DeepMind team notes, however, the training methods used to develop the GQN are still limited compared to traditional computer vision techniques. Its creators remain optimistic that as new sources of data become available and hardware improves, the GQN framework could be applied to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes the GQN could prove useful in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition, a capability that is extremely desirable for companies focused on autonomy, like Tesla.

Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]
In a talk at Train AI 2018 in May, Tesla’s head of AI Andrej Karpathy discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot by feeding the system massive data sets collected from the company’s fleet of vehicles. This data is gathered through means such as Shadow Mode, which lets the company collect statistics on the false positives and false negatives of Autopilot’s software.
During his talk, Karpathy explained why features such as blinker detection are challenging for Tesla’s neural network to learn: vehicles on the road have their turn signals off most of the time, and blinker appearance varies widely from one car brand to another. Karpathy also discussed how Tesla has transitioned a huge portion of its AI team to labeling roles, doing exactly the kind of human annotation that Google DeepMind wants to avoid with the GQN.
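The blinker problem Karpathy describes is a classic case of class imbalance: frames with an active turn signal are rare, so an unweighted loss rewards a network for simply predicting “off” everywhere. Below is a minimal sketch of one standard remedy, up-weighting the rare positive class; the 1-in-50 ratio and the use of PyTorch are illustrative assumptions, not details of Tesla’s implementation.

```python
# A sketch of one standard remedy for the imbalance Karpathy describes:
# up-weighting the rare "blinker on" frames in a binary cross-entropy loss.
# This illustrates the general technique only; it is not Tesla's code.
import torch
import torch.nn as nn

# Suppose (illustratively) 1 in 50 sampled frames shows an active signal.
pos_fraction = 1 / 50
pos_weight = torch.tensor([(1 - pos_fraction) / pos_fraction])  # = 49.0

# BCEWithLogitsLoss scales the loss on positive labels by pos_weight,
# so a missed blinker costs ~49x more than a false alarm on an "off" frame.
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 1)                    # per-frame detector scores
labels = torch.zeros(8, 1); labels[0] = 1.0   # one rare positive in batch
loss = criterion(logits, labels)
```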
Musk has also mentioned that Tesla’s upcoming all-electric supercar, the next-generation Tesla Roadster, would feature an “Augmented Mode” designed to enhance drivers’ ability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set to embrace AR technology, new techniques for training AI such as Google DeepMind’s GQN could be a perfect fit for the next generation of vehicles entering the automotive market.
News
Tesla Model 3 gets perfect 5-star Euro NCAP safety rating
Tesla prides itself on producing some of the safest vehicles on the road today.

Based on recent findings from the Euro NCAP, the 2025 Model 3 sedan continues this tradition, with the vehicle earning a 5-star overall safety rating from the agency.
Standout Safety Features
As shown on the Euro NCAP’s official website, the 2025 Model 3 achieved scores of 90% for Adult Occupant protection, 93% for Child Occupant protection, 89% for Vulnerable Road Users, and 87% for Safety Assist. The rating, per the Euro NCAP, applies to the Model 3 Rear Wheel Drive, Long Range Rear Wheel Drive, Long Range All Wheel Drive, and Performance All Wheel Drive.
The Euro NCAP highlighted a number of the Model 3’s safety features, such as its Active Hood, which automatically lifts during collisions to mitigate injury risks to vulnerable road users, and its Automatic Emergency Braking system, which now detects motorcycles through an upgraded algorithm. The Euro NCAP also noted that the Model 3 can prevent a door from being opened if someone is approaching in the vehicle’s blind spot.
New Safety Features
In a post from its official Tesla Europe & Middle East account on X, Tesla noted that it is also introducing new features that make the Model 3 even safer than it is today. These include head-on collision avoidance and crossing traffic AEB, as well as Child Left Alone Detection, among other safety features.
“We also introduced new features to improve Safety Assist functionality even further – like head-on collision avoidance & crossing traffic AEB – to detect & respond to potential hazards faster, helping avoid accidents in the first place.
“Lastly, we released Child Left Alone Detection – if an unattended child is detected, the vehicle will turn on HVAC & alert caregivers via phone app & the vehicle itself (flashing lights/audible alert). Because we’re using novel in-cabin radar sensing, your Tesla is able to distinguish between adult vs child – reduced annoyance to adults, yet critical safety feature for kids,” Tesla wrote in its post on X.
Below is the Euro NCAP’s safety report on the 2025 Tesla Model 3 sedan.
Euro NCAP 2025 Tesla Model 3 Datasheet by Simon Alvarez on Scribd
Elon Musk
USDOT Secretary visits Tesla Giga Texas, hints at national autonomous vehicle standards
The Transportation Secretary also toured the factory’s production lines and spoke with CEO Elon Musk.

United States Department of Transportation (USDOT) Secretary Sean Duffy recently visited Tesla’s Gigafactory Texas complex, where he toured the factory’s production lines and spoke with CEO Elon Musk. In a video posted following his Giga Texas visit, Duffy noted that he believes there should be a national standard for autonomous vehicles in the United States.
Duffy’s Giga Texas Visit
As seen in videos of his Giga Texas visit, the Transportation Secretary seemed to appreciate the work Tesla has been doing to put the United States at the forefront of innovation. “Tesla is one of the many companies helping our country reach new heights. USDOT will be right there all the way to make sure Americans stay safe,” Duffy wrote in a post on X.
He also praised Tesla for its autonomous vehicle program, highlighting that “We need American companies to keep innovating so we can outcompete the rest of the world.”
National Standard
While speaking with Tesla CEO Elon Musk, the Transportation Secretary stated that other autonomous ride-hailing companies have been lobbying for a national standard for self-driving cars. Musk shared the sentiment, stating that “It’d be wonderful for the United States to have a national set of rules for autonomous driving as opposed to 50 independent sets of rules on a state-by-state basis.”
Duffy agreed with the CEO’s point, stating that, “You can’t have 50 different rules for 50 different states. You need one standard.” He also noted that the Transportation Department has asked autonomous vehicle companies to submit data. By doing so, the USDOT could develop a standard for the entire United States, allowing self-driving cars to operate in a manner that is natural and safe.
News
Tesla posts Optimus’ most impressive video demonstration yet
The humanoid robot was able to complete all the tasks through a single neural network.

When Elon Musk spoke with CNBC’s David Faber in an interview at Giga Texas, he reiterated the idea that Optimus will be one of Tesla’s biggest products. Seemingly to highlight the CEO’s point, the official Tesla Optimus account on social media platform X shared what could very well be the most impressive demonstration of the humanoid robot’s capabilities to date.
Optimus’ Newest Demonstration
In its recent video demonstration, the Tesla Optimus team showed the humanoid robot performing a variety of tasks. These include household chores such as taking out the trash, using a broom and a vacuum cleaner, tearing off a paper towel, stirring a pot of food, opening a cabinet, and closing a curtain, among others. The video also featured Optimus picking up a Model X fore link and placing it on a dolly.
Most notable in the Tesla Optimus team’s demonstration was the fact that the humanoid robot completed all the tasks using a single neural network. The robot’s actions were learned directly from first-person videos of humans performing similar tasks, an approach that should pave the way for Optimus to learn and refine new skills quickly and reliably.
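Learning actions from first-person human video in this way is broadly a form of behavior cloning: a single network maps what the robot sees, plus a task command, to the actions a human demonstrator took. Below is a minimal sketch of that idea, assuming PyTorch; the network shapes, the 28-joint action space, and the pre-computed task embedding are illustrative assumptions, not details of Tesla’s system.

```python
# A minimal behavior-cloning sketch of the approach the post describes:
# one network mapping first-person video frames plus a natural-language
# task command to robot actions. Every detail here (shapes, the 28-joint
# action space, the task embedding) is an illustrative assumption.
import torch
import torch.nn as nn

class MultiTaskPolicy(nn.Module):
    def __init__(self, num_joints=28, text_dim=64):
        super().__init__()
        self.vision = nn.Sequential(           # encodes one video frame
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(             # fuses frame + task command
            nn.Linear(64 + text_dim, 256), nn.ReLU(),
            nn.Linear(256, num_joints),        # target joint positions
        )

    def forward(self, frame, task_embedding):
        return self.head(torch.cat([self.vision(frame), task_embedding], -1))

policy = MultiTaskPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Behavior cloning: regress the demonstrator's action for each frame.
# The "expert actions" would come from retargeting human motion in
# first-person clips onto the robot's joints (assumed given here).
frames = torch.rand(16, 3, 128, 128)
task = torch.rand(16, 64)        # embedding of e.g. "take out the trash"
expert_actions = torch.rand(16, 28)
loss = nn.functional.mse_loss(policy(frames, task), expert_actions)
loss.backward()
optimizer.step()
```

Because the task is an input rather than baked into the weights, one such network can multi-task across chores and be invoked by voice or text, which matches the single-network, language-commanded behavior described below.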
Tesla VP for Optimus Shares Insight
In a follow-up post on X, Tesla Vice President of Optimus (Tesla Bot) Milan Kovac stated that one of the team’s goals is to have Optimus learn straight from internet videos of humans performing tasks, including footage captured in third person or by random cameras.
“We recently had a significant breakthrough along that journey, and can now transfer a big chunk of the learning directly from human videos to the bots (1st person views for now). This allows us to bootstrap new tasks much faster compared to teleoperated bot data alone (heavier operationally).
“Many new skills are emerging through this process, are called for via natural language (voice/text), and are run by a single neural network on the bot (multi-tasking). Next: expand to 3rd person video transfer (aka random internet), and push reliability via self-play (RL) in the real-, and/or synthetic- (sim / world models) world,” Kovac wrote in his post on X.