Google’s DeepMind unit develops AI that predicts 3D layouts from partial images
Google’s DeepMind unit, the same division behind AlphaGo, the AI that outplayed the world’s best Go players, has created a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with an AI form of perceptual intuition.
According to Google’s official DeepMind blog, the goal of its recent AI project is to make neural networks easier and simpler to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images. This makes training a tedious, lengthy, and expensive process, as every aspect of every object in each scene in the dataset has to be labeled by a person.
The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), removes this dependency on human-annotated data: it is designed to infer a space’s three-dimensional layout and features despite being provided with only partial images of that space.
Similar to babies and animals, DeepMind’s GQN learns by making observations of the world around it. In doing so, the new AI learns about plausible scenes and their geometrical properties even without human labeling. The GQN consists of two parts: a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
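DeepMind’s blog describes this two-part split at a high level but not the implementation. For readers curious how the pieces fit together, below is a minimal PyTorch sketch of a representation network and a generation network; the layer sizes, the seven-value viewpoint encoding, and the simple feed-forward decoder are illustrative assumptions, not DeepMind’s actual architecture, which uses a far more elaborate recurrent, latent-variable generator.

```python
import torch
import torch.nn as nn

class RepresentationNetwork(nn.Module):
    """Encodes (image, camera viewpoint) pairs into a single scene vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Assumed 7-value viewpoint: xyz position plus sin/cos of yaw and pitch.
        self.fc = nn.Linear(64 + 7, repr_dim)

    def forward(self, images, viewpoints):
        # images: (B, N, 3, H, W), viewpoints: (B, N, 7) for N context views
        b, n = images.shape[:2]
        feats = self.conv(images.flatten(0, 1)).flatten(1)          # (B*N, 64)
        feats = torch.cat([feats, viewpoints.flatten(0, 1)], -1)    # (B*N, 71)
        per_view = self.fc(feats).view(b, n, -1)
        return per_view.sum(dim=1)  # order-invariant aggregation of observations

class GenerationNetwork(nn.Module):
    """'Imagines' an image of the scene from a query viewpoint."""
    def __init__(self, repr_dim=256, img_size=32):
        super().__init__()
        self.img_size = img_size
        self.fc = nn.Sequential(
            nn.Linear(repr_dim + 7, 512), nn.ReLU(),
            nn.Linear(512, 3 * img_size * img_size), nn.Sigmoid(),
        )

    def forward(self, scene_repr, query_viewpoint):
        x = torch.cat([scene_repr, query_viewpoint], dim=-1)
        return self.fc(x).view(-1, 3, self.img_size, self.img_size)

# Toy usage: three context views of a 32x32 scene, rendered from a new viewpoint.
rep_net, gen_net = RepresentationNetwork(), GenerationNetwork()
images = torch.rand(1, 3, 3, 32, 32)
viewpoints = torch.rand(1, 3, 7)
query = torch.rand(1, 7)
scene = rep_net(images, viewpoints)
predicted_view = gen_net(scene, query)
print(predicted_view.shape)  # torch.Size([1, 3, 32, 32])
```

In training, the predicted view would be compared against the real image captured from the query viewpoint, which is how such a system can learn scene geometry without any human labels.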
As noted by the DeepMind team, the training methods used in the development of the GQN are still limited compared to traditional computer vision techniques. The AI’s creators remain optimistic, however, that as new sources of data become available and hardware improves, the GQN framework could be applied to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes that the GQN could be a useful component in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition that is extremely desirable for companies focused on autonomy, like Tesla.

Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]
In a talk at Train AI 2018 last May, Tesla’s head of AI Andrej Karpathy discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot by feeding the system massive datasets from the company’s fleet of vehicles. This data is collected through means such as Shadow Mode, which lets the company gather statistics on false positives and false negatives in the Autopilot software.
During his talk, Karpathy discussed how features such as blinker detection are challenging for Tesla’s neural network to learn, considering that vehicles on the road have their turn signals off most of the time and blinkers vary widely from one car brand to another. He also noted that Tesla has transitioned a huge portion of its AI team to labeling roles, doing exactly the kind of human annotation that Google DeepMind wants to avoid with the GQN.
Musk has also mentioned that Tesla’s upcoming all-electric supercar, the next-generation Tesla Roadster, would feature an “Augmented Mode” that would enhance drivers’ capability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set on embracing AR technology, the emergence of new AI training techniques such as Google DeepMind’s GQN could be a perfect fit for the next generation of vehicles about to enter the automotive market.
Tesla Model 3 named New Zealand’s best passenger car of 2025
Tesla flipped the switch on Full Self-Driving (Supervised) in September, turning every Model 3 and Model Y into New Zealand’s most advanced production car overnight.
The refreshed Tesla Model 3 has won the DRIVEN Car Guide AA Insurance NZ Car of the Year 2025 award in the Passenger Car category, beating all traditional and electric rivals.
Judges praised the all-electric sedan’s driving dynamics, value-packed EV tech, and the game-changing addition of Full Self-Driving (Supervised) that went live in New Zealand this September.
Why the Model 3 clinched the crown
DRIVEN admitted they were late to the “Highland” party because the updated sedan arrived in New Zealand as a 2024 model, just before the new Model Y stole the headlines. Yet two things forced a re-evaluation this year.
First, experiencing the new Model Y reminded testers how many big upgrades originated in the Model 3, such as the smoother ride, quieter cabin, ventilated seats, rear touchscreen, and stalk-less minimalist interior. Second, and far more importantly, Tesla flipped the switch on Full Self-Driving (Supervised) in September, turning every Model 3 and Model Y into New Zealand’s most advanced production car overnight.
FSD changes everything for Kiwi buyers
The publication said the entry-level rear-wheel-drive version is “good to drive and represents a lot of EV technology for the money,” but highlighted that FSD elevates it into another league. “Make no mistake, despite the ‘Supervised’ bit in the name that requires you to remain ready to take control, it’s autonomous and very capable in some surprisingly tricky scenarios,” the review stated.
At NZ$11,400, FSD is far from cheap, but Tesla also offers FSD (Supervised) on a $159 monthly subscription, making the tech accessible without the full upfront investment. That’s a game-changer for buyers who want the company’s most advanced system without forking over a huge sum all at once.
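As a rough illustration of that tradeoff, the back-of-the-envelope math below assumes the quoted NZ$11,400 purchase price and the $159 monthly rate are in the same currency and stay constant; neither assumption is confirmed in the review.

```python
# Rough break-even between buying FSD outright and subscribing monthly.
# Figures from the article; assumes both are NZD and the rate stays flat.
purchase_price = 11_400
monthly_rate = 159

breakeven_months = purchase_price / monthly_rate
print(f"Subscription matches the upfront price after ~{breakeven_months:.0f} months "
      f"(~{breakeven_months / 12:.1f} years)")
# Subscription matches the upfront price after ~72 months (~6.0 years)
```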
Tesla starts rolling out FSD V14.2.1 to AI4 vehicles including Cybertruck
FSD V14.2.1 was released just about a week after the initial FSD V14.2 update was rolled out.
It appears that the Tesla AI team burned the midnight oil, releasing FSD V14.2.1 on Thanksgiving. The update has been reported by owners of AI4 vehicles, including the Cybertruck.
For the Tesla AI team, at least, it appears that work really does not stop.
FSD V14.2.1
Initial posts about FSD V14.2.1 were shared by Tesla owners on social media platform X. As per the Tesla owners, V14.2.1 appears to be a point update designed to polish the features and capabilities that have been available in FSD V14. A look at the release notes for FSD V14.2.1, however, shows that an extra line has been added.
“Camera visibility can lead to increased attention monitoring sensitivity.”
Whether this results in more drivers being alerted to pay attention to the road remains to be seen. It will likely become evident once the first batch of Tesla owners who received V14.2.1 start sharing their first-drive impressions of the update. Despite the release landing on Thanksgiving, it would not be surprising if first-impressions videos of FSD V14.2.1 are shared today just the same.
Rapid FSD releases
What is rather interesting and impressive is the fact that FSD V14.2.1 was released just about a week after the initial FSD V14.2 update was rolled out. This bodes well for Tesla’s FSD users, especially since CEO Elon Musk has stated in the past that the V14.2 series will be for “widespread use.”
FSD V14 has so far received numerous positive reviews from Tesla owners, with many drivers noting that the system now drives better than most human drivers because it is cautious, confident, and considerate at the same time. The only question now, really, is whether the V14.2 series makes it to the company’s wide FSD fleet, which is still populated by numerous HW3 vehicles.
Waymo rider data hints that Tesla’s Cybercab strategy might be the smartest, after all
These observations all but validate Tesla’s controversial two-seat Cybercab strategy, which has drawn a lot of criticism since it was unveiled last year.
Toyota Connected Europe designer Karim Dia Toubajie has highlighted a particular trend that became evident in Waymo’s Q3 2025 occupancy stats. As it turned out, 90% of the trips taken by the driverless taxis carried two or fewer passengers.
These observations all but validate Tesla’s controversial two-seat Cybercab strategy, which has drawn a lot of criticism since it was unveiled last year.
Toyota designer observes a trend
Karim Dia Toubajie, Lead Product Designer (Sustainable Mobility) at Toyota Connected Europe, analyzed Waymo’s latest California Public Utilities Commission filings and posted the results on LinkedIn this week.
“90% of robotaxi trips have 2 or less passengers, so why are we using 5-seater vehicles?” Toubajie asked. He continued: “90% of trips have 2 or less people, 75% of trips have 1 or less people.” He accompanied his comments with a graphic of Waymo’s occupancy rates: 71% of trips carried one passenger, 15% carried two, 6% carried three, 5% carried no passengers at all, and only 3% carried four.
The data excludes operational trips like depot runs or charging. Even so, Toubajie pointed out that most of the time, Waymo’s full-size self-driving taxis are transporting just one or two people, and at times no passengers at all. “This means that most of the time, the vehicle being used significantly outweighs the needs of the trip,” the Toyota designer wrote in his post.
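As a quick sanity check on the cumulative figures Toubajie cites, the bucketed shares from the graphic can simply be summed. The buckets themselves appear to be rounded, which is why the totals land near, rather than exactly on, the quoted 90% and 75%.

```python
# Occupancy shares taken from the graphic of Waymo's Q3 2025 CPUC filing data.
occupancy_share = {0: 5, 1: 71, 2: 15, 3: 6, 4: 3}  # passengers -> % of trips

two_or_fewer = sum(v for k, v in occupancy_share.items() if k <= 2)
one_or_fewer = sum(v for k, v in occupancy_share.items() if k <= 1)
print(f"Trips with 2 or fewer passengers: {two_or_fewer}%")  # 91% (~90% as quoted)
print(f"Trips with 1 or fewer passengers: {one_or_fewer}%")  # 76% (~75% as quoted)
```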
Cybercab suddenly looks perfectly sized
Toubajie gave a nod to Tesla’s approach. “The Tesla Cybercab announced in 2024, is a 2-seater robotaxi with a 50kWh battery but I still believe this is on the larger side of what’s required for most trips,” he wrote.
With Waymo’s own numbers now showing that 90% of demand fits in two seats or fewer, the steering wheel-free, lidar-free Cybercab looks like the smartest play in the room. The Cybercab is designed to be easy to produce, with CEO Elon Musk commenting that its production line would resemble a consumer electronics factory more than an automotive plant. This suggests the Cybercab could saturate the roads quickly once it is deployed.
While the Cybercab will likely take the lion’s share of Tesla’s ride-hailing passengers, the Model 3 sedan and Model Y crossover would be well suited for the remaining roughly 9% of riders who need larger vehicles. This should be easy for Tesla to implement, as the Model Y and Model 3 are both mass-market vehicles.
