
Google’s DeepMind unit develops AI that predicts 3D layouts from partial images

[Credit: Google DeepMind]


Google’s DeepMind unit, the same division behind AlphaGo, the AI that defeated the world’s best Go player, has developed a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with an AI form of perceptual intuition.

According to Google’s official DeepMind blog, the goal of its recent AI project is to make neural networks simpler to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images. This makes training a tedious, lengthy, and expensive process, as every aspect of every object in each scene has to be labeled by a person.

The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), is designed to remove this dependency on human-annotated data: the GQN can infer a space’s three-dimensional layout and features despite being provided with only partial images of it.

Similar to babies and animals, DeepMind’s GQN learns by observing the world around it. In doing so, the AI learns about plausible scenes and their geometrical properties without human labeling. The GQN is composed of two parts: a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
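The two-part design can be sketched as a toy example. The networks below are illustrative stand-ins, not DeepMind’s actual architecture, and the vector sizes are arbitrary assumptions; the point is the GQN’s key idea that per-view representations are summed into one scene vector, which a generation network then queries from a new viewpoint:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights: a representation network that encodes each
# (image, viewpoint) observation, and a generation network that renders
# a prediction for an unseen query viewpoint.
W_repr = rng.normal(size=(16, 64 + 2))  # encoder weights (toy sizes)
W_gen = rng.normal(size=(64, 16 + 2))   # decoder weights (toy sizes)

def represent(image, viewpoint):
    """Encode one observation into a scene-representation vector."""
    x = np.concatenate([image, viewpoint])
    return np.tanh(W_repr @ x)

def generate(scene_vector, query_viewpoint):
    """'Imagine' the scene from a previously unobserved viewpoint."""
    x = np.concatenate([scene_vector, query_viewpoint])
    return np.tanh(W_gen @ x)

# Summing per-view representations lets any number of partial
# observations be combined into a single scene description.
observations = [(rng.normal(size=64), rng.normal(size=2)) for _ in range(3)]
scene = sum(represent(img, vp) for img, vp in observations)

predicted_view = generate(scene, query_viewpoint=np.array([0.5, -0.5]))
print(predicted_view.shape)  # one 64-dim "rendered image" per query
```

Because the scene vector is a simple sum, the same code works whether one image or a dozen are observed, which mirrors the GQN’s ability to build a representation from as little as a single image.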

As the DeepMind team notes, however, the training methods used to develop the GQN are still limited compared to traditional computer vision techniques. The AI’s creators remain optimistic that as new sources of data become available and hardware improves, the GQN framework could be applied to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes the GQN could prove useful in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition, something extremely desirable for companies focused on autonomy, like Tesla.


Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]

In a talk at Train AI 2018 last May, Tesla’s head of AI Andrej Karpathy discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot on massive datasets collected from its fleet of vehicles, including through Shadow Mode, which lets the company gather statistics on the Autopilot software’s false positives and false negatives.
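A minimal sketch of the kind of false-positive/false-negative bookkeeping this sort of shadow logging enables (the event format and labels here are hypothetical, not Tesla’s actual telemetry):

```python
def tally_errors(events):
    """Compare what the software predicted against what actually happened."""
    false_positives = false_negatives = 0
    for predicted, actual in events:
        if predicted and not actual:
            false_positives += 1  # software flagged an event that didn't happen
        elif actual and not predicted:
            false_negatives += 1  # software missed a real event
    return false_positives, false_negatives

# e.g. (predicted_lane_change, driver_actually_changed_lanes) per event
log = [(True, True), (True, False), (False, True), (False, False)]
print(tally_errors(log))  # -> (1, 1)
```

Aggregated across a large fleet, counts like these show where the software disagrees with real driver behavior without the system ever having to take control.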

During his talk, Karpathy described how features such as blinker detection are challenging for Tesla’s neural network to learn, considering that vehicles on the road have their turn signals off most of the time and blinkers vary greatly from one car brand to another. Karpathy also noted that Tesla has transitioned a huge portion of its AI team to labeling roles, doing exactly the human annotation that Google DeepMind wants to avoid with the GQN.

Musk has also mentioned that Tesla’s upcoming all-electric supercar, the next-generation Tesla Roadster, would feature an “Augmented Mode” that would enhance drivers’ capability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set on embracing AR technology, the emergence of new techniques for training AI such as Google DeepMind’s GQN would be a perfect fit for the next generation of vehicles about to enter the automotive market.

Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips, or even just to say hello, send a message to his email, simon@teslarati.com, or his handle on X, @ResidentSponge.



Tesla aims to combat common Full Self-Driving problem with new patent

Tesla writes in the patent that its autonomous and semi-autonomous vehicles are heavily reliant on camera systems to navigate and interact with their environment.


Credit: @samsheffer | x

Tesla is aiming to combat a common Full Self-Driving problem with a new patent.

One issue with Tesla’s vision-based approach is that sunlight glare can interfere with everyday driving. Full Self-Driving is certainly an impressive technology, but there are still problems like this that Tesla is working to solve as development continues.

Unfortunately, glare is extremely difficult to get around; even human drivers need ways to combat it, commonly relying on sunglasses or sun visors for better visibility.

Cameras obviously have no such option, but a recently published Tesla patent aims to fight sun glare through a “glare shield.”

Tesla writes in the patent that its autonomous and semi-autonomous vehicles are heavily reliant on camera systems to navigate and interact with their environment.


The ability to see its surroundings is crucial to the system’s accurate performance, and glare is one source of interference that has yet to be confronted.

The patent describes “a textured surface composed of an array of micro-cones, or cone-shaped formations, which serve to scatter incident light in various directions, thereby reducing glare and improving camera vision.”

The patent was first spotted by Not a Tesla App.

The design of the micro-cones is the first piece of the puzzle in fighting excess glare. The patent says they are “optimized in size, angle, and orientation to minimize Total Hemispherical Reflectance (THR) and reflection penalty, enhancing the camera’s ability to accurately interpret visual data.”

Additionally, there is an electromechanical system for dynamic orientation adjustment, which will allow the micro-cones to move based on the angle of external light sources.
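As a toy illustration of that dynamic-orientation idea, the sketch below tilts the cone array toward the dominant light source so less light reflects straight back into the lens. The geometry, the actuator limit, and the function names are all hypothetical assumptions for illustration, not details from Tesla’s filing:

```python
MAX_TILT_DEG = 30.0  # assumed mechanical limit of the actuator

def cone_tilt(sun_elevation_deg, sun_azimuth_deg):
    """Return (tilt, rotation) commands that face the cones toward the sun.

    A low sun (small elevation angle) calls for a larger tilt, clamped
    to the assumed mechanical range; azimuth is normalized to 0-360.
    """
    tilt = min(max(90.0 - sun_elevation_deg, 0.0), MAX_TILT_DEG)
    rotation = sun_azimuth_deg % 360.0
    return tilt, rotation

print(cone_tilt(sun_elevation_deg=75.0, sun_azimuth_deg=400.0))
# -> (15.0, 40.0)
```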


The glare shield is not the only solution Tesla is mulling for sunlight glare, as the company has also worked on two other ways to combat the problem. One approach it has discussed is a direct photon count.

CEO Elon Musk said during the Q2 Earnings Call:

“We use an approach which is direct photon count. When you see a processed image, so the image that goes from the sort of photon counter — the silicon photon counter — that then goes through a digital signal processor or image signal processor, that’s normally what happens. And then the image that you see looks all washed out, because if you point the camera at the sun, the post-processing of the photon counting washes things out.”

Future hardware iterations, like Hardware 5 and Hardware 6, could also integrate better solutions for the sun glare issue, such as neutral density filters or heated lenses.



Delaware Supreme Court reinstates Elon Musk’s 2018 Tesla CEO pay package

The unanimous decision criticized the prior total rescission as “improper and inequitable,” arguing that it left Musk uncompensated for six years of transformative leadership at Tesla.


Credit: Gage Skidmore, CC BY-SA 4.0, via Wikimedia Commons

The Delaware Supreme Court has overturned a lower court ruling, reinstating Elon Musk’s 2018 compensation package originally valued at $56 billion but now worth approximately $139 billion due to Tesla’s soaring stock price. 

The unanimous decision criticized the prior total rescission as “improper and inequitable,” arguing that it left Musk uncompensated for six years of transformative leadership at Tesla. Musk quickly celebrated the outcome on X, stating that he felt “vindicated.” He also shared his gratitude to TSLA shareholders.

Delaware Supreme Court makes a decision

In a 49-page ruling Friday, the Delaware Supreme Court reversed Chancellor Kathaleen McCormick’s 2024 decision that voided the 2018 package over alleged board conflicts and inadequate shareholder disclosures. The high court acknowledged varying views on liability but agreed rescission was excessive, stating it “leaves Musk uncompensated for his time and efforts over a period of six years.”

The 2018 plan granted Musk options on about 304 million shares upon hitting aggressive milestones, all of which were achieved ahead of schedule. Shareholders overwhelmingly approved the plan in 2018 and ratified it once again in 2024 after the Delaware lower court struck it down. The case against Musk’s 2018 pay package was filed by plaintiff Richard Tornetta, who held just nine shares when the compensation plan was approved.

A hard-fought victory

As noted in a Reuters report, Tesla’s win avoids a potential $26 billion earnings hit from replacing the award at current prices. Tesla, now Texas-incorporated, had hedged with interim plans, including a November 2025 shareholder-approved package potentially worth $878 billion tied to Robotaxi and Optimus goals and other extremely aggressive operational milestones.


The saga surrounding Elon Musk’s 2018 pay package ultimately damaged Delaware’s corporate appeal, prompting a number of high-profile firms, such as Dropbox, Roblox, Trade Desk, and Coinbase, to follow Tesla’s exodus out of the state. What added more fuel to the issue was the fact that Tornetta’s legal team, following the lower court’s 2024 decision, demanded a fee request of more than $5.1 billion worth of TSLA stock, which was equal to an hourly rate of over $200,000.



Tesla Cybercab tests go into overdrive with production-ready units

Tesla is ramping its real-world tests of the Cybercab, with multiple sightings of the vehicle being reported across social media this week.


Credit: @JT59052914/X

Tesla is ramping up its real-world tests of the Cybercab, with multiple sightings of the autonomous two-seater reported across social media this week. Based on videos of the vehicle shared online, it appears that Cybercab tests are underway in multiple states.

Recent Cybercab sightings

Reports of Cybercab tests have ramped up this week, with a vehicle that looked like a production-ready prototype spotted at Apple’s Visitor Center in California. Interestingly, the vehicle in this sighting was equipped with a steering wheel, and it featured some changes to the design of its brake lights.

The Cybercab was also filmed on the Fremont factory’s test track, again in what looked like production-ready form, as was a Cybercab spotted undergoing real-world tests in Austin, Texas. Overall, these sightings suggest that Cybercab testing is fully underway and the vehicle is genuinely moving toward production.

Production design all but finalized?

Recently, a near-production-ready Cybercab was showcased at Tesla’s Santana Row showroom in San Jose. The vehicle was equipped with frameless windows, dual windshield wipers, powered butterfly door struts, an extended front splitter, an updated lightbar, new wheel covers, and a license plate bracket. Interior updates include redesigned dash/door panels, refined seats with center cupholders, updated carpet, and what appeared to be improved legroom.

There seems to be a good chance that the Cybercab’s design is all but finalized, at least judging by Elon Musk’s comments at the 2025 Annual Shareholder Meeting. During the event, Musk confirmed that the vehicle will enter production around April 2026, with quite ambitious production targets.
