News

Google’s DeepMind unit develops AI that predicts 3D layouts from partial images

[Credit: Google DeepMind]

Google’s DeepMind unit, the same division behind AlphaGo, the AI that outplayed the best Go player in the world, has developed a neural network capable of rendering an accurate 3D environment from just a few still images, filling in the gaps with an AI form of perceptual intuition.

According to Google’s official DeepMind blog, the goal of its recent AI project is to make neural networks easier and simpler to train. Today’s most advanced AI-powered visual recognition systems are trained on large datasets of human-annotated images. This makes training a tedious, lengthy, and expensive process, as every aspect of every object in each scene in the dataset has to be labeled by a person.

The DeepMind team’s new AI, dubbed the Generative Query Network (GQN), is designed to remove this dependency on human-annotated data: the GQN infers a space’s three-dimensional layout and features despite being provided with only partial images of that space.

Similar to babies and animals, DeepMind’s GQN learns by making observations of the world around it. In doing so, the AI learns about plausible scenes and their geometrical properties without any human labeling. The GQN consists of two parts — a representation network that produces a vector describing a scene, and a generation network that “imagines” the scene from a previously unobserved viewpoint. So far, the results of DeepMind’s training have been encouraging, with the GQN able to create representations of objects and rooms from just a single image.
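The two-network structure described above can be sketched in miniature. The snippet below is an illustrative toy, not DeepMind’s implementation: it swaps the paper’s convolutional encoder and recurrent generator for random linear maps, and all dimensions and function names here are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- placeholders, not the sizes used in DeepMind's work.
IMG_DIM, VP_DIM, REP_DIM, OUT_DIM = 16, 3, 8, 16

# Random "weights" standing in for the two trained networks.
W_repr = rng.normal(size=(REP_DIM, IMG_DIM + VP_DIM))
W_gen = rng.normal(size=(OUT_DIM, REP_DIM + VP_DIM))

def represent(image, viewpoint):
    """Representation network: encode one (image, viewpoint) observation."""
    x = np.concatenate([image.ravel(), viewpoint])
    return np.tanh(W_repr @ x)

def aggregate(representations):
    """The GQN sums per-observation codes into a single scene vector."""
    return np.sum(representations, axis=0)

def generate(scene_rep, query_viewpoint):
    """Generation network: 'imagine' the scene from an unobserved viewpoint."""
    x = np.concatenate([scene_rep, query_viewpoint])
    return np.tanh(W_gen @ x)

# A few partial observations of one scene (random stand-ins for images).
observations = [(rng.normal(size=IMG_DIM), rng.normal(size=VP_DIM))
                for _ in range(3)]
scene = aggregate([represent(img, vp) for img, vp in observations])

# Render the scene from a viewpoint the network never observed.
prediction = generate(scene, rng.normal(size=VP_DIM))
print(prediction.shape)  # (16,)
```

The one GQN idea preserved in this sketch is the sum-aggregation of per-observation codes, which lets the scene representation accept any number of input views.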

As noted by the DeepMind team, however, the training methods used so far in the GQN’s development remain limited compared to traditional computer vision techniques. The AI’s creators are nevertheless optimistic that, as new sources of data become available and hardware improves, the GQN framework could extend to higher-resolution images of real-world scenes. Ultimately, the DeepMind team believes the GQN could prove useful in technologies such as augmented reality and self-driving vehicles by giving them a form of perceptual intuition, something extremely desirable for companies focused on autonomy, like Tesla.

Google DeepMind’s GQN AI in action. [Credit: Google DeepMind]

In a talk at Train AI 2018 last May, Tesla’s head of AI Andrej Karpathy discussed the challenges involved in training the company’s Autopilot system. Tesla trains Autopilot on massive datasets collected from its fleet of vehicles, through means such as Shadow Mode, which lets the company gather statistics on the Autopilot software’s false positives and false negatives.

During his talk, Karpathy discussed how features such as blinker detection are challenging for Tesla’s neural network to learn, considering that vehicles on the road have their turn signals off most of the time and blinkers vary widely from one car brand to another. Karpathy also noted that Tesla has transitioned a huge portion of its AI team to labeling roles, doing exactly the kind of human annotation that Google DeepMind wants to avoid with the GQN.

Musk has also mentioned that Tesla’s upcoming all-electric supercar, the next-generation Tesla Roadster, would feature an “Augmented Mode” designed to enhance drivers’ ability to operate the high-performance vehicle. With Tesla’s flagship supercar seemingly set on embracing AR technology, new AI training techniques such as Google DeepMind’s GQN could be a perfect fit for the next generation of vehicles entering the automotive market.

Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips, or even just to say a simple hello, send a message to his email, simon@teslarati.com, or his handle on X, @ResidentSponge.


Elon Musk

Elon Musk shares timeframe for X Money early public access rollout

X Money is expected to enable financial transactions within the app, expanding the platform’s capabilities beyond social media features.


Credit: UK Government, CC BY 2.0, via Wikimedia Commons

Elon Musk has stated that X Money, the digital payments system being developed for social media platform X, is expected to enter early public access next month. 

The update was shared by Musk in a post on X. “𝕏 Money early public access will launch next month,” Musk wrote in his post.

As noted in a Reuters report, X Money is being developed as a digital payment service that’s directly integrated into the X platform. 

Musk has previously discussed plans to introduce payments and financial services as part of X’s broader development.

Since acquiring the platform in 2022, Musk has discussed expanding X to include a range of services such as messaging, media, and financial tools.

Elon Musk has shared his goal of transforming X into an “everything app.” During a previous podcast interview with members of the Tesla community, Musk mused about turning X into something similar to China’s WeChat, which allows users to shop, pay, communicate, and perform a variety of other tasks.

“In China, you do everything in WeChat… it’s kickass… Outside of China, there’s nothing like it, people live on one app. My idea would be like how about if we just copy WeChat,” Musk joked at the time.

To prepare for the rollout of X Money, X has partnered with payment company Visa to support the development of payment services for the platform’s users. The move could allow X to tap into the growing demand for digital and in-app financial transactions as the company builds additional services around its existing user base.

News

Tesla Cybercab display highlights interior wizardry in the small two-seater

Photos and videos of the production Cybercab were shared in posts on social media platform X.


Credit: Tesla Robotaxi/X

The Tesla Cybercab is currently on display at the U.S. Department of Transportation in Washington, D.C., and observations of the production vehicle are highlighting some of its notable design details. 

Observers of the Cybercab display unit noted that the two-seat Robotaxi provides unusually generous legroom for a vehicle of its size. Based on video of the vehicle, the compact two-seater appears to offer more legroom than Tesla’s larger vehicles such as the Model Y, Model X, and Cybertruck.

The Cybercab’s layout allows Tesla to dedicate nearly the entire cabin to passengers. The vehicle is designed without a steering wheel or pedals, which helps maximize interior space.

Footage from the display also highlights the Cybercab’s large center screen, which is positioned prominently in front of the passenger bench. The display appears intended to provide entertainment and ride information while the vehicle operates autonomously.

Images of the vehicle also show an additional camera integrated into the Cybercab’s C-pillar. The extra camera appears to expand the vehicle’s field of view, which would be useful as Tesla works toward fully unsupervised Full Self-Driving.

Tesla engineers have previously explained that the Cybercab was designed to be highly efficient both in manufacturing and in operation. Cybercab Lead Engineer Eric E. stated in 2024 that the Robotaxi would be built with roughly half the number of parts used in a Model 3 sedan.

“Two seats unlocks a lot of opportunity aerodynamically. It also means we cut the part count of Cybercab down by a substantial margin. We’re gonna be delivering a car that has roughly half the parts of Model 3 today,” the Tesla engineer said.

The Tesla engineer also noted that the Cybercab’s cargo area can accommodate multiple golf bags, two carry-on suitcases, and two full-size checked bags. Depending on size, the trunk can also fit certain bicycles and a foldable wheelchair, which is quite impressive for a car as compact as the Cybercab.

Elon Musk

Elon Musk’s xAI wins permit for power plant supporting AI data centers

The development was reported by CNBC, citing confirmation from the Mississippi Department of Environmental Quality (MDEQ).


Mississippi regulators have approved a permit allowing Elon Musk’s artificial intelligence company xAI to construct a natural gas power plant in Southaven. The facility is expected to support the company’s expanding AI infrastructure tied to its Colossus data center operations near Memphis.

According to the report, regulators “voted to approve the permit” allowing xAI subsidiary MZX Tech LLC to construct a power plant featuring 41 natural gas-burning turbines “after careful consideration of all public comments and community concerns.”

The Mississippi Department of Environmental Quality stated that the permit followed a regulatory review process that included public comments and community input. Jaricus Whitlock, air division chief for the MDEQ, stated that the project met all applicable environmental standards.

“The proposed PSD permit in front of the board today not only meets all state and federal permitting regulations, but goes above and beyond what is required by law. MDEQ and the EPA agree that not a single person around our facilities will be exposed to unhealthy levels of air pollution,” Whitlock stated.

The planned facility will help provide electricity for xAI’s AI computing infrastructure in the Memphis region.

The Southaven project forms part of xAI’s efforts to scale computing capacity for its artificial intelligence systems.

The company currently operates two major data centers in Memphis, known as Colossus 1 and Colossus 2, which provide computing power for xAI’s Grok AI models. xAI is also planning to build another large data center in Southaven called Macrohardrr, which would be located in a warehouse previously used by GXO Logistics.

Large-scale AI training requires substantial computing power and electricity, prompting technology companies to develop dedicated energy infrastructure for their data centers.

SpaceX President Gwynne Shotwell previously stated that xAI plans to develop 1.2 gigawatts of power capacity for its Memphis-area AI supercomputer site as part of the federal government’s Ratepayer Protection Pledge. The commitment was announced during an event with United States President Donald Trump.

“As part of today’s commitment, we will take extensive additional steps to continue to reduce the costs of electricity for our neighbors. xAI will therefore commit to develop 1.2 GW of power as our supercomputer’s primary power source. That will be for every additional data center as well. We will expand what is already the largest global Megapack power installation in the world,” Shotwell said.

“The installation will provide enough backup power to power the city of Memphis, and more than sufficient energy to power the town of Southaven, Mississippi where the data center resides. We will build new substations and invest in electrical infrastructure to provide stability to the area’s grid.”
