Tesla Autopilot now enables the car to perceive space around it

Credit: Ashok Elluswamy

Tesla Autopilot is now enabling the car to perceive the space around it thanks to the development of its Occupancy Networks. Tesla’s Autopilot Software Director, Ashok Elluswamy, shared a detailed thread on Twitter about a recent workshop the Autopilot team held, along with a video of the workshop itself.

In the video and Twitter thread, Ashok explained how Tesla developed Occupancy Networks to give the car a sense of its surroundings. Humans understand the objects around them at any given time. Is that car down the road moving slowly or quickly? Do I, a pedestrian, have enough time to get across the street before being hit? What is that in the middle of the road? What is that falling from the sky? I should move out of the way.

These split-second reactions come naturally to humans. Tesla’s Autopilot team is working to program its vehicles to do the same thing, and doing so will save lives. Imagine the car being able to correctly detect its surroundings even while the driver isn’t paying attention. One example is sudden unintended acceleration (SUA): Ashok pointed out that Autopilot prevents around 40 of these types of accidents daily.

The workshop was held in June at this year’s Conference on Computer Vision and Pattern Recognition (CVPR) in New Orleans. Ashok explained that the team developed Occupancy Networks, which enable the car to predict the volumetric occupancy of everything around it.

Ashok explained that typical approaches, such as image-space segmentation of free space or pixel-wise depth, have many issues, and that Occupancy Networks were developed to address them.

In other words, Occupancy Networks enable the car to perceive the space around it and determine whether or not it can drive in that space. For example, if a UFO were to suddenly crash in front of you while you’re driving, you would react quickly in the safest way possible. This is what the Autopilot Team is training the software to do.
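To make the idea concrete, here is a minimal toy sketch (my own illustration, not Tesla’s implementation) of how an occupancy prediction could be used: think of the network’s output as a 3D grid of probabilities that each chunk of space around the car is occupied, and check a planned path against it. The grid shape, threshold, and `path_is_drivable` helper are all hypothetical.

```python
import numpy as np

# Hypothetical occupancy output: a 3D grid (x, y, z) of probabilities
# that each volume of space around the car is occupied.
GRID = np.zeros((10, 10, 4))
GRID[6, 4, 0] = 0.9  # a likely obstacle sitting on the road surface

def path_is_drivable(path, grid, threshold=0.5):
    """Return False if any cell along the planned path is likely occupied."""
    return all(grid[x, y, z] < threshold for (x, y, z) in path)

clear_path = [(5, 4, 0), (5, 5, 0), (5, 6, 0)]
blocked_path = [(5, 4, 0), (6, 4, 0), (7, 4, 0)]

print(path_is_drivable(clear_path, GRID))    # True
print(path_is_drivable(blocked_path, GRID))  # False
```

The point of the representation is exactly this: the car never needs to classify the obstacle, only to know that the space it occupies is not drivable.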

Ashok shared details of how Occupancy Networks use Neural Radiance Fields (NeRFs): “The occupancy representation of these networks allows for differentiable rendering of images (based on the Neural Radiance Fields work). However, unlike typical NeRFs, which are per scene, these occupancy nets generalize across scenes.”
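The “differentiable rendering” idea can be sketched in a few lines. The toy function below (an assumption for illustration, not Tesla’s code) alpha-composites occupancy and color samples along one camera ray into a pixel value, the core of NeRF-style volume rendering; because every step is differentiable, camera images can be used to supervise the 3D occupancy predictions.

```python
def render_ray(occupancies, colors):
    """Alpha-composite per-sample occupancy and color along one ray."""
    transmittance = 1.0  # fraction of light not yet absorbed
    pixel = 0.0
    for alpha, color in zip(occupancies, colors):
        pixel += transmittance * alpha * color  # contribution of this sample
        transmittance *= (1.0 - alpha)          # light remaining behind it
    return pixel

# Three samples along a ray: empty space, a faint sample, a solid surface.
occ = [0.0, 0.1, 0.9]
col = [0.0, 0.5, 1.0]
print(round(render_ray(occ, col), 3))  # 0.86
```

A real NeRF uses many samples per ray and a learned density field, but the compositing step is the same shape as this loop.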

You can read Ashok’s full Twitter thread here and you can watch his presentation here. We are a little over a month before Tesla’s AI Day and I’m sure Tesla will share more about the life-saving technology it is working on as well as the Optimus Bot.

Dr. Know It All recently published a video about the new 10.69 update and shared his thoughts about Occupancy Networks.

In a message on Twitter, he told me, “The beauty of Occupancy Networks is that the car doesn’t have to know what the objects it sees are, it just has to know that they are there in order to avoid them!”

Note: Johnna is a Tesla shareholder and supports its mission. 

Your feedback is important. If you have any comments, concerns, or see a typo, you can email me at johnna@teslarati.com. You can also reach me on Twitter @JohnnaCrider1

Johnna Crider: Johnna Crider is a Baton Rouge writer covering Tesla, Elon Musk, EVs, and clean energy, and supports Tesla’s mission. Johnna also interviewed Elon Musk, and you can listen here.