
Tesla’s Neural Network adaptability to hardware highlighted in new patent application

(Credit: Tesla Driver/YouTube)

Tesla’s work in artificial intelligence is one of the most important aspects of its current and future technology, and that work includes adapting neural networks to various hardware platforms. A recent patent application titled “System and Method for Adapting a Neural Network Model On a Hardware Platform” provides a bit of insight into how the electric car maker is taking on the challenge.

In general, a neural network is a set of algorithms designed to gather data and recognize patterns in it. The particular data being collected depends on the platform involved and what kind of information it can feed into the network, such as camera images. Differences between platforms mean differences in the neural network algorithms, and adapting them is time-consuming work for developers. Just as apps have to be programmed to work with the operating system or hardware of a phone or tablet, so too do neural networks. Tesla’s answer to the adaptation issue is automation (of course).

During the adaptation of a neural network to specific hardware, a software developer must make decisions based on the options built into the hardware being used. Each of these options usually requires research, hardware documentation review, and impact analysis, and the set of options ultimately chosen adds up to a configuration for the neural network to use. Tesla’s application calls these options “decision points,” and they are a vital part of how the invention functions.
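For a rough sense of what a “decision point” might look like in code, here is a minimal, purely hypothetical Python sketch; the layer names and option lists are invented for illustration and are not drawn from Tesla’s filing.

```python
from dataclasses import dataclass

# Hypothetical illustration only: one "decision point" for a single layer,
# with the options a developer (or an automated method) would choose among.
@dataclass(frozen=True)
class DecisionPoint:
    layer: str       # e.g. "conv1" (invented name)
    name: str        # e.g. "data_layout", "precision", "algorithm"
    options: tuple   # candidate values the hardware documentation allows

decision_points = [
    DecisionPoint("conv1", "data_layout", ("NCHW", "NHWC")),
    DecisionPoint("conv1", "precision", ("fp32", "fp16", "int8")),
    DecisionPoint("conv1", "algorithm", ("direct", "winograd", "im2col")),
]

# Every combination of choices is one candidate configuration; even a few
# decision points per layer multiply into hundreds of combinations network-wide.
```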

Credit: Tesla/USPTO

According to the application, after plugging in a neural network model and the specific hardware platform information for adaptation, software code traverses the network to learn where the decision points are, then runs the hardware parameters against those points to provide available configurations. More specifically, the software method looks at the hardware constraints (such as processing resources and performance metrics) and generates setups for the neural network that will satisfy the requirements for it to operate correctly. From the application:

In order to produce a concrete implementation of an abstract neural network, a number of implementation decisions about one or more of system’s data layout, numerical precision, algorithm selection, data padding, accelerator use, stride, and more may be made. These decisions may be made on a per-layer or per-tensor basis, so there can potentially be hundreds of decisions, or more, to make for a particular network. Embodiments of the invention take many factors into account before implementing the neural network because many configurations are not supported by underlying software or hardware platforms, and such configurations will result in an inoperable implementation.

Credit: Tesla/USPTO
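The excerpt above suggests a simple way to picture the process: enumerate the combinations of per-layer options and discard those the target hardware cannot support. The sketch below uses invented option names and a toy constraint check; it is only meant to illustrate the idea of filtering candidate configurations against hardware constraints, not Tesla’s actual method.

```python
from itertools import product

# Hypothetical per-layer options (invented for illustration).
layer_options = {
    ("conv1", "precision"): ("fp32", "fp16", "int8"),
    ("conv1", "data_layout"): ("NCHW", "NHWC"),
    ("fc1", "precision"): ("fp32", "fp16"),
}

def enumerate_configurations(options):
    """Yield every combination of choices as one candidate configuration."""
    keys = list(options)
    for values in product(*(options[k] for k in keys)):
        yield dict(zip(keys, values))

def satisfies_hardware(config, hardware):
    """Toy constraint check: reject precisions the platform does not support."""
    for (layer, decision), choice in config.items():
        if decision == "precision" and choice not in hardware["precisions"]:
            return False
    return True

hardware = {"precisions": {"fp32", "fp16"}}  # e.g. a platform without int8 support
valid = [c for c in enumerate_configurations(layer_options)
         if satisfies_hardware(c, hardware)]
print(len(valid), "valid configurations remain after filtering")
```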

Tesla’s invention also provides the ability to display the neural network configuration information on a graphical interface, making assessment and selection a bit more user friendly. For instance, different configurations could have different evaluation times, power consumption, or memory consumption. Perhaps an apt analogy would be choosing between Track Mode and Range Mode, but for how you’d want your AI to run on your hardware.
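To illustrate how such a comparison might be surfaced, the hypothetical sketch below attaches placeholder metric estimates to a couple of toy configurations and sorts them by evaluation time. The numbers and the cost model are invented, not taken from the patent.

```python
# Two toy configurations to compare (invented for illustration).
sample_configs = [
    {"precision": "fp32", "data_layout": "NCHW"},
    {"precision": "fp16", "data_layout": "NHWC"},
]

def estimate_metrics(config):
    """Placeholder estimates for evaluation time, power, and memory."""
    fast = config["precision"] == "fp16"
    return {"eval_time_ms": 6.0 if fast else 10.0,
            "power_w": 12.0 if fast else 15.0,
            "memory_mb": 60 if fast else 120}

# Sort by evaluation time so a user could pick the trade-off they prefer.
for cfg in sorted(sample_configs, key=lambda c: estimate_metrics(c)["eval_time_ms"]):
    print(cfg, "->", estimate_metrics(cfg))
```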

This patent application looks to be one of the products of Tesla’s reported acquisition of DeepScale, an AI startup focused on designing neural networks for small devices in support of full self-driving. The listed inventor, Dr. Michael Driscoll, was a Senior Staff Engineer at DeepScale before transitioning to a Senior Software Engineer position at Tesla. DeepScale’s former CEO, Dr. Forrest Iandola, also moved to Tesla as a Senior Staff Machine Learning Scientist before departing for independent research this year.
