News

Tesla’s Neural Network adaptability to hardware highlighted in new patent application

(Credit: Tesla Driver/YouTube)

Tesla’s work in artificial intelligence is among the most important aspects of its current and future technology, and that work includes adapting neural networks to various hardware platforms. A recent patent publication titled “System and Method for Adapting a Neural Network Model On a Hardware Platform” offers some insight into how the electric car maker is taking on the challenge.

In general, a neural network is a set of algorithms designed to gather data and recognize patterns in it. The particular data collected depends on the platform involved and what kind of information it can send to the network, e.g., camera/image data. Differences between platforms mean differences in the neural network algorithms, and adapting them is time-consuming for developers. Just as apps must be programmed for the operating system or hardware of a given phone or tablet, so too must neural networks. Tesla’s answer to the adaptation issue is, of course, automation.

During the adaptation of a neural network to specific hardware, a software developer must make decisions based on the options built into the hardware being used. Each of these options usually requires research, a review of the hardware documentation, and impact analysis, and the set of options chosen eventually adds up to a configuration for the neural network to use. Tesla’s application calls these options “decision points,” and they are a vital part of how the invention functions.

Credit: Tesla/USPTO

According to the application, after plugging in a neural network model and the specific hardware platform information for adaptation, software code traverses the network to learn where the decision points are, then runs the hardware parameters against those points to provide available configurations. More specifically, the software method looks at the hardware constraints (such as processing resources and performance metrics) and generates setups for the neural network that will satisfy the requirements for it to operate correctly. From the application:

In order to produce a concrete implementation of an abstract neural network, a number of implementation decisions about one or more of system’s data layout, numerical precision, algorithm selection, data padding, accelerator use, stride, and more may be made. These decisions may be made on a per-layer or per-tensor basis, so there can potentially be hundreds of decisions, or more, to make for a particular network. Embodiments of the invention take many factors into account before implementing the neural network because many configurations are not supported by underlying software or hardware platforms, and such configurations will result in an inoperable implementation.
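The patent does not publish code, but the enumerate-and-filter step it describes can be sketched in a few lines of Python. Everything below is hypothetical for illustration: the decision points, their choices, and the hardware rule (an accelerator that only supports int8 with NHWC layout) are made up, not taken from Tesla's filing.

```python
from itertools import product

# Hypothetical decision points for one layer: each maps an option name
# to the choices the platform could, in principle, offer.
decision_points = {
    "precision": ["fp32", "fp16", "int8"],
    "data_layout": ["NCHW", "NHWC"],
    "use_accelerator": [True, False],
}

# Hypothetical hardware constraint: the accelerator only runs
# int8 tensors in NHWC layout; everything works without it.
def is_supported(config):
    if config["use_accelerator"]:
        return config["precision"] == "int8" and config["data_layout"] == "NHWC"
    return True

# Enumerate every combination of choices and keep only the ones the
# platform can actually run -- the rest would be inoperable.
def valid_configs(points):
    names = list(points)
    for combo in product(*(points[n] for n in names)):
        config = dict(zip(names, combo))
        if is_supported(config):
            yield config

configs = list(valid_configs(decision_points))
```

Even this toy example shows why automation matters: three decision points already produce twelve raw combinations, and the patent notes that real networks can have hundreds of per-layer or per-tensor decisions, so the search space explodes far beyond what a developer can vet by hand.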

Credit: Tesla/USPTO

Tesla’s invention can also display the neural network configuration information on a graphical interface to make assessment and selection more user friendly. For instance, different configurations could have different evaluation times, power consumption, or memory consumption. A rough analogy: it is like choosing between Track Mode and Range Mode, except the choice is about how you want your AI to run on your hardware.
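The assess-and-select step can be sketched the same way. The configuration names, metric values, and memory budget below are all hypothetical; the idea is simply that once each valid configuration carries metrics like evaluation time, power, and memory, picking one reduces to a filter and a sort.

```python
# Hypothetical per-configuration metrics, as such an interface
# might display them: evaluation time (ms), power (W), memory (MB).
configs = [
    {"name": "fp16-NHWC", "eval_ms": 12.0, "power_w": 8.5, "mem_mb": 420},
    {"name": "int8-NHWC", "eval_ms": 7.5, "power_w": 6.0, "mem_mb": 210},
    {"name": "fp32-NCHW", "eval_ms": 21.0, "power_w": 11.0, "mem_mb": 840},
]

# Keep only configurations that fit the device's memory budget,
# then pick the fastest one.
MEM_BUDGET_MB = 512
candidates = [c for c in configs if c["mem_mb"] <= MEM_BUDGET_MB]
best = min(candidates, key=lambda c: c["eval_ms"])
```

A user might instead sort by power for a battery-sensitive deployment; the point of surfacing the metrics in a GUI is that the trade-off stays with the human while the enumeration stays with the machine.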

This patent application looks to be one of the fruits of Tesla’s reported acquisition of DeepScale, an AI startup focused on full self-driving and on designing neural networks for small devices. The listed inventor, Dr. Michael Driscoll, was a Senior Staff Engineer at DeepScale before transitioning to a Senior Software Engineer position at Tesla. DeepScale’s former CEO, Dr. Forrest Iandola, also moved to Tesla as a Senior Staff Machine Learning Scientist before departing for independent research this year.


Ford is charging for a basic EV feature on the Mustang Mach-E

When ordering a new Ford Mustang Mach-E, you’ll now be hit with an additional fee for one basic EV feature: the frunk.


Credit: Ford Motor Company

Ford is charging an additional fee for a basic EV feature on its Mustang Mach-E, its most popular electric vehicle offering.

Ford has shuttered its initial Model e program in favor of a more controlled and refined EV effort, and it is abandoning the F-150 Lightning in favor of a new pickup that is still under design but appears to have some favorable features.

However, ordering a new Mustang Mach-E now comes with an additional fee for one basic EV feature: the frunk.

The frunk is the front trunk; because an electric vehicle has no large engine up front, OEMs can offer additional storage space under the hood. There’s one problem, though: automakers appear to be realizing that they can strip the frunk from the base vehicle and offer the function back for a fee.

Ford is charging $495 for the frunk.

Interestingly, frunk size varies by vehicle; the Mustang Mach-E’s frunk holds 4.7 to 4.8 cubic feet and measures approximately 9 inches deep, 26 inches wide, and 14 inches high.

When the vehicle was first released, Ford marketed the frunk as the ultimate tailgating feature, showing it off as a perfect place to store and serve cold shrimp cocktail.

It appears the decision to charge for what is a simple advantage of an EV is not going over well; even loyal Ford customers say the frunk is a “basic expectation” of an EV, and without it, fans feel the company is nickel-and-diming its customers.

It will be interesting to see the Mach-E without a frunk. While the fee alone should not be enough to turn buyers away from the vehicle, the decision to charge extra to include one will definitely annoy some customers.


Tesla to improve one of its best features, coding shows

According to the update, Tesla will work on improving the headlights’ handling of highly reflective objects, including road signs, traffic signs, and street lights. Additionally, pixel-level dimming will happen in two stages, whereas it currently operates in just one: on or off.


Credit: @jojje167 on X

Tesla is looking to upgrade its Matrix Headlights, a unique, high-tech feature available on several of its vehicles. The headlights aim to maximize visibility for Tesla drivers while remaining considerate of oncoming traffic.

Tesla’s Matrix Headlights dim individual light pixels, keeping visibility high for the driver while reducing brightness in the areas where other cars are traveling.

In action, the Matrix headlight system intentionally dims the area where oncoming cars would be affected by the high beams. This keeps visibility at a maximum for everyone on the road, including drivers who would otherwise be hit with bright lights in their eyes.

There are still a handful of complaints from owners, however, and Tesla appears to be looking to resolve them with coming updates in a software version currently labeled 2026.2.xxx. The coding was spotted by X user BERKANT.

According to the update, Tesla will work on improving the headlights’ handling of highly reflective objects, including road signs, traffic signs, and street lights. Additionally, pixel-level dimming will happen in two stages, whereas it currently operates in just one: on or off.

Finally, the new system will prevent the high beams from glaring back at the driver. The system is designed to dim when it recognizes oncoming cars, but it does not currently account for objects that could reflect glare back at the driver.

This upgrade is software-focused, so there will not need to be any physical changes or upgrades made to Tesla vehicles that utilize the Matrix headlights currently.


Elon Musk

xAI’s Grok approved for Pentagon classified systems: report

Under the agreement, Grok can be deployed in systems handling classified intelligence analysis, weapons development, and battlefield operations. 


Credit: xAI

Elon Musk’s xAI has signed an agreement with the United States Department of Defense (DoD) to allow Grok to be used in classified military systems.

Previously, Anthropic’s Claude had been the only AI system approved for the most sensitive military work, but a dispute over usage safeguards has reportedly prompted the Pentagon to broaden its options, as noted in a report from Axios.

Under the agreement, Grok can be deployed in systems handling classified intelligence analysis, weapons development, and battlefield operations. 

The publication reported that xAI agreed to the Pentagon’s requirement that its technology be usable for “all lawful purposes,” a standard Anthropic has reportedly resisted due to alleged ethical restrictions tied to mass surveillance and autonomous weapons use.

Advertisement

Defense Secretary Pete Hegseth is scheduled to meet with Anthropic CEO Dario Amodei in what sources expect to be a tense meeting, with the publication hinting that the Pentagon could designate Anthropic a “supply chain risk” if the company does not lift its safeguards. 

Axios stated that fully replacing Claude might be technically challenging, even if xAI or other AI systems take its place. That said, other AI systems are already in use by the DoD.

Grok already operates in the Pentagon’s unclassified systems alongside Google’s Gemini and OpenAI’s ChatGPT. Google is reportedly close to an agreement that would clear Gemini for classified work, while OpenAI’s progress toward classified deployment is described as slower but still feasible.

The publication noted that the Pentagon continues talks with several AI companies as it prepares for potential changes in classified AI sourcing.
