Tesla cars will be smarter than humans by 2033, according to a new study by car and van leasing company Vanarama. The firm analyzed the processing power of Tesla’s microchips to forecast how many years it will take them to match the human brain.
The study looked into the processing power of Tesla’s “own AI brain” and compared it with its predecessors and the human brain. Some of the key findings include:
- Tesla’s microchips will top the human brain (one quadrillion operations per second) in only 11 years (10.94), by 2033.
- Tesla’s microchip capability is increasing at a rate of 486% per year.
- Tesla would take 17 years to reach the level of a mature human brain, eight years quicker than we manage (25 years for human brain maturity).
- Tesla’s D1 chip is 30 times more powerful than the chip it used only six years ago.

Vanarama found that Tesla’s microchip capability is increasing at a rate of 486% per year. The first chip it looked at was a 2016 NVIDIA component that managed 12 trillion operations per second; operations per second is the standard measure of a computer’s processing power. Tesla’s latest D1 chip manages 362 trillion.
“At that rate, Tesla’s self-driving AI chip will top the human brain (one quadrillion operations per second) in only 11 years (10.94), by 2033,” Vanarama noted.
The company further explained that, applying the same growth rate from the first NVIDIA chip it analyzed, Tesla would take 17 years to reach the level of a mature human brain. This is eight years faster than humans reach brain maturity, which typically occurs at 25 years of age.
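Vanarama has not published its exact methodology, but its headline figures are all consistent with a simple linear extrapolation in which each year adds a fixed increment equal to 486% of the 2016 chip’s 12 trillion operations per second. The Python sketch below is one plausible reconstruction of that reading, not the company’s confirmed method; the variable names and the six-year gap between chips are assumptions for illustration.

```python
# A plausible reconstruction of Vanarama's projection (assumption: linear
# growth, adding a fixed yearly increment implied by the two chip data points;
# Vanarama has not published its exact method).

FIRST_CHIP = 12e12    # 2016 NVIDIA chip, operations per second
D1_CHIP = 362e12      # Tesla D1 chip, operations per second
HUMAN_BRAIN = 1e15    # one quadrillion operations per second
YEARS_BETWEEN = 6     # assumed years between the two chips

# Fixed yearly increment implied by the two data points
increment = (D1_CHIP - FIRST_CHIP) / YEARS_BETWEEN
print(f"Yearly gain vs. 2016 chip: {increment / FIRST_CHIP:.0%}")  # 486%

# Years until the linear trend crosses human-brain throughput
print(f"From D1: {(HUMAN_BRAIN - D1_CHIP) / increment:.2f} years")            # 10.94
print(f"From 2016 chip: {(HUMAN_BRAIN - FIRST_CHIP) / increment:.1f} years")  # ~17
```

Under this reading, all three of the study’s headline numbers (486% per year, 10.94 years from the D1, 17 years from the 2016 chip) fall out of the same two data points.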

Tesla D1 chip 30X more powerful than the chip it used 6 years ago
Tesla’s D1 chip was unveiled during AI Day last year and was designed for the Dojo supercomputer. Tesla recently shared a fresh look at the microarchitecture of the Dojo supercomputer when it gave a presentation in New Orleans.
This year, Tesla will hold another AI Day event, where it’s expected to reveal a new D1 chip and other items of interest, such as a working prototype of the Optimus Bot. Vanarama took note of the D1 chip’s processing power, calling it a “considerable increase in computing intelligence from the previous chip, Hardware 3, which performed 144 trillion operations per second in 2019. Before that, it was the Hardware 2 on 72 trillion, and the Nvidia chip on 12 trillion.”
The Dojo ExaPOD supercomputer will use a total of 3,000 D1 chips, making the system capable of just over one quintillion operations per second. For perspective, that number is written out as 1,086,000,000,000,000,000.
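As a sanity check, the quoted total follows directly from the per-chip figure: 3,000 chips at 362 trillion operations per second each works out to exactly the number above.

```python
# Sanity check on the ExaPOD math (illustrative, not Tesla's published code)
D1_OPS = 362 * 10**12   # D1 chip throughput, operations per second
NUM_CHIPS = 3000        # D1 chips in a Dojo ExaPOD

print(f"{D1_OPS * NUM_CHIPS:,}")  # 1,086,000,000,000,000,000
```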
A glimpse of the future for AI chips
Vanarama also published a graphic comparing the processing power of Tesla’s microchips, noting that in the time it takes to read it, Tesla’s microchips would have completed up to 7.6 quadrillion operations each.
“It wouldn’t be crazy to believe that tech will become significantly smarter than humans in our lifetime. Microchips are currently capable of working the way brain synapses do, with researchers developing chips that are inspired by the way the brain operates.”
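For a rough sense of scale behind the reading-time claim above: at the D1’s 362 trillion operations per second, the 7.6 quadrillion figure corresponds to about 21 seconds of reading, a plausible time to study the comparison. A back-of-envelope check, assuming the per-chip D1 figure:

```python
# Back-of-envelope check on the "7.6 quadrillion operations" claim
D1_OPS = 362 * 10**12             # operations per second, per D1 chip
OPS_WHILE_READING = 7.6 * 10**15  # Vanarama's quoted figure

print(f"{OPS_WHILE_READING / D1_OPS:.0f} seconds of reading")  # ~21 seconds
```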
You can learn more about Vanarama’s research here.
Note: Johnna is a Tesla shareholder and supports its mission.
Your feedback is important. If you have any comments, or concerns, or see a typo, you can email me at johnna@teslarati.com. You can also reach me on Twitter @JohnnaCrider1
Tesla reveals its Cybertruck light bar installation fix
Tesla has revealed its Cybertruck light bar installation fix after a recall exposed a serious issue with the accessory.
Tesla and the National Highway Traffic Safety Administration (NHTSA) initiated a recall of 6,197 Cybertrucks back in October to resolve a problem with the light bar accessory’s adhesive, which was supplied by Romanian company Hella Romania S.R.L.
The issue stemmed from the quality of the primer: the NHTSA recall report stated that the light bar had been “inadvertently attached to the windshield using the incorrect surface primer.”
Rather than adhering the light bar to the Cybertruck, Tesla is now going to attach it with a bracketing system that physically mounts it to the vehicle instead of relying on adhesive strips or glue.
Tesla outlines this in a new Service Bulletin, SB-25-90-001 (spotted by Not a Tesla App), which shows how the light bar will be remounted more securely.
The entire process will take a few hours, but it can be completed by Mobile Service technicians, so a Cybertruck that needs the light bar fix does not have to be brought to a Service Center for the repair.
The retrofit will only be performed if no delamination or damage is present; in that case, Tesla will “retrofit the service-installed optional off-road light bar accessory with a positive mechanical attachment.”
The company said it will repair the light bar at no charge to customers. According to the NHTSA report, the issue did not result in any accidents or injuries.
This was the third Cybertruck recall this year: one was issued in March for exterior trim panels detaching during operation, and another, for front parking lights that were too bright, was fixed with an over-the-air update last month.
Tesla is already expanding its Rental program aggressively
The program has already launched in a handful of locations, all of them in California for now. However, it does not seem like Tesla has any interest in keeping it restricted to the Golden State.
Tesla is looking to expand its Rental Program aggressively, just weeks after the program was first spotted on its Careers website.
Earlier this month, we reported on Tesla’s intention to launch a crazy new Rental program with cheap daily rates, giving people in various locations the opportunity to rent a vehicle from the company’s lineup with some outrageous perks.
Along with the cheap rates that start at about $60 per day, Tesla also provides free Full Self-Driving operation and free Supercharging for the duration of the rental. There are also no limits on mileage or charging, but the terms do not allow the renter to leave the state from which they are renting.
🚨🚨 If you look up details on the Tesla Rental program on Google, you’ll see a bunch of sites saying it’s because of decreasing demand 🤣 pic.twitter.com/WlSQrDJhMg
— TESLARATI (@Teslarati) November 10, 2025
Job postings from Tesla now show it is planning to launch the Rental program in at least three new states: Texas, Tennessee, and Massachusetts.
The openings are specifically listed as Rental Readiness Specialist roles, with the following job description:
“The Tesla Rental Program is looking for a Rental Readiness Specialist to work on one of the most progressive vehicle brands in the world. The Rental Readiness Specialist is a key contributor to the Tesla experience by coordinating the receipt of incoming new and used vehicle inventory. This position is responsible for fleet/lot management, movement of vehicles, vehicle readiness, rental invoicing, and customer hand-off. Candidates must have a high level of accountability, and personal satisfaction in doing a great job.”
It also says that those who take the position will have to charge and clean the cars, work with clients on scheduling pickups and drop-offs, and prepare the paperwork necessary to initiate the rental.
The establishment of a Rental program is a big move for Tesla because it not only gives people the opportunity to experience the vehicles, it also offers a new way to rent a car.
Just as the Tesla purchasing process is more streamlined and efficient than the traditional car-buying experience, renting a Tesla could be a less painful way to borrow a car for a trip instead of using your own.
Elon Musk’s xAI gains first access to Saudi supercluster with 600k Nvidia GPUs
The facility will deploy roughly 600,000 Nvidia GPUs, making it one of the world’s most notable superclusters.
A Saudi-backed developer is moving forward with one of the world’s largest AI data centers, and Elon Musk’s xAI will be its first customer. The project, unveiled at the U.S.–Saudi Investment Forum in Washington, D.C., is being built by Humain, a company supported by Saudi Arabia’s Public Investment Fund.
xAI secures priority access
Nvidia CEO Jensen Huang stated that the planned data center marks a major leap not just for the region but for the global AI ecosystem as a whole. Huang joked about the sheer capacity of the build, emphasizing how unusual it is for a startup to receive infrastructure of such magnitude. The facility is designed to deliver 500 megawatts of Nvidia GPU power, placing it among the world’s largest AI-focused installations, as noted in a Benzinga report.
“We worked together to get this company started and off the ground and just got an incredible customer with Elon. Could you imagine a startup company, approximately $0 billion in revenues, now going to build a data center for Elon? 500 megawatts is gigantic. This company is off the charts right away,” Huang said.
Global chipmakers join multi-vendor buildout to enhance compute diversity
While Nvidia GPUs serve as the backbone of the first phase, Humain is preparing a diversified hardware stack. AMD will supply its Instinct MI450 accelerators, which could draw up to 1 gigawatt of power by 2030 as deployments ramp. Qualcomm will also contribute AI200 and AI250 data center processors, accounting for an additional 200 megawatts of compute capacity. Cisco will support the networking and infrastructure layer, helping knit the multi-chip architecture together.
Apart from confirming that xAI will be the upcoming supercluster’s first customer, Musk joked about the rapid scaling needed to train increasingly large AI models, quipping that expanding the supercluster a thousandfold “would be 8 bazillion, trillion dollars,” a playful exaggeration of the kind he often brings to discussions of extreme compute demand.