Welcome to a FREE preview of our weekly exclusive! Each week our team goes ‘Beyond the News’ and handcrafts a special edition that includes our thoughts on the biggest stories, why they matter, and how they could impact the future.
To receive this newsletter along with all of our other members-exclusive newsletters, become a premium member for just $3/month. Your support goes a long way for us behind the scenes! Thank you.
—
In a recent podcast discussion with AI expert Lex Fridman about artificial intelligence, consciousness, and his brain-computer interface company Neuralink, Elon Musk faced an interesting question about Tesla’s role as an educator in that realm. Referring specifically to the Smart Summon feature that’s part of the company’s Version 10 firmware, Fridman asked Musk whether he felt the burden of being an AI communicator by exposing people, for the first time on a large scale, to driverless cars.
To be honest, Musk’s response wasn’t really, well, responsive. He deferred to the company’s more commercial goals: “We’re just trying to make people’s lives easier with autonomy.” The long-term goals of Neuralink are pretty scary for mainstream humans, so to me, this question deserves a long sit-and-think. After all, we’re talking about computer self-awareness and capabilities well beyond what we’d consider superhuman, and beyond the ability of humans to control after a certain point. Neuralink wants to implant that kind of AI connection directly in our brains.
On one hand, the evolution of Autopilot with each iteration and of Smart Summon with each new release exposes people to the process of how humans teach computers and how computers teach themselves. In other words, it shows people that AI learns in a way somewhat similar to how people learn. However, I don’t know that it gives everyday people the full picture of what Musk is really getting at when he talks about the pace of AI learning and how that leads to doom scenarios.
If anything, is Tesla lowering expectations for AI’s future? If a Tesla is the first “robot” people see, and they then watch years of functionality that falls short of an attentive human at the wheel before the full promise of the Tesla Network arrives, what picture is being painted? And what about the wake of uncertainty it will leave behind?
In the interview, Musk described our minds as essentially a monkey brain with a computer attached, the computer trying to keep the monkey brain’s primitive urges happy all the time. Once we start letting computers take over the few functions the monkey brain enjoyed or needed to keep in check (driving, painting, laboring, etc.), how will the AI eventually decide to deal with what it will just see as…the monkeys? Right now, we’re watching robot cars drive into curbs and highway dividers, which makes us feel pretty superior to them even though humans do this much more frequently. What happens if the car one day decides to do that on purpose because its calculations conclude that humans don’t need to exist?
Okay, I know I’m getting a touch ridiculous here, but it brings me back to Fridman’s original question about whether Tesla, with its push for self-driving, carries the burden of educating the public on these matters. Perhaps if the company were just focused on moving the world to sustainable energy and production, its driver-assist features would be exactly what Musk describes them as: a convenience, a value-added feature. After all, most other automakers and companies working on self-driving just have the customer in mind, not so much a robot-overlord future.
But that’s not the future Musk is working towards. He’s warning us about the future of AI while actively developing our defense against it. Should his car company then play a big role in acclimating people to, and teaching them about, what AI will really be able to do beyond getting them to work and back? Hosting three-to-four-hour “Investor Day” presentations is part of this educational effort, I suppose, but 99% (or more) of the general public is not going to be interested in, or even able to understand, what Tesla’s genius developers are talking about, much less grasp how it might apply to their lives beyond their cars one day.
I don’t really know what Tesla’s teaching could or would or should look like, but it’s an interesting question given how quickly the company is bringing AI into our lives, on a scale much bigger than harvesting our data to sell us ads.