Elon Musk leaves OpenAI's board due to potential conflict of interest with Tesla
OpenAI, the nonprofit research firm co-founded by Elon Musk, announced that the serial tech entrepreneur is stepping down from the organization’s board of directors. According to the nonprofit’s official announcement, Elon’s departure is partly due to Tesla’s AI projects, which could result in a potential conflict of interest for the CEO.
Musk’s departure from OpenAI’s board does not mean that he is relinquishing ties with the nonprofit, however. In a blog post about its new supporters, the research firm asserted that the Tesla CEO will be staying on as a benefactor and advisor for the organization.
“Elon Musk will depart the OpenAI Board but will continue to donate and advise the organization. As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon.”
As Tesla continues to evolve its Autopilot suite of features and aims to complete its first coast-to-coast fully autonomous drive this year, the Silicon Valley electric carmaker is said to be working on its own AI-based chips that will power the company’s future fleet of driverless cars. Musk revealed his efforts to produce a custom AI chip during a machine learning conference held last year, telling attendees that Tesla is developing specialized AI hardware that will be the “best in the world.” According to The Register, Musk stated, “I wanted to make it clear that Tesla is serious about AI, both on the software and hardware fronts. We are developing custom AI hardware chips.”
Stepping down from OpenAI’s board seems to be a logical step for Musk, as his focus on developing advanced artificial intelligence systems at Tesla could be seen as a conflict within a nonprofit that aims to be the watchdog for friendly AI development. Prior to the announcement of Elon Musk’s departure from OpenAI’s board, the nonprofit published a paper discussing the possible dangers of AI-based attacks. According to OpenAI’s study, it is now time for policymakers and individuals to be aware of the ways AI-based systems can be used maliciously, especially considering the ever-evolving artificial intelligence landscape.
To conduct the study, OpenAI collaborated with a number of researchers from other organizations, including the Future of Humanity Institute, the Centre for the Study of Existential Risk, the Center for a New American Security, and the Electronic Frontier Foundation.
Discussing the findings of their research, the authors of the study wrote that while investigations into the benefits of AI are widespread, studies on the dangers of advanced, intelligent machines are relatively few. As the field of artificial intelligence continues to expand and evolve, OpenAI’s researchers believe that threats associated with the technology will grow and develop as well.
As noted in the study, artificial intelligence can expand existing threats, since scalable AI systems can lower the cost of attacks. With AI, even real-world attacks that once required human labor could be accomplished by machines capable of operating within and beyond their programming.
OpenAI’s new paper also discussed the emergence of new threats, which could arise through the use of systems that perform tasks impractical for humans. The researchers also warned that the time might soon come when AI-driven attacks are finely targeted and difficult to attribute. With these in mind, the OpenAI researchers, together with the co-authors of the study, recommended a series of contingencies that policymakers, as well as those involved in the research field, can implement to prevent and address scenarios in which intelligent systems are used maliciously.
According to the recently published OpenAI paper, the time is right for policymakers to collaborate with technical researchers to investigate, prevent, and mitigate potential malicious uses of artificial intelligence. OpenAI also advised engineers and researchers to acknowledge the dual-use nature of their work, allowing misuse-related considerations to be part of their research priorities. Furthermore, the nonprofit called for more mature methods when addressing AI’s dual-use, especially among stakeholders and domain experts involved in the field.
In conclusion, the OpenAI researchers and their peers admitted that while uncertainties remain in the AI industry, it is almost certain that artificial intelligence will play a huge role in the landscape of the future. With this in mind, a three-pronged approach — consisting of digital security, physical security, and political security — would be a great way to prepare for the upcoming use and possible misuse of artificial intelligence.
Co-founded by Tesla and SpaceX CEO Elon Musk back in 2015, OpenAI is a nonprofit research firm that aims to create and distribute safe artificial general intelligence (AGI) systems. As we noted in a previous report, OpenAI seems to be giving clues that it is ramping up its activity this year, as shown in a recent job posting for a Recruiting Coordinator who will be tasked with training and onboarding the company’s new employees.
Tesla starts rolling out FSD V14.2.1 to AI4 vehicles including Cybertruck
FSD V14.2.1 was released just about a week after the initial FSD V14.2 update was rolled out.
It appears that the Tesla AI team burned the midnight oil, allowing them to release FSD V14.2.1 on Thanksgiving. The update has been reported by owners of AI4 vehicles, including the Cybertruck.
For the Tesla AI team, at least, it appears that work really does not stop.
FSD V14.2.1
Initial posts about FSD V14.2.1 were shared by Tesla owners on social media platform X. As per the Tesla owners, V14.2.1 appears to be a point update that’s designed to polish the features and capabilities that have been available in FSD V14. A look at the release notes for FSD V14.2.1, however, shows that an extra line has been added.
“Camera visibility can lead to increased attention monitoring sensitivity.”
Whether this will result in more drivers being alerted to pay attention to the road remains to be seen. It will likely become evident once Tesla owners who received V14.2.1 start sharing their first drive impressions of the update. Despite the release landing on Thanksgiving, it would not be surprising if the first impressions videos of FSD V14.2.1 are shared today, just the same.
Rapid FSD releases
What is rather interesting and impressive is the fact that FSD V14.2.1 was released just about a week after the initial FSD V14.2 update was rolled out. This bodes well for Tesla’s FSD users, especially since CEO Elon Musk has stated in the past that the V14.2 series will be for “widespread use.”
FSD V14 has so far received numerous positive reviews from Tesla owners, with many drivers noting that the system now drives better than most human drivers because it is cautious, confident, and considerate at the same time. The only question now, really, is whether the V14.2 series will make it to the company’s wide FSD fleet, which is still populated by numerous HW3 vehicles.
Waymo rider data hints that Tesla’s Cybercab strategy might be the smartest, after all
These observations all but validate Tesla’s controversial two-seat Cybercab strategy, which has caught a lot of criticism since it was unveiled last year.
Toyota Connected Europe designer Karim Dia Toubajie has highlighted a particular trend that became evident in Waymo’s Q3 2025 occupancy stats. As it turned out, 90% of the trips taken by the driverless taxis carried two or fewer passengers.
Toyota designer observes a trend
Karim Dia Toubajie, Lead Product Designer (Sustainable Mobility) at Toyota Connected Europe, analyzed Waymo’s latest California Public Utilities Commission filings and posted the results on LinkedIn this week.
“90% of robotaxi trips have 2 or less passengers, so why are we using 5-seater vehicles?” Toubajie asked. He continued: “90% of trips have 2 or less people, 75% of trips have 1 or less people.” He accompanied his comments with a graphic of Waymo’s occupancy rates: 71% of trips carried one passenger, 15% carried two, 6% carried three, 5% carried no passengers at all, and only 3% carried four.
The data excludes operational trips such as depot runs or charging, and Toubajie pointed out that most of the time, Waymo’s self-driving taxis are transporting just one or two people, and at times no passengers at all. “This means that most of the time, the vehicle being used significantly outweighs the needs of the trip,” the Toyota designer wrote in his post.
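For reference, the cumulative figures Toubajie cites follow directly from the occupancy shares in his graphic. Below is a minimal sketch in Python (the variable and function names are our own, for illustration only) that sums the quoted distribution into the “1 or less” and “2 or less” figures:

```python
# Waymo Q3 2025 occupancy shares quoted in Toubajie's graphic (percent of trips)
occupancy_share = {0: 5, 1: 71, 2: 15, 3: 6, 4: 3}

def share_at_most(max_passengers: int) -> int:
    """Cumulative share of trips carrying at most the given number of passengers."""
    return sum(pct for riders, pct in occupancy_share.items() if riders <= max_passengers)

print(share_at_most(1))  # 76 -> roughly the "75% of trips have 1 or less people" figure
print(share_at_most(2))  # 91 -> roughly the "90% of trips have 2 or less passengers" figure
```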
Cybercab suddenly looks perfectly sized
Toubajie gave a nod to Tesla’s approach. “The Tesla Cybercab announced in 2024, is a 2-seater robotaxi with a 50kWh battery but I still believe this is on the larger side of what’s required for most trips,” he wrote.
With Waymo’s own numbers now showing that 90% of demand fits two seats or fewer, the steering wheel-free, lidar-free Cybercab now looks like the smartest play in the room. The Cybercab is designed to be easy to produce, with CEO Elon Musk commenting that its production line would resemble a consumer electronics factory more than an automotive plant. This means that the Cybercab could saturate the roads quickly once it is deployed.
While the Cybercab will likely take the lion’s share of Tesla’s ride-hailing passengers, the Model 3 sedan and Model Y crossover would be perfect for the remaining 9% of riders who require larger vehicles. This should be easy to implement for Tesla, as the Model Y and Model 3 are both mass-market vehicles.
Elon Musk and James Cameron find middle ground in space and AI despite political differences
Musk responded with some positive words for the director on X.
Avatar director James Cameron has stated that he can still agree with Elon Musk on space exploration and AI safety despite their stark political differences.
In an interview with Puck’s The Town podcast, the liberal director praised Musk’s SpaceX achievements and said that higher priorities, such as space travel and artificial intelligence, must unite them. Musk responded with some positive words for the director on X.
A longtime mutual respect
Cameron and Musk have bonded over technology for years. As far back as 2011, Cameron told NBC News that “Elon is making very strong strides. I think he’s the likeliest person to step into the shoes of the shuttle program and actually provide human access to low Earth orbit. So… go, Elon.” Cameron was right, as SpaceX would go on to become the dominant force in spaceflight over the years.
Even after Musk’s embrace of conservative politics and his roles as senior advisor and former DOGE head, Cameron refused to cancel his relationship with the CEO. “I can separate a person and their politics from the things that they want to accomplish if they’re aligned with what I think are good goals,” Cameron said. Musk appreciated the director’s comments, stating that “Jim understands physics, which is rare in Hollywood.”
Shared AI warnings
Both men have stated that artificial intelligence could be an existential threat to humanity, though Musk has noted that Tesla’s products such as Optimus could usher in an era of sustainable abundance. Musk recently predicted that money and jobs could become irrelevant with advancing AI, while Cameron warned of a deeper crisis, as noted in a Fox News report.
“Because the overall risk of AI in general… is that we lose purpose as people. We lose jobs. We lose a sense of, ‘Well, what are we here for?’” Cameron said. “We are these flawed biological machines, and a computer can be theoretically more precise, more correct, faster, all of those things. And that’s going to be a threshold existential issue.”
He concluded: “I just think it’s important for us as a human civilization to prioritize. We’ve got to make this Earth our spaceship. That’s really what we need to be thinking.”
