
Elon Musk left OpenAI due to conflict of interest with Tesla


OpenAI, the nonprofit research firm co-founded by Elon Musk, announced that the serial tech entrepreneur is stepping down from the organization’s board of directors. According to an official announcement by the nonprofit, Elon’s departure is partly due to Tesla’s AI projects, which could result in a potential conflict of interest for the CEO. 

Musk’s departure from OpenAI’s board does not mean that he is relinquishing ties with the nonprofit, however. In a blog post about its new supporters, the research firm asserted that the Tesla CEO will be staying on as a benefactor and advisor for the organization.

“Elon Musk will depart the OpenAI Board but will continue to donate and advise the organization. As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon.”

As Tesla continues to evolve its Autopilot suite of features and aims to complete its first coast-to-coast fully autonomous drive this year, the Silicon Valley electric carmaker is said to be working on its own AI-based chips that will power the company’s future fleet of driverless cars. Musk revealed his efforts to produce a custom AI chip during a machine learning conference held last year, telling event attendees that Tesla is developing specialized AI hardware that will be the “best in the world.” According to The Register, Musk said, “I wanted to make it clear that Tesla is serious about AI, both on the software and hardware fronts. We are developing custom AI hardware chips.”

Stepping down from OpenAI’s board seems to be a logical step for Musk, as his focus on developing advanced artificial intelligence systems could be at odds with a nonprofit that aims to be a watchdog for friendly AI development. Prior to the announcement of Elon Musk’s departure from OpenAI’s board, the nonprofit published a paper discussing the possible dangers of AI-based attacks. According to OpenAI’s study, it is now time for policymakers and individuals to be aware of the ways AI-based systems can be used maliciously, especially considering the ever-evolving artificial intelligence landscape.


To conduct the study, OpenAI collaborated with a number of researchers from other organizations, including the Future of Humanity Institute, the Centre for the Study of Existential Risk, the Center for a New American Security, and the Electronic Frontier Foundation.

Discussing the findings of their research, the authors of the study wrote that while investigations on the benefits of AI are widespread, studies on the dangers of advanced, intelligent machines are relatively few. As the field of artificial intelligence begins to expand and evolve, OpenAI’s researchers believe that threats associated with the technology would also start to grow and develop.

As noted in the study, artificial intelligence can expand existing threats, since the scalable use of AI technology can be utilized to lower the cost of attacks. With AI, even real-world attacks requiring human labor can be accomplished by machines that could think within and beyond their programming.

OpenAI’s new paper also discussed the emergence of new threats, which could arise through the use of systems that perform tasks impractical for humans. The researchers also advised that the time might soon come when AI-focused attacks can be finely targeted and challenging to attribute. With these in mind, the OpenAI researchers, together with the co-authors of the study, recommended a series of contingencies that policymakers, as well as those involved in the research field, can implement to prevent and address scenarios in which intelligent systems are used maliciously.


RELATED: China is building a massive campus for AI development

According to the recently published OpenAI paper, the time is right for policymakers to collaborate with technical researchers to investigate, prevent, and mitigate potential malicious uses of artificial intelligence. OpenAI also advised engineers and researchers to acknowledge the dual-use nature of their work, allowing misuse-related considerations to be part of their research priorities. Furthermore, the nonprofit called for more mature methods when addressing AI’s dual-use, especially among stakeholders and domain experts involved in the field.

In conclusion, the OpenAI researchers and their peers admitted that while uncertainties remain in the AI industry, it is almost certain that artificial intelligence will play a huge role in the landscape of the future. With this in mind, a three-pronged approach — consisting of digital security, physical security, and political security — would be a great way to prepare for the upcoming use and possible misuse of artificial intelligence.

Co-founded by Tesla and SpaceX CEO Elon Musk back in 2015, OpenAI is a nonprofit research firm that aims to create and distribute safe artificial general intelligence (AGI) systems. As we noted in a previous report, OpenAI seems to be giving clues that it is ramping up its activity this year, as shown in a recent job posting for a Recruiting Coordinator who will be tasked with training and onboarding the company’s new employees.


Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips--or even to just say a simple hello--send a message to his email, simon@teslarati.com or his handle on X, @ResidentSponge.


Elon Musk

Musk forces Judge’s exit from shareholder battles over viral social media slip-up

McCormick insisted in a court filing that she harbors no actual bias against Musk or the defendants. She claimed she either never clicked the “support” button, LinkedIn’s version of a “like,” or did so accidentally.


(Credit: Tesla)

Many Tesla fans are familiar with the name Kathaleen McCormick, especially if they are investors in the company.

McCormick is a Delaware Chancery Court Judge who presided over Tesla CEO Elon Musk’s pay package lawsuit over the past few years, as well as his purchase of Twitter. However, she will no longer be sitting in on any issues related to Musk.

Elon Musk demands Delaware Judge recuse herself after ‘support’ post celebrating $2B court loss

In a rare admission of potential optics issues in one of America’s most powerful corporate courts, Delaware Chancery Court Chancellor Kathaleen McCormick stepped aside Monday from a cluster of shareholder lawsuits targeting Elon Musk and Tesla’s board.


The move came just days after Musk’s legal team highlighted her apparent “support” on LinkedIn for a post that mocked the billionaire over his 2022 tweets about the $44 billion Twitter acquisition.


She wrote in a newly published memo from the Delaware Chancery Court:

“The motion for recusal rests on a false premise — that I support a LinkedIn post about Mr. Musk, which I do not in fact support. I am not biased against the defendants in these actions.”


Yet she granted the reassignment anyway, acknowledging that the intense media scrutiny surrounding her involvement had become “detrimental to the administration of justice.”

The consolidated cases will now be handled by three of her colleagues on the Delaware Court of Chancery, the nation’s go-to venue for high-stakes corporate disputes. The lawsuits accuse Musk and Tesla directors of breaching fiduciary duties through lavish executive compensation and lax governance oversight.

One prominent claim, filed by a Detroit pension fund, challenges massive stock awards granted to board members, alleging the payouts harmed the company. The litigation also overlaps with issues stemming from Musk’s turbulent 2022 Twitter purchase.

McCormick’s history with Musk made her a lightning rod. In 2022, she presided over the fast-tracked lawsuit that ultimately forced Musk to complete the Twitter deal after he tried to back out.


Then in 2024, she struck down his record $56 billion Tesla compensation package, ruling the approval process was flawed and overly CEO-friendly. The Delaware Supreme Court later reinstated the pay on technical grounds, but the ruling fueled Musk’s long-standing criticism of the state’s judiciary.

Musk has repeatedly urged companies to reincorporate elsewhere, arguing Delaware courts have grown hostile to visionary leaders. Monday’s recusal hands him a symbolic victory and underscores how personal social-media activity can collide with judicial impartiality standards.

Delaware law requires judges to step aside if there’s even a “reasonable basis” to question their neutrality.

Court watchers say the episode highlights growing tensions in corporate America’s legal epicenter. While McCormick maintained her impartiality, the appearance of bias proved too costly to ignore. The cases will proceed without her, but the broader debate over Delaware’s dominance in business litigation is far from over.


Elon Musk

Elon Musk has generous TSA offer denied by the White House: here’s why

Musk stepped in on March 21 via a post on X, writing: “I would like to offer to pay the salaries of TSA personnel during this funding impasse that is negatively affecting the lives of so many Americans at airports throughout the country.”


Gage Skidmore, CC BY-SA 4.0, via Wikimedia Commons

Tesla and SpaceX CEO Elon Musk made a generous offer to pay the salaries of Transportation Security Administration (TSA) employees last week, but the offer was denied by the White House.

In a striking display of private-sector initiative clashing with federal bureaucracy, the White House has turned down an offer from Elon Musk to personally cover the salaries of TSA officers amid an ongoing partial government shutdown. The rejection, reported last Wednesday by multiple outlets, highlights the legal and political hurdles facing unconventional solutions to Washington’s funding gridlock.

The impasse began weeks ago when Congress failed to pass funding for the Department of Homeland Security (DHS), leaving TSA employees, essential workers who screen millions of travelers daily, without paychecks while still required to report for duty.

Frustrated travelers have endured record-long security lines at major airports, with reports of chaos and delays rippling across the country.


The rejection, however, was not without reason.


White House spokesperson Abigail Jackson responded on behalf of the Trump administration, expressing appreciation for Musk’s gesture.

However, insurmountable legal obstacles would prevent Musk from doing so. Jackson said:

“We greatly appreciate Elon’s generous offer. This would pose great legal challenges due to his involvement with federal government contracts.”

Musk’s companies hold significant federal contracts, including NASA launches through SpaceX and potential Defense Department work, raising concerns about conflicts of interest, ethics rules, and anti-bribery statutes that prohibit private payments to government employees. Administration officials also indicated they expect the shutdown to end soon, making external funding unnecessary.


The episode underscores deeper tensions in Washington. Musk, who has advised on government efficiency efforts and maintains a close relationship with President Trump, has frequently criticized wasteful spending and bureaucratic delays.

His offer came as airport security lines ballooned, drawing public frustration toward both parties. TSA officers, many of whom rely on paychecks to cover mortgages and family expenses, have continued working without compensation, a situation that has drawn bipartisan concern but little immediate resolution.

Critics of the rejection argue it prioritizes red tape over practical relief for frontline workers and travelers. Supporters of the White House position counter that allowing private funding sets a dangerous precedent and could undermine congressional authority over the budget.

The White House eventually came to terms with the TSA on Friday and started paying employees once again, and lines at airports quickly shrank. DHS said that TSA staff would begin receiving paychecks “as early as” today.


Elon Musk

Tesla FSD mocks BMW human driver: Saves pedestrian from near miss

Tesla FSD anticipated a BMW driver’s lane drift before the human behind the wheel could react.


A video posted to r/TeslaFSD this week put a sharp spotlight on Tesla’s Full Self-Driving (FSD) software reacting to pedestrian intent faster than an actual human driver behind the wheel. In the Reddit clip, a BMW driver can be seen rolling through a neighborhood street, completely unaware of a pedestrian stepping out to cross. At the same time, a Tesla driving on FSD had already begun slowing down before the pedestrian even began their attempt to cross the street. The BMW kept moving, prompting the pedestrian to hop back, while the Tesla came to a stop and provided right-of-way for the pedestrian to safely cross.

That gap between what the BMW driver saw and what FSD had already processed is the story. Tesla FSD wasn’t reacting to a person already in the street; rather, it was reading signals that a person was about to enter it, using the pedestrian’s movement and trajectory to infer intent.

Tesla’s FSD is now built on an end-to-end neural network trained on billions of real-world miles, learning to interpret subtle human behavioral cues the same way an experienced human driver does instinctively. The difference is consistency. A human driver distracted for two seconds misses what FSD does not.

Tesla sues California DMV over Autopilot and FSD advertising ruling


Reddit commenters in the thread were blunt about the BMW driver’s failure, with several pointing out that the pedestrian was visible well before the crossing. One response put it plainly: the car on FSD saw the situation developing before the human in the other car had registered that there was a situation at all.

Tesla has published data showing FSD (Supervised) is 54% safer than a human driver, a figure accumulated across billions of miles driven on the system. Elon Musk has said FSD v14 will outperform human drivers by a factor of two to three, and that v15 has “a shot” at a 10x improvement. Pedestrian safety is where the stakes are highest, and where intent prediction closes the gap fastest. At 30 mph, a car covers roughly 44 feet per second. An extra second of awareness from reading a person’s body language rather than waiting for them to step out is often the difference between a near miss and a fatality.
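The 44-feet-per-second figure is easy to verify. A quick sketch of the unit conversion (the 30 mph speed and the one-second reaction window are from the passage above; the conversion factors are standard):

```python
# Convert 30 mph to feet per second: 1 mile = 5280 ft, 1 hour = 3600 s
speed_mph = 30
speed_fps = speed_mph * 5280 / 3600  # 44.0 ft/s

# Distance covered during a hypothetical one-second head start in awareness
reaction_window_s = 1.0
extra_distance_ft = speed_fps * reaction_window_s

print(f"{speed_fps:.1f} ft/s -> {extra_distance_ft:.1f} ft per extra second")
```

At neighborhood speeds, that one-second head start is roughly three car lengths of stopping distance.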

Video and community discussion: “FSD saves man from becoming a pancake. BMW driver nearly flattens him.” posted by u/Qwertygolol in r/TeslaFSD.
