
OpenAI, the nonprofit research firm co-founded by Elon Musk, announced that the serial tech entrepreneur is stepping down from the organization’s board of directors. According to an official announcement from the nonprofit, Musk’s departure is partly due to Tesla’s AI projects, which could create a conflict of interest for the CEO.

Musk’s departure from OpenAI’s board does not mean that he is severing ties with the nonprofit, however. In a blog post about its new supporters, the research firm asserted that the Tesla CEO will stay on as a benefactor and advisor to the organization.

“Elon Musk will depart the OpenAI Board but will continue to donate and advise the organization. As Tesla continues to become more focused on AI, this will eliminate a potential future conflict for Elon.”

As Tesla continues to evolve its Autopilot suite of features and aims to complete its first coast-to-coast fully autonomous drive this year, the Silicon Valley electric carmaker is said to be working on its own AI-based chips that will power the company’s future fleet of driverless cars. Musk revealed the effort during a machine learning conference held last year, telling attendees that Tesla is developing specialized AI hardware that will be the “best in the world.” According to The Register, Musk said, “I wanted to make it clear that Tesla is serious about AI, both on the software and hardware fronts. We are developing custom AI hardware chips.”

Stepping down from OpenAI’s board seems to be a logical step for Musk, as his focus on developing advanced artificial intelligence systems could be seen as a conflict of interest for a nonprofit that aims to be a watchdog for friendly AI development. Prior to the announcement of Musk’s departure from OpenAI’s board, the nonprofit published a paper discussing the possible dangers of AI-based attacks. According to OpenAI’s study, it is now time for policymakers and individuals to be aware of the ways AI-based systems can be used maliciously, especially considering the ever-evolving artificial intelligence landscape.

To conduct the study, OpenAI collaborated with a number of researchers from other organizations, including the Future of Humanity Institute, the Centre for the Study of Existential Risk, the Center for a New American Security, and the Electronic Frontier Foundation.

Discussing the findings of their research, the authors of the study wrote that while investigations on the benefits of AI are widespread, studies on the dangers of advanced, intelligent machines are relatively few. As the field of artificial intelligence begins to expand and evolve, OpenAI’s researchers believe that threats associated with the technology would also start to grow and develop.

As noted in the study, artificial intelligence can expand existing threats, since scalable AI technology can lower the cost of attacks. With AI, even real-world attacks that currently require human labor could be accomplished by machines capable of thinking within and beyond their programming.

OpenAI’s new paper also discussed the emergence of new threats, which could arise through the use of systems that perform tasks impractical for humans. The researchers also warned that the time might soon come when AI-focused attacks can be finely targeted and challenging to attribute. With these in mind, the study’s authors recommended a series of contingencies that policymakers, as well as those in the research field, can implement to prevent and address scenarios in which intelligent systems are used maliciously.

According to the recently published OpenAI paper, the time is right for policymakers to collaborate with technical researchers to investigate, prevent, and mitigate potential malicious uses of artificial intelligence. OpenAI also advised engineers and researchers to acknowledge the dual-use nature of their work, allowing misuse-related considerations to be part of their research priorities. Furthermore, the nonprofit called for more mature methods when addressing AI’s dual-use, especially among stakeholders and domain experts involved in the field.

In conclusion, the OpenAI researchers and their peers admitted that while uncertainties remain in the AI industry, it is almost certain that artificial intelligence will play a huge role in the landscape of the future. With this in mind, a three-pronged approach — consisting of digital security, physical security, and political security — would be a great way to prepare for the upcoming use and possible misuse of artificial intelligence.

Co-founded by Tesla and SpaceX CEO Elon Musk back in 2015, OpenAI is a nonprofit research firm that aims to create and distribute safe artificial general intelligence (AGI) systems. As we noted in a previous report, OpenAI seems to be giving clues that it is ramping up its activity this year, as shown in a recent job posting for a Recruiting Coordinator who will be tasked with training and onboarding the company’s new employees.

Simon is an experienced automotive reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday. For stories or tips, or even just to say hello, send a message to his email, simon@teslarati.com, or his handle on X, @ResidentSponge.

Brazil Supreme Court orders Elon Musk and X investigation closed

The decision was issued by Supreme Court Justice Alexandre de Moraes following a recommendation from Brazil’s Prosecutor-General Paulo Gonet.

Gage Skidmore, CC BY-SA 4.0 , via Wikimedia Commons

Brazil’s Supreme Federal Court has ordered the closure of an investigation involving Elon Musk and social media platform X. The inquiry had been pending for about two years and examined whether the platform was used to coordinate attacks against members of the judiciary.

The decision was issued by Supreme Court Justice Alexandre de Moraes following a recommendation from Brazil’s Prosecutor-General Paulo Gonet.

According to a report from Agência Brasil, the investigation conducted by the Federal Police did not find evidence that X deliberately attempted to attack the judiciary or circumvent court orders.

Prosecutor-General Paulo Gonet concluded that the irregularities identified during the probe did not indicate fraudulent intent.

Justice Moraes accepted the prosecutor’s recommendation and ruled that the investigation should be closed. Under the ruling, the case will remain closed unless new evidence emerges.

The inquiry stemmed from concerns that content on X may have enabled online attacks against Supreme Court justices or violated rulings requiring the suspension of certain accounts under investigation.

Justice Moraes had previously taken several enforcement actions related to the platform during the broader dispute involving social media regulation in Brazil.

These included ordering a nationwide block of the platform, freezing Starlink accounts, and imposing fines on X totaling about $5.2 million. Authorities also froze financial assets linked to X and SpaceX through Starlink to collect unpaid penalties and seized roughly $3.3 million from the companies’ accounts.

Moraes also imposed daily fines of up to R$5 million, about $920,000, for alleged evasion of the X ban and established penalties of R$50,000 per day for VPN users who attempted to bypass the restriction.

Brazil remains an important market for X, with roughly 17 million users, making it one of the platform’s larger user bases globally.

The country is also a major market for Starlink, SpaceX’s satellite internet service, which has surpassed one million subscribers in Brazil.

FCC chair criticizes Amazon over opposition to SpaceX satellite plan

Carr made the remarks in a post on social media platform X.

Credit: @SecWar/X

U.S. Federal Communications Commission (FCC) Chairman Brendan Carr criticized Amazon after the company opposed SpaceX’s proposal to launch a large satellite constellation that could function as an orbital data center network.

Carr made the remarks in a post on social media platform X.

Amazon recently urged the FCC to reject SpaceX’s application to deploy a constellation of up to 1 million low Earth orbit satellites that could serve as artificial intelligence data centers in space.

The company described the proposal as a “lofty ambition rather than a real plan,” arguing that SpaceX had not provided sufficient details about how the system would operate.

Carr responded by pointing to Amazon’s own satellite deployment progress.

“Amazon should focus on the fact that it will fall roughly 1,000 satellites short of meeting its upcoming deployment milestone, rather than spending their time and resources filing petitions against companies that are putting thousands of satellites in orbit,” Carr wrote on X.

Amazon has declined to comment on the statement.

Amazon has been working to deploy its Project Kuiper satellite network, which is intended to compete with SpaceX’s Starlink service. The company has invested more than $10 billion in the program and has launched more than 200 satellites since April of last year.

Amazon has also asked the FCC for a 24-month extension, until July 2028, to meet a requirement to deploy roughly 1,600 satellites by July 2026, as noted in a CNBC report.

SpaceX’s Starlink network currently has nearly 10,000 satellites in orbit and serves roughly 10 million customers. The FCC has also authorized SpaceX to deploy 7,500 additional satellites as the company continues expanding its global satellite internet network.

Tesla Energy gains UK license to sell electricity to homes and businesses

The license was granted to Tesla Energy Ventures Ltd. by UK energy regulator Ofgem after a seven-month review process.

Credit: Tesla Energy/X

Tesla Energy has received a license to supply electricity in the United Kingdom, opening the door for the company to serve homes and businesses in the country.

The license was granted to Tesla Energy Ventures Ltd. by UK energy regulator Ofgem after a seven-month review process.

According to Ofgem, the license took effect at 6 p.m. local time on Wednesday and applies to Great Britain.

The approval allows Tesla’s energy business to sell electricity directly to customers in the region, as noted in a Bloomberg News report.

Tesla has already expanded similar services in the United States. In Texas, the company offers electricity plans that allow Tesla owners to charge their vehicles at a lower cost while also feeding excess electricity back into the grid.

Tesla already has a sizable presence in the UK market. According to price comparison website Uswitch, there are more than 250,000 Tesla electric vehicles in the country and thousands of Tesla home energy storage systems.

Ofgem also noted that Tesla Motors Ltd., a separate entity incorporated in England and Wales, received an electricity generation license in June 2020.

The new UK license arrives as Tesla continues expanding its global energy business.

Last year, Tesla Energy retained the top position in the global battery energy storage system (BESS) integrator market for the second consecutive year. According to Wood Mackenzie’s latest rankings, Tesla held about 15% of global market share in 2024.

The company also maintained a dominant position in North America, where it captured roughly 39% market share in the region.

At the same time, competition in the energy storage sector is increasing. Chinese companies such as Sungrow have been expanding their presence globally, particularly in Europe.
