News
Will AI violate human rights? Humanitarian groups are trying to make sure it doesn’t
A group of human rights organizations has signed the Toronto Declaration on Machine Learning, an initiative that calls for regulations designed to protect people from human rights violations caused by artificial intelligence. The declaration was signed on Wednesday, with groups such as Amnesty International, Access Now, Human Rights Watch, and the Wikimedia Foundation pledging their support.
The Toronto Declaration is unusual in the way it draws on international human rights law. According to the declaration, it is imperative for people who are discriminated against by AI-based systems to have an avenue where they can seek reparations, considering that intelligent machines would likely “learn” implicit biases from the information that they are fed. As the declaration’s Preamble states, the emergence of new technologies brings with it the need to develop new ways to protect human rights, particularly for diverse individuals and marginalized groups. The declaration further noted that AI-based technologies could “exacerbate discrimination at scale.”
“Existing patterns of structural discrimination may be reproduced and aggravated in situations that are particular to these technologies – for example, machine learning system goals that create self-fulfilling markers of success and reinforce patterns of inequality, or issues arising from using non-representative or “biased” datasets.
“All actors, public and private, must prevent and mitigate discrimination risks in the design, development and application of machine learning technologies, and ensure that effective remedies are in place before deployment and throughout the lifecycle of these systems.”
Apart from the rights to equality and non-discrimination, the Toronto Declaration also highlights the importance of developing safeguards against possible AI-driven human rights violations in areas such as privacy, data protection, freedom of expression, participation in cultural life, equality before the law, and meaningful access to remedy. The declaration also notes that intelligent computer systems that make decisions and process data can implicate economic, social, and cultural rights, such as the provision of healthcare and education, as well as access to labor and employment.
In order to prevent human rights violations caused by artificial intelligence, the Toronto Declaration has called on developers to foster inclusion, diversity, and equity to ensure that AI-based systems do not develop discriminatory behavior.
“Intentional and inadvertent discriminatory inputs throughout the design, development and use of machine learning systems create serious risks for human rights; systems are for the most part developed, applied and reviewed by actors which are largely based in particular countries and regions, with limited input from diverse groups in terms of race, culture, gender, and socio-economic backgrounds. This can produce discriminatory results.
“Inclusion, diversity, and equity entails the active participation of, and meaningful consultation with, a diverse community to ensure that machine learning systems are designed and used in ways that respect non-discrimination, equality, and other human rights.”
The full text of the Toronto Declaration on Machine Learning is available online.
The inherent risks of hyper-intelligent machines are one of the key reasons behind the creation of OpenAI, a nonprofit organization co-founded by Elon Musk that aims to develop artificial intelligence that is inherently safe for people. While Musk has since stepped down from his post as a board member of OpenAI, the organization has shown signs that it is expanding. Earlier this year, for one, OpenAI announced that it is actively hiring a Recruiting Coordinator, who will be tasked with helping grow the company’s team.
Elon Musk
Brazil Supreme Court orders Elon Musk and X investigation closed
The decision was issued by Supreme Court Justice Alexandre de Moraes following a recommendation from Brazil’s Prosecutor-General Paulo Gonet.
Brazil’s Supreme Federal Court has ordered the closure of an investigation involving Elon Musk and social media platform X. The inquiry had been pending for about two years and examined whether the platform was used to coordinate attacks against members of the judiciary.
According to a report from Agencia Brasil, the investigation conducted by the Federal Police did not find evidence that X deliberately attempted to attack the judiciary or circumvent court orders.
Prosecutor-General Paulo Gonet concluded that the irregularities identified during the probe did not indicate fraudulent intent.
Justice Moraes accepted the prosecutor’s recommendation and ruled that the investigation should be closed. Under the ruling, the case will remain closed unless new evidence emerges.
The inquiry stemmed from concerns that content on X may have enabled online attacks against Supreme Court justices or violated rulings requiring the suspension of certain accounts under investigation.
Justice Moraes had previously taken several enforcement actions related to the platform during the broader dispute involving social media regulation in Brazil.
These included ordering a nationwide block of the platform, freezing Starlink accounts, and imposing fines on X totaling about $5.2 million. Authorities also froze financial assets linked to X and SpaceX through Starlink to collect unpaid penalties and seized roughly $3.3 million from the companies’ accounts.
Moraes also imposed daily fines of up to R$5 million, about $920,000, for alleged evasion of the X ban and established penalties of R$50,000 per day for VPN users who attempted to bypass the restriction.
Brazil remains an important market for X, with roughly 17 million users, making it one of the platform’s larger user bases globally.
The country is also a major market for Starlink, SpaceX’s satellite internet service, which has surpassed one million subscribers in Brazil.
Elon Musk
FCC chair criticizes Amazon over opposition to SpaceX satellite plan
Carr made the remarks in a post on social media platform X.
U.S. Federal Communications Commission (FCC) Chairman Brendan Carr criticized Amazon after the company opposed SpaceX’s proposal to launch a large satellite constellation that could function as an orbital data center network.
Amazon recently urged the FCC to reject SpaceX’s application to deploy a constellation of up to 1 million low Earth orbit satellites that could serve as artificial intelligence data centers in space.
The company described the proposal as a “lofty ambition rather than a real plan,” arguing that SpaceX had not provided sufficient details about how the system would operate.
Carr responded by pointing to Amazon’s own satellite deployment progress.
“Amazon should focus on the fact that it will fall roughly 1,000 satellites short of meeting its upcoming deployment milestone, rather than spending their time and resources filing petitions against companies that are putting thousands of satellites in orbit,” Carr wrote on X.
Amazon has declined to comment on the statement.
Amazon has been working to deploy its Project Kuiper satellite network, which is intended to compete with SpaceX’s Starlink service. The company has invested more than $10 billion in the program and has launched more than 200 satellites since April of last year.
Amazon has also asked the FCC for a 24-month extension, until July 2028, to meet a requirement to deploy roughly 1,600 satellites by July 2026, as noted in a CNBC report.
SpaceX’s Starlink network currently has nearly 10,000 satellites in orbit and serves roughly 10 million customers. The FCC has also authorized SpaceX to deploy 7,500 additional satellites as the company continues expanding its global satellite internet network.
Energy
Tesla Energy gains UK license to sell electricity to homes and businesses
The license was granted to Tesla Energy Ventures Ltd. by UK energy regulator Ofgem after a seven-month review process.
Tesla Energy has received a license to supply electricity in the United Kingdom, opening the door for the company to serve homes and businesses in the country.
According to Ofgem, the license took effect at 6 p.m. local time on Wednesday and applies to Great Britain.
The approval allows Tesla’s energy business to sell electricity directly to customers in Great Britain, as noted in a Bloomberg News report.
Tesla has already expanded similar services in the United States. In Texas, the company offers electricity plans that allow Tesla owners to charge their vehicles at a lower cost while also feeding excess electricity back into the grid.
Tesla already has a sizable presence in the UK market. According to price comparison website U-switch, there are more than 250,000 Tesla electric vehicles in the country and thousands of Tesla home energy storage systems.
Ofgem also noted that Tesla Motors Ltd., a separate entity incorporated in England and Wales, received an electricity generation license in June 2020.
The new UK license arrives as Tesla continues expanding its global energy business.
Last year, Tesla Energy retained the top position in the global battery energy storage system (BESS) integrator market for the second consecutive year. According to Wood Mackenzie’s latest rankings, Tesla held about 15% of global market share in 2024.
The company also maintained a dominant position in North America, where it captured roughly 39% market share in the region.
At the same time, competition in the energy storage sector is increasing. Chinese companies such as Sungrow have been expanding their presence globally, particularly in Europe.