Will AI violate human rights? Humanitarian groups are trying to make sure they don’t

A group of human rights organizations has signed the Toronto Declaration on Machine Learning, an initiative that calls for regulations designed to protect people from human rights violations caused by artificial intelligence. The declaration was signed on Wednesday, with groups such as Amnesty International, Access Now, Human Rights Watch, and the Wikimedia Foundation pledging their support.

The Toronto Declaration is notable for the way it draws on international human rights law. According to the declaration, it is imperative that people discriminated against by AI-based systems have an avenue through which they can seek reparations, considering that intelligent machines would likely “learn” implicit biases from the information they are fed. As the declaration’s preamble states, the emergence of new technologies creates the need to develop new ways to protect human rights, particularly for diverse individuals and marginalized groups. The declaration further notes that AI-based technologies could “exacerbate discrimination at scale.”

“Existing patterns of structural discrimination may be reproduced and aggravated in situations that are particular to these technologies – for example, machine learning system goals that create self-fulfilling markers of success and reinforce patterns of inequality, or issues arising from using non-representative or “biased” datasets.

“All actors, public and private, must prevent and mitigate discrimination risks in the design, development, and application of machine learning technologies, and ensure that effective remedies are in place before deployment and throughout the lifecycle of these systems.”

Apart from the rights to equality and non-discrimination, the Toronto Declaration also highlights the importance of developing safeguards against possible AI-driven human rights violations in areas such as privacy, data protection, freedom of expression, participation in cultural life, equality before the law, and meaningful access to remedy. The declaration also notes that intelligent computer systems that make decisions and process data can implicate economic, social, and cultural rights, such as the provision of healthcare and education, as well as access to labor and employment.

In order to prevent human rights violations caused by artificial intelligence, the Toronto Declaration has called on developers to foster inclusion, diversity, and equity to ensure that AI-based systems do not develop discriminatory behavior.

“Intentional and inadvertent discriminatory inputs throughout the design, development, and use of machine learning systems create serious risks for human rights; systems are for the most part developed, applied and reviewed by actors which are largely based in particular countries and regions, with limited input from diverse groups in terms of race, culture, gender, and socio-economic backgrounds. This can produce discriminatory results.

“Inclusion, diversity, and equity entail the active participation of, and meaningful consultation with, a diverse community to ensure that machine learning systems are designed and used in ways that respect non-discrimination, equality, and other human rights.”

The full text of the Toronto Declaration on Machine Learning can be accessed here.

The inherent risks of hyper-intelligent machines are among the key reasons behind the creation of OpenAI, a nonprofit organization co-founded by Elon Musk and aimed at developing artificial intelligence that is inherently safe for people. While Musk has since stepped down from his post as a board member of OpenAI, the organization has shown signs that it is expanding. Earlier this year, for one, OpenAI announced that it is actively hiring a Recruiting Coordinator, who will be tasked with helping grow the company’s team.
