Serial tech entrepreneur Elon Musk and over 100 leaders in the field of artificial intelligence and robotics have come together, calling on the United Nations to take action against the use of autonomous weapons, or ‘killer robots’.
“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” warns the group of tech leaders in an open letter to the UN.
This marks the first time that world leaders in the fields of robotics and AI have collectively issued such a warning to the UN in such great numbers. The open letter follows formal discussions originally scheduled for December 2016, when 123 UN member nations called for a meeting on the topic of lethal autonomous weapons. The meeting, however, was delayed by the UN until August 21, with insufficient funding cited at the time.
The 116 tech leaders, including Tesla and SpaceX chief Elon Musk as well as Mustafa Suleyman of Alphabet’s DeepMind, released the letter, titled ‘An Open Letter to the United Nations Convention on Certain Conventional Weapons’, on Sunday at the International Joint Conference on Artificial Intelligence in Melbourne. In the letter, the group of experts warns the UN that “we do not have long to act”.
“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
Stuart Russell, founder and vice-president of Bayesian Logic, said in a statement accompanying the letter, “Unless people want to see new weapons of mass destruction – in the form of vast swarms of lethal microdrones – spreading around the world, it’s imperative to step up and support the United Nations’ efforts to create a treaty banning lethal autonomous weapons. This is vital for national and international security.”
Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney, added in a statement accompanying the letter, “We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organizations for a UN ban on such weapons, similar to bans on chemical and other weapons.”
This collective stern warning from world leaders in artificial intelligence and robotics against the dangers of ‘killer robots’ validates Musk’s concerns about the impact on human life if AI is left unregulated.
“I have exposure to the most cutting-edge AI and I think people should be really concerned about it,” Musk said in a speech at the National Governors Association meeting in Rhode Island. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.”
Musk’s OpenAI, a nonprofit research group that supports the safe and responsible development of artificial intelligence technology, aims to help regulators devise responsible means of controlling the development of AI and to ensure AI does not become “a fundamental risk to the existence of human civilization”.
“By the time we are reactive, it’s too late,” Musk has continued to warn. “There’s a role for regulators that I think is very important, and I’m against over-regulation for sure, but I think we better get on that with AI.”