In a recent blog post, the leaders of OpenAI, the startup behind ChatGPT, called for the regulation of “superintelligent” artificial intelligence. The document, titled “Governance of Superintelligence,” was written by Greg Brockman, Ilya Sutskever, and Sam Altman.
According to the post’s authors, there is a need for an international regulator with the ability to “inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security,” among other powers. Such a body, they suggest, would be the equivalent of the International Atomic Energy Agency (IAEA) and could reduce the existential risk that superintelligent systems could pose.
“Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations. In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive,” the OpenAI post read.
The authors elaborated on this stance in a later section of the post:
“We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits). Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feel commensurate with other Internet technologies and society’s likely approaches seem appropriate. By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar,” the post read.
Tesla CEO Elon Musk posted a quick response to the AI startup’s post on Twitter, noting that “control matters.” This is unsurprising, as Musk was among the tech leaders who signed an open letter earlier this year calling for a pause in the training of AI systems more powerful than GPT-4. The letter argued that such a pause could be used to jointly develop and implement a set of shared safety protocols.
OpenAI’s blog post can be viewed here.