A new study from the RAND Corporation suggests that the military adoption of AI-powered weapons could increase the risk of nuclear war. According to the study, smart technologies could undermine long-standing strategic doctrines such as “mutual assured destruction.”
During the Cold War, the condition of mutual assured destruction between the United States and the Soviet Union helped maintain the peace: it was understood that a first-strike attack would invite devastating retaliation against the aggressor. Because of this, countries with advanced militaries had little incentive to take violent actions that could trigger a full-scale war.
With AI weapons in the picture, however, some nations might adopt a first-strike posture during conflicts to counter the advantages conferred by artificial intelligence-powered defense systems, undermining the strategic stability that mutual assured destruction provides.
While the emergence of AI weapons could raise the risk of nuclear war, the RAND study also notes that smart technologies could serve as a means to preserve strategic stability, at least in the long run, as noted in a Science Daily report. Andrew Lohn, one of the study’s authors, explained this in a statement.
“Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes. There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult, and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk,” he said.
While the idea of using bleeding-edge tech for the military might seem frightening, the fields of national defense and artificial intelligence actually have a long shared history. According to Edward Geist, another researcher on the RAND study, AI itself began with military efforts in mind.
“The connection between nuclear war and artificial intelligence is not new; in fact, the two have an intertwined history. Much of the early development of AI was done in support of military efforts or with military objectives in mind,” he said.
In many ways, Geist’s statements ring true. Earlier this month, Michael D. Griffin, the Pentagon’s undersecretary of defense for research and engineering, encouraged the United States to explore emerging fields such as AI to ensure the country’s safety in the years to come. According to Griffin, future skirmishes between rival nations could play out through cyber attacks and AI-driven threats; hence, the US would be wise to pursue AI development now, while the technology is still in its infancy.
Outside the United States, China has already staked out an assertive stance on AI. Just recently, SenseTime, a Chinese AI startup that creates surveillance technology, reached a valuation of $4.5 billion after a funding round led by e-commerce giant Alibaba. In South Korea, KAIST — a DARPA award-winning university — recently found itself on the receiving end of a boycott from the AI research community after it emerged that a number of its researchers were helping a local arms manufacturer develop AI-powered weapons.