Thursday, March 12, 2015

Campaign is underway to stop killer robot arms race that could harm efforts to create “Friendly AI”

As anyone who's seen Terminator can tell you, weaponized artificial intelligence is no friend of the future of humanity.  Terminator, of course, is fiction, but the coming wave of “lethal autonomous weapons systems” (LAWS) is fact, and some close observers of their emergence are sounding alarm bells and calling for their prohibition.

An informal “Meeting of Experts” on the subject of LAWS will take place at the United Nations Office at Geneva from April 13 to April 17, 2015.  The meeting's agenda is available online.

Heather Roff, a Visiting Professor at the Josef Korbel School of International Studies and a research associate at the Eisenhower Center for Space and Defense Studies at the United States Air Force Academy, will appear there as an invited expert, along with Stuart Russell, a member of the scientific advisory board of the Future of Life Institute, to make the case for banning weaponized artificial intelligence in the form of lethal autonomous weapons systems, or "killer robots."

In an e-mail to Etopia News, Professor Roff said:

From my perspective, AWS [autonomous weapons systems] have the potential to act as a catalyst towards developing stronger and stronger AI.  The worry, of course, is that this AI will be for lethal purposes, armed with munitions, and not created for beneficial purposes for humankind.  States may feel the need to engage in an AI arms race if they see any one state dominating the technological developments on AWS, thus hastening the development of an AI that is not created with the correct ends in view.

Professor Roff is a member of the International Committee for Robot Arms Control, an NGO that is an active supporter and member of the Steering Committee of the Campaign to Stop Killer Robots.  A list of the other NGOs involved in the Campaign to Stop Killer Robots is available on the campaign's website; further details about the Meeting of Experts are available from the UN Office at Geneva.

Also attending the Meeting of Experts on LAWS in Geneva will be Mark Gubrud, a physicist with an interest in robot arms control, whose blog features a discussion of what exactly constitutes an "autonomous" lethal weapons system.

Wide-ranging discussions of “the control problem” posed by any “superintelligence” that could emerge from current research and development in artificial intelligence (AI), and efforts to solve it, are already taking place at institutions such as the Machine Intelligence Research Institute (MIRI) and in books such as Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, who is, incidentally, a member of the Scientific Advisory Board of the Future of Life Institute.

Efforts to engineer a “controlled detonation” of the “intelligence explosion” expected from the development of AGI (artificial general intelligence), or “hard AI,” are intended to prevent the emergence of an ASI (artificial superintelligence) with malign effects on humankind.  An AI arms race would mean developing ever more powerful AIs of a type not necessarily aligned with broader, benevolent human interests.  Clearly, more attention needs to be paid to human control of both weapons systems and non-military applications of the increasingly powerful AI now available, before a system is created that is too ubiquitous and too powerful to control at all.
