Remarks to the Association for the Advancement of Artificial Intelligence On Banning Fully Autonomous Weapons

By Stephen Goose, Director, Human Rights Watch Arms Division

Remarks made at AAAI-15, the 29th AAAI Conference on Artificial Intelligence, in Austin, Texas

I thank AAAI and its members for the invitation to be here, and I very much appreciate that you have chosen this year to highlight the crucial issue of autonomous weapons, and in particular fully autonomous weapon systems, also known as lethal autonomous weapons systems, or more colloquially, killer robots. In my view, you could not have selected a more timely or more important topic. Fully autonomous weapons would not just be a new type of armament, but rather a new method of warfare, one that would radically change how wars are fought, and NOT to the betterment of humankind.

Some believe that it is both inevitable and desirable that armed forces will one day field fully autonomous weapon systems.  These would be weapons systems that, once initiated, would be able to select and engage targets without any further human intervention. There would no longer be a human operator deciding what to fire at, and when to shoot. The weapon system itself would make those determinations. 

While there is no doubt that greater autonomy can have military, and even humanitarian, advantages, it is the belief of Human Rights Watch and many others that full autonomy is a step too far. Fully autonomous weapons would cross a fundamental moral and ethical line by ceding life-and-death decisions on the battlefield to machines. There are serious questions about whether fully autonomous weapons would be capable of complying with the principles of international humanitarian law during combat, or international human rights law during law enforcement operations. There are also serious societal and proliferation concerns. In addition, many, including scientists and military leaders, have raised a host of technical and operational concerns.

We are convinced that these weapons would pose grave dangers to civilians – and to soldiers – in the future. Taken together, this multitude of concerns has led to the call for a preemptive prohibition on fully autonomous weapon systems – a new international treaty that would ban the development, production, and use of fully autonomous weapons, and require that there always be meaningful human control over targeting and kill decisions.

While there are still many doubters at this stage, my experience leads me to conclude not only that a preemptive ban is warranted, but that it is the only approach that would successfully address the potential dangers of fully autonomous weapons, AND that a ban is achievable. I believe there WILL be a legally binding prohibition, but we need the involvement, advice, and expertise of the AI community both to get there and to ensure it is the most effective ban possible.

A Rocketing Issue

Over the past two years, fully autonomous weapons have rocketed to the top ranks of concern in the field of disarmament and arms control, or what is now often called humanitarian disarmament. Let me run through some of the global developments since that time.

A Human Rights Watch report in November 2012 calling for a preemptive prohibition helped spur the first widespread public debate on the issue.

In April 2013, an international coalition of nongovernmental organizations (NGOs) launched the Campaign to Stop Killer Robots, calling for a preemptive ban. The Campaign, coordinated by Human Rights Watch, now consists of some 50 NGOs in roughly two dozen countries. It is modeled on the successful campaigns that led to international bans on antipersonnel landmines, cluster munitions, and blinding lasers. Strong civil society involvement has been cited as the crucial factor in the achievement of those three treaties.

Since the Campaign's launch, the UN special rapporteur on extrajudicial killings has issued a report that echoed many of the Campaign's concerns and called on governments to adopt national moratoria on the weapons pending the outcome of international discussions.

The European Parliament passed a resolution that calls for a ban. About two dozen Nobel Peace Laureates issued a joint statement in favor of a ban. More than 70 prominent faith leaders from around the world released a statement calling for a ban.

The Secretary-General of the United Nations and the head of the UN Office of Disarmament Affairs, as well as the International Committee of the Red Cross, have expressed deep concerns about the development of fully autonomous weapons.

The University of Massachusetts conducted a poll in the US, and found that of those aware of the issue, two-thirds opposed fully autonomous weapons, with the strongest opposition coming from those who identified themselves as military (either active duty, retired, or family).

A Canadian robotics company, Clearpath, became the first company to support the ban and to declare that it would not work toward the development of fully autonomous weapons systems.

And, hopefully of great interest to this audience, more than 270 prominent scientists have signed a statement calling for a ban. This was organized by the International Committee for Robot Arms Control, which was founded in 2009 by roboticists, ethicists, and others. I hope that each of you will take a look at that statement and consider adding your name to it. It of course goes further than the recent open letter signed by hundreds of AI professionals, but is certainly consistent with that letter's call that AI research should be “beneficial” to humanity.

So we already see widespread support for a prohibition from many different communities and institutions.

Governments have become seized with the issue, though few have yet articulated formal positions, much less publicly called for a ban. As of late 2012, virtually no government had made a public statement about fully autonomous weapons. Now, more than fifty nations have made statements, with all agreeing that it is an issue that must be addressed.

Most importantly, the more than 100 States Parties to the Convention on Conventional Weapons (CCW) agreed in November 2013 to take up the issue in 2014.  In the diplomatic world, that was moving at lightning speed. The first CCW talks occurred in May 2014, with more in November when states agreed to continue discussions this year. During the sessions, many have expressed grave reservations about what in the CCW are called lethal autonomous weapons systems, and none have explicitly acknowledged pursuit of such systems. The concept of a need for meaningful human control has emerged as a central theme.

Still, the technology has been advancing rapidly, and diplomacy has a lot of catching up to do.

What Are We Talking About?

Sometimes people ask, just what are we talking about? There are no agreed-upon definitions for fully autonomous weapons, or for concepts like meaningful human control. It is neither possible nor desirable at this point to state definitively what must be banned and what should not be. Those are the things that get worked out over years of experts meetings and diplomatic negotiations. For international treaties, definitions and potential exclusions are the last things to be agreed.

We have stressed that we are concerned with potential future weapons systems, not those that exist today. Even so, we need to examine today's systems, which already exercise a great deal of autonomy, to determine whether and how they maintain meaningful human control and provide adequate safeguards for civilian populations.

It is important to emphasize that the Campaign to Stop Killer Robots is not opposed to advances in artificial intelligence or military robotics, or even to the advance of autonomy and AI in weapons systems, as both military and humanitarian advantages could be achieved if such advances are pursued and implemented properly. The Campaign's call for a ban on development of fully autonomous weapons is not intended to impede broader research into military robotics, weapons autonomy, or full autonomy in the civilian sphere.

It is our view, however, that research and development activities should be banned if they are directed at technology that can only be used for fully autonomous weapons or that is explicitly intended for use in such weapons.

Let’s look in turn at key objections to fully autonomous weapons.

Moral, Ethical, and Societal Objections

Perhaps the most powerful objection to fully autonomous weapon systems is moral and ethical in nature. Simply put, many feel that it is morally wrong to give machines the power to decide who lives and who dies on the battlefield, or for that matter in law enforcement operations.

Giving such responsibilities to machines in such circumstances has been called the ultimate attack on human dignity. The notion of allowing compassionless robots to make decisions about the application of violent force is repugnant to many. Compassion is a key check on the killing of other human beings. Fully autonomous weapons have been called unethical by their very nature, and giving machines the decision-making power to kill has been called the ultimate demoralization of war.

Of course, not everyone shares those moral and ethical points of view. But, our experience at Human Rights Watch has shown that most people have a visceral negative reaction to the notion of fully autonomous weapons. Most find it hard to believe that such a thing would even be contemplated. There is a provision in international law that takes into account this notion of general repugnance on the part of the public: the Martens Clause, which is articulated in the Geneva Conventions and elsewhere. Under the Martens Clause, fully autonomous weapons should comply with the “principles of humanity” and the “dictates of public conscience.” They would not appear to be able to do either.

Legal Objections and Accountability

Apart from the Martens Clause, there are serious doubts that fully autonomous weapons could comply with basic principles of international humanitarian law (IHL), such as distinction and proportionality. Technical experts and international lawyers agree that the current state of technology would not allow such weapons to meet the requirements of international humanitarian law. There is of course no way of predicting what technology might produce many years from now, and I realize I am talking to a group of people who have a deep belief in the future possibilities of artificial intelligence, but there are strong reasons to be skeptical about compliance with IHL and human rights law in the future.

Much of the skepticism revolves around whether robots could replicate the innately human qualities of judgment and intuition necessary to comply with IHL, including assessing an individual's intentions and making other subjective determinations. Compliance with the rule of proportionality, which prohibits attacks in which expected civilian harm outweighs anticipated military gain, would be especially difficult. Proportionality relies heavily on situational and contextual factors, which could change considerably with a slight alteration of the facts. The US Air Force has called it “an inherently subjective determination.”

There are also serious concerns about the lack of accountability when fully autonomous weapons fail to comply with IHL in any particular engagement. Holding a human responsible for the actions of a robot that is acting autonomously could prove difficult, whether that human is the operator, a superior officer, the programmer, or the manufacturer. This “accountability gap” is a key objection to fully autonomous weapons, and one on which HRW will release a report shortly.

Proliferation Concerns

As militaries move toward ever-greater autonomy in weapons systems, the likelihood of advancing to full autonomy increases – unless checked now. There is the real danger that if even one nation acquires these weapons, others may feel they have to follow suit in order to defend themselves and to avoid falling behind in a robotic arms race.

There is also the prospect that fully autonomous weapons could be acquired by repressive regimes or non-state armed groups with little regard for the law.  These weapons could be perfect tools of repression for autocrats seeking to strengthen or retain power.

Another type of proliferation concern is that such weapons would increase the likelihood of armed attacks, making resort to war more likely, as decision-makers would not have the same concerns about loss of soldiers’ lives. This could have an overall destabilizing effect on international security. Such weapons could also shift the burden of war from professional militaries to civilians, as the soldier would largely be taken off the battlefield, while conflicts would still be fought in or near populated areas.

Technical Problems

The US Department of Defense and others have cited a multitude of technical issues that would have to be overcome before fielding fully autonomous weapons. I will not go into these now, due to time constraints, and with the recognition that these are issues that many in this room likely believe could be overcome with time. But I will note that some roboticists have stressed that robot-on-robot engagements in particular are inherently unpredictable and could create unforeseeable harm to civilians.

Why a Ban is the Best Solution

While nearly everyone now expresses concern about killer robots, many – for the time being – oppose a preemptive and comprehensive prohibition, as called for by the Campaign to Stop Killer Robots. Some say it is too early for such a call, and that we should wait to see where the technology takes us. Some say that restrictions would be more appropriate than a ban. Some say that existing international humanitarian law will be sufficient to address the matter, perhaps with some additional guidance in the form of identifying “best practices.” Some have also argued for acquiring the weapons but limiting their use to specific situations and missions.

The notion of a preemptive treaty is not new. It has been done. The best example is the 1995 CCW protocol that bans blinding laser weapons. After initial opposition from the US and others, states came to agree that the weapons would pose unacceptable dangers to soldiers and civilians. The Martens Clause was also widely invoked to justify the ban, with the weapons seen as counter to the dictates of public conscience. Nations also came to recognize that their militaries would be better off if no one had the weapons than if everyone had them. These same rationales apply to fully autonomous weapons.

More broadly, the point of a preemptive treaty is to prevent future harm. Given all the dangers and concerns associated with fully autonomous weapons, it would be irresponsible to take a “wait and see” approach and deal with the issue only after the harm has already occurred.

While some rightly point out that there is no “proof” that there cannot be a technological fix to the problems of fully autonomous weapons, it is equally true that there is no proof that there can be. Given the scientific uncertainty that exists, and given the potential benefits of a new legally binding instrument, the precautionary principle in international law is directly applicable. The principle holds that when there is uncertainty as to whether an act will be harmful, the party committing the act bears the burden of proving that it will not be; the international community need not wait for scientific certainty, but could and should take action now. Today's scientific uncertainty, combined with the potential threat to the civilian population from fully autonomous weapons, provides ample reason to undertake preventive measures in the form of an absolute ban.

Fully autonomous weapons represent a new category of weapons that could change the way wars are fought and pose serious risks to civilians. As such, they demand new, specific law that clarifies and strengthens existing IHL. There are numerous examples of weapons treaties designed to strengthen IHL.

A specific treaty banning a weapon is also the best way to stigmatize the weapon. Experience has shown that stigmatization has a powerful effect even on those who have not yet formally joined the treaty, inducing them to comply with its key provisions, such as the prohibitions on use and production, lest they risk international condemnation.

If, instead of a ban, a regulatory approach merely restricted use to certain locations or specific purposes, then once the weapons had entered national arsenals, countries would likely be tempted to use them in other, possibly inappropriate, ways in the heat of battle or in dire circumstances.

Conclusion

The development, production, and use of fully autonomous weapons should be prohibited in the near future, in order to protect civilians and soldiers. If the ban is not embraced soon, it will be too late. The artificial intelligence community has an important role to play in bringing this about. The AAAI charter calls for the promotion of the “responsible use” of AI. There would be no better way to promote responsible use than to oppose the development of fully autonomous weapons. An AAAI statement to that effect would be extremely influential. This is not a political issue to be avoided in the name of pure science; it is an issue of humanity for which we are all responsible.
