Six types of weapons have been banned internationally: poison gas, biological weapons, chemical weapons, blinding lasers, antipersonnel landmines, and cluster munitions. The bans on the latter three were all achieved in the past 21 years, and, in each case, civil society was the driving force in bringing the issue to the forefront and convincing governments to agree to a ban. All three were rooted in the desire to better protect civilians during warfare and its aftermath, and to further strengthen international humanitarian law.
These efforts gave birth to what is now often called “humanitarian disarmament,” which puts top priority on the welfare of civilians through the creation of strong international standards. Humanitarian disarmament has also been characterized by the prominent role of non-governmental organizations (NGOs) and civil society more broadly, as well as partnership between different stakeholders.
The humanitarian disarmament concept dates to 1997, when the International Campaign to Ban Landmines and its then-coordinator were jointly awarded the Nobel Peace Prize for the unprecedented way in which the campaign worked with governments of small and medium-sized countries to establish the Mine Ban Treaty, described by the Nobel Committee as “a convincing example of an effective policy for peace.”
Three prominent humanitarian disarmament campaigns are currently underway. One, led by the International Network on Explosive Weapons (INEW), is aimed at alleviating the impact on civilians of the military practice causing the most civilian casualties and suffering today: the use of explosive weapons in populated areas. In particular, INEW is calling for a halt to the use of explosive weapons with wide-area effects in populated areas.
A second, led by the International Campaign to Abolish Nuclear Weapons (ICAN), is aimed at eliminating the existing weapons that pose the gravest risk to the survival of the planet. ICAN focuses on the humanitarian impact of nuclear weapons and is calling for a new international treaty banning the weapons.
The third effort, led by the Campaign to Stop Killer Robots, is aimed at tackling the number-one future threat to civilians and the most dangerous military development now underway: fully autonomous weapons systems, or “killer robots.” These future weapons would select and attack targets without any human input or interaction. The Campaign is calling for an international treaty preemptively banning their development, production, and use.
Autonomous weapons are becoming widely acknowledged as the third revolution in warfare, after the invention of gunpowder and the development of nuclear weapons. In 2013, it took just six months after the launch of the Campaign to Stop Killer Robots for states to agree to begin talks on the weapons at the Convention on Conventional Weapons (CCW). Yet the process is now seen as aiming low and going slow. A movement of various constituencies is converging on key concerns, but action is needed from capitals to push the diplomatic process towards a constructive and binding outcome: a new CCW protocol banning the weapons.
What are killer robots?
At issue are weapons systems that would be capable of selecting targets and using force without any human input or interaction. These are often referred to as “human-out-of-the-loop” weapons. Several terms are used to describe them, but since 2013 countries at the CCW have settled on “lethal autonomous weapons systems.”
Armed drones currently in operation over Afghanistan, Iraq, Yemen, and other countries depend on a person to make the final decision whether to fire on a target. However, the autonomy of these and other weapons that have been deployed or are under development is growing quickly. Newer military drones, such as the MQ-9 Reaper, can already take off, land, and fly to designated points without human intervention.
Low-cost sensors and advances in artificial intelligence are making it increasingly practical to design weapons systems that would target and attack without human intervention. One example is seen in South Korea, which has developed automated stationary gun towers and placed them in the Demilitarized Zone with North Korea. According to one of its developers, the “original version had an auto-firing system but all of our customers asked for safeguards to be implemented. Technologically it wasn’t a problem for us but they were concerned the gun might make a mistake.” The Samsung SGR-1 sentry robot is equipped with sensors that detect movement and, in accordance with its current “on-the-loop” setting, send a signal to a command center where human soldiers determine if the individual identified poses a threat and decide whether to fire.
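To make the two configurations concrete, here is a minimal, purely illustrative Python sketch; the class names, threshold, and decision logic are all invented for this example and do not reflect the SGR-1’s actual software:

```python
from dataclasses import dataclass
from enum import Enum

class ControlMode(Enum):
    """Invented labels for the two configurations described above."""
    ON_THE_LOOP = "human authorizes each engagement"   # the current setting
    OUT_OF_THE_LOOP = "auto-firing"                    # the disabled setting

@dataclass
class Detection:
    """A movement event reported by the tower's sensors."""
    sector: str
    confidence: float  # sensor confidence that the movement is a person

def command_center_review(detection: Detection) -> bool:
    """Stand-in for the human step: soldiers at a command center judge
    whether the detected individual poses a threat. Hard-coded here."""
    print(f"Operators reviewing detection in {detection.sector}...")
    return False  # in this demo, the humans decline to engage

def sentry_decision(detection: Detection, mode: ControlMode) -> str:
    if mode is ControlMode.ON_THE_LOOP:
        # The weapon only signals; the decision to fire stays with people.
        return "FIRE" if command_center_review(detection) else "HOLD"
    # Out of the loop: a sensor threshold alone triggers force, the very
    # configuration that customers reportedly asked to have locked out.
    return "FIRE" if detection.confidence > 0.9 else "HOLD"

print(sentry_decision(Detection(sector="DMZ-7", confidence=0.95),
                      ControlMode.ON_THE_LOOP))   # prints HOLD
```

As the sketch suggests, the difference between the two settings is a single branch in software, which is why campaigners treat the human-authorization step as a policy safeguard rather than a technical barrier.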
If the trend towards autonomy continues, the fear is that humans will start to fade out of the decision-making loop, first retaining only a limited oversight role, and then no role at all. Most acknowledge that fully autonomous weapons do not currently exist, but the capacity to develop them is expected to be available within a matter of years rather than decades. The US affirmed in November 2015 that “there is broad agreement that lethal autonomous weapon systems do not exist” and that the term does not refer to “remotely piloted drones, nor precision-guided munitions or defensive systems.”
According to the International Committee of the Red Cross (ICRC), most existing weapons systems are overseen in real-time by a human operator and tend to be highly constrained in the tasks they are used for, the types of targets they attack, and the circumstances in which they are used. The ICRC describes an autonomous weapon system as one that has autonomy in its “critical functions,” meaning a weapon that can select (i.e. search for or detect, identify, track) and attack (i.e. intercept, use force against, neutralise, damage or destroy) targets without human intervention. The ICRC believes future autonomous weapon systems could have more freedom of action to determine their targets, operate outside tightly constrained spatial and temporal limits, and encounter rapidly changing circumstances. They could include systems operating over extended periods of time with no possibility of human intervention or supervision.
Some describe potential autonomous weapons that would be capable of human-level cognitive tasks, at least for narrow problems, as artificial intelligence weapons. According to Paul Scharre, these sophisticated future systems would exhibit some degree of “learning, adaptation, or evolutionary behavior.” Weapons operating according to more complex, rule-based systems are, in Scharre’s view, autonomous if they “exhibit goal-oriented behavior.”
A system may be autonomous in the sense of operating without human supervision, but its functionality may be quite simple and constrained, leading some to refer to it as an “automatic” or “automated” weapon rather than an autonomous weapon. Automatic systems follow specific pre-programmed commands with little room for variation in a “structured environment,” while autonomous weapons would have more freedom to determine their own actions in an “open and unstructured” environment.
For example, several countries employ weapons defense systems that are programmed to respond automatically to threats from incoming munitions. The Netherlands describes its ship-based Goalkeeper close-in weapons system and Patriot surface-to-air missiles as systems that can to a large extent operate automatically, noting: “The degree to which these systems are set to ‘automatic’ by their operators depends on the security environment and the threat situation. The greater the threat and the shorter the response time, the more automatically these systems need to operate in order to be effective, though they are continuously monitored by their operators.”
Landmines are generally considered an automatic, not autonomous, weapon because they are simple, threshold-based systems with easily predictable, reflexive responses to external input.
Other automatic or automated land-based precursor systems include defensive artillery and ground systems capable of collaborating with other entities. The Iron Dome system used by the Israel Defense Forces senses and intercepts incoming rockets and projectiles using a mechanism programmed to respond automatically. Such automatic weapons are intended for defensive use against materiel rather than personnel, and sometimes allow a human override, operating with a human “on the loop.”
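The automatic-versus-autonomous distinction can also be made concrete in code. The following Python sketch is purely illustrative; the functions, field names, and values are invented and model no real system. It contrasts a single pre-programmed reflex, like the pressure threshold of a landmine, with goal-oriented selection in an open-ended scene:

```python
# Invented toy logic; no real weapons system is modeled.

def automatic_response(pressure_kg: float, threshold_kg: float = 7.0) -> bool:
    """'Automatic' in the sense above: one pre-programmed rule with a
    predictable reflex, like a pressure-triggered landmine."""
    return pressure_kg >= threshold_kg

def autonomous_selection(scene: list[dict]) -> dict | None:
    """'Autonomous' in the sense above: given an open-ended scene, the
    system scores candidates against a goal and chooses its own target."""
    candidates = [obj for obj in scene if obj["estimated_threat"] > 0.5]
    if not candidates:
        return None
    # The machine, not a person, ranks the options and picks one.
    return max(candidates, key=lambda obj: obj["estimated_threat"])

print(automatic_response(12.0))  # the reflex fires: True
print(autonomous_selection([
    {"id": "truck", "estimated_threat": 0.3},
    {"id": "launcher", "estimated_threat": 0.8},
]))  # the system itself selects "launcher"
```

The first function always produces the same response to the same input; the second makes a choice among alternatives, which is what campaigners mean when they say autonomy moves target selection from the operator to the machine.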
The New York Times has reported on the development of Lockheed Martin’s Long Range Anti-Ship Missile (LRASM), which the Department of Defense describes as “semi-autonomous” under the definitions established by its first policy directive on autonomy in weapons systems issued in November 2012. Yet, as John Markoff notes for the NYT, the missile is controversial because, “although a human operator will initially select a target, it is designed to fly for several hundred miles while out of contact with the controller and then automatically identify and attack an enemy ship.”
Can existing law address the killer robots challenge?
Military policy documents, especially from the United States, reflect clear plans to increase the autonomy of weapons systems. In its first report on fully autonomous weapons issued in November 2012, Human Rights Watch identified “precursors” to fully autonomous weapons systems under development in at least six countries.
Drivers towards fully autonomous weapons include the increasing speed and range of air, ground, and naval systems, concerns about insecure communications, the requirement for fewer personnel amid rising costs, the ability of such systems to intervene in difficult-to-access areas, and the need to stay ahead of possible adversaries in terms of technology.
Proponents tout a range of potential benefits, including that fully autonomous weapons could decrease the need for soldiers on the battlefield and thereby save military lives. Such weapons could detect and attack targets with greater speed and precision than weapons directed by human beings, and they would not experience pain, hunger, or exhaustion.
These characteristics could entice militaries to deploy fully autonomous weapons despite their humanitarian drawbacks. Saving soldiers’ lives is a laudable goal, but it must be balanced against the likelihood that these weapons would increase the danger to civilians and shift the burden of conflict onto them.
It is clear that existing international humanitarian law and human rights law will apply to fully autonomous weapons systems, but there are serious concerns that the weapons would not be able to fully comply with those complex and subjective rules, which require human understanding and judgment as well as compassion.
There is no certainty that fully autonomous weapons would have the capacity to distinguish between combatants, who may be targeted, and non-combatants, who may not. It would also be difficult to program fully autonomous weapons to carry out the proportionality test, which prohibits attacks in which expected civilian harm outweighs anticipated military advantage. Because the number of possible battlefield situations is effectively infinite, a machine could not be programmed in advance to strike that balance in every case, and there are serious doubts that the weapons could exercise human-comparable judgment to assess the proportionality of attacks in complex and evolving situations.
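The difficulty can be seen even in a toy formalization. In the deliberately naive Python sketch below, all names and values are invented and no real targeting system is implied; the comparison itself is trivial, while everything legally and ethically contested lies in producing the two input numbers:

```python
# A deliberately naive formalization, invented for illustration only.

def attack_is_proportionate(expected_civilian_harm: float,
                            anticipated_military_advantage: float) -> bool:
    """Returns True if expected harm does not outweigh anticipated
    advantage. Both inputs are on made-up scales: the contested step is
    assigning these numbers, not comparing them."""
    return expected_civilian_harm <= anticipated_military_advantage

# Any fixed scheme for assigning the values would encode one set of human
# judgments, yet each new, unforeseen situation would demand a fresh
# assessment, which is the infinite-variety problem described above.
print(attack_is_proportionate(expected_civilian_harm=0.4,
                              anticipated_military_advantage=0.7))  # True
```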
A robot could not be programmed to deal with every situation it might encounter in the field. So if it unlawfully killed a civilian instead of a soldier, who would be responsible? According to Human Rights Watch, the programmers, manufacturers, and military commanders would all be likely to escape liability. Under criminal law, a military commander who intentionally used a fully autonomous weapon to kill civilians would be held accountable. But it would be unfair and legally impossible to hold commanders accountable in situations where the robot acted in an unpredictable way that they could not foresee or prevent.
The potential for an accountability gap has serious consequences. Without accountability there is no way to deter future violations of international law, and no retribution for victims of past violations.
Scientists find it unlikely that autonomous weapons will meet legal requirements for the use of force due to an “absence of clear scientific evidence that robot weapons have, or are likely to have in the foreseeable future, the functionality required for accurate target identification, situational awareness or decisions regarding the proportional use of force.”
Under Article 36 of the First Additional Protocol to the Geneva Conventions, states parties are obliged to determine whether new weapons and new methods of warfare are compatible with international law. A common understanding appears to be emerging that the acquisition or deployment of autonomous weapons systems is prohibited if the relevant requirements of international law cannot be met.
The Campaign to Stop Killer Robots and others have found that the profound and fundamental questions raised by autonomous weapons replacing humans in the use of force and the taking of human life cannot be left to national weapons reviews alone. The ICRC warns that leaving it up to each state to determine the lawfulness and acceptability of the specific autonomous weapons systems it is developing or acquiring risks inconsistent outcomes, with, for example, some states applying limits to the use of such systems and others prohibiting their use altogether.
Fully autonomous weapons would threaten rights and principles under international human rights law as well as international humanitarian law. Fundamental guarantees, such as the right to life and the right to a remedy—which continue to apply during armed conflict as well as peacetime—could be at risk because the weapons could not be programmed to handle every situation. Fully autonomous weapons would also undermine human dignity, because as inanimate machines they could not understand or respect the value of life, yet they would have the power to determine when to take it away.
Increasingly, the Martens Clause, a customary rule of international humanitarian law that has appeared in international treaties and national military manuals for more than a century, is being cited in the debate over what to do about autonomous weapons. It states, “In cases not covered by this Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.”
Eventually, states may have to determine whether fully autonomous weapons meet the dictates of public conscience, and in making that determination, public views should be taken into account.
Affirming the principle of meaningful human control via a preemptive ban
The matter of whether autonomous weapons are ethically acceptable lies at the core of the debate over them. The fundamental question is not technological, but philosophical: “Will machines be given the power and authority to kill a human being?”
History has shown that for some weapons that significantly threaten non-combatants, including poison gas, biological weapons, chemical weapons, blinding lasers, antipersonnel mines, and cluster munitions, responsible governments have found it necessary to supplement the limits already in the international legal framework. Fully autonomous weapons have the potential to raise a comparable level of humanitarian concern.
Based on current and foreseeable technology, the ICRC says there are serious doubts about the ability of autonomous weapon systems to comply with IHL “in all but the narrowest of scenarios and the simplest of environments.” Because of this, the ICRC says “it seems evident that overall human control over the selection of, and use of force against, targets will continue to be required.”
The Campaign to Stop Killer Robots fundamentally objects to autonomous weapons because it rejects the notion that a machine should be permitted to take a human life, whether on the battlefield or in policing, border control, or any other circumstances. It seeks to affirm the positive principle that weapons systems, military attacks, and the use of violent force in policing should always be kept under meaningful human control. To that end, the Campaign calls on states and others to endorse its call for a preemptive ban on the development, production, and use of fully autonomous weapons systems.
The global coalition of non-governmental organizations does not oppose military use of autonomy and technologies relating to artificial intelligence, but it draws the line at the development of machines that could select and fire on targets without human intervention. Research and development activities should be banned if they are directed at technology that can be used exclusively for fully autonomous weapons or that is explicitly intended for use in such weapons.
A preemptive prohibition is both possible and desirable. The Convention on Conventional Weapons Protocol IV banning blinding lasers provides precedent for the creation of a protocol on lethal autonomous weapons systems.
The campaign has urged states to ban rather than regulate fully autonomous weapons because a complete prohibition is clearer and easier to enforce than partial regulations or restrictions and eliminates room for differing interpretations. A complete prohibition also creates greater stigma against the weapons and discourages their proliferation. Mere regulation would still allow governments to obtain the weapons, and the temptation to use them illegally would remain.
The serious international security and proliferation concerns relating to fully autonomous weapons include the real danger that if even one nation acquires these weapons, others may feel compelled to follow suit in order to defend themselves and avoid falling behind in a robotic arms race.
There is also the prospect that fully autonomous weapons could be acquired by repressive regimes or non-state armed groups with little regard for the law. These weapons could be perfect tools of repression and terror for autocrats.
Another proliferation concern is that such weapons would lower the threshold for armed attack, making resort to war more likely, as decision-makers would not have the same concerns about loss of soldiers’ lives. This could have a destabilizing effect on international security.
Since 2013, a broad range of individuals, organizations, and states have come forward to endorse the call for a preemptive ban on fully autonomous weapons systems, including:
• Bolivia, Cuba, Ecuador, Egypt, Ghana, the Holy See, Pakistan, State of Palestine, and Zimbabwe.
• The European Parliament, which adopted a resolution by a vote of 534–49 calling for a ban on “development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention.”
• Twenty-one Nobel Peace Laureates, who expressed concern that “leaving the killing to machines might make going to war easier.”
• More than 120 religious leaders and organizations of various denominations, who view the weapons as “an affront to human dignity and to the sacredness of life.”
• More than 270 scientists in 37 countries, who warned that interactions between devices controlled by complex algorithms “could create unstable and unpredictable behavior … that could initiate or escalate conflicts, or cause unjustifiable harm to civilian populations.”
• More than 3,000 artificial intelligence (AI) and robotics experts, who said they have “no interest in building AI weapons and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.”
• Canadian technology company Clearpath Robotics, which said it had “chosen to value our ethics over potential future revenue” by pledging not to manufacture “weaponized robots that remove humans from the loop.”
• The ethics council of the $830 billion Norwegian government pension fund, which is looking at whether investments in companies making lethal autonomous weapons systems would violate the fund’s fundamental humanitarian principles.
• The UN Special Rapporteur on extrajudicial, summary or arbitrary executions and the Special Rapporteur on the rights to freedom of peaceful assembly and of association, who in a report recommended, “Autonomous weapons systems that require no meaningful human control should be prohibited.”
Finding a faster way forward toward an outcome
In May 2013, states first deliberated on autonomous weapons in a multilateral forum, when a UN special rapporteur presented a report to the Human Rights Council in Geneva calling for an immediate moratorium on the weapons. At the Convention on Conventional Weapons, also in Geneva, representatives from more than 85 countries met in May 2014 and April 2015 for a total of nine days to discuss lethal autonomous weapons systems, and a third meeting took place in April 2016.
Various UN agencies, international organizations, the Red Cross, and civil society groups coordinated by the Campaign to Stop Killer Robots are also participating throughout these deliberations and other meetings, which have helped sharpen the focus on lethal autonomous weapons systems and the concerns raised by them.
The process is helping to establish a common base of knowledge from which some nascent norms or principles are finding broad agreement, most notably the notion that human control must be retained over the operation of weapons systems. However, the informal talks are being criticized for “treading water” as they lack ambition, demonstrate no sense of urgency, and reflect the CCW’s usual “go slow and aim low” approach.
The Campaign to Stop Killer Robots calls on states to agree to a more formal and substantive process, with up to four weeks of work dedicated in 2017, when they decide on the way forward at the end of the year. The CCW Review Conference in December 2016 should agree to begin formal negotiations and aim to complete them within one or two years by producing a preemptive prohibition on lethal autonomous weapons systems.
Pakistan—the first and probably most ardent supporter of the call for a ban—will serve as president of the CCW Review Conference, where states must take crucial decisions on the process going forward and desired outcome.
Sustained and substantive national campaigning is required to make headway in reining in autonomous weapons, including legislative scrutiny in the form of debate, questions, and inquiries, and new legislation to ban the weapons. As Canada’s new Minister of National Defence Harjit Sajjan commented at the Halifax International Security Forum, legislative oversight is required as technology advances so that the burden of deciding the ethical dilemmas associated with its use is not placed on service men and women.
The US is the only country with a detailed written policy on fully autonomous weapons, which it says “neither encourages nor prohibits” development of lethal autonomous weapons systems.
The US has described fully autonomous weapons as a potential “force multiplier.” Recent media interviews with top Pentagon officials provide some disturbing signs that the five-year policy may soon be replaced by guidelines allowing acquisition of human-out-of-the-loop weapons systems. However, officials also express support for retaining human control by stating, for example, that “we will not delegate lethal authority to a machine to make a decision.”
Israel has also spoken about the desirability and potential benefits of autonomous weapons systems, urging other countries to “keep an open mind” because it is “difficult” to foresee how developments may look 10 to 30 years from now.
No other state has publicly expressed interest in pursuing such systems, and several have explicitly stated that they are not developing them and have no plans to do so. But there have been extensive discussions about the potential benefits of such weapons, and many states are concerned that potential enemies will acquire fully autonomous weapons, making their continued development seem inevitable.
The trend toward ever-greater autonomy in warfare has accelerated in recent years. A strong, inclusive, and unified effort is needed now to prevent the unconstrained development of fully autonomous weapons systems.