November 19, 2012

IV. Challenges to Compliance with International Humanitarian Law

An initial evaluation of fully autonomous weapons shows that even with the proposed compliance mechanisms, such robots would appear incapable of abiding by the key principles of international humanitarian law. They would be unable to follow the rules of distinction, proportionality, and military necessity and might contravene the Martens Clause. Even strong proponents of fully autonomous weapons have acknowledged that meeting those rules of international humanitarian law raises “outstanding issues” and that the challenge of distinguishing a soldier from a civilian is one of several “daunting problems.”[122] Full autonomy would strip civilians of protections from the effects of war that are guaranteed under the law.

Distinction

The rule of distinction, which requires armed forces to distinguish between combatants and noncombatants, poses one of the greatest obstacles to fully autonomous weapons complying with international humanitarian law. Fully autonomous weapons would not have the ability to sense or interpret the difference between soldiers and civilians, especially in contemporary combat environments.

Changes in the character of armed conflict over the past several decades, from state-to-state warfare to asymmetric conflicts characterized by urban battles fought among civilian populations, have made distinguishing between legitimate targets and noncombatants increasingly difficult. States likely to field autonomous weapons first—the United States, Israel, and European countries—have been fighting predominantly counterinsurgency and unconventional wars in recent years. In these conflicts, combatants often do not wear uniforms or insignia. Instead they seek to blend in with the civilian population and are frequently identified by their conduct, or their “direct participation in hostilities.” Although there is no consensus on the definition of direct participation in hostilities, it can be summarized as engaging in or directly supporting military operations.[123] Armed forces may attack individuals directly participating in hostilities, but they must spare noncombatants.[124]

It would seem that a question with a binary answer, such as whether an individual is a combatant, would be easy for a robot to resolve, but in fact fully autonomous weapons would not be able to make such a determination when combatants are not identifiable by physical markings. First, this kind of robot might not have adequate sensors. Krishnan writes, “Distinguishing between a harmless civilian and an armed insurgent could be beyond anything machine perception could possibly do. In any case, it would be easy for terrorists or insurgents to trick these robots by concealing weapons or by exploiting their sensual and behavioral limitations.”[125]

An even more serious problem is that fully autonomous weapons would not possess the human qualities necessary to assess an individual’s intentions, an assessment that is key to distinguishing targets. According to philosopher Marcello Guarini and computer scientist Paul Bello, “[i]n a context where we cannot assume that everyone present is a combatant, then we have to figure out who is a combatant and who is not. This frequently requires the attribution of intention.”[126] One way to determine intention is to understand an individual’s emotional state, something that can be done only if the actor making the assessment has emotions of its own. Guarini and Bello continue, “A system without emotion … could not predict the emotions or action of others based on its own states because it has no emotional states.”[127] Roboticist Noel Sharkey echoes this argument: “Humans understand one another in a way that machines cannot. Cues can be very subtle, and there are an infinite number of circumstances where lethal force is inappropriate.”[128] For example, a frightened mother may run after her two children and yell at them to stop playing with toy guns near a soldier. A human soldier could identify with the mother’s fear and the children’s game and thus recognize their intentions as harmless, while a fully autonomous weapon might see only a person running toward it and two armed individuals.[129] The former would hold fire, and the latter might launch an attack. Technological fixes could not give fully autonomous weapons the ability to relate to and understand humans, an ability needed to pick up on such cues.

Proportionality

The requirement that an attack be proportionate, one of the most complex rules of international humanitarian law, demands human judgment that a fully autonomous weapon would not have. The proportionality test prohibits attacks in which the expected harm to civilians would be excessive in relation to the anticipated military advantage.[130] Michael Schmitt, professor at the US Naval War College, writes, “While the rule is easily stated, there is no question that proportionality is among the most difficult of LOIAC [law of international armed conflict] norms to apply.”[131] Peter Asaro, who has written extensively on military robotics, describes it as “abstract, not easily quantified, and highly relative to specific contexts and subjective estimates of value.”[132]

Determining the proportionality of a military operation depends heavily on context. The legally compliant response in one situation could change considerably if the facts were altered only slightly. According to the US Air Force, “[p]roportionality in attack is an inherently subjective determination that will be resolved on a case-by-case basis.”[133] It is highly unlikely that a robot could be pre-programmed to handle the infinite number of scenarios it might face, so it would have to interpret a situation in real time. Sharkey contends that “the number of such circumstances that could occur simultaneously in military encounters is vast and could cause chaotic robot behavior with deadly consequences.”[134] Others argue that the “frame problem,” or an autonomous robot’s incomplete understanding of its external environment resulting from software limitations, would inevitably lead to “faulty behavior.”[135] According to such experts, a robot’s difficulty in analyzing so many situations would interfere with its ability to comply with the proportionality test.

Those who interpret international humanitarian law in complicated and shifting scenarios consistently invoke human judgment, rather than the automatic decision making characteristic of a computer. The authoritative ICRC commentary states that the proportionality test is subjective, allows for a “fairly broad margin of judgment,” and “must above all be a question of common sense and good faith for military commanders.”[136] International courts, armed forces, and others have adopted a “reasonable military commander” standard.[137] The International Criminal Tribunal for the Former Yugoslavia, for example, wrote, “In determining whether an attack was proportionate it is necessary to examine whether a reasonably well-informed person in the circumstances of the actual perpetrator, making reasonable use of the information available to him or her, could have expected excessive civilian casualties to result from the attack.”[138] The test requires more than a balancing of quantitative data, and a robot could not be programmed to duplicate the psychological processes in human judgment that are necessary to assess proportionality.

A scenario in which a fully autonomous aircraft identifies an emerging leadership target exemplifies the challenges such robots would face in applying the proportionality test. The aircraft might correctly locate an enemy leader in a populated area, but then it would have to assess whether it was lawful to fire. This assessment could pose two problems. First, if the target were in a city, the situation would be constantly changing and thus potentially overwhelming; civilian cars would drive to and fro, and a school bus might even enter the scene. As discussed above, experts have questioned whether a fully autonomous aircraft could be designed to take into account every movement and adapt to an ever-evolving proportionality calculus. Second, the aircraft would also need to weigh the anticipated advantages of attacking the leader against the number of civilians expected to be killed. Each leader might carry a different weight, and that weight could change depending on the moment in the conflict. Furthermore, humans are better suited to making such value judgments, which cannot be boiled down to a simple algorithm.[139]

Proponents might argue that fully autonomous weapons with strong AI would have the capacity to apply reason to questions of proportionality. Such claims assume the technology is achievable, which, as discussed above, remains in dispute. The development of robotic technology is also likely to outpace that of artificial intelligence, creating a strong likelihood that advanced militaries would introduce fully autonomous weapons to the battlefield before the robotics industry knew whether it could produce strong AI capabilities. Finally, even if a robot could reach the required level of reason, it would still lack other characteristics—such as the ability to understand humans and the ability to show mercy—that are necessary to make wise legal and ethical choices beyond the proportionality test.

Military Necessity

Like proportionality, military necessity requires a subjective analysis of a situation. It allows “military forces in planning military actions … to take into account the practical requirements of a military situation at any given moment and the imperatives of winning,” but those factors are limited by the requirement of “humanity.”[140] One scholar described military necessity as “a context-dependent, value-based judgment of a commander (within certain reasonableness restraints).”[141] Identifying whether an enemy soldier has become hors de combat, for example, demands human judgment.[142] A fully autonomous robot sentry would find it difficult to determine whether an intruder it shot once was merely knocked to the ground by the blast, faking an injury, slightly wounded but able to be detained with quick action, or wounded seriously enough to no longer pose a threat. It might therefore unnecessarily shoot the individual a second time. Fully autonomous weapons are unlikely to be any better at establishing military necessity than they are at assessing proportionality.

Military necessity is also relevant to this discussion because proponents could argue that, if fully autonomous weapons were developed, their use itself could become a military necessity in certain circumstances. Krishnan warns that “[t]echnology can largely affect the calculation of military necessity.”[143] He writes: “Once [autonomous weapons] are widely introduced, it becomes a matter of military necessity to use them, as they could prove far superior to any other type of weapon.”[144] He argues that such a situation could lead to armed conflict dominated by machines, which he believes could have “disastrous consequences.” Therefore, “it might be necessary to restrict, or maybe even prohibit [autonomous weapons] from the beginning in order to prevent a dynamics that will lead to the complete automation of war that is justified by the principle of necessity.”[145] The consequences of applying the principle of military necessity to the use of fully autonomous weapons could be so dire that a preemptive restriction on their use is justified.

Martens Clause

Fully autonomous weapons also raise serious concerns under the Martens Clause. The clause, which encompasses rules beyond those found in treaties, requires that means of warfare be evaluated according to the “principles of humanity” and the “dictates of public conscience.”[146] Both experts and laypeople have expressed a range of strong opinions about whether fully autonomous machines should be given the power to deliver lethal force without human supervision. While there is no consensus, there is certainly a large number of people for whom the idea is shocking and unacceptable. States should take their perspectives into account when determining the dictates of public conscience.

Ronald Arkin, who supports the development of fully autonomous weapons, helped conduct a survey that offers a glimpse into people’s thoughts about the technology. The survey sought opinions from the public, researchers, policymakers, and military personnel, and, as Arkin noted, given the sample size it should be viewed as more descriptive than quantitative.[147] The results indicated that people believed that the less an autonomous weapon was controlled by humans, the less acceptable it was.[148] In particular, the survey determined that “[t]aking life by an autonomous robot in both open warfare and covert operations is unacceptable to more than half of the participants.”[149] Arkin concluded, “People are clearly concerned about the potential use of lethal autonomous robots. Despite the perceived ability to save soldiers’ lives, there is clear concern for collateral damage, in particular civilian loss of life.”[150] Even if such anecdotal evidence does not create binding law, any review of fully autonomous weapons should recognize that for many people these weapons are unacceptable under the principles laid out in the Martens Clause.

Conclusion

To comply with international humanitarian law, fully autonomous weapons would need human qualities that they inherently lack. In particular, such robots would not have the ability to relate to other humans and understand their intentions. They would find it difficult to process complex and evolving situations effectively and could not apply human judgment to resolve subjective tests. In addition, for many people the thought of machines making life-and-death decisions previously in the hands of humans shocks the conscience. This inability to meet the core principles of international humanitarian law would erode legal protections and lead fully autonomous weapons to endanger civilians during armed conflict. The development of autonomous technology should be halted before it reaches the point where humans fall completely out of the loop.

[122] Arkin, Governing Lethal Behavior in Autonomous Robots, pp. 126, 211.

[123] The notion of “direct participation in hostilities” is a complex legal concept for which states have not reached a consensus definition. After wide consultation with experts from militaries, governments, academia, and nongovernmental organizations, the ICRC drafted a controversial set of guidelines for distinguishing among combatants, civilians participating in hostilities, and civilian noncombatants. See ICRC, “Interpretive Guidance on the Notion of Direct Participation in Hostilities under International Humanitarian Law,” May 2009, http://www.icrc.org/eng/resources/documents/publication/p0990.htm (accessed October 4, 2012). See also Jean-Marie Henckaerts and Louise Doswald-Beck, Customary International Humanitarian Law: Volume 1 (Cambridge, UK: ICRC, 2005), pp. 22-24.

[124] Protocol I, art. 51(3); Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II), 1125 U.N.T.S. 609, entered into force December 7, 1978, art. 13(3).

[125] Krishnan, Killer Robots, p. 99.

[126] Marcello Guarini and Paul Bello, “Robotic Warfare: Some Challenges in Moving from Noncivilian to Civilian Theaters,” in Lin, Abney, and Bekey, eds., Robot Ethics, p. 131.

[127] Ibid., p. 138.

[128] Sharkey, “Killing Made Easy,” in Lin, Abney, and Bekey, eds., Robot Ethics, p. 118.

[129] This example is adapted from Guarini and Bello, “Robotic Warfare,” in Lin, Abney, and Bekey, eds., Robot Ethics, p. 130.

[130] Protocol I, art. 51(5)(b).

[131] Michael N. Schmitt, Essays on Law and War at the Fault Lines (The Hague: T.M.C. Asser Press, 2012), p. 190. According to Sharkey, “The military says [calculating proportionality] is one of the most difficult decisions that a commander has to make.” Sharkey, “Killing Made Easy,” in Lin, Abney, and Bekey, eds., Robot Ethics, p. 123.

[132] Asaro, “Modeling the Moral User,” IEEE Technology and Society Magazine, p. 21.

[133] Air Force Judge Advocate General’s Department, “Air Force Operations and the Law: A Guide for Air and Space Forces,” first edition, 2002, http://web.law.und.edu/Class/militarylaw/web_assets/pdf/AF%20Ops%20&%20Law.pdf (accessed October 4, 2012), p. 27.

[134] Noel Sharkey, “Automated Killers and the Computing Profession,” Computer, vol. 40, issue 11 (2007), p. 122.

[135] Krishnan, Killer Robots, pp. 98-99.

[136] ICRC, Commentary on the Additional Protocols of 8 June 1977 to the Geneva Conventions of 12 August 1949, http://www.icrc.org/ihl.nsf/COM/470-750073?OpenDocument (accessed October 31, 2012), pp. 679, 682.

[137] See, for example, Air Force Judge Advocate General’s Department, “Air Force Operations and the Law,” p. 28; Jean-Marie Henckaerts and Louise Doswald-Beck, Customary International Humanitarian Law: Practice, Volume II, Part 1 (Cambridge, UK: ICRC, 2005), pp. 331-333.

[138] Prosecutor v. Stanislav Galić, International Tribunal for the Prosecution of Persons Responsible for Serious Violations of International Humanitarian Law Committed in the Territory of the Former Yugoslavia since 1991 (ICTY), Case No. IT-98-29-T, Judgment and Opinion, December 5, 2003, http://www.icty.org/x/cases/galic/tjug/en/gal-tj031205e.pdf (accessed October 4, 2012), para. 58.

[139] US Army lawyer Major Jeffrey Thurnher recognizes that fully autonomous weapons could face challenges with the proportionality test and suggests they not be used for targets of opportunity. At the same time, he argues they might be appropriate for high value targets because greater “collateral damage” would be permissible in attacks on such targets. Thurnher does not examine the more complex scenario in which a high value target is identified as a target of opportunity. Thurnher, “No One at the Controls,” Joint Forces Quarterly, pp. 81-83.

[140] Françoise Hampson, “Military Necessity,” Crimes of War (online edition).

[141] Benjamin Kastan, “Autonomous Weapons Systems: A Coming Legal ‘Singularity’?” University of Illinois Journal of Law, Technology, and Policy (forthcoming 2013), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2037808 (accessed October 30, 2012), p. 17.

[142] According to Article 41 of Protocol I, an individual is considered hors de combat if: “(a) he is in the power of an adverse Party; (b) he clearly expresses an intention to surrender; or (c) he has been rendered unconscious or is otherwise incapacitated by wounds or sickness, and therefore is incapable of defending himself; provided that in any of these cases he abstains from any hostile act and does not attempt to escape.” Protocol I, art. 41(2).

[143] Krishnan, Killer Robots, p. 91.

[144] Ibid.

[145] Ibid., p. 92.

[146] See, for example, Protocol I, art. 1(2).

[147] Arkin, Governing Lethal Behavior in Autonomous Robots, pp. 49, 52.

[148] Ibid., p. 53.

[149] Ibid.

[150] Ibid., p. 55.