November 19, 2012

VI. Problems of Accountability for Fully Autonomous Weapons

Given the challenges that fully autonomous weapons would pose to compliance with international humanitarian law and the ways they would undermine other humanitarian protections, it is inevitable that such weapons would at some point kill or injure civilians. When civilians are unlawfully killed or injured in armed conflict, people want to see someone held accountable.[168] Accountability in such cases serves at least two functions: it deters future harm to civilians, and it gives victims a sense of retribution.[169] If the killing were done by a fully autonomous weapon, however, the question arises of whom to hold responsible. Options include the military commander, the programmer, the manufacturer, and even the robot itself, but none is satisfactory. Because there is no fair and effective way to assign legal responsibility for unlawful acts committed by fully autonomous weapons, granting such weapons complete control over targeting decisions would undermine yet another tool for promoting civilian protection.

The first option is to hold the military commanders who deploy such weapons responsible for the weapons’ actions on the battlefield.[170] Because soldiers are autonomous beings, however, commanders are held legally responsible for the actions of their subordinates only in very particular circumstances. It would seem similarly unfair to impose liability on commanders for the actions of their fully autonomous weapons. These weapons’ autonomy creates a “responsibility gap,” and it is arguably unjust to hold people “responsible for actions of machines over which they could not have sufficient control.”[171]

In certain situations, under the principle of “command responsibility,” a commander may be held accountable for war crimes perpetrated by a subordinate. The doctrine applies if the commander knew or should have known that the individual planned to commit a crime yet failed to prevent it or to punish the perpetrator after the fact.[172] While this principle seeks to curb violations of international humanitarian law by strengthening commander oversight, it is ill suited to fully autonomous weapons. On the one hand, command responsibility would likely apply if a commander was aware in advance of the potential for unlawful attacks on civilians and still recklessly deployed a fully autonomous weapon; that application would be legally appropriate. On the other hand, a commander who had not programmed the robot might be unable to identify such a threat before deployment. And if the commander realized only once the robot was in the field that it might commit a crime, he or she would be unable to reprogram it in real time to prevent the violation because the weapon was designed to operate with complete autonomy. Furthermore, as will be discussed in greater detail below, a commander cannot effectively punish a robot after it commits a crime. Thus, except in cases of reckless conduct, command responsibility would not apply, and the commander would not be held accountable for the actions of a fully autonomous weapon.

An unlawful act committed by a fully autonomous weapon could be characterized as the result of a design flaw. The notion that a violation is a technical glitch points toward placing responsibility for the robot’s actions on its programmer or manufacturer, but this solution is equally unfair and ineffective. While the individual programmer would certainly lay the foundation for the robot’s future decisions, the weapon would still be autonomous: the programmer could not predict with complete certainty the decisions it might eventually make in a complex battlefield scenario.[173] As Robert Sparrow, a professor of political philosophy and applied ethics, writes, “[T]he possibility that an autonomous system will make choices other than those predicted and encouraged by its programmers is inherent in the claim that it is autonomous.”[174] Holding the programmer accountable, therefore, “will only be fair if the situation described occurred as a result of negligence on the part of the design/programming team.”[175] Furthermore, to be held criminally liable under international humanitarian law, the programmer would have had to cause the unlawful act intentionally.[176] Assuming any miscoding by the programmer was inadvertent or produced unforeseeable effects, there would be no avenue for accountability here.

Some have pointed to the product liability regime as a potential model for holding manufacturers responsible for international humanitarian law violations caused by fully autonomous weapons.[177] If manufacturers could be held strictly liable for flaws in these weapons, they would have an incentive to produce highly reliable weapons in order to avoid liability. Yet the product liability regime also falls short of an adequate solution. First, private weapons manufacturers are not typically punished for how their weapons are used, particularly if they disclose the risks of malfunction to military purchasers up front.[178] Moreover, it is highly unlikely that any company would produce and sell weapons, which are inherently dangerous, knowing it could be held strictly liable for any use that violates international humanitarian law. Second, product liability requires a civil suit, which puts the onus on victims. It is unrealistic to expect civilian victims of war, who are often poverty-stricken and geographically displaced by conflict, to sue a manufacturer in a foreign court, even if legal rules would allow them to recover damages. Thus, the strict liability model would fail to create a credible deterrent for manufacturers or to provide retribution for victims.

Holding accountable any of the actors described above—commanders, programmers, or manufacturers—is not only unlikely to be fair or effective, but it would also do nothing to deter the robots themselves from unlawfully harming civilians. Fully autonomous weapons operate, by definition, free of human supervision, so their actions do not depend on human controllers.[179] They would also lack the emotional capacity to feel remorse if someone else were punished for their actions. Punishing these other actors would therefore do nothing to change robot behavior.

Looking into the future, some have argued that the remaining party—the fully autonomous weapon itself—might be held responsible for the unlawful killing of civilians. Krishnan writes, “At the moment, it would obviously be nonsensical to do this, as any robot that exists today, or that will be built in the next 10-20 years, is too dumb to possess anything like intentionality or a real capability for agency. However, this might change in a more distant future once robots become more sophisticated and intelligent.”[180] If a robot were truly autonomous, it might be punished by being destroyed or having its programming restricted in some way. Merely altering a robot’s software, however, is unlikely to satisfy victims seeking retribution.[181] Furthermore, unless the robot understood that it would be punished for violating the law, its decisions would not be influenced by the threat of accountability.[182]

These proposed methods of accountability would all fail for the same reasons: they would neither effectively deter future violations of international humanitarian law nor provide victims with meaningful retributive justice. Taking human beings out of the loop of robotic decision making would thus eliminate the possibility of real accountability for unlawful harm to civilians, making it all the more important that fully autonomous weapons never be developed or used.

[168] There is also a generally recognized duty to investigate violations of international humanitarian law contained in the Geneva Conventions, the Additional Protocols, and customary international law, although the conventions only specify a duty to prosecute in the case of “grave breaches” or war crimes. See Michael N. Schmitt, “Investigating Violations of International Law in Armed Conflict,” Harvard National Security Journal, vol. 2 (2011), http://www.harvardnsj.com/wp-content/uploads/2011/01/Vol.-2_Schmitt_FINAL.pdf (accessed October 4, 2012), pp. 36-38. For lists of grave breaches, see Fourth Geneva Convention, art. 147; Protocol I, art. 85.

[169] Individual responsibility also stems from foundational notions of just war theory. Indeed, just war principles are formulated to govern individual decision-makers, who must accept responsibility for the deaths they cause in war. Some scholars have gone so far as to say that a state’s ability to attribute moral and legal responsibility to an individual actor is a requirement for fighting a just war. Robert Sparrow, “Killer Robots,” Journal of Applied Philosophy, vol. 24, no. 1 (2007), p. 67.

[170] Ibid., p. 70.

[171] See Andreas Matthias, “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata,” Ethics and Information Technology, vol. 6 (2004), pp. 176, 183 (emphasis in original).

[172] Protocol I, arts. 86(2), 87. See also O’Meara, “Contemporary Governance Architecture Regarding Robotics Technologies,” in Lin, Abney, and Bekey, eds., Robot Ethics, p. 166, n. 3 (citing the International Criminal Tribunal for the Former Yugoslavia).

[173] Sparrow, “Killer Robots,” Journal of Applied Philosophy, pp. 69-70.

[174] Ibid., p. 70.

[175] Ibid., p. 69.

[176] According to international humanitarian law, individuals can only be held liable for grave breaches of the Geneva Conventions if they commit the acts in question “willfully,” i.e., intentionally. See, for example, Protocol I, art. 85(3).

[177] Patrick Lin, George Bekey, and Keith Abney, “Autonomous Military Robotics: Risk, Ethics, and Design,” December 20, 2008, http://ethics.calpoly.edu/ONR_report.pdf (accessed October 4, 2012), pp. 55-56.

[178] Sparrow, “Killer Robots,” Journal of Applied Philosophy, p. 69.

[179] Krishnan, Killer Robots, p. 43.

[180] Ibid., p. 105.

[181] Sparrow, “Killer Robots,” Journal of Applied Philosophy, p. 72.

[182] Some have analogized autonomous robots to child soldiers, as child soldiers have significant decision-making autonomy on the battlefield but also lack the full legal and moral responsibility of adult soldiers. As something less than full moral agents, fully autonomous robots would be similarly capable of taking human life with lethal force but incapable of fully comprehending the consequences of killing civilians, whether deliberately or by accident, making retribution for victims impossible. See Sparrow, “Killer Robots,” Journal of Applied Philosophy, pp. 73-74.