November 19, 2012

III. International Humanitarian Law Compliance Mechanisms

Proponents of fully autonomous weapons have recognized that such new robots would have to comply with international humanitarian law. Supporters have therefore proposed a variety of compliance mechanisms designed to prevent violations of the laws of war; two of these mechanisms are discussed below.[106]

Arkin’s “Ethical Governor”

Ronald Arkin, a roboticist at the Georgia Institute of Technology, has articulated the “most comprehensive architecture” for a compliance mechanism.[107] Recognizing the importance of new weapons meeting legal standards, Arkin writes, “The application of lethal force as a response must be constrained by the LOW [law of war] and ROE [rules of engagement] before it can be employed by the autonomous system.”[108] He argues that such constraints can be achieved through an “ethical governor.”

The ethical governor is a complex proposal that would essentially require robots to follow a two-step process before firing. First, a fully autonomous weapon with this mechanism must evaluate the information it senses and determine whether a proposed attack is prohibited under international humanitarian law and the rules of engagement. If an attack violates a constraint, such as the requirement to distinguish between combatants and noncombatants, it cannot go forward. If it violates no constraint, it may still proceed only if attacking the target is required under operational orders.[109] The evaluation at this stage consists of binary yes-or-no answers.
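By way of illustration only, the sketch below shows how such a binary first stage might be structured in code. The constraint set, data fields, and function names here are hypothetical and greatly simplified; they are not drawn from Arkin's published architecture.

```python
# Hypothetical sketch of the binary first stage of an "ethical governor":
# every hard constraint must report "no violation" before an attack is
# considered, and the attack must also be required by operational orders.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProposedAttack:
    target_id: str
    target_is_combatant: bool          # assumed output of a perception/classification system
    required_by_operational_orders: bool

# Each constraint returns True if the proposed attack VIOLATES it.
Constraint = Callable[[ProposedAttack], bool]

def violates_distinction(attack: ProposedAttack) -> bool:
    """Prohibit any attack on a target not identified as a combatant."""
    return not attack.target_is_combatant

# Additional law-of-war and rules-of-engagement constraints would be added here.
HARD_CONSTRAINTS: List[Constraint] = [violates_distinction]

def step_one_permits(attack: ProposedAttack) -> bool:
    """Binary yes/no gate: no constraint may be violated, and the attack
    must be required under operational orders to proceed."""
    if any(constraint(attack) for constraint in HARD_CONSTRAINTS):
        return False
    return attack.required_by_operational_orders

# Example: a target classified as a noncombatant is always refused.
print(step_one_permits(
    ProposedAttack("t1", target_is_combatant=False,
                   required_by_operational_orders=True)))  # False
```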

In the second step, the autonomous robot must assess the attack under the proportionality test.[110] The ethical governor quantifies a variety of criteria, such as the likelihood of a militarily effective strike and the possibility of damage to civilians or civilian objects, based on technical data. Then it uses an algorithm that combines statistical data with “incoming perceptual information” to evaluate the proposed strike “in a utilitarian manner.”[111] The robot can fire only if it finds the attack “satisfies all ethical constraints and minimizes collateral damage in relation to the military necessity of the target.”[112]
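As a purely illustrative sketch of this second stage, the snippet below quantifies strike options and combines the estimates in a utilitarian fashion. The scoring formula, the permissibility threshold, and all field names are assumptions introduced for illustration, not Arkin's actual algorithm; a real governor would derive its estimates from technical data and incoming perceptual information.

```python
# Hypothetical sketch of the proportionality stage: quantified estimates
# are weighed "in a utilitarian manner." The formula below, which treats
# civilian harm as permissible only if it does not exceed the anticipated
# military advantage, is an illustrative assumption.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StrikeOption:
    weapon: str
    p_military_effect: float       # estimated probability of a militarily effective strike
    expected_civilian_harm: float  # estimated harm to civilians/civilian objects (arbitrary units)
    military_necessity: float      # assigned military value of the target

def proportionality_permits(option: StrikeOption) -> bool:
    """Permit a strike only if expected civilian harm is not excessive
    relative to the anticipated military advantage (illustrative test)."""
    anticipated_advantage = option.military_necessity * option.p_military_effect
    return option.expected_civilian_harm <= anticipated_advantage

def select_strike(options: List[StrikeOption]) -> Optional[StrikeOption]:
    """Among permissible options, choose the one that minimizes expected
    collateral damage; withhold fire entirely if none is permissible."""
    permissible = [o for o in options if proportionality_permits(o)]
    if not permissible:
        return None  # the governor withholds fire
    return min(permissible, key=lambda o: o.expected_civilian_harm)
```

The key design feature this sketch tries to capture is that the governor never chooses a "least bad" impermissible strike: if no option passes the proportionality test, the output is to withhold fire, not to fire anyway.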

Arkin argues that with the ethical governor, fully autonomous weapons would be able to comply with international humanitarian law better than humans. For example, they would be able to sense more information and process it faster than humans could. They would not be inhibited by the desire for self-preservation. They would not be influenced by emotions such as anger or fear. They could also monitor the ethical behavior of their human counterparts.[113] While optimistic, Arkin recognizes that it is premature to determine whether this mechanism could feasibly ensure effective compliance.[114]

“Strong AI”

Another, even more ambitious approach strives to “match and possibly exceed human intelligence” in engineering autonomous robots that comply with international humanitarian law.[115] The UK Ministry of Defence has recognized that “some form of artificial intelligence [AI]” will be necessary to ensure autonomous weapons fully comply with principles of international humanitarian law.[116] It defines a machine with “true artificial intelligence” as having “a similar or greater capacity to think like a human” and distinguishes that intelligence from “complex and clever automated systems.”[117] John McGinnis, a Northwestern University law professor, advocates for the development of robotic weapons with “strong AI,” which he defines as the “creation of machines with the general human capacity for abstract thought and problem solving.”[118] McGinnis argues that “AI-driven robots on the battlefield may actually lead to less destruction, becoming a civilizing force in wars as well as an aid to civilization in its fight against terrorism.”[119]

Such a system presumes that computing power will approach the cognitive power of the human brain, but many experts believe this assumption may be more of an aspiration than a reality. Whether and when scientists could develop strong AI is “still very much disputed.”[120] While some scientists have argued that strong AI could be developed in the twenty-first century, so far it has been “the Holy Grail in AI research: highly desirable, but still unattainable.”[121] Even if the development of fully autonomous weapons with human-like cognition became feasible, they would lack certain human qualities, such as emotion, compassion, and the ability to understand humans. As a result, the widespread adoption of such weapons would still raise troubling legal concerns and pose other threats to civilians. As detailed in the following sections, Human Rights Watch and IHRC believe human oversight of robotic weapons is necessary to ensure adequate protection of civilians in armed conflict.

[106] For examples of other compliance mechanisms, such as rule-based and advisory systems, value-sensitive design, and user-centered design, see Asaro, “Modeling the Moral User,” IEEE Technology and Society Magazine.

[107] Selmer Bringsjord and Joshua Taylor, “The Divine-Command Approach to Robot Ethics,” in Lin, Abney, and Bekey, eds., Robot Ethics, p. 92.

[108] Ronald C. Arkin, Governing Lethal Behavior in Autonomous Robots (Boca Raton, FL: CRC Press, 2009), p. 69 (emphasis in original).

[109] Ibid., pp. 183-184.

[110] Ibid., p. 185.

[111] Ibid., p. 187.

[112] Ibid.

[113] Ibid., pp. 29-30.

[114] Ibid., p. 211.

[115] Krishnan, Killer Robots, p. 47.

[116] UK Ministry of Defence, The UK Approach to Unmanned Aircraft Systems, p. 5-4.

[117] Ibid., p. 6-12.

[118] John O. McGinnis, “Accelerating AI,” Northwestern University Law Review, vol. 104, 2010, http://www.law.northwestern.edu/lawreview/colloquy/2010/12/LRColl2010n12McGinnis.pdf (accessed October 4, 2012), p. 369. Peter Asaro, who has written extensively on robotics in the military context, does not use the term strong AI but describes a similar concept. A robotic engineer would begin with an empirical analysis of ethical decision making by human soldiers, including observation of humans facing ethical dilemmas. After reaching an understanding of how human soldiers perform ethical calculations and reason in accordance with international humanitarian law, the engineer would then design an artificial intelligence system to mimic human thinking. Asaro, “Modeling the Moral User,” IEEE Technology and Society Magazine, pp. 22-23.

[119] McGinnis, “Accelerating AI,” Northwestern University Law Review, p. 368.

[120] Krishnan, Killer Robots, p. 48. See also email communication from Noel Sharkey, September 4, 2012 (saying that such technology will “remain science fiction—at least for the next 100 years or maybe always.”). For an alternative assessment, see McGinnis, “Accelerating AI,” Northwestern University Law Review, p. 368. McGinnis argues there is “a substantial possibility of [it] becoming a reality.”

[121] Krishnan, Killer Robots, p. 48.