November 19, 2012

V. Other Threats to Civilian Protection

In addition to being unable to meet international humanitarian law standards, fully autonomous weapons would threaten other safeguards against civilian deaths and injuries. Two characteristics touted by proponents as making these robots superior to human soldiers—their lack of emotion and their ability to reduce military casualties—could in fact undermine civilian protection. First, delegating to machines the decision of when to fire on a target would eliminate the influence of human empathy, an important check on killing. Second, assigning combat functions to robots would minimize military casualties but could also make it easier to engage in armed conflict and shift the burden of war onto the civilian population. Eliminating human intervention in the choice to use deadly force could thus increase civilian casualties in armed conflict, and humans should therefore retain control over that choice.

The Lack of Human Emotion

Proponents of fully autonomous weapons suggest that the absence of human emotions is a key advantage, yet they fail to consider the downsides adequately. They emphasize, for example, that robots are immune from emotional factors, such as fear and rage, that can cloud judgment, distract humans from their military missions, or lead to attacks on civilians. They also note that robots can be programmed to act without concern for their own survival and thus can sacrifice themselves for a mission without reservations.[151] Such observations have some merit, and these characteristics contribute both to a robot’s military utility and to its potential humanitarian benefits.

Human emotions, however, also provide one of the best safeguards against killing civilians, and a lack of emotion can make killing easier. In training their troops to kill enemy forces, armed forces often attempt “to produce something close to a ‘robot psychology,’ in which what would otherwise seem horrifying acts can be carried out coldly.”[152] This desensitizing process may be necessary to help soldiers carry out combat operations and cope with the horrors of war, yet it is telling that the robot is held up as the model of the ultimate killing machine.

Whatever their military training, human soldiers retain the possibility of emotionally identifying with civilians, “an important part of the empathy that is central to compassion.”[153] Robots cannot identify with humans, which means that they are unable to show compassion, a powerful check on the willingness to kill. For example, a robot in a combat zone might shoot a child pointing a gun at it, which might be a lawful response but not necessarily the most ethical one. By contrast, even if not required under the law to do so, a human soldier might remember his or her children, hold fire, and seek a more merciful solution to the situation, such as trying to capture the child or advance in a different direction. Thus militaries that generally seek to minimize civilian casualties would find it more difficult to achieve that goal if they relied on emotionless robotic warriors.

Fully autonomous weapons would conversely be perfect tools of repression for autocrats seeking to strengthen or retain power. Even the most hardened troops can eventually turn on their leader if ordered to fire on their own people. A leader who resorted to fully autonomous weapons would be free of the fear that armed forces would rebel. Robots would not identify with their victims and would have to follow orders no matter how inhumane they were.

Several commentators have expressed concern about fully autonomous weapons’ lack of emotion. Calling for preservation of the role of humans in decisions to use lethal force, a US colonel who worked on the US Future Combat Systems program recognized the value of human feelings.[154] He said, “We would be morally bereft if we abrogate our responsibility to make the life-and-death decisions required on a battlefield as leaders and soldiers with human compassion and understanding.”[155] Krishnan writes:

One of the greatest restraints for the cruelty in war has always been the natural inhibition of humans not to kill or hurt fellow human beings. The natural inhibition is, in fact, so strong that most people would rather die than kill somebody…. Taking away the inhibition to kill by using robots for the job could weaken the most powerful psychological and ethical restraint in war. War would be inhumanely efficient and would no longer be constrained by the natural urge of soldiers not to kill.[156]

Rather than being understood as irrational influences and obstacles to reason, emotions should be viewed as central to restraint in war.

Making War Easier and Shifting the Burden to Civilians

Advances in technology have enabled militaries to significantly reduce direct human involvement in fighting wars. The invention of the drone in particular has allowed the United States to conduct military operations in Afghanistan, Pakistan, Yemen, Libya, and elsewhere without fear of casualties to its own personnel. As Singer notes, “[M]ost of the focus on military robotics is to use robots as a replacement for human losses.”[157] Despite this advantage, the development of such technology brings complications. The UK Ministry of Defence has highlighted the urgency of more vigorous debate on the policy implications of unmanned weapons in order to “ensure that we do not risk losing our controlling humanity and make war more likely.”[158] Indeed, the gradual replacement of humans with fully autonomous weapons could make decisions to go to war easier and shift the burden of armed conflict from soldiers to civilians in battle zones.

While technological advances that promise to reduce military casualties are laudable, removing humans from combat entirely could be a step too far. Warfare will inevitably result in human casualties, whether combatant or civilian. Political leaders should therefore always weigh the human cost of warfare before resorting to military force. Leaders might be less reluctant to go to war, however, if the threat to their own troops were decreased or eliminated. In that case, “states with roboticized forces might behave more aggressively…. [R]obotic weapons alter the political calculation for war.”[159] The potential threat to the lives of enemy civilians might be devalued or even ignored in decisions about the use of force.[160]

The effect of drone warfare offers a hint of what weapons with even greater autonomy could lead to. Singer and other military experts contend that drones have already lowered the threshold for war, making it easier for political leaders to choose to use force.[161] Furthermore, the proliferation of unmanned systems, which according to Singer has a “profound effect on ‘the impersonalization of battle,’”[162] may remove some of the instinctual objections to killing. Unmanned systems create both physical and emotional distance from the battlefield, which a number of scholars argue makes killing easier.[163] Indeed, some drone operators compare drone strikes to a video game because they feel emotionally detached from the act of killing.[164] As D. Keith Shurtleff, Army chaplain and ethics instructor for the Soldier Support Institute at Fort Jackson, pointed out, “[A]s war becomes safer and easier, as soldiers are removed from the horrors of war and see the enemy not as humans but as blips on a screen, there is a very real danger of losing the deterrent that such horrors provide.”[165] Fully autonomous weapons raise the same concerns.

The prospect of fighting wars without military fatalities would remove one of the greatest deterrents to combat.[166] It would also shift the burden of armed conflict onto civilians in conflict zones, whose lives could be placed at greater risk than those of soldiers. Such a shift would run counter to the international community’s growing concern for the protection of civilians.[167] While some advances in military technology can be credited with preventing war or saving lives, the development of fully autonomous weapons could make war more likely and lead to disproportionate civilian suffering. As a result, such weapons should never be made available for use in the arsenals of armed forces.

[151] Ronald C. Arkin, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Technical Report GIT-GVU-07-11, http://webcache.googleusercontent.com/search?q=cache:pJfQCaFIcvcJ:www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.ps.gz+&cd=1&hl=en&ct=clnk&gl=us&client=firefox-a (accessed October 4, 2012), pp. 6-7.

[152] Jonathan Glover, Humanity: A Moral History of the Twentieth Century (New Haven, CT: Yale University Press, 2000), p. 48.

[153] Hugo Slim, Killing Civilians: Methods, Madness and Morality in War (New York: Columbia University Press, 2008), p. 34.

[154] The Future Combat Systems project was a $200 billion program to modernize the US Army. It “involve[d] creating a family of 14 weapons, drones, robots, sensors and hybrid-electric combat vehicles connected by a wireless network.” The Washington Post described the vision behind this project as war that is “increasingly combat by mouse clicks. It’s as networked as the Internet, as mobile as a cellphone, as intuitive as a video game.” Alec Klein, “The Army’s $200 Billion Makeover,” Washington Post, December 7, 2007, http://www.washingtonpost.com/wp-dyn/content/article/2007/12/06/AR2007120602836.html (accessed October 4, 2012). The project, which began in 2003, was cancelled in 2009. Marjorie Censer, “The High Cost of Savings,” Washington Post, May 25, 2012, http://www.washingtonpost.com/business/capitalbusiness/termination-fees-could-add-up-for-government/2012/05/25/gJQASyfFqU_print.html (accessed October 4, 2012).

[155] Sharkey, “Killing Made Easy,” in Lin, Abney, and Bekey, eds., Robot Ethics, p. 116.

[156] Krishnan, Killer Robots, p. 130. See also Sharkey, “Killing Made Easy,” in Lin, Abney, and Bekey, eds., Robot Ethics, p. 121 (“To be humane is, by definition, to be characterized by kindness, mercy, and sympathy, or to be marked by an emphasis on humanistic values and concern.”).

[157] Singer, Wired for War, p. 418.

[158] UK Ministry of Defence, The UK Approach to Unmanned Aircraft Systems, pp. 5-9.

[159] Krishnan, Killer Robots, p. 150.

[160] Singer, “Robots at War: The New Battlefield,” Wilson Quarterly, p. 48.

[161] Singer, Wired for War, p. 319. Singer notes that “people with widely divergent worldviews come together on this point.”

[162] Ibid., p. 396 (quoting military historian John Keegan).

[163] See Slim, Killing Civilians, p. 218; Glover, Humanity: A Moral History of the Twentieth Century, p. 160; Singer, Wired for War, p. 395.

[164] Noel Sharkey, “Saying ‘No!’ to Lethal Autonomous Targeting,” Journal of Military Ethics, vol. 9, issue 4 (2010), p. 372.

[165] Singer, Wired for War, p. 396.

[166] Jason Borenstein, “The Ethics of Autonomous Military Robots,” Studies in Ethics, Law, and Technology, vol. 2, issue 1 (2008), p. 8.

[167] For examples of the growing concern for protection of civilians, see Harvard Law School’s International Human Rights Clinic, “Legal Foundations for ‘Making Amends’ to Civilians Harmed by Armed Conflict,” February 2012, http://harvardhumanrights.files.wordpress.com/2012/02/making-amends-foundations-paper-feb-2012-final.pdf (accessed October 30, 2012), p. 2.