Shaking the Foundations

The Human Rights Implications of Killer Robots

Summary

Fully autonomous weapons, also called killer robots or lethal autonomous robots, do not yet exist, but weapons technology is moving rapidly toward greater autonomy. Fully autonomous weapons represent the step beyond remote-controlled armed drones. Unlike any existing weapons, these robots would identify and fire on targets without meaningful human intervention. They would therefore have the power to determine when to take human life.

The role of fully autonomous weapons in armed conflict and questions about their ability to comply with international humanitarian law have generated heated debate, but the weapons’ likely use beyond the battlefield has been largely ignored. These weapons could easily be adapted for use in law enforcement operations, which would trigger international human rights law. This body of law generally has even more stringent rules than international humanitarian law, which is applicable only in situations of armed conflict.

Because human rights law applies during peace and war, it would be relevant to all circumstances in which fully autonomous weapons might be used. This report examines the weapons’ human rights implications in order to ensure a comprehensive assessment of the benefits and dangers of fully autonomous weapons. The report finds that fully autonomous weapons threaten to violate the foundational rights to life and a remedy and to undermine the underlying principle of human dignity.

In 2013, the international community recognized the urgency of addressing fully autonomous weapons and initiated discussions in a number of forums. April marked the launch of the Campaign to Stop Killer Robots, an international civil society coalition coordinated by Human Rights Watch that calls for a preemptive ban on the development, production, and use of fully autonomous weapons. The following month, Christof Heyns, the special rapporteur on extrajudicial killing, submitted a report to the UN Human Rights Council that presented many objections to this emerging technology and called for national moratoria. In November, 117 states parties to the Convention on Conventional Weapons agreed to hold an experts meeting on what they refer to as lethal autonomous weapons systems in May 2014.

Human Rights Watch and Harvard Law School’s International Human Rights Clinic (IHRC) have co-published a series of papers that highlight the concerns about fully autonomous weapons. In November 2012, they released Losing Humanity: The Case against Killer Robots, the first major civil society report on the topic, and they have since elaborated on the need for a new international treaty that bans the weapons.[1] While these earlier documents focus on the potential problems that fully autonomous weapons pose to civilians in war, this report seeks to expand the discussion by illuminating the concerns that use of the weapons in law enforcement operations raises under human rights law.

Fully autonomous weapons have the potential to contravene the right to life, which the Human Rights Committee describes as “the supreme right.”[2] According to the International Covenant on Civil and Political Rights (ICCPR), “No one shall be arbitrarily deprived of his life.”[3] Killing is lawful only if it meets three cumulative requirements for when and how much force may be used: it must be necessary to protect human life, constitute a last resort, and be applied in a manner proportionate to the threat. Each of these prerequisites for lawful force involves qualitative assessments of specific situations. Due to the infinite number of possible scenarios, robots could not be pre-programmed to handle every circumstance. As a result, fully autonomous weapons would be prone to carrying out arbitrary killings when encountering unforeseen situations. According to many roboticists, it is highly unlikely that robots could be developed in the foreseeable future to have certain human qualities, such as judgment and the ability to identify with humans, that facilitate compliance with the three criteria.

The use of fully autonomous weapons also threatens to violate the right to a remedy. International law mandates accountability in order to deter future unlawful acts and to punish past ones; such punishment in turn recognizes victims’ suffering. It is uncertain, however, whether meaningful accountability for the actions of a fully autonomous weapon would be possible. The weapon itself could not be punished or deterred because machines do not have the capacity to suffer. Unless a superior officer, programmer, or manufacturer deployed or created such a weapon with the clear intent to commit a crime, these people would probably not be held accountable for the robot’s actions. The criminal law doctrine of superior responsibility, also called command responsibility, is ill suited to the case of fully autonomous weapons. Superior officers might be unable to foresee how an autonomous robot would act in a particular situation, and they could find it difficult to prevent and impossible to punish any unlawful conduct. Programmers and manufacturers would likely escape civil liability for the acts of their robots. At least in the United States, defense contractors are generally granted immunity for the design of weapons. In addition, victims with limited resources and inadequate access to the courts would face significant obstacles to bringing a civil suit.

Finally, fully autonomous weapons could undermine the principle of dignity, which implies that everyone has a worth deserving of respect. As inanimate machines, fully autonomous weapons could truly comprehend neither the value of individual life nor the significance of its loss. Allowing them to make determinations to take life away would thus conflict with the principle of dignity.

The human rights implications of fully autonomous weapons compound the many other concerns about use of the weapons. As Human Rights Watch and IHRC have detailed in other documents, the weapons would face difficulties in meeting the requirements of international humanitarian law, such as upholding the principles of distinction and proportionality, in situations of armed conflict. In addition, even if technological hurdles could be overcome in the future, failure to prohibit the weapons now could lead to the deployment of models before their artificial intelligence was perfected and spark an international robotic arms race. Finally, many critics of fully autonomous weapons have expressed moral outrage at the prospect of humans ceding to machines control over decisions to use lethal force. In this context, the human rights concerns bolster the argument for an international ban on fully autonomous weapons.

Recommendations

Based on the threats that fully autonomous weapons would pose to civilians during both law enforcement operations and armed conflict, Human Rights Watch and IHRC recommend that states:

  • Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.
  • Adopt national laws and policies that prohibit the development, production, and use of fully autonomous weapons.
  • Take into account the human rights implications of fully autonomous weapons when discussing the weapons in international forums.

I. State of the Debate

Fully autonomous weapons, also called killer robots or lethal autonomous robots, would revolutionize the “technology of killing.”[4] While traditional weapons are tools in the hand of a human being, fully autonomous weapons would make their own determinations about the use of lethal force. Unlike existing semi-autonomous armed drones, which are remotely controlled, fully autonomous weapons would have no human “in the loop.”

Fully autonomous weapon systems have yet to be created, but technology is moving rapidly in that direction. For example, the US X-47B prototype has taken off and landed on an aircraft carrier on its own, and a South Korean sentry robot can identify and shoot humans in the demilitarized zone with North Korea.[5] Neither is necessarily problematic at this stage—the former is not yet weaponized, and the latter requires a human to order it to fire—but such precursors demonstrate the evolution toward weapons with ever greater autonomy. Several other countries, including China, Israel, Russia, and the United Kingdom, have also allotted resources to developing this kind of technology.[6]

In 2013, the international community recognized the urgency of addressing fully autonomous weapons and initiated discussions in a number of forums. April marked the launch of the Campaign to Stop Killer Robots, an international civil society coalition coordinated by Human Rights Watch that calls for a preemptive ban.[7] The following month, Christof Heyns, the special rapporteur on extrajudicial killing, submitted a report to the UN Human Rights Council that presented many objections to this emerging technology; he called for national moratoria on production, transfer, acquisition, and use, and an independent panel to examine the issue more closely.[8] In November, 117 states parties to the Convention on Conventional Weapons agreed to hold an informal experts meeting on what they refer to as lethal autonomous weapons systems in May 2014.[9]

Commentators have articulated strong but diverging views on fully autonomous weapons. Proponents maintain that the weapons would reduce the risk to soldiers’ lives, decrease military expenditures, and be able to process information more quickly during operations. They also say that robots would be less apt to attack civilians because they would not act in fear or anger.[10] Opponents counter that fully autonomous weapons would likely endanger civilians. They argue that these weapons would lack compassion and empathy, important inhibitors to killing people needlessly, and they would not possess other human qualities, such as judgment, that are necessary to conduct the subjective assessments underlying many of international law’s protections. In addition, opponents argue that it is unclear whether any person could in practice be held legally responsible for a robot’s actions, and they assert that it is morally wrong to allow machines to make life-and-death determinations.[11]

Proponents of fully autonomous weapons contend that roboticists could theoretically develop technology with sensors to interpret complex situations and the ability to exercise near-human judgment.[12] Opponents question that assumption, emphasizing that such technology would not be possible in the foreseeable future, if ever. They also worry that deployment of such weapons could come well before the technological challenges were overcome and that allowing continued development would inevitably lead to a robotic arms race. Opponents believe that the existing legal, ethical, and scientific concerns raised by fully autonomous weapons outweigh speculation about the technology’s potential benefits.[13]

The majority of the debate so far has focused on the use of fully autonomous weapons in armed conflict, but once available, this technology could be adapted to a range of other contexts that can be grouped under the heading of law enforcement. For example, local police officers could potentially use such robots in crime fighting, the management of public protests, riot control, and other efforts to maintain law and order. State security forces could employ the weapons in attempts to control their opposition. Countries involved in international counter-terrorism could utilize them in scenarios that do not necessarily rise to the level of armed conflict as defined by international humanitarian law. Some law enforcement operations have legitimate ends, such as crime prevention; others, including violent suppression of peaceful protests, are inherently illegitimate. Fully autonomous weapons could be deployed in an operation regardless of its character.

The use of fully autonomous weapons in a law enforcement context would trigger the application of international human rights law. Human rights law applies in both peace and war, and it regulates the use of force in situations other than military operations and combat. In comparison to international humanitarian law, which governs such operations and applies only during armed conflict, human rights law tends to have more stringent standards for the use of lethal force, typically limiting it to situations where it is needed to defend human life and safety. Therefore, the challenges of developing a fully autonomous weapon that would comply with international law and still be useful are even greater when viewed through a human rights lens.

II. The Right to Life

The right to life is the bedrock of international human rights law. The Universal Declaration of Human Rights (UDHR), the foundational document of this body of law, introduced the concept in 1948.[14] The International Covenant on Civil and Political Rights, a cornerstone human rights treaty, codified it in 1966.[15] Article 6 of the ICCPR states, “Every human being has the inherent right to life. This right shall be protected by law.”[16] Regional treaties from Africa, the Americas, and Europe have also incorporated the right to life.[17] In its General Comment 6, the Human Rights Committee, the treaty body for the ICCPR, describes the right to life as “the supreme right” because it is a prerequisite for all other rights.[18] The right is non-derogable even in public emergencies that threaten the existence of a nation.[19]

The right to life prohibits arbitrary killing. The ICCPR declares, “No one shall be arbitrarily deprived of his life.”[20] The Human Rights Committee states that this right should be interpreted broadly. ICCPR negotiators understood “arbitrary” as having legal and ethical meaning; for them, it encompassed unlawful and unjust acts.[21] While some drafters proposed enumerating permissible killings in the ICCPR, the group ultimately decided not to do so in order to emphasize the prohibition on arbitrary deprivation of life.[22]

The Right to Life in Law Enforcement Situations

The right to life constrains the application of force in a range of situations outside of armed conflict. In its General Comment 6, the Human Rights Committee highlights the duty of states to prevent arbitrary killings by their security forces.[23] The United Nations set parameters for the use of force by such agents in the 1990 Basic Principles on the Use of Force and Firearms by Law Enforcement Officials (1990 Basic Principles) and the 1979 Code of Conduct for Law Enforcement Officials (1979 Code of Conduct). Adopted by a UN congress on crime prevention and the UN General Assembly respectively, these standards provide guidance for how to understand the scope of arbitrary killing in law enforcement situations.[24] They expressly note the importance of protecting human rights.[25]

Under the right to life, a killing is arbitrary if it fails to meet three cumulative requirements for when and how much force may be used. To be lawful in law enforcement situations, force must be necessary, constitute a last resort, and be applied in a proportionate manner.[26] Fully autonomous weapons would face obstacles to meeting these criteria that circumscribe lawful force. They could not completely replicate the ability of human law enforcement officials to exercise judgment and compassion or to identify with other human beings, qualities that facilitate compliance with the law. These inherently human characteristics parallel those described in Article 1 of the UDHR, which states, “All human beings … are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.”[27] Due to the inadequacy of fully autonomous weapons in these areas, which is elaborated on below, the weapons could contravene the right to life, undermining the legitimacy of some law enforcement operations and exacerbating the harm caused by ones that are already illegitimate.

Necessity

Necessity is the first precondition for lawful force. The 1979 Code of Conduct states that law enforcement officials may employ force only when it is “strictly necessary” and “exceptional.”[28] The use of firearms is even more narrowly restricted to situations where it is essential to saving human lives. The 1990 Basic Principles limit officials’ use of firearms to:

self-defence or defence of others against the imminent threat of death or serious injury, to prevent the perpetration of a particularly serious crime involving grave threat to life, to arrest a person presenting such a danger and resisting their authority, or to prevent his or her escape.[29]

The 1990 Basic Principles add that “intentional lethal use of firearms may only be made when strictly unavoidable in order to protect life.”[30]

Fully autonomous weapons would lack human qualities that help law enforcement officials assess the seriousness of a threat and the need for a response. Both a machine and a police officer can take into account clearly visible signs, such as the presence of a weapon. However, interpreting more subtle cues whose meaning can vary by context, such as tone of voice, facial expressions, and body language, requires an understanding of human nature. A human officer can relate to another individual as a fellow human being, and this ability can help him or her read the other person’s intentions. Development of a fully autonomous weapon that could identify with the individual in the same way seems unlikely, and thus a robot might miss or misconstrue important clues as to whether a real threat to human life existed.[31]

In addition, the deployment of fully autonomous weapons in law enforcement situations could affect the actions of the individual posing a potential threat. He or she might not know how to behave when confronted with a machine rather than a human law enforcement officer. The individual might respond differently to a robot than to a human and as a result unintentionally appear threatening. A robot’s misinterpretation of the necessity of force could trigger an arbitrary killing in violation of the right to life.

Exhaustion of All Alternatives

Law enforcement officials are required to exhaust all alternatives before applying force. The 1990 Basic Principles limit force and firearms to cases where “other means remain ineffective or without any promise of achieving the intended result.”[32] In particular, they permit the use of firearms exclusively “when less extreme means are insufficient” to save a life.[33] These standards encourage states to develop non-lethal weapons and equip law enforcement officials with self-defense equipment in order to decrease the need to employ lethal force.[34]

Fully autonomous weapons’ inability to relate to humans could interfere with their ability to ensure that all means short of force are exhausted. The 1990 Basic Principles state that training law enforcement officials in “methods of persuasion, negotiation, and mediation” is important to decreasing the use of lethal force.[35] To deescalate a situation, human officers often appeal to a threatening individual’s reason, emotions, and interests. At present there is little prospect of developing a fully autonomous weapon that would be “intelligent” enough to be able to “talk down” an individual and defuse a standoff. A potential perpetrator would be more apt to connect with and be persuaded by a fellow human than an inanimate machine. Furthermore, it is unlikely that a fully autonomous weapon would be able to read a situation well enough to strategize about the best alternatives to use of force. On a more practical level, even if fully autonomous systems could be equipped with non-lethal weapons, the possibility of developing robots with the capability to restrain individuals or take prisoners seems remote. Fully autonomous weapons could thus escalate a situation before demonstrating that methods short of force are “ineffective or without any promise of achieving the intended result.”[36]

Proportionality

International law enforcement standards also specify how much force may be used when it is necessary and all other means have been exhausted. Force must be proportional to the threat involved. The 1990 Basic Principles require law enforcement officials to “act in proportion to the seriousness of the offence and the legitimate objective to be achieved.”[37] They oblige officials to “exercise restraint” and to minimize the harm they cause.[38] For example, in dispersing violent assemblies, officials may use lethal weapons only “to the minimum extent necessary.”[39] The 1979 Code of Conduct similarly highlights the principle of proportionality and states that law enforcement officials’ use of force must be “to the extent required for the performance of their duty.”[40]

Choosing an appropriate level of force could pose additional problems for fully autonomous weapons. First, these weapons would not possess judgment, which human officers rely on to balance the force of the response with the gravity of the perceived threat. The Oxford English Dictionary defines judgment as “the ability to make considered decisions or to arrive at reasonable conclusions or opinions on the basis of the available information.”[41] Judgment requires human capabilities of reason and reflection to interpret information and formulate an opinion. In law enforcement, judgment allows officers to assess complicated situations, taking into account such factors as a perpetrator’s background, mental state, and demands, and then to make case-by-case decisions about the minimum level of force necessary.

There are serious doubts that fully autonomous weapons could determine how much force is proportionate in a particular case. A designer could not pre-program a robot to deal with all situations because even a human could not predict the infinite possibilities. In addition, it would be difficult for a fully autonomous weapon to replicate the complex, subjective thinking processes required to judge unforeseen circumstances. Proportionality determinations involve more than quantitative analysis, and according to Christof Heyns, the UN special rapporteur on extrajudicial killing, “While robots are especially effective at dealing with quantitative issues, they have limited abilities to make the qualitative assessments that are often called for when dealing with human life.”[42]

Second, fully autonomous weapons would lack emotions that can generate the kind of restraint the 1990 Basic Principles oblige law enforcement officials to exercise. While fully autonomous weapons would not respond to threats in fear or anger, they would also not feel the “natural inhibition of humans not to kill or hurt fellow human beings.”[43] Studies of human soldiers have demonstrated that “there is within man an intense resistance to killing their fellow man.”[44] Compassion contributes to such a resistance, but it is hard to see how the capacity to feel compassion could be reproduced in robots. Human rights law does not require the exercise of compassion in particular, but the emotion can facilitate compliance with proportionality and serve as a safeguard against the disproportionate use of force.[45]

Abusive autocrats could take advantage of fully autonomous weapons’ lack of inherent restraint. For example, they could deploy these weapons to suppress protestors with a level of violence against which human security forces might rebel. Even the most hardened troops can eventually turn on their leader if ordered to fire on their own people. An abusive leader who resorted to fully autonomous weapons would be free of the fear that security forces would resist being deployed against certain targets.

It is therefore highly unlikely that fully autonomous weapons would be able to comply with the three requirements for lawful use of force—necessity, exhaustion of all alternatives, and proportionality. As a result, this technology would have the potential to arbitrarily deprive innocent people of their lives in law enforcement situations.

The Right to Life in Armed Conflict

Because the right to life is non-derogable even in situations that threaten a country’s existence, the right continues to apply during armed conflict.[46] In situations of armed conflict, many look to international humanitarian law, the law that governs that specific context (lex specialis), to help interpret certain human rights provisions.[47] Therefore, in wartime, arbitrary killing refers to unlawful killing under international humanitarian law. In his authoritative commentary on the ICCPR, Manfred Nowak, former UN special rapporteur on torture, defines arbitrary killings in armed conflict as “those that contradict the humanitarian laws of war.”[48] The International Committee of the Red Cross (ICRC) Customary International Humanitarian Law Database states, “The prohibition of ‘arbitrary deprivation of the right to life’ under human rights law … encompasses unlawful killing in the conduct of hostilities.”[49] The ICRC finds that unlawful killings go beyond those violations that are considered grave breaches or war crimes, such as direct attacks against civilians, to cover indiscriminate and disproportionate attacks.[50]

Civilian protection in international humanitarian law rests on the rules of distinction and proportionality. The former requires parties to a conflict to distinguish between combatants and civilians, and it outlaws means or methods of war that “cannot be directed at a specific military objective” because they are indiscriminate.[51] Proportionality, a subset of distinction, prohibits attacks in which expected civilian harm “would be excessive” compared to the anticipated military advantage.[52] “Proportionality” has a somewhat different, although not contradictory, meaning in international humanitarian law than it does in international human rights law. While under human rights law the term regulates the level of force acceptable to respond to a threat, under international humanitarian law it is used to judge whether an offensive or defensive military action is lawful. Both rules aim to protect human lives.

Some of the limitations of fully autonomous weapons that raise concerns in the law enforcement context, such as the inability to identify with humans or exercise human judgment, could also interfere with the weapons’ compliance with international law during armed conflict. First, there are serious doubts about whether fully autonomous weapons could distinguish adequately between combatants and civilians. Enemy combatants in contemporary conflicts often shed visible signs of military status, such as uniforms, which makes recognizing their intentions crucial to differentiating them from civilians.[53] As discussed above, it would be challenging to program fully autonomous weapons to understand human intentions. Because as machines they could not identify with humans, they would find it more difficult to recognize and interpret subtle behavioral clues whose meaning depends on context and culture.[54]

Second, fully autonomous weapons could face obstacles in making targeting choices that accord with the proportionality test. These weapons could not be pre-programmed to deal with every type of situation, and deciding how to weigh civilian harm and military advantage in unanticipated situations “involve[s] distinctively human judgement [sic],” something it is doubtful machines could replicate.[55]

Although the ability of fully autonomous weapons to process complex information might improve in the future, it seems implausible that they could ever be identical to humans. As a result, these weapons would find it difficult to meet the three criteria for use of force in law enforcement or comply with the rules of distinction and proportionality in armed conflict. Fully autonomous weapons would have the potential to kill arbitrarily and thus violate the right that underlies all others, the right to life.

III. The Right to a Remedy

The right to a remedy applies to violations of all human rights. The UDHR lays out the right, and Article 2(3) of the ICCPR requires states parties to “ensure that any person whose rights or freedoms … are violated shall have an effective remedy.”[56] Several regional human rights treaties incorporate it as well.[57]

The right to a remedy requires states to ensure individual accountability. It includes the duty to prosecute individuals for serious violations of human rights law. In its General Comment 31, the Human Rights Committee explains that the ICCPR obliges states parties to investigate allegations of wrongdoing and, if they find evidence of certain types of violations, to bring perpetrators to justice.[58] A failure to investigate and, where appropriate, prosecute “could in and of itself give rise to a separate breach of the Covenant.”[59] The 2005 Basic Principles and Guidelines on the Right to a Remedy and Reparation (2005 Basic Principles and Guidelines), standards adopted by the UN General Assembly, reiterate the obligation to investigate and prosecute. They also require states to punish individuals who are found guilty.[60]

The duty to prosecute applies to acts committed in law enforcement situations or armed conflict. The 2005 Basic Principles and Guidelines require states to prosecute gross violations of international human rights law, and the Human Rights Committee includes arbitrary killings among those crimes.[61] The 2005 Basic Principles and Guidelines also cover “serious violations of international humanitarian law,” the lex specialis for armed conflict.[62] The Fourth Geneva Convention and Additional Protocol I, international humanitarian law’s key civilian protection instruments, similarly oblige states to prosecute “grave breaches,” i.e., war crimes, such as intentionally targeting civilians or knowingly launching a disproportionate attack.[63]

The right to a remedy is not limited to criminal prosecution. It encompasses reparations, which can include “restitution, compensation, rehabilitation, satisfaction and guarantees of non-repetition.”[64] States have the responsibility to provide many of these reparations. The 2005 Basic Principles and Guidelines, however, also oblige states to enforce judgments related to claims brought by victims against individuals or entities.[65] These standards “are without prejudice to the right to a remedy and reparation” for all violations of international human rights and humanitarian law, not just crimes.[66] Victims are thus entitled to some form of remedy, including but not limited to criminal prosecution.

Accountability serves a dual policy purpose. First, it seeks to deter future violations of the law. According to the Human Rights Committee, “the purposes of the Covenant would be defeated without an obligation … to take measures to prevent a recurrence of a violation.”[67] Second, a remedy serves as retribution, which provides victims the satisfaction that someone was punished for the harm they suffered. Dinah Shelton, author of an influential commentary on remedies, wrote that “punishment conveys to criminals and others that they wronged the victim and thus implicitly recognizes the victims [sic] plight and honors the victim’s moral claims.”[68] Applying these principles to the context of fully autonomous weapons, Christof Heyns, the special rapporteur on extrajudicial killing, declared that if assigning responsibility for a robot’s actions is impossible, “its use should be considered unethical and unlawful as an abhorrent weapon.”[69]

In both law enforcement operations and armed conflict, the actions of fully autonomous weapons would likely fall within an accountability gap that would contravene the right to a remedy. It is unclear who would be liable when an autonomous machine makes life-and-death determinations about the use of force without meaningful human intervention. Assigning responsibility to the robot would make little sense. It could not be punished as a human can because it could not experience physical or psychological pain.[70] As discussed below, significant legal and practical obstacles exist to holding accountable the other most likely candidates—the superior officer or commander, programmer, and manufacturer—not least because it would be difficult to foresee all situations the robot might face, and thus to predict and account for its actions. In these cases, it is hard to see how one could achieve accountability under existing law in a way that was not unfair to any human who might be accused. The resulting gap would likely leave victims frustrated that no one was held accountable or punished for their suffering. It would also undermine the policy goals of deterrence and retribution.

If a superior officer, a programmer, or a manufacturer used or created a fully autonomous weapon with the clear intent to violate the right to life, they could most likely be held directly criminally liable. Criminal intent would be difficult to prove, however, and would presumably be rare, at least among representatives of a state that generally abides by international law. Of greater concern would be a situation in which a fully autonomous weapon committed an arbitrary killing but there was no evidence that a human intended or foresaw it. In such a case, there would be no human to hold directly responsible for the decision to attack, and indirect liability would be difficult to achieve.

Superior officers or commanders are generally not considered accountable for the actions of their subordinates because the latter make autonomous choices, as fully autonomous weapons would. The principle of superior responsibility, also known as command responsibility, is the primary exception to this rule. According to the 1990 Basic Principles:

Governments and law enforcement agencies shall ensure that superior officers are held responsible if they know, or should have known, that law enforcement officials under their command are resorting, or have resorted, to the unlawful use of force and firearms, and they did not take all measures in their power to prevent, suppress or report such use.[71]

Command responsibility similarly holds military commanders responsible for subordinates’ actions if they knew or should have known their subordinates committed or were going to commit a crime and failed to prevent the crime or punish the subordinates.[72]

The doctrine of superior responsibility is ill suited to establishing accountability for the actions of a fully autonomous weapon. First, it would be difficult for a superior officer to possess the requisite knowledge of an autonomous robot’s future actions. These actions would often be unforeseeable, especially given that international human rights law and international humanitarian law frequently require context-specific determinations and that in many cases superior officers might not have the scientific expertise to predict how such a complex piece of technology would function. Second, a superior officer who deployed a fully autonomous weapon would face obstacles to preventing or punishing a robot’s unlawful actions. The superior officer would be unable to prevent them if he or she could not foresee how the machine might act in different situations. The superior officer could not punish a fully autonomous weapon after the fact since, as discussed above, a robot cannot feel pain. Unless all of the elements of superior responsibility were met, a superior officer could not be held legally responsible for the actions of a fully autonomous weapon.

Alternatively, the law could try to hold a programmer or manufacturer responsible for the acts of a fully autonomous weapon. Civil tort law offers an approach other than prosecution, but it too would likely fail to ensure the right to a remedy. In the United States, for example, defense contractors are generally not found liable for harm caused by their products. Under the Federal Tort Claims Act, the government waives its immunity from civil suits in certain situations, and the Supreme Court has applied this rule to contractors hired by the government. The waiver, however, is subject to the discretionary function exception and the combatant activities exception.[73] The first grants immunity for design defects in military equipment when:

 (a) the United States approved reasonably precise specifications; (b) the equipment conformed to those specifications; and (c) the supplier warned the United States about dangers in the use of the equipment that were known to the supplier but not to the United States.[74]

The second, the combatant activities exception, states that contractors have “no duty of reasonable care … to those against whom force is directed as a result of authorized military action” and that litigation should not lead to the disclosure of secret weapon designs.[75] The programming and manufacturing of a fully autonomous weapon could fall under at least one of these exceptions, allowing the robot’s creators to escape liability. These two exceptions apply only in the United States, but they are significant because the United States is a leader in the development of autonomous weapons technology. Like the limits of superior responsibility, immunity under tort law could present an obstacle to holding individuals accountable for the actions of fully autonomous weapons.

Even without a legal gap, there are policy and practical problems with holding programmers and manufacturers accountable. Such liability could be unfair since even programmers and manufacturers might be unable to foresee the harm their fully autonomous weapons could cause in various situations. One commentator wrote that assigning them responsibility would be like “holding parents accountable for the actions of their children after they have left their care.”[76] Liability could also create a moral hazard whereby superior officers become more likely to deploy the weapons systems in dangerous situations because they believe programmers and manufacturers would bear any responsibility. Finally, civil suits are generally brought by victims, and it is unrealistic to think all victims would have the resources or adequate access to the courts to obtain justice.[77] This practical limitation is significant because civil litigation against those who program, manufacture, or use such robots would be a more likely avenue of redress than criminal prosecution. Lack of accountability for superior officers, programmers, and manufacturers would thus interfere with victims’ exercise of their right to a remedy.

IV. Human Dignity

The concept of human dignity lies at the heart of international human rights law. The opening words of the UDHR assert that “recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world.”[78] In ascribing inherent dignity to all human beings, the UDHR implies that everyone has worth that deserves respect.[79] The ICCPR establishes the inextricable link between dignity and human rights, stating in its preamble that the rights it enumerates “derive from the inherent dignity of the human person.”[80] Regional treaties echo this position, and the Vienna Declaration of the 1993 World Human Rights Conference affirms that “all human rights derive from the dignity and worth inherent in the human person.”[81]

Fully autonomous weapons would possess the power to kill people yet be unable to respect their dignity. As inanimate machines, they could truly comprehend neither the value of individual life nor the significance of its loss. Allowing them to make determinations to take life away would thus conflict with the principle of dignity.[82]

Critics of fully autonomous weapons have expressed serious moral concerns related to these shortcomings. In his 2013 report to the Human Rights Council, Christof Heyns, the special rapporteur on extrajudicial killing, wrote,

[A] human being somewhere has to take the decision to initiate lethal force and as a result internalize (or assume responsibility for) the cost of each life lost in hostilities, as part of a deliberative process of human interaction…. Delegating this process dehumanizes armed conflict even further and precludes a moment of deliberation in those cases where it may be feasible. Machines lack morality and mortality, and should as a result not have life and death powers over humans.[83]

Heyns described this issue as an “overriding consideration” and declared that if fully autonomous weapons are found morally unacceptable, “no other consideration can justify the deployment of [fully autonomous weapons], no matter the level of technical competence at which they operate.”[84]

Conclusion

Fully autonomous weapons threaten to contravene foundational elements of human rights law. They could violate the right to life, a prerequisite for all other rights. Deficiencies in judgment, compassion, and capacity to identify with human beings could lead to arbitrary killing of civilians during law enforcement or armed conflict operations. Fully autonomous weapons could also cause harm for which individuals could not be held accountable, thus undermining the right to a remedy. Robots could not be punished, and superior officers, programmers, and manufacturers would all be likely to escape liability. Finally, as machines, fully autonomous weapons could not comprehend or respect the inherent dignity of human beings. The inability to uphold this underlying principle of human rights raises serious moral questions about the prospect of allowing a robot to take a human life.

Proponents of fully autonomous weapons might argue that technology could eventually help address the problems identified in this report, and it is impossible to know where science will lead.[85] In a 2013 public letter, however, more than 270 roboticists, artificial intelligence experts, and other scientists expressed their skepticism that adequate developments would be possible.[86] Given this uncertainty, the potential of fully autonomous weapons to violate human rights law, combined with other ethical, legal, policy, and scientific concerns, demands a precautionary approach. The precautionary principle of international law states that “[w]here there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures.”[87] When applied to fully autonomous weapons, this principle calls for preventive action to be taken now.

Human Rights Watch and IHRC recommend a preemptive ban on fully autonomous weapons, which would forestall the troubling consequences described in this report and have great humanitarian benefits. It would also help prevent an arms race, block proliferation, and stop development before countries invest so heavily in this technology that they do not want to give it up.[88] In determining the future of fully autonomous weapons, the international community should seriously consider their human rights implications and ensure the core components of this body of law receive protection.

Acknowledgments

This report was researched and written by Bonnie Docherty, senior researcher in the Arms Division of Human Rights Watch and senior clinical instructor at the International Human Rights Clinic (IHRC) at Harvard Law School. Alysa Harder, Francesca Procaccini, and Caroline Sacerdote, Harvard Law students in the IHRC, contributed to the research. Steve Goose, director of the Arms Division at Human Rights Watch, edited the report, and Mary Wareham, advocacy director of the Arms Division, provided additional feedback. Dinah PoKempner, general counsel, and Tom Porteous, deputy program director, also reviewed the report.

This report was prepared for publication by Andrew Haag, associate in the Arms Division, Kathy Mills, publications specialist, and Fitzroy Hepkins, administrative manager.