May 12, 2014


Fully autonomous weapons, also called killer robots or lethal autonomous robots, do not yet exist, but weapons technology is moving rapidly toward greater autonomy. Fully autonomous weapons represent the step beyond remote-controlled armed drones. Unlike any existing weapons, these robots would identify and fire on targets without meaningful human intervention. They would therefore have the power to determine when to take human life.

The role of fully autonomous weapons in armed conflict and questions about their ability to comply with international humanitarian law have generated heated debate, but the weapons’ likely use beyond the battlefield has been largely ignored. These weapons could easily be adapted for use in law enforcement operations, which would trigger international human rights law. This body of law generally has even more stringent rules than international humanitarian law, which is applicable only in situations of armed conflict.

Because human rights law applies during peace and war, it would be relevant to all circumstances in which fully autonomous weapons might be used. This report examines the weapons’ human rights implications in order to ensure a comprehensive assessment of the benefits and dangers of fully autonomous weapons. The report finds that fully autonomous weapons threaten to violate the foundational rights to life and to a remedy and to undermine the underlying principle of human dignity.

In 2013, the international community recognized the urgency of addressing fully autonomous weapons and initiated discussions in a number of forums. April marked the launch of the Campaign to Stop Killer Robots, an international civil society coalition coordinated by Human Rights Watch that calls for a preemptive ban on the development, production, and use of fully autonomous weapons. The following month, Christof Heyns, the UN special rapporteur on extrajudicial, summary or arbitrary executions, submitted a report to the UN Human Rights Council that presented many objections to this emerging technology and called for national moratoria. In November, 117 states parties to the Convention on Conventional Weapons agreed to hold an experts meeting on what they refer to as lethal autonomous weapons systems in May 2014.

Human Rights Watch and Harvard Law School’s International Human Rights Clinic (IHRC) have co-published a series of papers that highlight the concerns about fully autonomous weapons. In November 2012, they released Losing Humanity: The Case against Killer Robots, the first major civil society report on the topic, and they have since elaborated on the need for a new international treaty that bans the weapons.[1] While these earlier documents focus on the potential problems that fully autonomous weapons pose to civilians in war, this report seeks to expand the discussion by illuminating the concerns that use of the weapons in law enforcement operations raises under human rights law.

Fully autonomous weapons have the potential to contravene the right to life, which the Human Rights Committee describes as “the supreme right.”[2] According to the International Covenant on Civil and Political Rights (ICCPR), “No one shall be arbitrarily deprived of his life.”[3] Killing is lawful only if it meets three cumulative requirements for when and how much force may be used: it must be necessary to protect human life, constitute a last resort, and be applied in a manner proportionate to the threat. Each of these prerequisites for lawful force involves qualitative assessments of specific situations. Due to the infinite number of possible scenarios, robots could not be pre-programmed to handle every circumstance. As a result, fully autonomous weapons would be prone to carrying out arbitrary killings when they encountered unforeseen situations. According to many roboticists, it is highly unlikely in the foreseeable future that robots could be developed to have certain human qualities, such as judgment and the ability to identify with humans, that facilitate compliance with the three criteria.

The use of fully autonomous weapons also threatens to violate the right to a remedy. International law mandates accountability in order to deter future unlawful acts and to punish past ones, which in turn acknowledges victims’ suffering. It is uncertain, however, whether meaningful accountability for the actions of a fully autonomous weapon would be possible. The weapon itself could not be punished or deterred because machines lack the capacity to suffer. Unless a superior officer, programmer, or manufacturer deployed or created such a weapon with the clear intent to commit a crime, none of these people would likely be held accountable for the robot’s actions. The criminal law doctrine of superior responsibility, also called command responsibility, is ill suited to the case of fully autonomous weapons: superior officers might be unable to foresee how an autonomous robot would act in a particular situation, and they could find it difficult to prevent, and impossible to punish, any unlawful conduct. Programmers and manufacturers would likewise probably escape civil liability for the acts of their robots. In the United States, at least, defense contractors are generally granted immunity for the design of weapons. In addition, victims with limited resources and inadequate access to the courts would face significant obstacles to bringing a civil suit.

Finally, fully autonomous weapons could undermine the principle of dignity, which implies that everyone has a worth deserving of respect. As inanimate machines, fully autonomous weapons could not truly comprehend either the value of individual life or the significance of its loss. Allowing them to make determinations to take life away would thus conflict with the principle of dignity.

The human rights implications of fully autonomous weapons compound the many other concerns about use of the weapons. As Human Rights Watch and IHRC have detailed in other documents, the weapons would face difficulties in meeting the requirements of international humanitarian law, such as upholding the principles of distinction and proportionality, in situations of armed conflict. In addition, even if the technological hurdles could be overcome in the future, failure to prohibit fully autonomous weapons now could lead to the deployment of models before their artificial intelligence was perfected and could spark an international robotic arms race. Finally, many critics of fully autonomous weapons have expressed moral outrage at the prospect of humans ceding to machines control over decisions to use lethal force. In this context, the human rights concerns bolster the argument for an international ban on fully autonomous weapons.

[1] Human Rights Watch and Harvard Law School’s International Human Rights Clinic (IHRC), Losing Humanity: The Case against Killer Robots, November 2012; Human Rights Watch and IHRC, “Q&A on Fully Autonomous Weapons,” October 2013; Human Rights Watch and IHRC, “The Need for New Law to Ban Fully Autonomous Weapons,” November 2013.

[2] UN Human Rights Committee, General Comment 6, The Right to Life (Sixteenth session, 1982), Compilation of General Comments and General Recommendations Adopted by Human Rights Treaty Bodies, U.N. Doc. HRI/GEN/1/Rev.1 (1994), p. 6, para. 1. See also Manfred Nowak, U.N. Covenant on Civil and Political Rights: CCPR Commentary (Arlington, VA: N.P. Engel, 2005), p. 104.

[3] International Covenant on Civil and Political Rights (ICCPR), adopted December 16, 1966, G.A. Res. 2200A (XXI), 21 U.N. GAOR Supp. (No. 16) at 52, U.N. Doc. A/6316 (1966), 999 U.N.T.S. 171, entered into force March 23, 1976, art. 6(1).