Summary
Autonomous weapons systems present numerous risks to humanity, most of which infringe on fundamental obligations and principles of international human rights law. Such systems select and engage targets based on sensor processing rather than human inputs. The threats they pose are far-reaching because of their expected use in law enforcement operations as well as during armed conflict. Given that international human rights law applies during both peacetime and wartime, it covers all circumstances relevant to the use and development of autonomous weapons systems.
This report examines how autonomous weapons systems contravene different human rights obligations and principles. It builds on the 2014 publication by Human Rights Watch and Harvard Law School’s International Human Rights Clinic (IHRC) entitled Shaking the Foundations: The Human Rights Implications of Killer Robots and expands upon it to address three additional rights obligations and principles.[1]
Human Rights Watch is a co-founder of Stop Killer Robots, a campaign of 270 civil society organizations. Together with IHRC, they are working for a new international treaty that ensures meaningful human control over the use of force and avoids digital dehumanization. Such an instrument should prohibit autonomous weapons systems that inherently operate without meaningful human control or that target people. Regulations should ensure that all autonomous weapons systems not covered by the prohibitions operate only with meaningful human control.
States and other stakeholders have examined the challenges raised by autonomous weapons systems and ways to address them for more than a decade. They have primarily approached this topic through an international humanitarian law lens because discussions have taken place in meetings of the Convention on Conventional Weapons (CCW). Nevertheless, participants in that forum and others have recognized the applicability of international human rights law and expressed concerns that the use of autonomous weapons systems may violate it. This report aims to provide a much more in-depth analysis of the issue.
The development and use of autonomous weapons systems implicate at least six core obligations and principles of international human rights law:
Right to Life: The right not to be arbitrarily deprived of life requires that the use of force be necessary to achieve a legitimate aim and be applied in a proportionate manner. The right also requires that lethal force in particular may only be used as a last resort to protect human life. Autonomous weapons systems would face serious difficulties in meeting this three-part test. Obstacles include the limits of current technology as well as deeper technical constraints, which suggest that automated systems would never be able to approximate or surpass distinctly human abilities at certain types of tasks. Autonomous weapons systems could not identify subtle cues of human behavior to interpret the necessity of an attack, would lack the human judgment to weigh proportionality, and could not communicate effectively with an individual to defuse a situation and ensure that lethal force is a last resort. As a result, their use of force would be arbitrary and unlawful.
In situations of armed conflict, international humanitarian law’s rules of distinction and proportionality can be used to determine what is “arbitrary” under the right to life. Although the specific rules are different, autonomous weapons systems would face similar challenges complying with international humanitarian law’s rules on the use of force in armed conflict as they would with international human rights law’s rules in peacetime.

Right to Peaceful Assembly: The right to peaceful assembly, which is particularly relevant to the use of force in law enforcement situations, is essential to democracy and the enjoyment of other human rights. The use of autonomous weapons systems would be incompatible with this right. The systems, which would lack human judgment and could not be pre-programmed or trained to address every situation, would find it challenging to draw the line between peaceful and violent protesters. Force may only be used in exceptional circumstances to disperse assemblies that are unlawful or violent. Autonomous weapons systems, which apply force by definition, would be unlikely to have the capability to accurately assess when and how much force is appropriate. Finally, the use or threat of use of autonomous weapons systems, especially in the hands of abusive governments, could instill fear among protesters and thus cause a chilling effect on free expression and peaceful assembly.
Human Dignity: The principle of human dignity underlies all human rights, including the right to life, and establishes that people have inherent worth that is both universal and inviolable. Autonomous weapons systems would contravene that foundational principle due to their process of making life-and-death determinations. These machines would kill without the uniquely human capacity to understand or respect the true value of a human life because they are not living beings. Furthermore, they would instrumentalize and dehumanize their targets by relying on algorithms that reduce people to data points.
Non-discrimination: The principle of non-discrimination calls for the protection and promotion of human rights for all people, irrespective of race, sex and gender, ability, or other status under the law. Autonomous weapons systems would likely be discriminatory for multiple reasons. For example, biases of developers, including in their programming or choice of training data, could influence a system’s design and later decision-making. Once an autonomous weapon system using artificial intelligence (AI) is deployed, insufficient understanding of how and why the system makes determinations could prevent a human operator from scrutinizing recommended targets and intervening to correct errors before force is applied. As shown by other AI technology, algorithmic bias can disproportionately and negatively affect already marginalized groups. This report explores the potential differentiated effects of autonomous weapons systems on people of color, men and women, and persons with disabilities.
Right to Privacy: The right to privacy, which protects people from unlawful or arbitrary interferences in their personal life, is implicated from the very beginning of an autonomous weapon system’s creation. The development and use of autonomous weapons systems could violate the right because, if they or any of their component systems are based on AI technology, their development, testing, training, and use would likely require mass surveillance. To avoid being arbitrary, such data-gathering practices must be both necessary for reaching a legitimate aim and proportionate to the end sought. Mass surveillance fails both these requirements.
Right to Remedy: The right to remedy, triggered at the end of an autonomous weapon system’s lifecycle, obligates states to prosecute gross violations of international human rights law and serious violations of international humanitarian law and provide several forms of reparations. There are obstacles to holding individual operators criminally liable for the unpredictable actions of a machine they cannot understand, in particular because autonomous weapons systems that rely on AI may make determinations through opaque, “black box” processes. There are also legal challenges to finding programmers and developers responsible under civil law. Thus, the use of autonomous weapons systems would create an accountability gap.
Human actors, whether soldiers on the battlefield or police officers responding to law enforcement situations, also violate such human rights, sometimes egregiously. Unlike autonomous weapons systems, however, for which many of the concerns raised in this report are intrinsic and immutable, people can and do uphold the rights of others every day. People can also face, understand, and abide by the consequences of their actions when they do not. Machines cannot engage in any of those actions.
The infringements on these six human rights obligations and principles exemplify the range of problems raised by autonomous weapons systems during armed conflict and law enforcement operations. The first two are particularly relevant to the systems’ use of force (life and peaceful assembly). The next two relate to foundational cross-cutting principles (dignity and non-discrimination). The final two show that infringements also arise at different stages of the systems’ lifecycle, including the development stage (privacy) and after an attack (remedy). While international human rights law is not the exclusive way to frame the concerns with autonomous weapons systems—they also present ethical, security, international humanitarian law, and other threats—human rights is a critical lens through which to look at this rapidly emerging technology.
Recommendations
To protect human rights and humanity, Human Rights Watch and the International Human Rights Clinic at Harvard Law School call on all states to:
Elaborate on the international human rights concerns raised by autonomous weapons systems, including the infringements on specific obligations and principles;
Begin negotiations as soon as possible on an international treaty to prohibit and regulate autonomous weapons systems;
Ensure that treaty negotiations take place in a forum that commits states to a common purpose, uses voting-based decision-making rules, follows clear and ambitious deadlines, and is inclusive of civil society; and
Use human rights law and principles to bolster the case for a new treaty and make sure that the treaty addresses the range of threats to human rights.
I. Autonomous Weapons Systems and the Need for a New International Treaty
Autonomous Weapons Systems and Their Risks
Autonomous weapons systems select and engage targets based on sensor processing, rather than human inputs. After initial user activation, they rely on software, often using algorithms, together with input from sensors such as cameras, and other data such as radar signatures and heat shapes, to identify a target. Once the systems find a target, they fire or release their payload without approval of or review by a human operator. In other words, a machine rather than a human determines where, when, and against what force is applied.
Some weapons systems with varying degrees of autonomy have existed for years, but the types of targets, duration of operation, geographical scope, and environment in which they operate have been limited. They include missile defense systems such as Israel’s Iron Dome and the US Phalanx Close-In Weapon System. Other examples include armed drones and loitering munitions, or so-called one-way attack drones that stay in the air searching for a target before attacking.[2]
Technological advances and military investments are now spurring the rapid development of autonomous weapons systems that would operate without meaningful human control. Autonomous weapons systems, to which life-and-death determinations would be delegated, could also target people. Such autonomous weapons systems raise a host of ethical, moral, legal, accountability, and security concerns, including those under international human rights law.
Essential Treaty Elements
To address these concerns, Human Rights Watch and IHRC endorse the call, supported by at least 129 countries, for the urgent negotiation and adoption of a legally binding instrument to prohibit and regulate autonomous weapons systems.[3] The proposed elements for this international treaty follow precedents informed by previous disarmament treaties and international law.[4]
The legally binding instrument should have a broad scope. Its obligations should apply under any circumstances, including during situations of armed conflict and peacetime law enforcement operations. The treaty should also cover all autonomous weapons systems, although its prohibitions and regulations will focus on the most problematic ones. By necessitating a thorough assessment of all systems that select and engage targets based on sensor processing, the treaty would seek to ensure that any subset posing legal and ethical concerns does not escape regulation. Its approach would help future-proof the treaty against later technological developments of autonomous weapons systems.
The treaty should include three main obligations.[5] First, it should incorporate a general obligation to maintain meaningful human control over the use of force. This obligation would establish a principle to guide interpretation of the rest of the instrument, and its generality would close unexpected loopholes in other provisions. These factors are particularly important given that novel issues could arise as technology evolves. The general obligation should focus on the regulation of conduct (i.e., use of force) rather than a specific system in order to capture future, potentially unforeseen, technologies. The language “use of force” ensures the obligation would apply to situations of armed conflict and law enforcement operations, both of which are covered by international human rights law.
Second, the legally binding instrument should prohibit the development, production, and use of autonomous weapons systems that by their nature pose fundamental moral or legal problems. More specifically, it should ban autonomous weapons systems that operate without meaningful human control and those that target people. Many of the threats posed by autonomous weapons systems, including legal, accountability, and ethical ones, are attributable to the absence of meaningful human control. In addition, allowing autonomous weapons systems to identify and apply force to people through the use of target profiles, i.e., proxies like weight, heat, or sound, would lead to “digital dehumanization,” violations of human dignity, and discrimination. The dual prohibitions would help prevent harm to civilians and other protected persons.
Third, the treaty should include positive obligations, i.e., regulations, to ensure that meaningful human control is maintained in the use of all autonomous weapons systems that are not prohibited. The intersecting components of meaningful human control on which the regulations could be based include:
Decision-making components (e.g., operators must understand how an autonomous system works);
Technological components (e.g., predictability, reliability, the ability of the operator to intervene); and
Operational components (e.g., geographical and temporal constraints on area of operation, limits on types of targets).
According to Article 36, a UK-based nongovernmental organization, meaningful human control addresses “when, where and how weapons are used; what or whom they are used against; and the effects of their use.”[6]
The threats autonomous weapons systems pose to international human rights law protections, the focus of this report, demonstrate the need for a new legally binding instrument. They show why meaningful human control is needed over weapons systems and why autonomous weapons systems should not be allowed to target people. A treaty with the elements proposed above could address the range of concerns that autonomous weapons systems raise, including those associated with international human rights law.
Diplomatic Efforts to Address Concerns
Most of the debate about how to address concerns over autonomous weapons systems has, since 2014, taken place under the auspices of the Convention on Conventional Weapons (CCW). There, states have focused on the role of autonomous weapons systems in armed conflict and whether the systems could comply with international humanitarian law. It is already apparent, however, that these weapons systems will also be used beyond the battlefield in law enforcement operations. Use in both armed conflict and law enforcement would require compliance with international human rights law.
Many states, international organizations, and civil society groups have recognized that human rights law is important to the autonomous weapons systems discussion. Christof Heyns, the late United Nations special rapporteur on extrajudicial, summary or arbitrary executions, first raised the alarm in a report to the UN Human Rights Council in 2013.[7] Since then, multilateral meetings and numerous UN bodies and experts, including the secretary-general, the Human Rights Committee, the General Assembly, and special rapporteurs, have stressed that the use of autonomous weapons systems would pose threats to international human rights law, and some have argued they should be prohibited (see Appendix). At CCW meetings, states also have noted the applicability of human rights law and expressed concerns that autonomous weapons systems may violate it.
This report explains how autonomous weapons systems would violate six specific human rights obligations and principles. Through this analysis, it seeks to inform the current international debate and show the need for a new treaty prohibiting and regulating autonomous weapons systems.
II. Right to Life
The right to life is the bedrock of international human rights law. The Universal Declaration of Human Rights, the foundational document of this body of law, gave prominence to the right in 1948.[8] The International Covenant on Civil and Political Rights (ICCPR), a cornerstone human rights treaty, codified it in 1966,[9] stating in Article 6: “Every human being has the inherent right to life.”[10] Regional treaties from Africa, the Americas, and Europe also have incorporated the right to life.[11] In its 2019 General Comment No. 36 on the right to life, the UN Human Rights Committee, the independent expert body that provides authoritative interpretation of the ICCPR, describes the right to life as “the supreme right” because it is a prerequisite for all other rights.[12] It is non-derogable, meaning it cannot be suspended, even in “situations of armed conflict and other public emergencies that threaten the life of the nation.”[13]
The right to life prohibits arbitrary killing. The ICCPR states: “No one shall be arbitrarily deprived of his life.”[14] The Human Rights Committee said that this right should be interpreted broadly.[15] ICCPR negotiators understood “arbitrary” as having legal and ethical meaning; for them, it encompassed unlawful and unjust acts.[16] General Comment No. 36 notes that a deprivation of life is arbitrary not only when it is against the law but also when it suffers from “inappropriateness, injustice, lack of predictability and due process of law.”[17] The comment adds that evaluations of arbitrariness must also include “elements of reasonableness, necessity and proportionality.”[18]
The Human Rights Committee specifically addressed the development and use of autonomous weapons systems in General Comment No. 36. It states that “the development of autonomous weapons systems lacking in human compassion and judgment raises difficult legal and ethical questions concerning the right to life.” The Committee recommended that, as a result, “such weapons systems should not be developed and put into operation, either in times of war or in times of peace,” unless it is established that their use conforms with the right to life.[19]
Right to Life in Law Enforcement Situations
The right to life constrains the application of force in a range of situations outside of armed conflict. In General Comment No. 36, the Human Rights Committee highlights the duty of states to prevent arbitrary killings by their law enforcement officers.[20] The UN set parameters for the use of force by law enforcement in the 1979 Code of Conduct for Law Enforcement Officials (1979 Code of Conduct) and the 1990 Basic Principles on the Use of Force and Firearms by Law Enforcement Officials (1990 Basic Principles). Adopted by the UN General Assembly and a UN congress on crime prevention, respectively, these standards provide guidance for understanding the scope of arbitrary killing in law enforcement situations.[21] They expressly note the importance of protecting human rights.[22]
To avoid being arbitrary under the right to life, killings in law enforcement need to meet three cumulative requirements governing when and how much force may be used. The use of force must be both necessary and applied in a proportionate manner. Lethal force, including the use of firearms, may be employed only as a last resort; it is “an extreme measure” adopted exclusively to protect life or prevent serious injuries.[23] Autonomous weapons systems would face significant obstacles to meeting these criteria, particularly because they lack the qualities of “human compassion and judgment,” which are necessary to uphold the right to life.[24]
Necessity
Necessity is the first precondition for the lawful use of force. The 1979 Code of Conduct states that law enforcement officials may employ force only when it is “strictly necessary” and “exceptional.”[25] The use of firearms is even more narrowly restricted to situations when human lives are at risk. The 1990 Basic Principles limit officials’ use of firearms to:
self-defence or defence of others against the imminent threat of death or serious injury, to prevent the perpetration of a particularly serious crime involving grave threat to life, to arrest a person presenting such a danger or resisting their authority, or to prevent his or her escape.[26]
Autonomous weapons systems lack an understanding of context and the ability to carry out nuanced analysis and complex reasoning, qualities that help law enforcement officers, as humans, assess the seriousness of a threat and the necessity of a response.[27] Governments and private companies have developed and are marketing video analytics systems that they claim can identify weapons and other specified objects in a video feed. Such systems, however, may not correctly accumulate and parse contextual data, such as tone of voice or facial expressions, which creates a risk of misidentification and thus misinterpretation of the danger.[28] In addition, a system “could not predict the emotions or action of others” because it does not have emotions itself.[29] As a result, at least at this point, an autonomous weapon system might miss or misconstrue important clues as to whether a real threat to human life existed.[30]
While technology that attempts to approximate these tasks is being developed, it seems unlikely that an autonomous weapon system could ever be developed to accurately interpret contextual information about humans as well as a human does.[31] For example, police officers can generally relate to an individual as a fellow human being, which can help them read the other person’s intentions. Even if the evolution of technology overcame the hurdles to assessing a target, other human rights challenges, including those associated with violations of dignity discussed in Chapter IV, would remain.
The deployment of autonomous weapons systems in law enforcement situations could also affect the actions of the individual posing a potential threat. The individual might not know how to behave when confronted with a machine rather than a law enforcement officer.[32] The individual might respond differently to an autonomous weapon system than to a human being, including by displaying fear or anxiety. As a result, the machine might unintentionally determine the individual was displaying threatening behavior. A robot’s misinterpretation of the necessity of force could trigger an arbitrary killing in violation of the right to life.
Proportionality
International law enforcement standards further specify that any use of force must be proportional to the threat involved. The 1990 Basic Principles require law enforcement officials to “act in proportion to the seriousness of the offence and the legitimate objective to be achieved.”[33] They obligate officials to “exercise restraint” and to minimize the harm they cause.[34] For example, as discussed in the next chapter, in dispersing violent assemblies, officials may use lethal weapons only “to the minimum extent necessary.”[35] The 1979 Code of Conduct similarly highlights the principle of proportionality, stating that law enforcement officials may use force only “to the extent required for the performance of their duty.” Like the 1990 Basic Principles, the Code’s commentary states that force must be proportionate to “the legitimate objective to be achieved.”[36]
Determining an appropriate level of force could be problematic for autonomous weapons systems. First, these systems would not be able to approximate human judgment, which police officers rely on to balance the force of the response with the gravity of the perceived threat.[37] Judgment can be defined as “the ability to make considered decisions or to arrive at reasonable conclusions or opinions on the basis of the available information.”[38] Judgment requires human capabilities of reason and reflection to interpret information and formulate an opinion. In law enforcement, judgment allows officers to assess complicated and quickly evolving situations, taking into account such factors as a suspect’s background, mental state, and demands, and then to make case-by-case decisions to ensure the minimum level of force is used.
It appears doubtful that autonomous weapons systems could determine how much force is proportionate in a particular case. An autonomous weapon system would be limited by its programming and, in the case of systems using AI, its training and related data.[39] Autonomous weapons systems would be designed to react and adapt to changing environments. It seems unlikely, however, that these systems would be able to do so in a way that could determine the appropriate level of force to apply because they would lack the ability to approximate the complex, subjective thinking processes required to assess unforeseen circumstances.[40] Proportionality determinations involve more than quantitative analysis, and according to Christof Heyns, “While robots are especially effective at dealing with quantitative issues, they have limited abilities to make the qualitative assessments that are often called for when dealing with human life.”[41]
Second, autonomous weapons systems would lack the emotions necessary to generate the kind of restraint that the 1990 Basic Principles call on law enforcement officials to exercise. While autonomous weapons systems would not respond to threats in fear or anger, they also would not share the “natural inhibition of humans not to kill or hurt fellow human beings.”[42] As will be discussed more in Chapter IV, compassion contributes to such a resistance, and autonomous weapons systems would lack the emotions and emotional intelligence necessary to exercise this restraint. The right to life does not require the exercise of compassion in particular, but the emotion, along with judgment, can facilitate compliance with proportionality and safeguard against the disproportionate use of force.[43]
Extreme Measure of Last Resort
Law enforcement officials are expected, as far as possible, to use non-violent means before resorting to the use of force. The 1990 Basic Principles limit force and firearms to cases where “other means remain ineffective or without any promise of achieving the intended result.”[44] Both those principles and the commentary to Article 3 of the 1979 Code of Conduct permit the use of firearms only “when less extreme means are insufficient” to save a life.[45] According to the Basic Principles, “intentional lethal use of firearms may only be made when strictly unavoidable in order to protect life.”[46] These standards encourage states to equip law enforcement officials with self-defense equipment and develop less-lethal weapons in order to decrease the need to employ lethal force.[47]
The limitations in autonomous weapons systems’ ability to interpret human behavior and context could interfere with their ability to ensure that all means short of lethal force are exhausted. In addition, the 1990 Basic Principles state that training law enforcement officials in “methods of persuasion, negotiation and mediation” is important to decreasing the use of lethal force.[48] To de-escalate a situation, police officers often appeal to a threatening individual’s reason, emotions, and interests. At present there is little prospect of developing an autonomous weapon system that could carry out tasks of conflict mediation and resolution and defuse a standoff as a human could.[49]
Right to Life in Armed Conflict
Under international human rights law, the right to life is non-derogable, applicable during armed conflict as well as in law enforcement operations.[50] Under circumstances of armed conflict, however, international humanitarian law governs the conduct of hostilities as lex specialis, the more specific rules.[51] In wartime, arbitrary killing refers to unlawful killing under international humanitarian law.[52] In his authoritative commentary on the ICCPR, Manfred Nowak defines arbitrary killings in armed conflict as “those that contradict the humanitarian laws of war.”[53] The International Committee of the Red Cross (ICRC) states that, “[t]he prohibition of ‘arbitrary deprivation of the right to life’ under human rights law … encompasses unlawful killing in the conduct of hostilities.”[54] The ICRC finds unlawful killings go beyond those violations that are considered grave breaches or war crimes, such as direct attacks against civilians, to cover attacks that indiscriminately or disproportionately harm civilians.[55]
Civilian protection in international humanitarian law rests on the principles of distinction and proportionality. The former requires parties to a conflict to distinguish between lawful military targets and civilians, who cannot be lawfully targeted. The principle of distinction proscribes the use of means or methods of war that “cannot be directed at a specific military objective” and are therefore indiscriminate.[56] The international humanitarian law principle of proportionality prohibits attacks in which expected harm to civilians and civilian objects “would be excessive” compared to the anticipated military advantage.[57]
Some of the limitations of autonomous weapons systems that raise concerns in the law enforcement context, such as the lack of human emotion and emotional intelligence and difficulties parsing human context and behavior, could also interfere with the systems’ compliance with international humanitarian law during armed conflict. First, there are serious doubts about whether autonomous weapons systems could distinguish adequately between lawful and unlawful targets. Enemy combatants in contemporary conflicts may shed visible signs of military status, such as uniforms, which makes recognizing their intentions crucial to differentiating them from civilians.[58] Autonomous weapons systems would lack the ability to accurately recognize and interpret subtle behavioral clues to human intentions because they would not possess the emotions and contextual and cultural information essential for these tasks.[59]
Decision support systems (DSSs) that use sensor processing and, in some cases, AI to help militaries make targeting decisions have already been developed and are increasingly used in armed conflicts.[60] These tools, which rely on the same processes that an autonomous weapon system would use to carry out comparable tasks, highlight multiple hurdles such technology faces in distinguishing between combatants and civilians.[61] For example, tools that depend on mobile phone data to locate and identify targets are unreliable. Cell tower triangulation data does not provide precise locations of mobile devices and cannot determine with certainty who has possession of them. Other existing digital tools use machine learning to look for characteristics within a population, or the use of certain known people or objects, and treat them as proxies for other potential targets.[62]
The use of AI to inform military targeting decisions has, in some cases, risked violating international humanitarian law rules on distinction between military targets and civilians and on the requirement to take all feasible precautions before an attack to minimize civilian harm. This decision support system technology reflects the biases of its developers, is often built using incomplete data, and is technically incapable of showing how or on what data or basis targeting decisions are made, limiting the possibility of remedy or accountability.[63] Decision support systems are not autonomous weapons systems because they do not select and engage targets without human intervention. Nevertheless, the dangers they illuminate will only be exacerbated if there is no meaningful human control over the use of force.
Autonomous weapons systems could also face obstacles in making targeting choices that accord with the proportionality test, particularly given the chaos and complexity of the modern battlefield. These tasks “involve distinctively human judgement.”[64] As a result, while autonomous weapons systems are designed to react and adapt to changing environments, it seems unlikely that they could be designed to comply with a military's legal obligation to weigh civilian harm and military advantage in unanticipated situations.[65]
Although the ability of autonomous weapons systems to process complex information will improve in the future, it seems implausible that such systems could ever carry out the tasks necessary to fulfill the right to life as it applies during times of peace or war. The same problem threatens the principle of human dignity, as described in Chapter IV.
III. Right to Peaceful Assembly
The right to peaceful assembly has a narrower reach than the right to life, but it also relates to the use of force. It is particularly applicable to law enforcement operations, although they may involve military forces during an occupation or states of emergency. The Universal Declaration of Human Rights introduced the right to peaceful assembly into international human rights law in 1948.[66] Article 21 of the ICCPR codified it.[67] In its General Comment No. 37 of 2020, the Human Rights Committee refers to peaceful assembly as the “very foundation of a system of participatory governance based on democracy, human rights, the rule of law and pluralism.”[68] Examples of assemblies include meetings, strikes, demonstrations, processions, rallies, and sit-ins.[69] Individuals rely on their right to assemble to voice grievances and influence public policy,[70] as well as to promote the advancement of other human rights.[71] The right is enshrined regionally in treaties from Africa, the Americas, and Europe,[72] and, as of 2020, was included in the constitutions of 184 of the 193 UN member states.[73]
A state can only restrict the right to assembly for certain reasons enumerated in Article 21 of the ICCPR, namely “national security or public safety, public order (ordre public), the protection of public health or morals, or the protection of the rights and freedoms of others.”[74] General Comment No. 37 emphasizes that these exceptions should be understood narrowly and guided by the objective of facilitating peaceful assembly.[75] For example, Article 20 of the ICCPR, which special rapporteurs have applied to the right to peaceful assembly, permits restrictions on an assembly specifically if it incites discrimination, hostility, or violence.[76] Where restrictions are necessary, authorities should first seek to use the least intrusive measures.[77]
The impact of autonomous weapons systems on individuals’ right to assembly raises serious concerns. A February 2016 report by the special rapporteurs Maina Kiai and Christof Heyns found that, in the context of assembly, “[a]utonomous weapons systems that require no meaningful human control should be prohibited.”[78] The Human Rights Committee, in its General Comment No. 37, stated: “Fully autonomous weapons systems, where lethal force can be used against assembly participants without meaningful human intervention once a system has been deployed, must never be used for law enforcement during an assembly.”[79]
Law Enforcement and Assembly
The duties of law enforcement officers include regulating assemblies to ensure they remain peaceful and dispersing them if they become unlawful. The potential for autonomous weapons systems to assume either, or both, of these roles poses a serious threat to human rights.
Regulation of Peaceful Assembly
Under the right to peaceful assembly, states have a positive obligation to protect peaceful protesters from abuse by other members of the public, counter-demonstrators, and private security providers.[80] As a result, law enforcement officers are responsible for ensuring that a non-violent assembly can take place, while minimizing the potential for injury to individuals and damage to property.[81]
The force used to regulate assemblies should comply with the 1990 Basic Principles and the 1979 Code of Conduct.[82] As discussed in Chapter II on the right to life, these rules require that any use of force be strictly necessary and proportionate to the intended objective. Lethal force may only be used as a last resort when there is an imminent threat to life.
Assemblies are complex, dynamic environments requiring difficult judgment calls about whether violence (or a threat of violence) exists and, if so, how to respond. As the Human Rights Committee has emphasized, there is “not always a clear dividing line” between peaceful and violent assembly.[83] An assembly can disrupt vehicular and pedestrian movement, and may include “pushing and shoving” without being violent.[84] Moreover, demonstrations may include both peaceful and violent protesters, or a peaceful group may face violent counter-protesters.[85] As explained in General Comment No. 37, “violence against participants … does not render the assembly non-peaceful.”[86] In addition, members of an assembly may carry items that appear dangerous as part of their expressive message, such as gas masks, helmets, or mock weapons.[87] Peaceful participants may seem threatening if they wear face coverings or hoods, either expressively or to protect their privacy.[88]
Autonomous weapons systems would be an inappropriate tool to regulate such situations. First, determining the necessity and proportionality of a response to an assembly requires careful judgment, which is an inherently human characteristic. Assessing whether an assembly is violent or likely to become so involves consideration and assessment of a wide variety of variables, including, but not limited to, the socio-political history of the demonstration, the often-subtle mannerisms of the assembly participants, the reputation of those participants, and the general environment and atmosphere. An autonomous weapon system could not be pre-programmed or trained to evaluate all possible variables and outcomes, particularly as the analysis must be done in a rapidly changing, case-by-case, and highly nuanced environment.[89] Heyns, discussing autonomous weapon system use during an assembly, further doubted the extent to which “autonomous weapons systems will have the capacity to determine the level of force, including lethal force, permissible in a particular context, especially given the limitations of the systems in terms of understanding human intentions and the subtleties of human behaviour.”[90] There are thus serious questions about whether autonomous weapons systems could determine if it was necessary to react to an assembly with force and how to apply force proportionally.
Second, it seems unlikely that autonomous weapons systems would be able to accurately differentiate among specific violent and non-violent individuals during an assembly, which is often crowded and chaotic. As emphasized by the Inter-American Court of Human Rights, authorities “must spare no effort to distinguish between individuals who are violent or potentially violent, and peaceful protesters” because “acts of sporadic violence or offences by some should not be attributed to others whose intentions and behaviour remain peaceful in nature.”[91] Despite developments in violence detection in video systems, an autonomous weapon system could confuse violent counter-demonstrators with non-violent individuals.[92] For example, distinguishing between an individual who initiated a hostile act and one who reacted in self-defense requires careful judgment and the assessment of a multitude of factors, which, as discussed above, are beyond the capacity of an autonomous weapon system. Moreover, as discussed in the context of non-discrimination, individuals from minority racial groups, or with a disability, might be seen as threatening, given the inherent bias of an autonomous weapon system relying on AI.
Finally, even if autonomous weapons systems passed the above tests, they might be unable to ensure that they respond to violent assemblies with lethal force only as a last resort. Law enforcement officers can establish a dialogue with the parties involved in the assembly to relieve tension and resolve disputes.[93] Humans have the capacity, even in difficult and stressful situations, to relate to one another and establish communication to de-escalate. By contrast, autonomous weapons systems would be unable to carry out the tasks of communication, mediation, and conflict resolution that are necessary to minimize the use of force at a protest or other assembly.[94]
Dispersing Assemblies
An assembly can only legally be dispersed in “exceptional cases.”[95] The 1990 UN Basic Principles recognize two broad situations in which dispersal is permissible. The first is when an assembly is no longer peaceful, or there is clear evidence of an imminent threat of serious violence that cannot reasonably be addressed by more proportionate measures.[96] The second is when the assembly is peaceful but unlawful, and no suitable alternative dispersal measures are available. Under the latter principle, a law enforcement officer may disperse an assembly if, for example, it contravenes Article 20 of the ICCPR by inciting violence, blocks access to essential services, or imposes a “serious and sustained” interference with traffic or the economy.[97]
The 1990 Basic Principles lay out strict guidelines for dispersing assemblies. When an assembly is violent, “law enforcement officials may use firearms only when less dangerous means are not practicable and only to the minimum extent necessary.” The Basic Principles limit firearm use to situations of “self-defence or defence of others against the imminent threat of death or serious injury, to prevent the perpetration of a particularly serious crime involving grave threat to life, to arrest a person presenting such a danger or resisting their authority, or to prevent his or her escape.” Thus, some force may be used to disperse a violent assembly; however, as the Human Rights Committee has noted, it is “never lawful to fire indiscriminately or to use firearms in automatic mode when policing an assembly.”[98] The Basic Principles also proscribe the use of firearms to disperse non-violent assemblies under any circumstances.[99]
Addressing the dangers of autonomous weapon systems, Kiai and Heyns have written that, during an assembly, law enforcement officials need to “remain personally in control of the actual delivery or release of force.”[100] Under this standard, autonomous weapons systems that apply force without meaningful human control would be of particular concern.
An autonomous weapon system would face serious challenges in accurately determining whether force is necessary to disperse an assembly and, if it is, what level of force is appropriate. Because different rules apply to peaceful and violent assemblies, it is vital to distinguish between them before deciding whether to forcibly disperse protesters. An autonomous weapon system would be unable to measure or determine when a protest has reached the level of violence that justifies the use of force because it could not replicate human judgment. Even if it accurately determined that force was necessary for dispersal, an autonomous weapon system operating without meaningful human control would find it extremely difficult to disperse a protest in a targeted and limited way.
The Chilling Effect on Assembly
Use or threat of use of autonomous weapons systems during assemblies also raises concerns about the potential “chilling effect” on the right to peaceful assembly.[101] If the systems fall into the hands of an abusive government, for example, they could be used to suppress dissent. Even if the systems were not used, their existence would likely deter civilians from protesting against the government or its policies. Peter Asaro, a philosopher of science, technology, and media and professor at the New School, writes:
Just as the fear of the police and military—without actual use of violent force—is often enough to keep the masses from protesting, so too could the threatened automated violence serve to keep tyrants in power, without having to actually deploy violence, or by using it only sparingly to demonstrate its potency.[102]
The surveillance and data collection inherent to these systems could also undermine many people’s willingness to protest. Kiai and Heyns stated that:
The act of recording participants may have a chilling effect on the exercise of rights, including freedom of assembly, association and expression. Recording peaceful assembly participants in a context and manner that intimidates or harasses is an impermissible interference to these rights.[103]
Capturing information about protesters could chill assembly both because it puts those individuals at risk of being targeted, and because it could be used as proxy data for determining the behavior or location of family members or neighbors. The use of surveillance drones to record and profile protesters is widespread, including in France, the Occupied Palestinian Territory, and Hong Kong.[104] Some autonomous weapons systems would likely have surveillance capabilities for selecting targets as well as the ability to use force on the basis of this surveillance. The development and training of autonomous weapons systems that use AI would require using large amounts of data. If that data came from surveillance or public data sources, it could infringe on the right to privacy, discussed more in Chapter VI.
Whatever the circumstances, anxiety about the potential use of autonomous weapons systems—characterized by lack of human oversight, incapacity for moral reasoning, absence of compassion, unpredictability, and potential for biased targeting—could discourage potential protesters from exercising their right to peaceful assembly.
Concerns about violations of the right to assembly and the associated chilling effect may apply to a narrower context than some of the other rights discussed in this report, but they highlight the importance of addressing the use of autonomous weapons systems even in situations when they are only or primarily used in law enforcement operations.
IV. Principle of Human Dignity
The concept of human dignity lies at the heart of international human rights law and underpins the right to life. The opening words of the Universal Declaration of Human Rights assert that “recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world.”[105] The ICCPR establishes the inextricable link between dignity and human rights, stating in its preamble that the rights it enumerates “derive from the inherent dignity of the human person.”[106] Regional treaties echo this position.[107]
Human dignity is also a central tenet of international humanitarian law, evidenced through the Martens Clause, which is included in numerous international humanitarian law treaties. The Martens Clause provides that, in situations not covered by existing international agreements, civilians and combatants are nonetheless protected under established custom, the principles of humanity, and the dictates of public conscience.[108] The “principles of humanity” obligate states to respect both human life and, more broadly, human dignity.[109]
The idea of human dignity is premised on the recognition that every human being has inherent worth that is both universal and inviolable.[110] Therefore, if an actor kills without taking into account the worth of the individual victim, the killing undermines the fundamental notion of human dignity. In this sense, determining whether human dignity is respected requires examining the process behind, rather than just the consequences of, the use of force.[111] This process must be carried out by an actor who understands the value of a human life and the significance of its loss. Human beings should be recognized as unique individuals and not reduced to objects.[112] As Christof Heyns stated:
A central thrust of the notion of human dignity is the idea that humans should not be treated as something similar to an object that simply has an instrumental value (as is the case e.g. with slavery or rape) or no value at all (as with many massacres).[113]
With regard to autonomous weapons systems, the dignity critique is not focused on the systems generating the wrong outcomes. Even if autonomous weapons systems could feasibly make no errors in outcomes—something that is extremely unlikely—the human dignity concerns remain, necessitating prohibitions and regulations of such systems.
There are serious doubts about whether autonomous weapons systems could carry out military tasks in a way that reflects respect for human dignity for at least three reasons. Autonomous weapons systems cannot be programmed to give value to human life, do not possess emotions like compassion that can generate restraint against violence, and would rely on processes that dehumanize individuals by making life-and-death decisions based on software and data points.
Inability to Recognize the Value of Human Life
Autonomous weapons systems, “computational systems” that are not moral agents, could not be programmed to assign value to the inherent worth of human life or the significance of its loss.[114] Such robots would not engage in a deliberative process that involves human judgment on matters of life-and-death. Asaro writes, “human beings, who have experienced loss and are ourselves mortal, … have access to the qualitative value of human life.”[115]
Robots also would not face the sometimes-anguished moral choice of humans who internalize the responsibility of taking another person’s life and carry a psychological burden. Autonomous weapons systems are machines for which the determination to kill is nothing more than the inevitable result of their programming.
In his 2013 report to the UN Human Rights Council, Heyns explained:
[A] human being somewhere has to take the decision to initiate lethal force and as a result internalize (or assume responsibility for) the cost of each life lost in hostilities, as part of a deliberative process of human interaction…. Delegating this process dehumanizes armed conflict even further and precludes a moment of deliberation in those cases where it may be feasible. Machines lack morality and mortality, and should as a result not have life and death powers over humans.[116]
Allowing a robot to kill disrespects and demeans the person who is targeted.
To respect human life, taking it must remain a human affair; it should not be delegated to autonomous weapons systems that, however complex their programming, cannot be programmed to value human life. Allowing autonomous weapons systems to take human life would gravely violate human dignity.[117]
Lack of Empathy and Compassion
As machines, autonomous weapons systems would not possess the human qualities of empathy and compassion that motivate humans to avoid taking life. Empathy and the compassion for others that it engenders come naturally to human beings. Most humans have experienced physical or psychological pain, which in turn drives them to try not to inflict unnecessary suffering on others. Their feelings transcend national, political, religious, racial, and other divides. As the ICRC notes, “feelings and gestures of solidarity, compassion, and selflessness are to be found in all cultures.”[118] People’s shared understanding of pain and suffering leads them to show compassion towards fellow human beings and inspires reciprocity that is “perfectly natural.”[119]
The logical result of human empathy and compassion is a fundamental resistance to killing. A retired US Army Ranger who conducted extensive research on killing during armed conflict found that “there is within most men an intense resistance to killing their fellow man. A resistance so strong that, in many circumstances, soldiers on the battlefield will die before they can overcome it.”[120] Armin Krishnan, a security studies professor at East Carolina University, similarly writes: “One of the greatest restraints for the cruelty in war has always been the natural inhibition of humans not to kill or hurt fellow human beings.”[121] Human understanding of their own and others’ pain leads them to go through great efforts to preserve rather than take life.
Autonomous weapons systems could never know physical or psychological suffering, nor understand the suffering of others. Amanda Sharkey, a professor of computer science, has written that “robots, lacking living bodies, cannot feel pain, or even care about themselves, let alone extend that concern to others. How can they empathize with a human’s pain or distress if they are unable to experience either emotion?”[122] It also seems unlikely that such characteristics could ever be approximated or replicated through programming and training.[123] An autonomous weapon system that killed a human would thus do so without any consideration for, or connection with, individuals’ identity as human beings, and without recognition of the weight of their suffering and death. The consequence is that autonomous weapons systems would lack the instinctual human resistance to killing that can protect human life beyond the minimum requirements of the law.[124]
Dehumanization
The ability of autonomous weapons systems to select and engage targets would rely on digital systems that were programmed and, in the case of AI-based systems, trained to carry out these tasks using different types of data. This process would mean that the developers of an autonomous weapon system would, at an earlier time when the life-or-death determination is not imminent, reduce potential human targets to data points to be processed in the future, rather than treating them as real lives to be considered and respected at the time that killing occurs.[125] The unique value of each individual’s life is therefore not considered at the time of development or at the time of attack. The magnitude of data involved in training systems that rely on some types of AI compounds this effect. In his 2017 article, Heyns writes that:
The notion of dignity is directly challenged by the idea that people, including opponents in war, can become a casualty of the algorithmic calculations (literally reducing its targets to the 0’s and 1’s of the digital code) of an unthinking entity if we happen to be in its way. Where that occurs, one person's death is indistinguishable from that of so many others who happen to find themselves in the striking range of generic killing machines.[126]
Killing by autonomous weapons systems would be akin to “mechanical slaughter.” The choice to kill by a system that relies on AI would be founded on a pre-programmed set of instructions and training before deployment, rather than on a deliberate moral decision in the moment.[127] Because each human life is distinctly valuable, human beings should be treated as unique, complex individuals rather than simply as data points in a broader source code.
Furthermore, when an autonomous weapon system determines that force is necessary, it does so based on an assessment of the surroundings that is incomplete and limited by its programming and training. Autonomous weapons systems are incapable of factoring into their operations and decision-making a human’s motivations, background, mannerisms, beliefs, and perspectives.[128] Such understanding is crucial in order to see that individual as a human deserving of inherent worth, value, and respect.
In a December 2023 address on “Artificial Intelligence and Peace,” the late Pope Francis recognized the threats to human dignity posed by autonomous weapons systems. He stated that such systems:
can never be morally responsible subjects. The unique human capacity for moral judgment and ethical decision-making is more than a complex collection of algorithms, and that capacity cannot be reduced to programming a machine, which, as “intelligent” as it may be, remains a machine.[129]
Pope Francis concluded that these risks made “adequate, meaningful and consistent human oversight of weapon systems” imperative.[130]
Given that all human rights are premised on the dignity of every human, the multiple threats autonomous weapons systems pose to this cross-cutting principle strike at the foundation of this body of international law.
V. Principle of Non-Discrimination
Non-discrimination, a general principle of international human rights law, obligates states to enact laws, policies, and practices that ensure equal treatment.[131] It is essential for the protection and promotion of human rights for all people, irrespective of their race, color, sex, disability, or other status.
The principle of non-discrimination appears in numerous instruments of international law. It is enshrined in the UN Charter, which sets one purpose of the UN as the promotion of “respect for human rights and for fundamental freedoms of all without distinction as to race, sex, language or religion.”[132] The Universal Declaration of Human Rights states, “All are equal before the law and are entitled without any discrimination to equal protection of the law.”[133] The ICCPR obligates states parties to undertake “to respect and to ensure to all individuals within [their] territory and subject to [their] jurisdiction the rights recognized in the … Covenant, without any distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.”[134] The International Covenant on Economic, Social and Cultural Rights (ICESCR) contains a similar provision in Article 2.[135]
States remain bound by the duty not to discriminate at all times, including during periods of peace, public emergency, and armed conflict. ICCPR Article 4(1) stipulates that while states parties may take measures suspending (“derogating from”) some obligations under the ICCPR to the extent strictly required by situational exigencies, those measures must not “involve discrimination solely on the ground of race, colour, sex, language, religion or social origin.”[136] The ICESCR does not contain a derogation clause, and the Committee on Economic, Social, and Cultural Rights has confirmed that the principle of non-discrimination applies even during times of armed conflict or public emergency.[137] Customary international law similarly recognizes the prohibition against racial discrimination as a non-derogable norm, which cannot be suspended under any circumstances.[138]
The principle is reflected in international humanitarian law as the prohibition of “adverse distinction,” which the ICRC describes as the international humanitarian law “equivalent” of the principle of non-discrimination.[139] Customary international humanitarian law recognizes the prohibition of adverse distinction as applicable in both international and non-international armed conflicts.[140] It prohibits adverse distinction “based on race, colour, sex, language, religion or belief, political or other opinion, national or social origin, wealth, birth or other status, or on any other similar criteria.”[141] It obligates parties to an armed conflict “to treat persons without distinctions of any kind save those based on the urgency of their needs.”[142] Law enforcement and armed conflict exigencies may therefore not be used to justify discriminatory policies or the unequal application of the law.
Overarching Concerns
Weapons systems that use AI to select and engage targets could present obstacles to state adherence to the principle of non-discrimination. Unlike passive machines capable only of mechanical or predetermined responses, machines using AI are designed to autonomously collect information from a variety of sources, analyze the material, and act on the derived insights.[143] Autonomous weapons systems equipped with machine learning systems are programmed to “learn” from input data in live environments in real time, limiting the predictability of their actions and potentially reinforcing system biases over time.[144]
The biases of human developers and engineers may influence a system’s design and manifest in the system’s decision-making processes. Humans can also train an autonomous weapon system on a non-representative data set, further warping its decision-making and entrenching historical biases. The limited opportunity for humans to flag risks or understand, intercede, and correct system errors can make matters worse, particularly with machine learning models, which enable a system to train itself to find patterns or make predictions.
Such system flaws can lead to algorithmic bias, which refers to the “systemic under- or over-prediction of probabilities for a specific population.”[145] Independent studies have found, for example, gender and skin-type bias in commercial AI systems.[146] These AI technologies can “reproduce and exacerbate existing patterns of discrimination, marginalization, social inequalities, stereotypes and bias—with unpredictable outcomes.”[147]
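To make the notion of algorithmic bias concrete, the short sketch below (in Python, purely illustrative, using invented group labels and numbers rather than data from any real system or the studies cited above) shows how an audit might compare a classifier's misidentification rates across demographic groups. A persistent gap between groups is the kind of systemic over-prediction for one population that the term describes.

```python
# Illustrative sketch only: a hypothetical audit of a classifier's error rates
# broken down by demographic group. The group labels and records are invented
# for demonstration and do not come from any real system or study.

from collections import defaultdict

# Each record: (group, true_label, predicted_label)
# 1 = flagged as a "target"/positive class, 0 = not flagged.
hypothetical_results = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

def misidentification_rates(results):
    """Return the false-positive rate per group: how often people who are
    not targets are wrongly classified as targets."""
    negatives = defaultdict(int)        # true non-targets seen per group
    false_positives = defaultdict(int)  # non-targets wrongly flagged per group
    for group, truth, prediction in results:
        if truth == 0:
            negatives[group] += 1
            if prediction == 1:
                false_positives[group] += 1
    return {g: false_positives[g] / negatives[g] for g in negatives}

for group, rate in misidentification_rates(hypothetical_results).items():
    print(f"{group}: {rate:.0%} of non-targets misidentified")
# A large gap between groups (33% vs. 67% in this toy data) is the kind of
# "systemic over-prediction for a specific population" described above.
```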
AI-based tools that are currently in use to inform military targeting decisions present similar issues. They are prone to problems inherent to AI systems, including issues with data accuracy, misidentification, bias, and specificity, making them inappropriate tools to guide decisions about the use of force.[148] While these tools are not autonomous weapons systems, they rely on the same technology and processes that an autonomous weapon system would to carry out tasks of target identification and selection.
Unequal rates of misidentification and error are particularly grave when they involve autonomous weapons systems that use AI to make life-and-death determinations. Groups of people who are already vulnerable to misidentification may bear a disparate risk of error—a risk with even more serious consequences in the context of autonomous weapons systems that can apply force as well as select targets. Erroneous and unpredictable lethal outcomes jeopardize both individual lives and the principle of non-discrimination.
Autonomous weapons systems would raise additional problems when they operate without meaningful human control. While autonomous weapons systems that rely on AI could reflect the biases of their human developers, they would also lack a human operator who might be able to recognize that bias and override it. Instead, the systems could continue to reproduce or expand an error based on new biased data inputs.[149]
Specific Forms of Discrimination
Dedicated treaties focus on different forms of discrimination. Autonomous weapons systems raise concerns under these instruments, including those related to race, sex, gender, and ability.
Racial Discrimination
The prohibition on racial discrimination, among others, is articulated in the UN Charter and the ICCPR. More specifically, the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) requires states to eliminate “all forms” of racial discrimination and to guarantee the right of everyone to equality before the law, including the “right to security of person and protection by the State against violence or bodily harm, whether inflicted by government officials or by any individual group or institution.”[150] The prohibition on racial discrimination is recognized as having acquired the status of a non-derogable norm of customary international law.[151]
Autonomous weapons systems can reflect the same discriminatory challenges found in AI systems used in other contexts. For example, people of color are particularly prone to misidentification by AI systems. Several studies have shown that facial recognition software has not worked as effectively with people who have dark skin.[152] Studies have also found that some automatic speech recognition systems function “much more poorly” on the speech of Black people.[153] Individuals with intersectional identities may be especially vulnerable. Researchers examining three commercial AI systems found, for example, that the programs’ error rates in determining the gender of light-skinned men were never worse than 0.8 percent, yet the programs’ error rates in determining the gender of darker-skinned women were more than 20 percent in one case and more than 34 percent in the other two.[154] It follows that, compared to other groups, people of color would be unequally prone to misidentification by an autonomous weapon system equipped with AI and thus would be put at heightened risk of being erroneously engaged with force. That disparate impact would violate their rights to equal security of person and protection by the state against violence.
In addition, the data set that autonomous weapons systems would rely upon might be skewed. Historical biases in the training data supplied by human programmers at the time of collection can be built on and amplified as the AI learns from this data.[155] As the UN special rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, E. Tendayi Achiume, wrote in her June 2020 report, emerging digital technologies raise concerns surrounding the “use of and reliance on predictive models that incorporate historical data—data often reflecting discriminatory biases and inaccurate profiling—including in contexts such as law enforcement, national security and immigration.”[156]
The Working Group of Experts on People of African Descent also examined discrimination by algorithms against people of color in the context of policing as well as homeownership, employment, and education, in its 2019 Report to the UN Human Rights Council.[157] The Working Group noted that, in this regard, “data systems and algorithms often incorporate, mask, and perpetuate racism in their design and operation.”[158]
The overrepresentation of certain groups in combatant or criminal databases might bias an autonomous weapon system equipped with AI to disproportionately target people of color, analogous to the disproportionate targeting of people of color by predictive policing models.[159]
A legal expert who participated in the research and drafting of the first UN Human Rights Council report on autonomous weapons systems has highlighted the threat of racial discrimination from autonomous weapons systems specifically in the context of law enforcement operations. According to Thompson Chengeta, international law professor and expert in autonomous weapons systems, engaging the possible racial implications of autonomous weapons systems requires “identify[ing] the potential victims and perpetrators.”[160] Since users of autonomous weapons systems will frequently be states that already disproportionately use lethal force against communities of color, autonomous weapons systems are “likely to be used, in the context of law enforcement and counterterrorism, in situations where it is often the rights of people of color and civilians in Muslim communities that are violated.”[161] There are also ethical issues raised by training autonomous weapon systems on vulnerable populations in conflict settings and repurposing civilian training data from peacetime to wartime use.
Sex and Gender Discrimination
Article 3 of the ICCPR obligates states to ensure equal rights on the basis of sex.[162] The purpose of the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), as its name indicates, is to prohibit discrimination specifically against women.[163]
As is the case with people of color, the risk of misidentification by autonomous weapons systems could be heightened for women as compared to men. Research has found that general facial recognition systems, for example, are typically less effective for women.[164] Voice recognition systems also perform more poorly for women.[165] One study of several facial analysis technologies revealed that facial recognition software is more likely to misclassify transgender and non-binary people, heightening the risk of misidentification.[166]
In other cases, autonomous weapons systems could pose a heightened risk for young men due to algorithmic bias from training data. For example, autonomous weapons systems that use AI could be trained on data predominantly consisting of historical military records, in which men have occupied combat roles, creating a bias toward identifying them as combatants. Furthermore, in certain contexts, such autonomous weapons systems might encounter more male combatants, register them as inputs, and “learn” to identify more men as potential targets.[167] A system may thus be more likely to misidentify a male civilian as a legitimate target. Consequently, biases in the training data and context could lead to disparities in misidentification between men and women, violating the principle of non-discrimination. While biases exist in human operators, those operators may learn to check their biases and may be held accountable for their biases in ways that an autonomous weapon system cannot.
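The following sketch is a simplified, hypothetical illustration of the dynamic described above, not a reconstruction of any actual targeting system: a naive model that learns only base rates from an invented, skewed set of historical records will treat men as far more likely to be combatants than women, regardless of any individual's actual conduct.

```python
# Conceptual sketch only: how a skewed historical data set can bias a naive
# statistical model toward treating men as combatants. All data are invented.

from collections import Counter

# Hypothetical historical records: (sex, was_combatant)
training_records = ([("male", True)] * 90 + [("male", False)] * 10 +
                    [("female", True)] * 2 + [("female", False)] * 98)

def learn_base_rates(records):
    """Estimate P(combatant | sex) by simple frequency counting."""
    totals, combatants = Counter(), Counter()
    for sex, was_combatant in records:
        totals[sex] += 1
        if was_combatant:
            combatants[sex] += 1
    return {sex: combatants[sex] / totals[sex] for sex in totals}

print(learn_base_rates(training_records))
# {'male': 0.9, 'female': 0.02}
# A system relying on such base rates would treat any man it encountered as
# 90 percent likely to be a combatant regardless of his actual behavior,
# which is exactly the disparity in misidentification risk described above.
```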
Disability Discrimination
Discrimination on the basis of one’s ability is specifically prohibited by the Convention on the Rights of Persons with Disabilities (CRPD), which imposes a positive obligation on states to protect persons with disabilities from exploitation, violence, and abuse.[168] The CRPD explicitly applies during armed conflict and other emergencies and obligates states to take “all necessary measures to ensure the protection and safety of persons with disabilities in situations of risk.”[169]
Persons with disabilities would face heightened risks from autonomous weapons systems. Disabilities are complex and wide-ranging. The nature of disabilities may vary over time, and certain disabilities arise only in certain environments or circumstances.[170] As such, disability is not easily sorted into a binary system suitable for assessment by a digital tool.[171] Moreover, it seems unlikely that an autonomous weapon system would be capable of operating in a way that addresses the specific impacts of disability on individuals and reacting accordingly.[172]
The potential for error in identification is likely to be increased for persons with certain disabilities.[173] Persons with disabilities such as Down syndrome or achondroplasia may have distinct facial features or proportions, use unexpected facial expressions, or move their faces in different ways.[174] The same may be true of people who are blind, as blindness can result in differences in eye anatomy and movement.[175] Thus, persons with disabilities risk being misidentified—or not identified at all—by an autonomous weapon system. An autonomous weapon system could fail to detect the presence of a civilian or misidentify an individual as a military target, leading not only to an unlawful use of force but also to a discriminatory error.
Similar problems arise in the context of body recognition. An autonomous weapon system could have software that is able to analyze gestures and gait or predict an individual’s movement path.[176] People with restricted movement, such as those with ALS or quadriplegia, or those with posture differences, such as individuals with cerebral palsy or Parkinson’s, could be identified or categorized differently by an autonomous weapon system.[177] Persons with psychosocial disabilities may not visually appear to have a disability, but could exhibit unexpected behavior—such as moving in a fashion that may be misidentified as threatening.[178]
Additionally, devices used by persons with disabilities could be erroneously assessed by an autonomous weapon system. Assistive devices may be seen as a threat, particularly when they resemble a weapon. An autonomous weapon system could confuse a cane with a gun, especially if the cane had a shiny or metallic appearance and a long, thin shape, and was held close to the body.[179] The weakness of an autonomous weapon system at analyzing context and the risk that its machine learning could reinforce errors exacerbate the dangers. Such problems have already been identified in studies of AI systems. One scholar, testing an AI model that could be used by an autonomous weapon system, found that the model would have used force against someone traveling backwards in a wheelchair because the model expected them to be moving forward.[180] Notably, in this test, the problem could not be fixed by adding more data: the model, which used machine learning, simply became more convinced that the wheelchair should be moving forward.[181]
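A toy example can show how adding more data can entrench rather than correct such an error. The sketch below is a conceptual illustration only, not a reconstruction of the model in the cited study: a simple frequency-based anomaly score, trained on invented data in which nearly all movement is forward, flags backward wheelchair travel as anomalous, and feeding it more data from the same skewed source only increases that confidence.

```python
# Conceptual illustration only, with invented data: a toy anomaly detector
# that learns what "normal" movement looks like from its training data.
# Because nearly all of the data show forward motion, backward motion is
# flagged as anomalous, and more data of the same kind worsens the problem.

def anomaly_score(observations, new_value):
    """Score how unusual new_value is: the fraction of training observations
    that do not match it. Higher means 'more anomalous'."""
    matches = sum(1 for obs in observations if obs == new_value)
    return 1 - matches / len(observations)

training = ["forward"] * 99 + ["backward"] * 1
print(anomaly_score(training, "backward"))   # 0.99: flagged as highly anomalous

# "Fixing" the model with more data drawn from the same skewed source makes
# it more, not less, confident that backward motion is anomalous:
training += ["forward"] * 900
print(anomaly_score(training, "backward"))   # 0.999: even more anomalous
```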
Finally, before determining whether to categorize an individual as a target, certain autonomous weapons systems could issue commands that persons with disabilities may be unable to comply with adequately. A system might require an individual to verbally identify themselves. A person with a hearing disability, speech difficulty, or intellectual disability may struggle to produce an “appropriate” response.[182] A similar problem arises for a blind person, who would not be able to respond to visual cues or warnings given by the system. A human operator, capable of nuanced judgment, might be better able to identify or reason through such issues than an autonomous weapon system that relies on software built with data sets predominantly featuring able-bodied individuals.
The adverse effects of AI in other forums illustrate the high risk that autonomous weapons systems will discriminate against people based on their race, sex, gender, or ability, but this list is not exhaustive. Other groups protected by human rights law may also face unlawful discrimination from the use of autonomous weapons systems. While gathering more data might reduce the potential for error, it could require increased surveillance of marginalized people. As will be discussed in the next chapter, such surveillance would create different discrimination problems as well as infringe on the right to privacy.
VI. Right to Privacy
The right to privacy, which is more relevant to the development of autonomous technology than to its use of force, is recognized in numerous international declarations and treaties.[183] The Universal Declaration of Human Rights enshrines the right to privacy in Article 12, which states that “[n]o one shall be subjected to arbitrary or unlawful interference with his privacy.”[184] It adds that “[e]veryone has the right to the protection of the law against such interference or attacks.”[185] The ICCPR includes identical language regarding the right to privacy in Article 17.[186] The importance of the right to privacy has been consistently reaffirmed by international bodies, including the UN Human Rights Council, which has recognized the right as central to the “free development of an individual’s personality and identity, and an individual’s ability to participate in political, economic, social and cultural life.”[187]
States can legitimately restrict the right to privacy only on a “small number” of “very special” occasions.[188] In those situations, interference cannot be “unlawful” or “arbitrary.”[189] International human rights bodies have clarified the meaning of these terms through various comments, reports, and publications. The term “unlawful” ensures that no interference can take place except in cases envisaged, and clearly outlined, by domestic law, and this law must comply with the ICCPR.[190] The addition of the term “arbitrary” seeks to guarantee that, even if provided for in domestic law, the interference is compatible with the provisions, aims, and objectives of the ICCPR and reasonable in the particular circumstances.[191] In a 2014 report to the UN Human Rights Council, the UN high commissioner for human rights explained that the test for reasonableness is whether the interference was necessary and proportionate.[192]
The high commissioner’s report elaborated upon the meaning of “necessary” and “proportionate” in the context of the right to privacy. The report explained that any restrictions must be “necessary for reaching a legitimate aim, as well as in proportion to the aim and the least intrusive option available.”[193] The “legitimate aim” can include, for example, “protecting national security or the right to life of others,” provided that the limitation has “some chance of achieving that goal.”[194] The authorities imposing the restriction bear the burden of demonstrating it will advance a legitimate aim.[195] The report added that “any limitation to the right to privacy must not render the essence of the right meaningless and must be consistent with other human rights, including the prohibition of discrimination.”[196]
Mass Surveillance
The development and use of autonomous weapons systems raise concerns under the right to privacy. As UN General Assembly Resolution 75/176 on the Right to Privacy in the Digital Age recognizes, “the conception, design, use, deployment and further development of new and emerging technologies, such as those that involve artificial intelligence, may have an impact on the enjoyment of the right to privacy and other human rights.”[197] Autonomous weapons systems are problematic because their training, testing, and use would require systematic and often non-consensual collection of data, especially through mass surveillance.[198] In addition, digital technology enables far-reaching extraterritorial surveillance, and governments are obligated to respect the privacy rights of individuals, regardless of their nationality or location. Such data collection, needed to ensure that weapons systems with AI are functional and effective, would amount to an arbitrary interference with individuals’ privacy in violation of human rights law.
The development of non-weaponized AI systems illustrates the extent of mass data collection involved, typically without the consent or compensation of the original user, creator, or owner of the data.[199] AI systems and tools have an “insatiable appetite for extensive personal data.”[200] As a Google AI researcher has noted, “[b]ecause these datasets can be large … and pull from a range of sources, they can sometimes contain sensitive data, including personally identifiable information—names, phone numbers, addresses, etc., even if trained on public data.”[201]
Mass surveillance is the most likely source of data for autonomous weapons systems that select and engage targets based on AI. The International Principles on the Application of Human Rights to Communications Surveillance, adopted by more than 400 organizations by May 2014, describe “communications surveillance” as the collection, use, and retention of information related to an individual’s communications transmitted through electronic mediums.[202] The principles’ preamble warns that such surveillance can undermine safeguards for “protected information” and reveal details about a person that are not “easily accessible to the general public.”[203]
Because it entails the collection, storage, and use of sensitive personal data, mass surveillance infringes on the right to privacy in numerous problematic ways. For example, the data collected is repurposed without the data subjects’ consent. Individuals can never rescind consent for the use of their data, if they are even aware of it, because the training data shapes algorithmic models and is permanently and perpetually captured by them. Individuals whose information has been included in training data can be identified in some models through reverse engineering.[204] These problems interfere with an individual’s freedom to “participate in political, economic, social and cultural life,” which, as noted above, the right to privacy seeks to protect.[205]
Although they are not autonomous weapons systems, military decision support systems demonstrate the emerging threat that mass surveillance poses in the military use of AI. To allow the systems to identify potential targets, states or companies developing these tools need to gather vast amounts of data in times of both peace and war that they use to train artificial intelligence-based tools for eventual use in armed conflict. The data involved can be gathered through, for example, biometric surveillance technologies, such as face, voice, or gait recognition;[206] tools that gain access to a cell phone’s location, camera, microphone, and text messages;[207] or other types of statistical or personal data that have been repurposed for military uses.[208] This process constitutes an interference with the right to privacy. Furthermore, as discussed in Chapter II, the data produces inaccurate and indiscriminate results, and relying on it could violate international humanitarian law and the human right to life.
The reliance on mass surveillance in weapons systems that use AI to select and engage targets poses an even more serious threat to human rights and humanity. The development and use of autonomous weapons systems would interfere with the right to privacy. Moreover, given the inherent lack of transparency in software and processes involving AI, the ways that personal data would be used, and how it could contribute to life-or-death decisions, would remain in a black box. The data collector as well as the data target would lose control over the sensitive information, and the obfuscation of which data are used and when would mean that this data could be endlessly reused.
Arbitrary Interference
While human rights law permits interference with privacy under very limited circumstances, the data collection practices associated with the development and use of autonomous weapons systems would not fall under those exceptions. They are likely to be arbitrary as they would be unnecessary and disproportionate.
Collecting the data to develop autonomous weapons systems would not be “necessary for reaching a legitimate aim.”[209] When determining if an act is necessary, the European Court of Human Rights considers whether the government implemented the “least onerous” policy measure available, or whether it had alternatives that would have been less restrictive of privacy rights while being equally effective at achieving the same policy goals.[210] Data collection to develop autonomous weapons systems would be an onerous policy measure. It would be a technically complex process that threatens privacy. Goals of protecting national security and public order could likely be advanced by means less harmful than developing autonomous weapons systems.
Regardless, given the wide range of problems posed by autonomous weapons systems, there are serious questions about whether developing them is a legitimate aim. The Human Rights Committee has specified that data collection should “never be used for purposes incompatible with the [ICCPR].”[211] In this case, however, data collection would be used to develop a weapon system that would infringe on numerous fundamental human rights, including the rights to life, assembly, and remedy, as well as the principles of dignity and non-discrimination. More specifically, the UN high commissioner for human rights said a “legitimate aim” could include protecting the right to life, but as discussed in Chapter II, autonomous weapons systems pose a risk to the right to life during times of peace and armed conflict. Therefore, collecting data to develop such weapons or their component parts or software seems incompatible with international human rights law’s test on this matter.
The data collection process would also be disproportionate.[212] States have an obligation to use an option that is “proportional to the end sought”[213] and “the least intrusive option available.”[214] The 2021 report of the UN high commissioner for human rights on the right to privacy in the digital age specifies that any “blanket, indiscriminate retention” of data “would fail the proportionality test.”[215] Because autonomous weapons systems would likely also involve real-time data collection and surveillance of targets, they would need to retain collected data in order to compare data and make determinations on targets. Given the size and indiscriminate nature of the databases, the risks of engaging in such data collection and preservation would exceed any military benefits.
Past application of the right to privacy is consistent with this analysis. For example, the UN high commissioner for human rights has deemed the use of data from third-party sources, such as social media or phone companies, in governmental initiatives as “neither necessary nor proportionate,” and thus an arbitrary interference of privacy.[216] The data collection needed for autonomous weapons systems would likely similarly lack use limitations and transparency, which points to arbitrariness as well.[217] The threats that autonomous weapons systems pose thus undermine the right to privacy.
VII. Right to a Remedy
The right to a remedy, which comes into play at the end of an autonomous weapon system’s lifecycle, strengthens international human rights law by creating a framework for imposing responsibility for violations of all rights. The Universal Declaration of Human Rights lays out the right, and Article 2(3) of the ICCPR obligates states to “ensure that any person whose rights or freedoms … are violated shall have an effective remedy.”[218] Several regional human rights treaties incorporate it as well.[219]
The right to a remedy obligates states to ensure individual accountability. It includes the duty to prosecute individuals for serious violations of human rights law. In its General Comment No. 31, the Human Rights Committee explains that the ICCPR mandates states to investigate allegations of wrongdoing and, if they find evidence of certain types of violations, to bring perpetrators to justice.[220] A failure to investigate and, where appropriate, prosecute “could in and of itself give rise to a separate breach of the Covenant.”[221] The 2005 Basic Principles and Guidelines on the Right to a Remedy and Reparation (2005 Basic Principles and Guidelines), standards adopted by the UN General Assembly, reiterate the obligation to investigate and prosecute. They also require states to punish individuals who are found guilty.[222]
The duty to prosecute applies to acts committed in law enforcement situations or armed conflict. The 2005 Basic Principles and Guidelines require states to prosecute gross violations of international human rights law, and the Human Rights Committee includes arbitrary killings, which contravene the right to life, among those crimes.[223] The 2005 Basic Principles and Guidelines also cover “serious violations of international humanitarian law,” the lex specialis for armed conflict.[224] The Fourth Geneva Convention and its Additional Protocol I, international humanitarian law’s key civilian protection instruments, similarly oblige states to prosecute “grave breaches,” i.e., war crimes, such as intentionally targeting civilians or knowingly launching an attack that would disproportionately harm civilians.[225]
The right to a remedy is not limited to criminal prosecution. It also gives states the responsibility to provide reparations, which can include “restitution, compensation, rehabilitation, satisfaction and guarantees of non-repetition.”[226] States have the responsibility to provide many of these reparations. The 2005 Basic Principles and Guidelines further obligate states to enforce judgments related to claims brought by victims against individuals or entities.[227] The Basic Principles and Guidelines state that they “are without prejudice to the right to a remedy and reparation for victims of all violations of international human rights law,”[228] which suggests that victims are entitled to some form of remedy, even if the violations do not rise to the level of a crime and require prosecution.
Accountability and Autonomous Weapons Systems
Accountability serves a dual purpose. First, it seeks to deter future violations of the law.[229] Second, a remedy serves as retribution, which provides victims the satisfaction that someone was punished for the harm they suffered.[230] Applying these principles to the context of autonomous weapons systems, Heyns stated that if assigning responsibility for a robot’s actions is impossible, “its use should be considered unethical and unlawful as an abhorrent weapon.”[231]
In both law enforcement operations and armed conflict, the actions of autonomous weapons systems would likely fall within an accountability gap that would contravene the right to a remedy. The Human Rights Committee has raised concerns around legal responsibility for arbitrary deprivation of life caused by autonomous weapons systems.[232] It is unclear who would be liable when an autonomous machine operating without meaningful human control makes life-and-death determinations about the use of force. Assigning responsibility to the autonomous weapon system would make little sense because the system could not be punished like a human.[233] Punishment typically involves the loss of rights or privileges, often in a way that causes physical or psychological pain, experiences not applicable to a weapon system.
As discussed below, significant legal and practical obstacles exist to holding accountable the other most likely candidates: the operator, programmer, and manufacturer.[234] It will be difficult to foresee all situations that autonomous weapons systems might face, and thus to predict and account for their actions. In these cases, it is hard to see how one could achieve accountability under existing law in a way that was legally feasible and fair to any person who might be accused. The resulting gap in accountability would run counter to the right to remedy. It would undermine the policy goals of deterrence and retribution. It would leave victims frustrated that no one was held accountable and punished for their suffering.
Command Responsibility
If operators, programmers, or manufacturers used or created an autonomous weapon system with the clear intent to violate the right to life, they could likely be held criminally liable. Of greater concern would be a situation in which such a system committed an arbitrary killing, but there was no evidence a human intended or foresaw it. In such cases, the doctrine of command responsibility, also known as superior responsibility, could provide guidance for considering liability.
Commanders or superior officers are generally not considered accountable for the actions of their subordinates because the latter make autonomous choices, as autonomous weapons systems would. The principle of command responsibility is the primary exception to this rule. According to the 1990 Basic Principles:
Governments and law enforcement agencies shall ensure that superior officers are held responsible if they know, or should have known, that law enforcement officials under their command are resorting, or have resorted, to the unlawful use of force and firearms, and they did not take all measures in their power to prevent, suppress or report such use.[235]
Command responsibility similarly holds military commanders responsible for subordinates’ actions if they knew or should have known their subordinates committed or were going to commit a crime and failed to prevent the crime or punish the subordinates.[236]
The doctrine of command responsibility is ill suited to establishing accountability for the actions of an autonomous weapon system. First, if the operator were viewed as the commander of the autonomous weapon system, it would be difficult for the operator to possess the requisite advance knowledge about whether an autonomous weapon system would commit an unlawful act. If the system were operating without meaningful human control, its actions would often be unforeseeable. The operator would therefore find it challenging to predict if a system would comply with or contravene international human rights or humanitarian law, bodies of law that frequently require case-by-case determinations.
Second, an operator who deployed an autonomous weapon system would face obstacles to preventing or punishing a system’s unlawful actions. The operator would be unable to prevent them if they could not foresee how the machine might act in different situations. The operator could not punish an autonomous weapon system after the fact since, as discussed above, a robot is not responsive to the traditional methods of human punishment. Unless all of these elements of command responsibility were met, the operator could not be held legally responsible for the actions of an autonomous weapon system.
Autonomous weapons systems that rely on AI to select and engage targets would present especially significant obstacles to accountability. For example, it is currently not possible to interrogate and document the steps leading to decisions generated by deep learning systems, sometimes referred to as “black boxes.” As a result, operators could not understand the workings of the autonomous weapon system or the data points that led to the outcome, and thus could not predict or prevent problematic or harmful outputs. The lack of explainability and transparency around high-stakes decisions would interfere with the right to remedy as well as with multiple other rights discussed in this report.
Civil Responsibility
Alternatively, law enforcers could try to hold a programmer or manufacturer responsible for the acts of an autonomous weapon system. Civil tort law offers an approach other than prosecution, but it too would likely fail to ensure the right to a remedy. In the United States, for example, defense contractors are generally not found liable for harm caused by their products. Under the Federal Tort Claims Act, the government waives its immunity from civil suits in certain situations. The US Supreme Court has applied this rule to contractors hired by the government. The waiver, however, is subject to the discretionary function exception and the combatant activities exception.[237] The first grants immunity for design defects in military equipment when:
(a) the United States approved reasonably precise specifications; (b) the equipment conformed to those specifications; and (c) the supplier warned the United States about dangers in the use of the equipment that were known to the supplier but not to the United States.[238]
The second, the combatant activities exception, states that contractors have “no duty of reasonable care … to those against whom force is directed as a result of authorized military action” and that litigation should not lead to the disclosure of secret weapon designs.[239] The programming and manufacturing of an autonomous weapon system could fall under at least one of these exceptions, allowing the system’s creators to escape liability. These two exceptions apply only in the United States, but they are significant because the United States is a leader in the development of autonomous weapons systems. Like the limits of command responsibility, immunity under tort law could present an obstacle to holding individuals accountable for the actions of autonomous weapons systems.
Even without a legal gap, there are policy and practical problems with holding programmers and manufacturers accountable. While programmers and manufacturers should be encouraged to mitigate known risks, they would have limited control over the harm their autonomous weapons systems could cause in various situations and thus liability would often be unfair. Robert Sparrow, an expert in the ethics of new technologies, wrote that assigning them responsibility would be like “holding parents accountable for the actions of their children after they have left their care.”[240] Liability could also create a moral hazard whereby operators become more likely to deploy autonomous weapons systems than traditional weapons in dangerous situations because they believe programmers and manufacturers would bear any responsibility. Finally, civil suits are generally brought by victims, and it is unrealistic to think all victims would have the resources, adequate access to information (especially for a “black box” system), and the legal mechanisms to obtain justice.[241] This practical limitation is significant because civil litigation against those who program, manufacture, or use such robots would be a more likely avenue of redress than criminal prosecution.
Most of the human rights discussed in this report seek to prevent harm to people, but the right to a remedy aims to mitigate harm that has already occurred. Because of the obstacles to holding operators, programmers, and manufacturers accountable, autonomous weapons systems would deprive victims and their families of the opportunity to exercise this reparative right and alleviate their suffering.
The concerns autonomous weapons systems raise under the right to remedy underscore that a new legally binding instrument should both prevent harm and avoid an accountability gap. The treaty’s prohibitions and regulations should also apply under all circumstances, ensure all people are protected in a dignified, non-discriminatory manner, and cover the whole lifecycle of the weapons systems. Only such a comprehensive instrument will effectively address the range of human rights challenges raised by autonomous weapons systems.
Appendix: Chronology of Human Rights Documents on Autonomous Weapons Systems
This chronology shows how reports to United Nations bodies and general comments from various UN treaty bodies have examined autonomous weapons systems through the lens of human rights and in many cases called for a moratorium or prohibition. It also tracks UN General Assembly resolutions and multilateral meeting outcome documents that have indicated the interest of states in discussing the human rights implications of autonomous weapons systems and working for new international law.
Reports to UN Bodies
Since 2013, reports to UN bodies from the UN secretary-general, the high commissioner for human rights, and numerous special rapporteurs, who are independent experts appointed by the Human Rights Council to report and advise on human rights, have highlighted the serious threats autonomous weapons systems pose to specific human rights and to international human rights law in general. The reports have also served as a platform for the authors to call for legally binding prohibitions and regulations on this emerging technology.
2013
UN Human Rights Council, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns: Lethal Autonomous Robotics, A/HRC/23/47, April 9, 2013.
Elaborates, in the first in-depth report on autonomous weapons systems by a UN special rapporteur, concerns about compliance with international humanitarian law and international human rights law and calls for national moratoriums on such weapons systems until the establishment of an international framework.
2014
UN Human Rights Council, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns, A/HRC/26/36, April 1, 2014, paras. 142-145.
Stresses that autonomous weapons systems are not only a disarmament issue but also a human rights issue (particularly relevant to the rights to life and human dignity) and urges the Human Rights Council to remain engaged on the issue.
UN General Assembly, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns, A/69/265, August 6, 2014, paras. 84-87.
Addresses how autonomous weapons systems threaten the rights to life and human dignity in law enforcement operations as well as in situations of armed conflict and calls for a coherent response involving both human rights and disarmament bodies.
2016
UN Human Rights Council, Joint Report of the Special Rapporteur on the Rights to Freedom of Peaceful Assembly and of Association, Maina Kiai, and the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns, on the Proper Management of Assemblies, A/HRC/31/66, February 4, 2016, para. 67(f).
Calls for the prohibition of autonomous weapons systems that do not require meaningful human control and recommends that remotely controlled force should “only ever be used with the greatest caution” by law enforcement during assemblies.
UN General Assembly, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Christof Heyns, A/71/372, September 2, 2016, paras. 75-83.
Calls for a ban on autonomous weapons systems without meaningful human control over their critical functions because they would violate the rights to life and dignity and lack accountability.
2017
UN Human Rights Council, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Agnes Callamard, on a Gender-Sensitive Approach to Arbitrary Killings, A/HRC/35/23, June 6, 2017, para. 48.
Notes that autonomous weapons systems could “reinforce stereotypes of violent masculinities.”
2020
UN Human Rights Council, Impact of New Technologies on the Promotion and Protection of Human Rights in the Context of Assemblies, Including Peaceful Protests: Report of the UN High Commissioner for Human Rights, Michelle Bachelet, A/HRC/44/24, June 24, 2020, para. 45.
Reiterates the recommendation of special rapporteurs that “[f]ully autonomous weapons systems that employ lethal or less-lethal force without meaningful human intervention once deployed should never be used for law enforcement during an assembly.”
UN General Assembly, Report of the Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, E. Tendayi Achiume, A/75/590, November 10, 2020, para. 14.
Urges states to “account for and combat the disproportionate racial, ethnic, and national origin impacts that fully autonomous weapons would have on vulnerable groups, especially refugees, migrants, asylum seekers, stateless persons and related groups,” including in the context of the weapons systems’ potential deployment on national borders.
2021
UN General Assembly, Report of the Special Rapporteur on the Rights of Persons with Disabilities, Gerard Quinn, A/76/146, July 19, 2021, para. 29.
Finds that the use of autonomous weapons systems would “exponentially compound” the difficulties of armed conflict for persons with disabilities, particularly by degrading and destroying essential services and support systems and causing displacement.
UN Human Rights Council, The Right to Privacy in the Digital Age: Report of the UN High Commissioner for Human Rights, Michelle Bachelet, A/HRC/48/31, September 13, 2021.
Reiterates the UN secretary-general’s call for a global prohibition on lethal autonomous weapons systems, examines the impacts of AI technologies more broadly on the right to privacy and other human rights, and calls for a ban on AI applications that cannot operate in compliance with international human rights law.
UN Human Rights Council, Racial and Xenophobic Discrimination and the Use of Digital Technologies in Border and Immigration Enforcement: Report of the Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, E. Tendayi Achiume, A/HRC/48/76, December 17, 2021, para. 16.
Reiterates her 2020 call for states to “account for and combat” the disproportionate impacts fully autonomous weapons would have on vulnerable groups, noting that an increase in the use of autonomous technologies along borders combined with new anti-immigration policies exacerbate the risks.
UN Human Rights Council, Report of the Special Rapporteur on the Rights of Persons with Disabilities, Gerard Quinn, A/HRC/49/52, December 28, 2021, para. 54.
Expresses concern that fully autonomous weapons would lack the ability to assess whether the assistive devices, facial expressions, or emotional reactions of a person with disabilities make that person a threat.
2022
UN General Assembly, Report of the Special Rapporteur on the Rights of Persons with Disabilities, Gerard Quinn, A/77/203, July 20, 2022, paras. 30-31, 80(i).
Raises concerns about whether autonomous weapons systems could interpret the actions of persons with disabilities without erroneously determining them to be a threat and calls for multilateral discussions on the effects of autonomous weapons systems on persons with disabilities and their ability to comply with international humanitarian law.
2023
UN Human Rights Council, Human Rights Implications of the Development, Use and Transfer of New Technologies in the Context of Counterterrorism and Countering and Preventing Violent Extremism, Special Rapporteur on the Promotion and Protection of Human Rights and Fundamental Freedoms while Countering Terrorism, Fionnuala Ní Aoláin, A/HRC/52/39, March 1, 2023, p. 2 (summary) and para. 29.
Highlights the dangers that new counterterrorism and security technologies pose to international human rights law, including the principle of non-discrimination and rights to privacy, expression, association, and political participation, and calls for a global prohibition of lethal autonomous weapons systems.
UN Secretary-General António Guterres, Agenda for Peace, Our Common Agenda: Policy Brief 9, July 20, 2023, p. 27.
Details, among the key challenges in the 21st century, the dangers posed by lethal autonomous weapons systems, including the systems’ “direct threat to human rights and fundamental freedoms,” and calls for a legally binding instrument to prohibit and regulate lethal autonomous weapons systems.
2024
UN Human Rights Council, Autonomous Weapons Systems: Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Morris Tidball-Binz, A/HRC/56/CRP.5, April 16, 2024.
Discusses the threats autonomous weapons systems pose to human rights, including the rights to life and dignity; the challenges of ensuring accountability; and the ways in which proliferation to non-state actors and use in law enforcement situations can exacerbate the human rights problems; calls on states to ban antipersonnel autonomous weapons systems in a legally binding instrument regulating autonomous weapons systems and for states to voluntarily pledge not to use autonomous weapons systems domestically.
UN General Assembly, Strengthening of the Coordination of Emergency Humanitarian Assistance of the UN: Report of the UN Secretary-General António Guterres, A/79/78, May 1, 2024, para. 74.
Underscores the “significant risks” that lethal autonomous weapons pose to civilians, and reiterates the UN secretary-general’s call for a legally binding instrument to prohibit lethal autonomous weapons systems “that function without human control or oversight” and to “regulate all other types of autonomous weapons systems.”
UN Security Council, Protection of Civilians in Armed Conflict: Report of UN Secretary-General António Guterres, S/2024/385, April 22, 2024, para. 55.
Highlights the risks that autonomous weapons systems pose to civilians and reiterates the need for a legally binding instrument to prohibit “autonomous targeting of humans by machines” and autonomous weapons systems that produce unpredictable results.
UN Human Rights Council, Report of the Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, Ashwini K.P., A/HRC/56/68, June 3, 2024, paras. 37-39.
Notes the “very serious human rights implications” of autonomous weapons systems and stresses the “serious risk of grave and, in some circumstances, deadly racial discrimination resulting from the use of” such weapons systems.
UN General Assembly, Lethal Autonomous Weapons Systems: Report of the UN Secretary-General António Guterres, A/79/88, July 1, 2024.
Summarizes submissions of states and organizations on autonomous weapons systems, concluding that there is “widespread recognition of the deleterious effects” that lethal autonomous weapons systems could have, including on human rights and through the erosion of existing legal frameworks; reiterates the secretary-general’s call for a legally binding instrument by 2026.
UN General Assembly, Current Developments in Science and Technology and Their Potential Impact on International Security and Disarmament Efforts: Report of the UN Secretary-General António Guterres, A/79/224, July 23, 2024, paras. 74-77.
Discusses the risks of autonomous weapons systems to the principles of distinction, proportionality, and precaution and highlights concerns about a loss of human control over the use of force.
2025
UN Human Rights Council Advisory Committee, Draft Report on Human Rights Implications of New and Emerging Technologies in the Military Domain, A/HRC/AC/33/CRP.1, February 13, 2025.
Examines how autonomous weapons systems could implicate numerous human rights, including, among others, the rights to life, privacy, and remedy.
General Comments from UN Treaty Bodies
UN treaty bodies, committees of independent experts that monitor implementation of the core international human rights treaties, have issued several general comments that interpret rights relevant to the topic of autonomous weapons systems. The three general comments listed in this section also explicitly address autonomous weapons systems.
2019
UN Human Rights Committee, General Comment No. 36, ICCPR Art. 6: The Right to Life, CCPR/C/GC/36 (2019), para. 65.
States that the “development of autonomous weapon systems lacking in human compassion and judgment raises difficult legal and ethical questions concerning the right to life, including questions relating to legal responsibility for their use.” Concludes that the Human Rights Committee “is therefore of the view that such weapon systems should not be developed and put into operation, either in times of war or in times of peace, unless it has been established that their use conforms with article 6 [the right to life] and other relevant norms of international law.”
2020
UN Human Rights Committee, General Comment No. 37, ICCPR Art. 21: The Right of Peaceful Assembly, CCPR/C/GC/37 (2020), para. 95.
Finds that “[f]ully autonomous weapons systems, where lethal force can be used against assembly participants without meaningful human intervention once a system has been deployed, must never be used for law enforcement during an assembly.”
2024
UN Committee on the Elimination of Racial Discrimination, General Recommendation No. 37, Racial Discrimination in the Enjoyment of the Right to Health, CERD/C/GC/37 (2024), para. 28.
States that “[l]ethal autonomous weapons heighten the risk of systematizing racial bias and dehumanising their targets.”
UN General Assembly Resolutions
The UN General Assembly has taken up the issue of autonomous weapons systems in three resolutions to date. The resolutions have created opportunities to have more in-depth discussions of the systems’ implications for international human rights law in addition to those for international humanitarian law.
2023
UN General Assembly, “Lethal Autonomous Weapons Systems,” A/RES/78/241, December 22, 2023. Adopted by a vote of 152 in favor, 4 opposed, and 11 abstentions.
Stresses the need for the international community to address the risks posed by autonomous weapons systems and requests the UN secretary-general to seek the views of states and other stakeholders on autonomous weapons systems.
2024
UN General Assembly, “Pact for the Future,” A/RES/79/1, September 22, 2024, Action 27.
Calls for urgent discussions under the auspices of the CCW Group of Governmental Experts on the development of an instrument to address lethal autonomous weapons systems.
UN General Assembly, “Lethal Autonomous Weapons Systems,” A/RES/79/62, December 2, 2024. Adopted by a vote of 166 in favor, 3 opposed, and 15 abstentions.
Calls for a “comprehensive and inclusive” approach to addressing the concerns raised by autonomous weapons systems and decides to convene informal consultations in 2025 for states and other stakeholders to discuss the views and proposals presented in the UN secretary-general’s report.
Multilateral Meeting Outcome Statements
Regional and other multilateral meetings on autonomous weapons systems have highlighted the many dangers of the systems, including their human rights consequences. The meetings have also produced outcome documents often endorsed by participating states that express commitments to work toward an international treaty prohibiting and regulating autonomous weapons systems.
2023
Ministry of Foreign Affairs and Worship of Costa Rica, “Communiqué of the Latin American and the Caribbean Conference of Social and Humanitarian Impact of Autonomous Weapons,” Belén, Costa Rica, February 24, 2023.
Latin American and Caribbean (CARICOM) states agree to “collaborate to promote the urgent negotiation of an international legally binding instrument, with prohibitions and regulations with regard to autonomy in weapons systems” to ensure compliance with international law and ethics and to prevent the systems’ social and humanitarian impacts.
Caribbean Community, “CARICOM Declaration on Autonomous Weapons Systems,” CARICOM Conference: The Human Impacts of Autonomous Weapons, Port of Spain, Trinidad and Tobago, September 6, 2023.
CARICOM states highlight the risks, including to human rights, posed by autonomous weapons systems, and commit to work toward a legally binding instrument with prohibitions and regulations on autonomous weapons systems through an inclusive and multidisciplinary approach.
2024
Economic Community of West African States (ECOWAS), “Communiqué of the Regional Conference on the Peace and Security Aspects of Autonomous Weapons Systems: An ECOWAS Perspective,” Freetown, Sierra Leone, April 18, 2024.
ECOWAS states commit to support urgent negotiations of a legally binding treaty regulating autonomous weapons systems in accordance with international law, including international human rights law, and to adopt a cooperative and inclusive approach.
Austrian Federal Ministry for European and International Affairs, “Chair’s Summary,” Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation Conference, Vienna, Austria, April 30, 2024.
Following the Humanity at the Crossroads conference, which brought together states, international organizations, civil society, and others, the conference chair highlighted the plethora of risks, including for human rights, posed by autonomous weapons systems, and urged states to “heed the warning of experts and show the political leadership and foresight” needed to work towards an international legal instrument to regulate autonomous weapons systems.
Acknowledgments
This report was researched and written by Bonnie Docherty, senior arms advisor in the Crisis, Conflict, and Arms Division of Human Rights Watch. She is also a lecturer on law at the International Human Rights Clinic at Harvard Law School (IHRC) and director of the Clinic’s Armed Conflict and Civilian Protection Initiative.
At Human Rights Watch, Mary Wareham, deputy director, Mark Hiznay, associate director, and Steve Goose, arms campaign director, all of the Crisis, Conflict, and Arms Division, edited the report. Anna Bacciarelli, senior artificial intelligence researcher, and Zach Campbell, senior surveillance researcher, both of the Technology, Rights, and Investigations Division, provided research and drafting support. James Ross, legal and policy director, and Tom Porteous, acting program director, provided legal and programmatic reviews, respectively.
Specialist reviews were provided by Samer Muscati, deputy director of the Disability Rights division, and Nicola Paccamiccio, UN Geneva advocacy coordinator.
Susan Aboeid, then Arms division coordinator, provided research and development support. Research and production assistance was provided by the arms associate in the Crisis, Conflict, and Arms division. The report was prepared for publication by Travis Carr, publications officer. Kathleen Rose, senior editor, reviewed the news release accompanying the report.