Fully autonomous weapons have the potential to increase harm to civilians during armed conflict. They would be unable to meet basic principles of international humanitarian law, they would undercut other, non-legal safeguards that protect civilians, and they would present obstacles to accountability for any casualties that occur. Although fully autonomous weapons do not exist yet, technology is rapidly moving in that direction. These types of weaponized robots could become feasible within decades, and militaries are becoming increasingly invested in their successful development. Before it becomes even more challenging to change course, therefore, states and scientists should take urgent steps to review and regulate the development of technology related to robot autonomy. In particular, states should prohibit the creation of weapons that have full autonomy to decide when to apply lethal force.
To achieve these goals, Human Rights Watch and IHRC recommend:
To All States
Prohibit the development, production, and use of fully autonomous weapons through an international legally binding instrument.
States should preemptively ban fully autonomous weapons because of the threat these kinds of robots would pose to civilians during times of war. A prohibition would ensure that firing decisions are made by humans, who possess the ability to interpret targets’ actions more accurately, have better capacity for judging complex situations, and possess empathy that can lead to acts of mercy. Preserving human involvement in the decision-making loop would also make it easier to identify an individual to hold accountable for any unlawful acts that result from the use of a robotic weapon, thus increasing deterrence and allowing for retribution.
This prohibition should apply to robotic weapons that can make the choice to use lethal force without human input or supervision. It should also apply to weapons with such limited human involvement in targeting decisions that humans are effectively out of the loop. For example, a human may not have enough time to override a computer’s decision to fire on a target, or a single human operator may not be able to maintain adequate oversight of a swarm of dozens of unmanned aircraft. Some on-the-loop weapons could prove as dangerous to civilians as out-of-the-loop ones. Further study will be required to determine where to draw the line between acceptable and unacceptable autonomy for weaponized robots.
Adopt national laws and policies to prohibit the development, production, and use of fully autonomous weapons.
National measures could serve as means of prohibition before the creation of an international instrument. They could also raise awareness of the problems of fully autonomous weapons and help establish best practices on how to deal with them.
Commence reviews of technologies and components that could lead to fully autonomous weapons. These reviews should take place at the very beginning of the development process and continue throughout the development and testing phases.
Such early and ongoing reviews help ensure that states do not develop weapons, like fully autonomous weapons, that fail to comply with international humanitarian law. States should make public their determinations about a weapon’s or technology’s ability to meet legal standards because transparency can allow for monitoring and confidence building. In addition, transparency would enable the reviews to inform public debate about the problems and potential solutions.
To Roboticists and Others Involved in the Development of Robotic Weapons
Establish a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.
A code of conduct for those involved with developing robotic weapons could help ensure that such technology evolves in accordance with the legal and ethical frameworks that protect civilians in armed conflict. Academic and scientific associations could draft and distribute the code. Codes of conduct for military technological development already exist in the fields of synthetic biology and nanotechnology. They serve to increase transparency in research agendas and encourage researchers to adopt socially responsible approaches to scientific development.