
What would happen if countries took a step beyond remote-controlled drones and used weapons that targeted and killed people on their own, without any human intervention? Who would be responsible if one of these weapons made a fatal mistake, and who could be punished? The answer is no one.

Such fully autonomous weapons, or “killer robots,” are under development in several countries. But the robots’ use of force would undermine the fundamental legal and moral principle that people should be held responsible for their wrongdoing.

Countries and nongovernmental groups around the world have been working for two years to figure out how to deal with these weapons before they enter production. In April, representatives from 90 countries met at the United Nations in Geneva for their second round of talks on what to do about “lethal autonomous weapons systems.”

Accountability was a major theme at this year’s talks. It helps deter future violations, provides retribution for victims, and reflects social condemnation of the unacceptable act. But accountability would be sorely lacking for future weapons that would select and engage targets on their own.

Fully autonomous weapons would also raise a host of other concerns. These weapons would make their own determinations about who should be killed, a prospect that many people find repugnant. It would be difficult to make these weapons comply with international law’s protections for civilians in armed conflict and in other situations, such as border control and law enforcement operations. They would also pose a significant risk of an arms race and of proliferation to rogue states and non-state armed groups.

A report I wrote in advance of last month’s meeting for Human Rights Watch and Harvard Law School’s International Human Rights Clinic, “Mind the Gap: The Lack of Accountability for Killer Robots,” helped generate discussion about accountability. We found that in most cases no one would be held legally responsible for the actions of a fully autonomous weapon. The programmer, manufacturer, commander, and operator would all escape liability.

Under international criminal law, a commander might be found guilty of intentionally misusing a robot to kill civilians, but the commander could not be held legally responsible if the weapon acted in an unanticipated way. Such situations would likely occur with a weapon that lacked meaningful human control.

And even if a commander had warning that a weapon was going astray, he or she might be unable to prevent it. Fully autonomous weapons are frequently touted as militarily beneficial because of faster-than-human processing speeds, but these speeds would interfere with a commander’s ability to stop them midstream.

If you could not hold someone responsible under criminal law, could the victim still sue under civil law? A responsibility gap would likely exist under this body of law as well. At least in the United States, the military and military contractors are generally immune from suit. Even if immunity could be overcome, the people bringing suit would find it difficult to prove liability in a case involving highly advanced technology.

An alternative approach proposed for other forms of autonomous technology, such as autonomous cars, is a no-fault compensation scheme. Under such a scheme, victims would need to prove only that they were harmed, not that the harm was foreseeable or the result of a product’s defect. While financial compensation could benefit victims of fully autonomous weapons, it would not close the accountability gap. The absence of a finding of fault would undercut deterrence, retribution, and social condemnation.

The best solution is a pre-emptive ban on the development, production, and use of fully autonomous weapons. A number of countries have joined Human Rights Watch, the Campaign to Stop Killer Robots, and others in calling for such a prohibition. A ban would be easier to enforce than regulation, and it would increase the stigmatization of these weapons; stigma is a powerful tool in international law.

Merely regulating the weapons also leaves room for misuse. Once the weapons entered their arsenals, countries could be tempted to use them inappropriately, as has been the case with other widely condemned weapons, such as cluster munitions.

A prohibition should be adopted now because the more nations invest in the technology, the less willing they will be to give it up. By ensuring that all weapons retain meaningful human control, countries could help protect the principle of personal accountability on the battlefield and the police beat.
