Killer robots, fully autonomous weapons that would go a step beyond even remote-controlled drones, have no business on the battlefield. If these robots unlawfully killed civilians, who would be held responsible? Under international and domestic law, most likely no one, and that is frightening. Human Rights Watch’s Bonnie Docherty explains to Amy Braunschweiger.

What exactly are killer robots?

They are fully autonomous weapons that would select and engage targets without meaningful human control. In other words, the robots would identify whom they wanted to shoot and then determine when to fire. They don’t exist yet, but technology is moving rapidly in that direction. The world’s leaders in autonomous technology – the US, the UK, Israel, Russia, China, and South Korea – have already developed or deployed precursors to these weapons.

There’s a host of problems with these weapons. First, they would struggle to comply with international law because they would lack certain human qualities. If a fully autonomous weapon saw a person, it would often be unable to tell whether that person was a threat or a scared civilian. Also, there’s a moral dilemma – should a machine be making life-and-death decisions, even about killing soldiers? We’re also very concerned about the lack of accountability.

Why is accountability so important?

Without accountability, there is no way to deter future violations of international law and no retribution for victims of past violations. A robot could not be programmed to deal with every situation it might encounter in the field. So if it unlawfully killed a civilian instead of a soldier, who would be responsible? We found that programmers, manufacturers, and military commanders would all likely escape liability.

How does this look under criminal law?

Criminal law, under which a government prosecutes a person, is especially important for accountability – it has harsher penalties, like prison sentences, and carries the weight of social condemnation.

Under criminal law, if a military commander intentionally used a fully autonomous weapon to kill civilians, he or she would be held accountable. But we’re concerned about situations in which the robot acted in an unpredictable way. In those cases, it would be unfair and legally impossible to hold commanders accountable for something they could not foresee or prevent.

How about under civil law?

It would be nearly impossible to hold anyone responsible under civil law, the system that allows an individual to sue another person or a corporation. In the US, the military and its defense contractors are immune from being sued. Many other countries have similar systems.

Some people contend that you could sue under product liability law, the way you would if a car malfunctioned. But victims of killer robots would have to overcome significant evidentiary hurdles to prove a defect in equipment as complex as an autonomous weapon.

Even if a victim of an unlawful killer robot strike won a civil lawsuit, he or she would only be awarded monetary compensation. No one would be punished, and the payment wouldn’t adequately deter others from committing similar violations.

What do we really want when it comes to killer robots?

We’re calling for an absolute ban. We want to see an international treaty prohibiting the development, production, and use of these weapons as well as national laws banning them. There is precedent. In 1995, nations agreed to preemptively ban blinding laser weapons.

From the military perspective, there have to be some advantages to using killer robots over remote-controlled drones. What are they?

Proponents want the robots because they would have faster-than-human processing speeds. But that speed would pose accountability problems. How could a commander prevent a robot from acting if it responded faster than any human could? He or she couldn’t.

Also, proponents tout the fact that the robots could continue operating if communications broke down. But again, how could a commander prevent the robot from taking certain actions if it was out of contact? There would be no way to stop the robot from harming people.

In the US, we’ve seen local police forces using weaponry that trickled down from the military. Could this happen with killer robots, too?  

Military technology does make its way down to local police forces, and we’re worried these weapons may be used in policing. Under human rights law, which applies in peacetime, their use would threaten the right to life, the right to a remedy, and the right to dignity.

Could we regulate this weapon instead of banning it?

No, a ban is necessary. It’s clearer, it eliminates room for interpretation, it’s easier to enforce, and it creates greater stigma against the weapons. If you only regulated these robots, then once countries had them in their arsenals, they would be tempted to use them illegally, or the weapons would make their way to armed groups with no regard for international humanitarian or human rights law.
