[Illustration © 2012 Russell Christian for Human Rights Watch]
The United States, along with China, Israel, Russia, South Korea and the United Kingdom, has been investing in weapons systems with decreasing levels of human control over the critical functions of selecting and engaging targets. The fear is that as the human role diminishes, these so-called ‘killer robots’ will eventually take over those critical functions entirely.

Armed drones exemplify this trend toward ever-greater autonomy, but they are still operated by a human who makes the decision to select and fire on targets.

A central concern with fully autonomous weapons is that they will cross a moral line that should never be crossed by permitting machines to make the determination to take a human life on the battlefield or in policing, border control and other circumstances.

On 13 November 2017, representatives from about 80 countries will meet at the United Nations in Geneva to discuss questions relating to what they call lethal autonomous weapons systems. Since their last meeting on the issue in April 2016, concerns have continued to mount over these future weapons. At the same time, there is a debate about whether states at the Convention on Conventional Weapons (CCW) can address this challenge by negotiating a new CCW protocol that bans or restricts these weapons.

Given that countries would not want to fall behind in potentially advantageous military technology, the development of these revolutionary weapons would be likely to lead to an arms race, unless action to put a stop to the whole process is taken now. High-tech militaries might have an edge in the early stages of these weapons’ development, but as costs go down and the technology proliferates, these weapons would likely be mass-produced.

Life-and-death decisions

Qualities such as compassion and empathy, together with lived human experience, make humans uniquely qualified to make the moral decision to apply force in particular situations. No technological improvement can solve the fundamental challenge to humanity that would come from delegating a life-and-death decision to a machine. Any killing orchestrated by a fully autonomous weapon is arguably inherently wrong, since machines are unable to exercise human judgment and compassion.

Humans find it difficult in many circumstances to reliably distinguish between lawful and unlawful targets, but fully autonomous weapons are even more unlikely to reliably make such distinctions, as required by international humanitarian law. While the capabilities of future technology are uncertain, it is highly doubtful that it could ever replicate the full range of inherently human characteristics necessary to comply with the rules of distinction and proportionality.

These weapons also have the potential to commit unlawful acts for which no one could be held responsible. Existing mechanisms for legal accountability are ill-suited and inadequate to address the unlawful harm that fully autonomous weapons would be likely to cause.

One driver behind fully autonomous weapons is the desire to process data and operate at greater speed than human-controlled weapons can at the targeting and/or engagement stages. Such weapons could also operate without a line of communication after they are deployed.

Yet because fully autonomous weapons would have the power to make complex determinations in less structured environments, their speed could lead armed conflicts to spiral rapidly out of control. And regardless of their speed, their ability to operate without a line of communication after deployment would be problematic because the weapons could make poor, independent choices about the use of force.

Since fully autonomous weapons could operate at high speeds and without human control, their actions would also not be tempered by human understanding of political, socio-economic, environmental and humanitarian risks at the moment they engage. Thus, they could trigger a range of unintended consequences, many of which could fundamentally alter relations between states or the nature of ongoing conflicts.

While fully autonomous weapons might create an immediate military benefit for some states, those states should recognise that such advantages would be short-lived once the weapons begin to proliferate. Ultimately, the financial and human costs of developing such weapons systems would leave each state worse off.

Campaign to Stop Killer Robots

For these and other reasons, non-governmental organisations have established the Campaign to Stop Killer Robots to work for a preemptive ban on development, production and use of weapons systems that, once activated, would select and fire on targets without meaningful human control.

Since 2013, 19 countries have endorsed this ban objective and dozens more have affirmed the importance of retaining meaningful or appropriate or adequate human control over critical combat functions of weapons systems. Yet multilateral deliberations on this topic have proceeded at a snail’s pace while technology that will enable the development of fully autonomous weapons bounds ahead.

While international humanitarian law already sets limits on problematic weapons and their use, responsible governments have in the past found it necessary to supplement existing legal frameworks for weapons that by their nature pose significant humanitarian threats.

Some contend that conducting weapons reviews before developing or acquiring fully autonomous weapons would sufficiently regulate the weapons. Weapons reviews are required under Article 36 of Additional Protocol I to the Geneva Conventions to assess the legality of the future use of a new weapon during its design, development and acquisition phases.

Yet weapons reviews are not universal, consistent or rigorously conducted, and they fail to address the implications of using weapons outside an armed conflict context. Few governments conduct weapons reviews, and those that do follow varying standards. Reviews are often too narrow in scope to address every danger posed. States are also not obliged to release their reviews, and none is known to have disclosed information about a review that rejected a proposed weapon.

A binding, absolute ban on fully autonomous weapons would reduce the chance of misuse of the weapons, would be easier to enforce, and would enhance the stigma associated with violations.

Moreover, a ban would maximise the stigmatisation of fully autonomous weapons, creating a widely recognised norm and influencing even those that do not join the treaty. Precedent shows that a ban would be achievable and effective.

After three years of informal talks with no outcome, it’s time for states to negotiate and adopt an international, legally binding instrument that prohibits the development, production and use of fully autonomous weapons. If that is not possible under the auspices of the CCW, states should explore other mechanisms to ban fully autonomous weapons without delay. The future of our humanity depends on it.