When the international debate over fully autonomous weapons began in 2013, a common question was whether robots or machines would perform better than humans. Could providing more autonomy in weapons systems result in greater accuracy and precision? Could such weapons increase compliance with international humanitarian law because they would not rape or commit other war crimes? Would they “perform more ethically than human soldiers,” as one roboticist claimed?
For years, roboticist Noel Sharkey, a professor at Sheffield University in England, warned that computers may be better than humans at some tasks, but killing is not one of them. Sharkey and his colleagues became increasingly alarmed that technological advances in computer programming and sensors would make it possible to develop systems capable of selecting targets and firing on them without human control.
They warned that autonomous weapons systems would be able to process data and operate at far greater speed than systems controlled by humans. Complex and unpredictable in their functioning, such systems could make armed conflicts spiral rapidly out of control, leading to regional and global instability. Lacking emotion, autonomous weapons systems would also be more likely to carry out unlawful orders if programmed to do so; morality cannot be outsourced to machines.
With military investments in artificial intelligence and other emerging technologies increasing unabated, Sharkey and his colleagues demanded arms control. Yet China, Israel, Russia, South Korea, Britain, the United States, and other military powers have continued to develop air-, land-, and sea-based autonomous weapons systems.
My organization, Human Rights Watch, took a close look at these investments and at the warnings from the scientific community. It didn’t take long to see how weapons systems that lack meaningful human control would undermine basic principles of international humanitarian law and human rights law, including the rights to life and to a remedy, and the protection of human dignity. We also found that removing human control from the use of force would create a substantial accountability gap: programmers, manufacturers, and military personnel could all escape liability for unlawful deaths and injuries caused by fully autonomous weapons.
As we talked to other groups, the list of fundamental ethical, moral, and operational concerns grew longer. It became clear that delegating life-and-death decisions to machines on the battlefield or in policing, border control, and other circumstances is a step too far. If left unchecked, the move could result in the further dehumanization of warfare.
In 2013, Human Rights Watch and other human rights groups established the Campaign to Stop Killer Robots to provide a coordinated voice on these concerns and to work to ban fully autonomous weapons and retain meaningful human control over the use of force.
Within months, France had convinced more than 100 countries to open diplomatic talks on how to respond to the questions raised by lethal autonomous weapons systems. Before then, no government had considered such questions or met with other states to discuss them. As so often happens, governments did not act until scientists and civil society raised the alarm.
None of the nine United Nations meetings on killer robots held since 2014 has focused at any length on how better programming could be the solution. Nor has there been much interest in discussing whether removing meaningful human control from the use of force offers any benefits or advantages. This shows that the technical fixes proposed years ago are not, on their own, an adequate or appropriate regulatory response.
Instead, the legal debate continues over the adequacy of existing law to prevent civilian harm from fully autonomous weapons. There’s growing acknowledgment that the laws of war were written for humans and cannot be programmed into machines.
Indeed, in 2020 the removal of human control from the use of force is widely regarded as a grave threat to humanity that, like climate change, deserves urgent multilateral action. Political leaders are waking up to this challenge and are pushing for regulation in the form of an international treaty.
A new international treaty to prohibit and restrict killer robots has been endorsed by dozens of countries, UN Secretary-General António Guterres, thousands of artificial intelligence experts and technology sector workers, more than 20 Nobel Peace Prize laureates, and faith and business leaders.
In addition, the International Committee of the Red Cross sees an urgent need for internationally agreed-upon limits on autonomy in weapon systems to satisfy ethical concerns (the dictates of the public conscience and principles of humanity) and ensure compliance with international humanitarian law.
In his address to the United Nations last month, Pope Francis commented on killer robots, warning that lethal autonomous weapons systems would “irreversibly alter the nature of warfare, detaching it further from human agency.” He urged states to “break with the present climate of distrust” that is leading to “an erosion of multilateralism, which is all the more serious in light of the development of new forms of military technology.”
Yet US political leaders, from the Trump administration to Congress, have been largely silent on calls for regulation. US officials claim a 2012 Pentagon directive “neither encourages nor prohibits the development” of lethal autonomous weapons systems. The directive was updated in 2017 with minimal change, and still explicitly permits development of such weapons systems.
A new international treaty to prevent killer robots will happen with or without the United States. As a new report by Human Rights Watch shows, there is ample precedent for such a treaty, and existing international law and artificial intelligence principles show that it is legally, politically, and practically possible to develop one.
The next US administration should review its position on killer robots in the context of the leadership role it wants to take in the world. It should accept that an international ban treaty is the only logical outcome for the diplomatic talks. Technological fixes in this case are not the answer.