Elected officials and local authorities across the United States and around the world should consider replicating an innovative legislative proposal that would prohibit police from arming the robots they use in law enforcement operations.
The bill, introduced on March 18 by New York City council members Ben Kallos and Vanessa Gibson, would “prohibit the New York City Police Department (NYPD) from using or threatening to use robots armed with a weapon or to use robots in any manner that is substantially likely to cause death or serious physical injury.”
The proposed law comes after a social media outcry over the use of an unarmed 70-pound ground robot manufactured by Boston Dynamics in a policing operation last month in the Bronx. US Representative Alexandria Ocasio-Cortez criticized its deployment “for testing on low-income communities of color with under-resourced schools” and suggested the city should invest instead in education.
In a statement published in Wired and other news outlets, Boston Dynamics CEO Robert Playter said that the company’s robots “will achieve long-term commercial viability only if people see robots as helpful, beneficial tools without worrying if they’re going to cause harm.” Playter also said that the company prohibits customers from attaching weapons to its robots. The company’s terms of service require buyers of its ground robot — which is unarmed — to not intentionally use it “to harm or intimidate any person or animal, as a weapon, or to enable any weapon.” Other technology companies such as Paravision, Skydio, and Clearpath Robotics have similar measures in place.
Such contractual requirements are a start, but laws are needed to ensure police forces don’t ignore the dangers of weaponized robots as they expand their use of artificial intelligence and emerging technologies. Pledges not to weaponize robots will not prevent a future of digital dehumanization and automated killing.
Fully autonomous weapons systems need to be prohibited in all circumstances, including in armed conflict, law enforcement, and border control, as Human Rights Watch and other members of the Campaign to Stop Killer Robots have advocated.
Allowing machines to select and attack humans without meaningful human control crosses a moral line. Given the serious ethical, legal, operational, and other challenges raised by removing human control from the use of force, regulation in the form of new laws is the only viable option.