Over the past year, the development of emerging technologies, many of them incorporating artificial intelligence (AI), has accelerated across all sectors of society, including the military. Some observers have described the development of military AI as an Oppenheimer moment: the point at which the military applications of a technology risk taking humanity across a dangerous line.
In conflicts around the world, wars are being digitized at a rapid pace. In Ukraine, new weapons systems incorporating autonomy are being developed for use on the battlefield. In Libya, the United Nations has reported that fighters “were subsequently hunted down and remotely engaged” by loitering munitions, which linger in the air searching for a target and attack once one is detected. In Burkina Faso and Ethiopia, militaries have used armed drones in attacks that killed civilians. In Gaza, the Israeli military has used digital support systems called 'Gospel' and 'Lavender' to inform its targeting decisions.
The risks are enormous. Today, humans still make the final decisions on specific attacks, but the technology is developing rapidly. Scientists, AI experts, and nongovernmental organizations such as Human Rights Watch have been warning for over a decade about the development of autonomous weapons systems, sometimes referred to as 'killer robots'. Soon, the world could face a shift in which weapons systems select targets and open fire without any human intervention.
With the security climate in Europe deteriorating, there are strong calls for rapid development of military technology. But this development should not be naïve or limitless. There are strong objections to developing weapons systems that lack meaningful human control and that can target people. Entrusting life-and-death decisions to a machine, thereby reducing humans to data points, is morally unacceptable.
These systems lack the ability to interpret context or make assessments in unpredictable and rapidly changing environments, such as determining whether a target is military or civilian, whether an armed person poses a threat, or whether an injured soldier has surrendered. These determinations require human judgment, ethics, and empathy.
Autonomous weapons systems will also not be limited to the battlefield.
A new report by Human Rights Watch explores how such weapons could be used during peacetime in law enforcement, border control and other circumstances. Autonomous weapons systems would lack the ability to interpret complex situations and to accurately approximate human judgment and emotion, elements that are essential to lawfully using force in accordance with the rights to life and peaceful assembly.
Autonomous weapons systems relying on artificial intelligence would likely be discriminatory due to developers’ biases and the inherent lack of transparency of machine learning.
Such weapons systems would violate human rights throughout their life cycle: the mass surveillance necessary for their development and training would undermine the right to privacy, while the accountability gap of these black-box systems would infringe upon the right to a remedy after an unlawful attack.
Autonomous weapons systems may be cheap to produce and could easily fall into the hands of organized crime or terrorist groups, posing an entirely new threat to our societies. This technology also risks paving the way for new forms of surveillance, abuse, and repression by authoritarian regimes.
The international community needs to act now - by creating a new international treaty to prohibit and regulate autonomous weapons systems. More than 120 countries are now on record as calling for the adoption of such a treaty. UN Secretary-General António Guterres and the president of the International Committee of the Red Cross, Mirjana Spoljaric, have urged UN member states to agree on a legally binding instrument by 2026.
Concerned governments will convene at the UN General Assembly in New York from May 12 to 13 to consider for the first time the challenges raised by autonomous weapons systems and how to address them. Human Rights Watch’s new report traces support for a new international treaty and its elements.
Sweden should give clear support for opening negotiations on a treaty
The Swedish government has kept a low profile on the issue and, unlike its neighbor Norway, has not committed to working for a new international treaty.
The government's AI Commission recently submitted its final report, Roadmap for Sweden, which states that Sweden needs to develop AI in a sustainable and safe way and to promote such development in the defense industry. There should be no contradiction between promoting technological development and addressing threats to humanity. On the contrary, as a technology-friendly nation with strong roots in democratic values and international law, Sweden is particularly well placed to pursue this issue with credibility.
Sweden should promptly:
1. Take a position in favor of an approach to new technology grounded in international law and ethics, which means adopting new legally binding rules on autonomous weapons systems.
2. Support prohibitions and regulations on autonomous weapons within the framework of the UN General Assembly, where all countries have an equal voice and, unlike the Security Council, no country has a veto.
3. Use the upcoming UN meeting in May to push for a concrete negotiating mandate for a legally binding instrument, together with like-minded states.
At this Oppenheimer moment, a wait-and-see approach is not an option. Sweden should act now, before it is too late.