
Opinion

The Slippery Slope to Autonomous Killing Machines

Maaike Beenes is Senior Programme Officer for Humanitarian Disarmament at PAX


UTRECHT, The Netherlands, Nov 11 2019 (IPS) - Would you trust an algorithm with your life? If that thought makes you uncomfortable, then you should be concerned about the artificial intelligence (AI) arms race that is quietly taking off, fueled by the arms industry.

Weapon systems that can select and attack targets autonomously, without real human control, are moving from science fiction to reality.

Take, for example, the Warmate 2. This Polish-made loitering munition circles over an area under the remote control of an operator, but can switch to fully autonomous mode once a target has been identified.

Or the Dual-Mode Brimstone, a guided missile that can be assigned a target area within which it then searches for targets matching a predefined target type.

Right now these weapons are under human control, but the technology is designed to keep humans out of the picture. We are already well on our way down a very slippery slope.

For our new report*, published this week, we surveyed 50 weapons producers about their work on increasingly autonomous systems. The results show that although existing systems are still partly controlled, often remotely, by human operators, the industry is moving rapidly towards ever more autonomous systems.

In addition to surveying the 50 companies about their policies and activities, the report analyses publicly available information on the systems they are developing and the military contracts they have already won.


We found only four companies that we could classify as showing ‘best practice’ because they have a policy or statement in place committing them not to develop lethal autonomous weapons. Thirty companies, however, are of ‘high concern’.

These companies are all working on technologies most relevant to lethal autonomous weapons while not having clear policies on how they ensure meaningful human control over such weapons.

The group of high concern companies includes three of the world’s largest arms producers: Lockheed Martin, Boeing and Raytheon (all US), as well as AVIC and CASC (China), IAI, Elbit and Rafael (Israel), Rostec (Russia) and STM (Turkey).

Turkey’s state-owned weapons producer STM, for example, has developed the Kargu system. The Kargu is a kamikaze drone that flies to an area based on preselected coordinates and can then select targets based on facial recognition.

Some reports suggest the Kargu will soon be deployed along the Turkish-Syrian border. This loitering munition may very soon cross the threshold into a weapon system without meaningful human control.

The results of this research are deeply concerning. Lethal autonomous weapon systems, which select and attack targets without meaningful human control, raise a host of legal, security and ethical concerns.

Crucially, removing the human from the ultimate kill-decision means delegating the decision to end a human being’s life to an algorithm-operated machine. This is fundamentally opposed to the right to life and human dignity.

An unarmed drone deployed to a UN peacekeeping mission. Credit: United Nations

But the concerns are not only ethical. Lethal autonomous weapon systems would be able to operate at speeds incomprehensible to humans.

Their high levels of autonomy would also make it very difficult to predict how they will react to unanticipated events, as we have already seen with accidents involving self-driving cars. Any such unintended actions would significantly raise the risk of conflict escalation.

Lethal autonomous weapons are therefore not only unethical, but also pose a serious risk to international peace and security. It is also highly unlikely they would be able to comply with the key principles of International Humanitarian Law (IHL).

IHL requires parties to a conflict to distinguish between civilians and combatants and to assess, for each attack, whether the expected civilian harm is proportional to the anticipated military advantage. These are highly context-dependent judgments, and that is exactly what algorithms are bad at.

These concerns have sparked intense debates among states, which have discussed autonomous weapons at the UN Convention on Certain Conventional Weapons (CCW) since 2013.

These discussions have been productive in the sense that it has become clear that a large majority of states want to ensure meaningful human control over the use of force.

Thirty states have already called for a preventive treaty that prohibits lethal autonomous weapons and ensures such human control. The debate, however, is being stalled by a handful of countries that are enabling a global AI arms race.

It is urgent that states act now, while the development of lethal autonomous weapons can still be prevented rather than cured. Adopting new international law is the most effective way to do that.

It is clear that most states are ready to take responsibility. As they meet this week in Geneva for the annual Meeting of High Contracting Parties to the CCW, it will become clear whether they can make sufficient progress to save the world from a disastrous revolution in warfare.

* The report is available at: https://www.paxforpeace.nl/slippery-slope

 