The use of artificial intelligence (AI) systems in warfare is a reality. While promoters sell the technology as a way to increase the accuracy of attacks and reduce their lethality, the opposite is true. “If we look at today’s most technologically sophisticated armed conflicts, we do not see the civilian population coming out better off, but rather widespread and indiscriminate devastation,” Cordula Droege, chief legal officer of the International Committee of the Red Cross (ICRC), told the UN Security Council last month.
The 52-year-old German jurist spoke about three military applications of artificial intelligence that, from her organization’s point of view, pose “significant risks”: autonomous weapons systems, such as drones not piloted by remote control; military decision-support programs; and cyber capabilities. She concluded her speech by calling for the prohibition and restriction of autonomous weapons systems, a goal she acknowledges is complicated, given the arms race around artificial intelligence. “We have already done this in the past with chemical or biological weapons,” she explains to EL PAÍS in a video conference from the ICRC headquarters in Geneva.
Question. Are we already seeing the consequences of using artificial intelligence in war scenarios?
Answer. Its consequences are felt by civilians, but also by soldiers. From a legal point of view, two general principles apply. The first is that no weapon should cause unnecessary harm or unnecessary suffering to its victims; chemical weapons, for example, are prohibited on this basis. The second, much more important for new technologies, is that any weapon used on the battlefield must be able to distinguish between combatants and civilians, between military targets and civilian objects. In cyberspace, for example, that distinction is blurred.
Q. You named three applications of AI in the military that particularly concern you. Could you elaborate on them?
A. Artificial intelligence can produce results that have not been programmed in advance. Autonomous weapon systems are launched, but then choose the target based on an algorithm, meaning the user does not know where, when or what the weapon will hit. Most drones today, as far as we know, are remote controlled. But they could be autonomous, and that raises a lot of questions. They can be programmed to target tanks, which are military objectives, but what if they are used to attack non-military vehicles carrying weapons? The software will learn by itself and change who, what and where it hits during its operation. Multiply that by drone swarms and the result is even more unpredictable. In legal terms, this means that the user launches a weapon without knowing whether it will be aimed at a civilian or a combatant, at public infrastructure or at a military target. That makes it an indiscriminate weapon and, therefore, illegal. For this reason, the ICRC recommends banning such systems.
We are also concerned about AI-based decision-support systems, which can integrate and analyze massive amounts of data in seconds to produce targeting or arrest recommendations, but which can condition the user to approve those recommendations automatically. The third category of risk arises when AI is used to enhance cyber capabilities: by finding new ways to penetrate enemy computer systems, it increases the risk of indiscriminate attacks and of collateral damage to civilian infrastructure.
Q. Do you think it is feasible to pass a treaty banning artificial intelligence in autonomous weapons systems?
A. We are asking for a mix of prohibitions and limitations, because not all systems are problematic. For example, an anti-missile system can be autonomous because it is directed at military objects. Such systems just need to be constrained: for example, by requiring that they be deployed where civilians are not present. There is a taboo on the use of autonomous weapons against humans. States say they are not used for that. From the ICRC’s point of view, it is very important to maintain this taboo, for both ethical and legal reasons: it will be very difficult to distinguish between combatants and civilians on complex battlefields where civilians sometimes enter and exit conflict situations. We don’t believe humans should be targeted by algorithms.
Q. Is there always someone pushing the button? Or are there already weapons that operate completely autonomously?
A. As far as we know, drones operate via remote control. But what states are trying to develop, and have probably already developed or even deployed, are systems that work on their own in areas with jamming or communications breakdowns. In those contexts, you press the button at the start, but you do not know when or where the weapon will aim and strike. This is what we are trying to stop and limit.
Q. Would the Lavender algorithm, used by the Israeli army to select targets for its bombing in Gaza, be illegal because it directly targets people?
A. The ICRC does not comment on specific programs. What I can tell you is that one of the categories of artificial intelligence applied to war that we denounce is programs that assist decision makers. Once the action is determined by the system itself, it becomes an autonomous weapon.
Q. The justification for Lavender and other similar programs is that they allow much more informed decisions to be made.
A. We don’t see it that way. The military’s reliance on artificial intelligence to collect data could make its targeting more indiscriminate. When new weapons are developed, it is often claimed that they will bring greater precision to the battlefield and be more reliable and more protective of civilians. But history shows the opposite: new technologies have not created more humane conflicts, but rather more devastating ones.
Q. Who is legally responsible for the action of a fully autonomous weapon?
A. International humanitarian law establishes that in a conflict two or more parties confront each other, be they states or non-state armed groups. Since the Nuremberg Tribunal of 1945, individuals have also been held responsible for their decisions, especially if they commit war crimes. So we have two levels of responsibility: the parties to the conflict, and the commander or soldier who makes the decision.
Q. Do you think the introduction of AI into war scenarios marks a turning point?
A. Yes. The history of armed conflicts is the history of the deployment of new weapons on the battlefield. We will have to be very careful to understand the humanitarian consequences we will face. There will always be pressure to incorporate new technologies into warfare. I don’t rule out that sometimes it is for good reasons, but usually it is to have more lethality, more firepower and more speed than your enemy. Humanitarian considerations are generally not taken into account.
Q. What comments did you receive after your speech to the United Nations Security Council?
A. Our requests are not new. Since 2021 we have been promoting a treaty on autonomous weapons systems. For over 20 years we have been warning about the problems of cyber warfare, which can knock out hospitals or power plants. The ICRC’s work has been to draw states’ attention to the fact that all of this has legal, ethical, human and social consequences that must be taken into account. I think most states are now calling for a treaty on autonomous weapons systems.
Q. Do you think AI is amplifying destruction in wars?
A. Yes, and we will see it more and more. The goal of developing new technologies is to hit more enemies harder, faster and in greater numbers. I don’t see how, for example, swarms of drones will mean less lethality. It is true that most people killed in conflicts die at the hands of conventional weapons such as artillery, mortars or AK-47s. But new technologies, such as drones, are also becoming cheaper and more widely available. So, quite simply, the number and types of weapons available will multiply. From the moment a weapon exists and is produced, the difficulties in controlling it begin.
Q. Over its century-and-a-half history, the ICRC has called for bans on many weapons. What makes you think the world will take AI seriously?
A. The treaties banning chemical and biological weapons have been very successful because they created a taboo. Over the last 100 years we have seen very few uses of these weapons. We must build on these successes and demonstrate that it is possible to unite states. That said, the current environment is not very favorable, in the sense that an arms race is already underway. These weapons are considered strategic. We have no choice but to try; we owe it to our children.
