Artificial Intelligence in Warfare: Navigating the Legal Challenges Under International Humanitarian Law

Oct. 28, 2024   •   Nandini Shaw

Introduction

The rapid integration of Artificial Intelligence (AI) into military operations, particularly through the deployment of autonomous weapon systems (AWS), has transformed the landscape of modern warfare. AWS, capable of identifying and engaging targets without human intervention, present both opportunities and profound legal challenges. On one hand, AI promises greater precision and reduced risk to human soldiers; on the other, it raises concerns about compliance with International Humanitarian Law (IHL), especially the principles of distinction, proportionality, and accountability.

This article addresses these challenges by analyzing the legal implications of AI in warfare under IHL. It examines whether AI can comply with IHL provisions and who should be held responsible when AI systems cause unlawful harm. The article is structured into three main sections: the compliance of AWS with IHL principles, accountability for AI-driven military actions, and ethical considerations regarding human control over AI in warfare. Landmark cases are discussed to contextualize these issues.

Background

International Humanitarian Law, particularly the Geneva Conventions and their Additional Protocols, regulates the conduct of warfare. The main IHL principles relevant to the use of AI in warfare are:

  1. Distinction: Article 48 of Additional Protocol I to the Geneva Conventions (1977) mandates that parties distinguish between civilians and combatants. Attacks must be directed only at military objectives.
  2. Proportionality: Article 51(5)(b) of Additional Protocol I prohibits attacks that may cause excessive civilian harm relative to the anticipated military advantage.
  3. Military Necessity: Article 52 of Additional Protocol I allows measures necessary for achieving a legitimate military objective, but these measures must remain consistent with IHL's humanitarian protections.

While these principles were crafted for human-directed warfare, the use of AI introduces questions about whether autonomous systems can adhere to these provisions and how violations should be addressed.

Section 1: Autonomous Weapon Systems and the Principle of Distinction

The principle of distinction is foundational to IHL, requiring combatants to distinguish between civilian and military targets. AWS, however, face significant challenges in this regard due to the complexity of real-time decision-making in unpredictable battlefield environments.

Prosecutor v. Tadic (ICTY, 1999): In this case, the International Criminal Tribunal for the Former Yugoslavia (ICTY) reaffirmed the importance of distinguishing between combatants and non-combatants. The failure to distinguish civilians from military targets was classified as a war crime. AWS, which operate on the basis of predefined algorithms, may not be able to make this distinction consistently, increasing the risk of unlawful targeting.

AWS are trained on historical data and operate within rigid parameters, meaning they may struggle to adapt to rapidly changing scenarios or the presence of irregular combatants, potentially leading to indiscriminate attacks.

Section 2: Proportionality and Civilian Protection

The principle of proportionality dictates that any harm inflicted on civilians during an attack must not be excessive in relation to the expected military gain. This principle poses particular challenges for AI systems, as proportionality assessments require nuanced, qualitative judgments about factors such as the value of a target, the risk to civilians, and military necessity; these are judgments that AI may struggle to make effectively.

Nuclear Weapons Advisory Opinion (ICJ, 1996): In this advisory opinion, the International Court of Justice (ICJ) discussed proportionality in the context of nuclear weapons, emphasizing that any use of force must carefully balance military advantage with civilian harm. While the case dealt with nuclear weapons, the proportionality analysis is relevant to AWS. AI systems may not have the capacity to make these nuanced judgments, risking disproportionate harm to civilians.

AWS operate on the basis of algorithmic decision-making that lacks the human element needed to weigh these competing interests. There is also the risk of errors in AI's data interpretation, which could lead to excessive use of force.

Section 3: Accountability for AI in Warfare

One of the most significant legal gaps in the use of AI in warfare concerns accountability. Traditional IHL assigns responsibility to military commanders and states for violations of the laws of war. With AWS, the diffusion of responsibility among AI developers, military operators, and commanders creates uncertainty over who should be held liable for unlawful acts.

Prosecutor v. Jean-Paul Akayesu (ICTR, 1998): The concept of command responsibility was pivotal in this case, where commanders were held responsible for the actions of their subordinates. The principle of command responsibility could extend to AI, in that military commanders deploying AWS may still be held accountable for unlawful acts committed by those systems. However, the autonomous nature of AWS complicates this attribution, as the system may act independently of direct human control.

If an AI system operates on the basis of faulty programming or poor data, the liability may extend to the developers or the state that deployed the system. This ambiguity in accountability raises the need for a new legal framework that defines responsibility in AI-driven warfare.

Discussion

The challenges posed by AI in warfare cannot be fully addressed by the existing provisions of IHL. The principle of distinction may be compromised by the limitations of AWS in distinguishing civilians from combatants, while the proportionality requirement may be difficult for AI systems to meet due to their inability to make value-based judgments. Furthermore, the diffusion of responsibility between humans and machines complicates the application of accountability under IHL.

Possible Solutions:

  1. Meaningful Human Control: Ensuring that humans retain control over critical decisions in warfare is a potential solution. This would require legal frameworks mandating human oversight of AI systems, particularly when life-and-death decisions are involved.
  2. New Legal Instruments: The international community could adopt a treaty specifically regulating the use of AI in warfare. This treaty could outline the permissible scope of AI's use, establish accountability mechanisms, and clarify the extent of human oversight required.

Conclusion

AI's use in warfare presents profound legal challenges under International Humanitarian Law. While landmark cases such as Tadic and the Nuclear Weapons Advisory Opinion offer insights into the principles of distinction and proportionality, their application to AI remains problematic. Autonomous weapon systems may fail to comply with core IHL principles, and the issue of accountability remains unresolved. To address these challenges, new legal frameworks, potentially through international treaties, must be developed to regulate the use of AI in warfare and ensure adherence to IHL's humanitarian goals.

References

Book

  1. Fleck, D., The Handbook of International Humanitarian Law, 3rd edn (Oxford University Press, 2013).

Online Sources

  1. International Committee of the Red Cross (ICRC), Autonomous Weapons.
  2. United Nations Institute for Disarmament Research (UNIDIR), The Weaponization of Increasingly Autonomous Technologies.

Legal Cases

  1. Prosecutor v. Tadic (ICTY, 1999) IT-94-1-A.
  2. Nuclear Weapons Advisory Opinion (ICJ, 1996).
  3. Prosecutor v. Jean-Paul Akayesu (ICTR, 1998) ICTR-96-4-T.

Legislation

  1. Geneva Conventions (1949).
  2. Additional Protocol I to the Geneva Conventions (1977).
  3. Hague Regulations (1907).
