How AI and Automated Systems Are Changing Modern Warfare: Legal Concerns in Gaza

Military and Defense

[Disclaimer] This article has been reconstructed from information in external sources. Please verify the original source before relying on this content.

News Summary

The following content was published online. A translated summary is presented below. See the source for details.

Legal scholar Khalil Dewan has raised concerns about the use of artificial intelligence and automated systems in military operations in Gaza. In a detailed interview, Dewan explains how AI-powered surveillance systems can track thousands of people simultaneously, analyze patterns of movement, and identify potential targets without human oversight. He discusses reports of systems that can generate target lists faster than humans can review them, so that life-and-death decisions become partially automated. Dewan emphasizes that international humanitarian law requires human judgment in military decisions, especially those involving civilian harm. He warns that the speed and scale of AI systems may be outpacing legal frameworks designed to protect civilians in conflict zones. The conversation highlights growing concerns among legal experts about accountability when algorithms contribute to military decisions and the need for updated international laws to address AI warfare.

Source: UntoldMag via Global Voices

Our Commentary

Background and Context


Artificial Intelligence (AI) in military operations isn’t science fiction anymore – it’s happening now. AI systems can process vast amounts of data from cameras, drones, phones, and satellites to identify patterns humans might miss. In conflict zones, these systems are being used to track movements, predict behaviors, and identify potential military targets.

International humanitarian law, also known as the laws of war, has existed for over 150 years. These laws require militaries to distinguish between combatants and civilians, use proportional force, and take precautions to minimize civilian harm. However, these laws were written when humans made all military decisions. The introduction of AI creates new challenges because machines can now influence or even make decisions that previously required human judgment.

Expert Analysis

Legal scholars like Khalil Dewan worry about several key issues with AI warfare. First is the speed problem: AI can generate thousands of potential targets in minutes, but reviewing each one properly takes much longer. This creates pressure to trust the AI’s recommendations without adequate human review.

Second is the accountability problem: If an AI system contributes to a decision that harms civilians, who is responsible? The programmer who wrote the code? The commander who approved using the system? The soldier who acted on its recommendation? Current laws don’t clearly answer these questions.

Third is the transparency problem: Many AI systems are “black boxes” – even their creators don’t fully understand how they reach specific conclusions. This makes it difficult to challenge decisions or learn from mistakes.

Additional Data and Fact Reinforcement

Modern AI surveillance systems can:

• Track thousands of individuals simultaneously across multiple cameras

• Analyze communication patterns from phone and internet data

• Predict likely movements based on historical behavior patterns

• Generate target recommendations in seconds or minutes

• Operate 24/7 without fatigue, unlike human analysts

The Geneva Conventions and their Additional Protocols, the core treaties of the laws of war, require that targeting decisions involve human judgment about factors such as:

• Whether a target is military or civilian

• Expected civilian harm versus military advantage

• Available precautions to minimize civilian casualties

• Timing and methods of attack

Related News

The United Nations has begun discussions about regulating autonomous weapons systems, with some countries calling for a complete ban on “killer robots” that can select and attack targets without human control. The International Committee of the Red Cross has published guidelines emphasizing that humans must retain control over life-and-death decisions.

Tech companies face increasing pressure about military uses of their AI technology. Some employees at major tech firms have protested contracts with military organizations, arguing their work shouldn’t contribute to warfare. This has sparked debates about the responsibility of technologists in how their creations are used.

Summary


The use of AI in warfare represents one of the most significant changes in how conflicts are fought, raising fundamental questions about human control, accountability, and protection of civilians. While AI can potentially make military operations more precise, it also risks removing human judgment from life-and-death decisions. Legal scholars like Khalil Dewan argue that international law must evolve quickly to address these new technologies before they become normalized in warfare. The challenge is ensuring that efficiency gains from AI don’t come at the cost of humanitarian protections that have taken centuries to establish.

Public Reaction

Human rights organizations have expressed alarm about AI warfare, with Amnesty International and Human Rights Watch calling for strict regulations. Tech workers have organized petitions against military AI projects, while some argue that AI could actually reduce civilian casualties if used properly. Military officials from various countries defend AI as a necessary tool in modern conflicts, creating a complex debate about technology, ethics, and security.

Frequently Asked Questions

Q: What exactly is AI surveillance?
A: AI surveillance uses computer programs to automatically analyze video feeds, communications, and other data to identify patterns or specific individuals. Unlike traditional surveillance where humans watch screens, AI can monitor thousands of feeds simultaneously and alert operators to anything it considers important.
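To make the idea concrete, here is a minimal, purely illustrative Python sketch of an automated alerting loop. Every name in it (the Alert record, the detect_objects function, the 0.8 score threshold) is a hypothetical placeholder invented for this example, not a description of any real system; the point is only that one program can scan many feeds and queue flagged items for human review.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

@dataclass
class Alert:
    feed_id: str       # which camera or sensor produced the frame
    label: str         # what the detector thinks it saw
    confidence: float  # detector score between 0.0 and 1.0

def scan_feeds(
    feeds: Iterable[Tuple[str, bytes]],
    detect_objects: Callable[[bytes], List[Tuple[str, float]]],
    threshold: float = 0.8,
) -> List[Alert]:
    """Run a (hypothetical) detector over many feeds and queue alerts.

    A single loop like this can cover far more feeds than a human
    watching screens; everything above the score threshold is queued
    so that a person can review it afterwards.
    """
    alerts: List[Alert] = []
    for feed_id, frame in feeds:
        for label, confidence in detect_objects(frame):
            if confidence >= threshold:
                alerts.append(Alert(feed_id, label, confidence))
    return alerts
```

The step this sketch leaves out is exactly the one the article is concerned with: what happens when the review queue grows faster than human reviewers can clear it.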

Q: How is AI different from drones or other military technology?
A: While drones are tools controlled by humans, AI systems can analyze information and make recommendations or decisions on their own. It’s like the difference between an ordinary car (which you drive) and a self-driving car (which makes its own driving decisions). The concern is about how much decision-making power we give to machines in warfare.

Q: Why can’t we just program AI to follow the laws of war?
A: The laws of war often require complex judgments about context, intention, and proportionality that are difficult to translate into computer code. For example, determining if someone is a civilian or combatant might depend on subtle factors that change with circumstances. AI struggles with this kind of nuanced decision-making that humans do naturally.
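As a purely illustrative sketch (the rule and the categories below are invented for this example, not taken from any real system or legal text), consider how little of the legal judgment survives when a question like “combatant or civilian?” is reduced to fixed inputs:

```python
def naive_status_check(carrying_weapon: bool, near_military_site: bool) -> str:
    """A deliberately naive rule, for illustration only.

    Real legal judgment depends on context these inputs never capture:
    intent, whether the person is directly participating in hostilities,
    whether the 'weapon' is a farm tool, whether the 'military site'
    is also a school or hospital.
    """
    if carrying_weapon and near_military_site:
        return "possible combatant"
    return "presumed civilian"

# The same observable facts can describe a fighter or a farmer walking home;
# the legally correct answer depends on circumstances that never appear
# in the function's inputs.
```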
