THE question

10/02/2025

Can The Terminator follow the laws of war?  

1. Distinction: Can AI tell who’s a combatant? 

The first principle is distinction: attackers must tell combatants from civilians, and military targets from civilian objects. Autonomous weapons can identify tanks or radars, but not human intent. A fighter in civilian clothes hiding among locals is far harder to spot. When AI misclassifies a civilian as a threat, the result is an indiscriminate attack, which is unlawful under international humanitarian law (IHL).
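To see why this is hard to mechanize, here is a minimal sketch in Python of the engagement gate that distinction implies. Everything in it (the labels, the confidence threshold, the field names) is invented for illustration, not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A detected object, as a targeting system might represent it."""
    label: str           # e.g. "tank", "radar", "person"
    confidence: float    # classifier confidence, 0.0 to 1.0
    civilians_nearby: bool

# Invented values: which labels count as military objects, and how
# confident the classifier must be, are design choices, not settled law.
MILITARY_LABELS = {"tank", "radar", "artillery"}
MIN_CONFIDENCE = 0.95

def may_engage(track: Track) -> bool:
    """Distinction as a gate: engage only a confidently identified
    military object, and never with civilians nearby. In doubt, IHL
    presumes civilian status, so the gate fails closed."""
    if track.label not in MILITARY_LABELS:
        return False
    if track.confidence < MIN_CONFIDENCE:
        return False
    if track.civilians_nearby:
        return False
    return True

# The fighter in civilian clothes shows the limit: the sensor label is
# just "person", so the gate refuses, yet assessing that person's intent
# is precisely the judgment the rule cannot encode.
print(may_engage(Track("person", 0.99, civilians_nearby=True)))  # False
```

The gate itself is trivial; the inputs are not. A label like “person” says nothing about intent, which is exactly where machines fall short.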

2. Precaution: Can killer robots adapt to sudden change?  

Even when a target is lawful, attackers must take constant care to protect civilians. This means adjusting or cancelling attacks if the situation changes on the ground. 

And that’s exactly where lethal autonomous weapons systems (LAWS) struggle. What if civilians suddenly enter the area? What if visibility drops or GPS signals are jammed? Unlike human soldiers, machines can’t always adapt to complex, fast-changing conditions. 

To address this, some countries have begun building in layers of control. Since 2023, the U.S. Department of Defense has required autonomous systems to include “abort” or “suspend” functions when the risk to civilians becomes too high. In 2025, France told the UN’s Group of Governmental Experts (GGE) that human judgment must be involved early on—in how the system is designed, programmed, and deployed, with strict limits on where, when, and how it can be used. 
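What such layers of control might look like in software is sketched below. The states, the checks, and the rule that a human override always wins are assumptions made for illustration; they are not the actual U.S. or French design.

```python
import enum

class EngagementState(enum.Enum):
    TRACKING = "tracking"
    ENGAGING = "engaging"
    SUSPENDED = "suspended"   # a "suspend" function of the kind required since 2023
    ABORTED = "aborted"       # an "abort" function of the kind required since 2023

def control_step(state: EngagementState, *, operator_abort: bool,
                 civilians_entered_area: bool,
                 sensors_degraded: bool) -> EngagementState:
    """One tick of a hypothetical supervisory loop over the weapon.
    Losing any condition that made the attack lawful pauses or cancels
    it, and the human operator can abort at any moment."""
    if operator_abort:
        return EngagementState.ABORTED    # human judgment outranks the machine
    if civilians_entered_area:
        return EngagementState.ABORTED    # precaution: cancel the attack
    if sensors_degraded:
        return EngagementState.SUSPENDED  # e.g. GPS jammed or visibility lost
    return state                          # conditions unchanged: proceed

print(control_step(EngagementState.ENGAGING, operator_abort=False,
                   civilians_entered_area=True, sensors_degraded=False))
# EngagementState.ABORTED
```

The design point is that the system fails toward not attacking: uncertainty triggers suspension, never escalation.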

3. Proportionality: Can AI judge when harm outweighs gain? 

Even if a target is legal and precautions are taken, an attack is still unlawful if the expected harm to civilians would be excessive compared to the military advantage gained. 

This is the principle of proportionality—and it’s one of the hardest for machines to apply. Why? Because it requires human judgment in complex, real-time contexts. There’s no simple formula to weigh lives lost against a military objective. 

Some experts warn that we shouldn’t trust systems that can’t explain how they make decisions. The UN Institute for Disarmament Research (UNIDIR) has called for banning “non-interpretable” AI—so-called black box systems that make calculations we can’t understand or audit. 

Field tests by the Stockholm International Peace Research Institute (SIPRI) in 2024 showed the stakes clearly: even a small miscalculation in estimating civilian presence could tip an otherwise lawful strike into illegality. Put simply: when the margin for error is tiny, the cost of trusting autonomous systems may be too high.
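To make that fragility concrete, here is a toy calculation with entirely invented numbers on an invented common scale; IHL defines no such scale, which is the point of the paragraph above.

```python
def proportionality_margin(expected_civilian_harm: float,
                           military_advantage: float) -> float:
    """Purely illustrative: positive means the anticipated advantage
    outweighs the expected civilian harm on an invented common scale."""
    return military_advantage - expected_civilian_harm

# The system estimates 2 civilians at risk against an advantage "worth" 3.
print(proportionality_margin(2, 3))   # 1 -> the strike looks proportionate
# Two civilians the sensors missed, and the same strike becomes excessive.
print(proportionality_margin(4, 3))   # -1 -> now unlawful
```

When the margin sits that close to zero, a sensing error of one or two people decides legality.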

4. Unnecessary suffering: Can machines show restraint? 

IHL also bans weapons that are designed to cause superfluous injury or unnecessary suffering. In short, force must be limited to what’s strictly necessary to achieve a military goal. 

An autonomous system that always chooses to kill—even when it could capture or disable a target—crosses that line. Inflicting greater harm than needed isn’t just unethical; it’s illegal. 

That’s why experts emphasize a key design requirement: these systems must allow for scalable use of force—for example, choosing between warning, disabling, or lethal action—and must be able to reverse or suspend attacks if the situation changes. If a machine can’t do that, it may violate this rule of war by default.
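A minimal sketch of what scalable use of force could mean in code follows; the three rungs and the escalation rule are illustrative assumptions, not a standard.

```python
import enum

class ForceOption(enum.Enum):
    WARN = 1      # audible or visual warning
    DISABLE = 2   # non-lethal: immobilize the target
    LETHAL = 3    # last resort only

def least_force_needed(warning_suffices: bool,
                       disabling_suffices: bool) -> ForceOption:
    """Pick the lowest rung that still achieves the military goal;
    escalate only when a lesser option would fail."""
    if warning_suffices:
        return ForceOption.WARN
    if disabling_suffices:
        return ForceOption.DISABLE
    return ForceOption.LETHAL

# A system hard-wired to return ForceOption.LETHAL regardless of input
# is exactly the design the rule against unnecessary suffering forbids.
print(least_force_needed(warning_suffices=False, disabling_suffices=True))
# ForceOption.DISABLE
```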

In sum 

A Terminator-like machine could only comply with IHL under very strict conditions: operating in controlled environments, under constant human supervision, with clear limits on use, the ability to pause or stop attacks, and transparent decision-making. Without these safeguards, unpredictability and lack of accountability would make such weapons unlawful. In short: the more humans remain in control, the more these systems can fit within the laws of war. Without that control, the cyborg is an outlaw. 


This debate goes beyond law into what it means to be human. My colleague at the SKEMA Centre for Artificial Intelligence, Margherita Pagani, shared insights inspired by Valentina Pitardi, who notes that people are more likely to obey a sign than a robot, a tendency that would be especially desirable in a Cameronian context.

Meanwhile, Professor Vinod Aggarwal reminds us that AI doesn’t just challenge laws; it challenges the very foundations of what it means to be human.

This dilemma even echoes in science fiction beyond The Terminator. Take Darth Vader, for example: a man kept alive by machines. Could he have claimed self-defense when he killed the Emperor to save his son? The podcast linked below will tell you.

Going further

Prof. Valentina Pitardi: “People would rather obey a sign than a robot”

It’s called “interactional justice”

Professor Vinod Aggarwal presents his research to the audience gathered at SKEMA.
Vinod Aggarwal: “AI is a unique technology, it challenges the core of humanity”

Should innovation be regulated?

PODCAST – Could Darth Vader have claimed self-defense?

It’s THE question no one has EVER thought to ask
