A Kargu-2 was used in Libya during a March 2020 skirmish, according to a June 2021 NPR report that cited a United Nations Security Council briefing.

Turkish company STM makes the Kargu-2, a lethal autonomous weapons system (LAWS). The company describes the weapon as “a rotary wing attack drone that has been designed for asymmetric warfare or antiterrorist operations.”

According to the UN report, these systems were “programmed to attack targets without requiring data connectivity between the operator and the munition.”

Granted, it is unclear whether the Kargu-2 was under the control of a human operator during the attack or whether it was operating as a LAWS, also known as an autonomous weapons system (AWS).

The U.S. Department of Defense defines an AWS as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator.”

We have seen such systems deployed in fixed defensive positions over the last decade, but we are now seeing them developed for offensive purposes.

Before discussing the ethics of these systems, we need to be clear about what they are, because automatic and remote systems are often confused with autonomous systems.

An automatic weapon system is one that is set in place and then activated by a triggering event. Antipersonnel mines are a good example: once set, they detonate when someone or something steps on them.

While such systems are useful in a defensive position, they carry moral implications. Most notably: what happens to the leftover mines after the conflict is over? In 2016, 8,605 people were killed or injured by leftover mines in dozens of countries.

A remote system is a device that is fully controlled by a human being from another location. We think of these as drones or radio-controlled vehicles. The operator may or may not be in the theater of operations, but they remain in control of the device and decide how, or whether, to use its capabilities.

The Pioneer drones, which made headlines during the first Gulf War, are a classic example. But since 1991, remote systems have become commonplace on the battlefield.

While these systems help to save lives by keeping personnel out of harm’s way, they raise concerns about how to mitigate collateral damage to civilians, who are often caught in the line of fire.

An autonomous system is different: it is a programmed device that operates according to the code and algorithms created for it. There is no fully autonomous AWS, since artificial intelligence has not reached the point where systems can write their own code.

An AWS is therefore forced to run a preset program and, like all programs, it has limitations. The program can evolve and incorporate new information, but the device cannot override or change the basic set of rules established by its programmer.

Therein lies the moral problem. Can an AWS perform the kind of advanced moral reasoning needed to apply traditional frameworks of engagement such as just war theory?

Just war theory received its classic expression from St. Thomas Aquinas in the 13th century. While earlier Christian and Arabic thinkers had discussed limits on combat and possible justifications for war, it was Aquinas who gave the theory its enduring formulation.

That formulation has been the dominant framework for military ethics in the Christian and Western traditions, and it rests on six basic requirements:

1. The armed conflict must have a just cause.

2. Armed conflict must be a last resort.

3. The conflict must be declared by a proper authority.

4. The proper authority and the combatants must have the right intentions.

5. There must be a reasonable chance that the conflict will end the current injustice or suffering.

6. The ends of the conflict must justify the means used (the principle of proportionality).

The question for ethicists is, “Can an AWS apply these principles in real-time combat?”

If an eight-year-old is pointing a handgun, will an AWS see that child as a combatant or as a kid who has been enslaved and forced into armed servitude?

How can an AWS make a moral decision if terrorists are using innocent people as human shields?

How does an AWS differentiate between allies and adversaries? If two opposing armies are battling each other but their theaters of operation overlap, who is the identified enemy?

These are questions that involve intuition and human reasoning and, more importantly, compassion.

While an AWS might be immune to violating the first four principles (1-4) of just war theory, it will always be plagued by the possibility, perhaps even the likelihood, of violating the last two (5 and 6).

An AWS cannot, and should not be expected to, predict the future. Nor can it judge whether justice has been served or how to measure suffering.

Both of these principles require a degree of compassion – something that cannot be programmed – and are probably the most important elements of the classical expression of just war theory. Without them, it is ridiculous to think of an AWS operating in a moral fashion.

While there may be a place for limited autonomous systems in modern warfare, the faith and moral community needs to take the lead and start asking difficult questions of our military leaders about the technologies being developed and deployed.

Over 30 countries, along with Amnesty International and Human Rights Watch, have expressed concern over the continued development of AWS, and many are calling for an international ban.

All new weapon systems, from the slingshot to nuclear weapons, have created unforeseen consequences. AWS are no different.

They need to be rationally evaluated so as to mitigate the suffering of civilians that always accompanies armed conflict.
