
Earlier this month, +972 Magazine and Local Call published the accounts of six Israeli intelligence officers who claim Israel has been using an artificial intelligence (AI) system known as Lavender to help select targets for military strikes in Gaza.
The Lavender system was allegedly designed to sift through mountains of data and select suspected Hamas and Palestinian Islamic Jihad (PIJ) operatives for potential military airstrikes. According to sources, the system had identified 37,000 potential targets linked to Hamas or PIJ.
Because there were so many potential targets at the beginning of the war, intelligence officers could not spend much time reviewing each proposed airstrike, and approval became essentially a rubber stamp for Lavender’s recommendations.
This rapid pace of decision-making is significant because officers knew the AI system was prone to misclassifying 10% of cases, tagging individuals who had only a loose connection to Hamas or PIJ.
Potential targets selected by Lavender were then tracked by another AI system known as “Where’s Daddy,” which used the geolocation feature of modern cell phones to confirm an individual’s location. The target was then tagged for an airstrike.
The Israeli Defence Forces would ultimately drop bombs, leveling buildings and killing the targets. The benefit of this approach was the efficiency with which a target’s location could be confirmed.
However, the unfortunate side effect was that any civilians inside the structure would be killed or wounded. Because these strikes often occurred at night, the target was usually at home, meaning the target’s spouse, children, or unrelated civilians were killed as well. The result has been a disproportionate level of collateral damage, mostly women and children not involved in the fighting.
The report parallels a November statement from the Israeli military, which admitted to using yet another AI system, known as “The Gospel,” to select targets. The two systems take different approaches to target selection: Lavender targets individuals with Hamas or PIJ associations, while The Gospel looks for buildings and structures that militants potentially use.
Whatever one makes of the +972 report and the claims of the six Israeli intelligence officers, it is clear that we have been in a new era of warfare for some time. It would be foolish to assume Israel is the only nation to possess, and be willing to use, such technology.
In 2020, potentially autonomous Turkish-made Kargu-2 attack drones were deployed in Libya. It is also well known that China has operated its Skynet system since at least 2006, using facial recognition and tens of millions of cameras to track dissidents and potential criminals.
In the US, the Geofeedia system was used to track unrest as far back as 2016. The extent to which AI monitoring and predictive systems are in the hands of the American or Russian militaries is unknown.
AI is already used for facial recognition, financial data analysis and even behavioral analysis. These systems can crunch billions of data points from all aspects of life in seconds, making law enforcement and the military more efficient. But what is the trade-off?
It is unclear how autonomous some of these systems can become. Even if the +972 report is exaggerated, Israel’s systems still have the potential to function autonomously.
This could mean an AI algorithm selecting targets, with those selections automatically translated into orders and sent to other semi-autonomous systems or to human forces already deployed, all with minimal human approval.
The problem is not necessarily the use of algorithms but the lack of human involvement and reflection. Unlike computers, humans have emotions that help keep us in check.
All decisions come at a cost. Even when a decision is justified or necessary, an emotional burden weighs on the person who makes it.
This moral residue forces us to reflect on our role in the decisions we make. When war becomes automatic, or when human intelligence officers merely rubber-stamp an algorithm’s recommendations, a vital human component is removed from the decision-making process. War becomes cold and sterile, with no one taking emotional or personal accountability for the consequences of decisions.
In essence, the moral residue that forces us to reflect is eliminated. Decision makers potentially lose the ability to feel the sting of difficult decisions and are no longer forced to existentially question themselves or their core values.
This can lead to military thinking that considers only cost-benefit calculations. Collateral damage no longer feels real, and life becomes cheap. The horror here is the anesthetization of war’s sting on the human soul.
This existential understanding of the potential consequences of military decisions has historically been the foundation of just war theory and the principle of proportionality. When military decisions become nothing more than calculations, the door opens to treating enemy combatants and civilians as disposable.
It is time to start asking about the human guardrails on these systems. How are they being used? How much human oversight is there?
We must not lose the human element in military decisions; losing it could open the door to apocalyptic consequences.


