Feature: Why are so many civilians killed in the era of AI-assisted targeting?
On Saturday, US-Israel airstrikes left 165 people dead and dozens wounded at a girls’ school in Minab, following escalating tensions over Iran’s nuclear enrichment programme. It was one sprawling tragedy amid a colossal offensive.
While officials have neither confirmed nor denied that the school was deliberately targeted, Israel said it had completed its ‘biggest air force operation in the country’s history, with about 200 fighter jets hitting about 500 targets in western and central Iran.’ Donald Trump said the ‘US-led operation’ seeks to simultaneously eliminate Iran’s nuclear missile programmes, destroy its navy and change the country’s leadership.
The girls’ primary school, just 62 metres from an Iranian military base, was described by an Iranian official as having been ‘targeted by three missile attacks.’ Targeting civilians in conflict is a crime under international humanitarian law, as is ‘using military force against other countries’ in acts of unprovoked aggression – a fact the US is newly sensitive to following criticism of its Venezuela campaign in January.
But in an era of AI-assisted missile targeting, why are so many civilians being killed?
Last week Anthropic, a US Department of Defense supplier, refused to sign a waiver allowing the government, in principle, to use its AI tools for ‘mass domestic observation’ and ‘fully autonomous weapons,’ thereby prising open the public debate around the ethics of lethal autonomous weapon systems (LAWS) such as missiles and drones. Anthropic CEO Dario Amodei asserted in a follow-up YouTube video that any such agreement would be ‘crossing red lines’ and was ‘contrary to American values.’
This dilemma feeds deeply into our established cultural narratives, from the nefarious antics of the HAL 9000 computer in Stanley Kubrick’s film 2001: A Space Odyssey, to a civilisation made subordinate to technology in Aldous Huxley’s Brave New World. You would be forgiven for assuming that the use of AI to identify targets and launch missiles eliminates moral, or at least sanctioned and sentient, human decisions from the process.
Currently, none of the US, Israel, Russia or Ukraine, for example, is using ‘fully autonomous’ missiles, although the testing and use of increasingly autonomous drones is widely reported in the Russia-Ukraine conflict. Regardless, it feels important that the public be educated on the automated-to-autonomous weapons trajectory governments are navigating to defend their people.
For example, the term ‘in the loop’ refers to AI in which a human operator must initiate and approve any targeting or combat engagement decision. ‘On the loop’ means a human supervises the system and can intervene, but need not approve each action. ‘Out of the loop’ means an AI system operates with complete independence and autonomy once triggered.
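To make the distinction concrete, here is a minimal illustrative sketch in Python – a toy model of the three oversight modes, not a representation of any real weapon system’s architecture:

```python
# Illustrative only: a toy model of the three human-oversight modes.
from enum import Enum

class OversightMode(Enum):
    IN_THE_LOOP = "human must approve each engagement"
    ON_THE_LOOP = "human monitors and may veto"
    OUT_OF_THE_LOOP = "system acts autonomously once triggered"

def may_engage(mode: OversightMode, human_approved: bool = False,
               human_vetoed: bool = False) -> bool:
    """Return True if an engagement may proceed under the given oversight mode."""
    if mode is OversightMode.IN_THE_LOOP:
        # Nothing happens without explicit, affirmative human approval.
        return human_approved
    if mode is OversightMode.ON_THE_LOOP:
        # Proceeds by default unless a supervising human intervenes.
        return not human_vetoed
    # Out of the loop: once triggered, there is no human gate at all.
    return True
```

In this toy framing, the moral weight sits in the final branch: out of the loop, there is simply no human gate for a decision to pass through.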
According to the UAE-based think-tank Trends Research and Advisory,** Israel Aerospace Industries’ ‘Harpy’ – which combines the features of an unmanned aerial vehicle and a missile – is a clear example of out-of-the-loop autonomy. The US uses machine learning (a subset of AI) in Project Maven to analyse drone and surveillance footage and flag potential objects of interest, and the IDF uses systems like ‘Lavender’ and ‘The Gospel’ to rapidly generate target recommendations for air strikes.
These systems absorb a humanly incomprehensible quantity of environmental information to define targets – adapting to variations such as heat and speed, and even to enemy electronic countermeasures designed to confuse intelligence systems. In this sense, the sophistication of contemporary targeting AI delivers a level of accuracy that earlier ‘classical AI’ could not. Trends says:
‘This is an improvement over automated machines because autonomy might be able to tell the difference between civilians and combatants and stop targeting civilians by mistake.’
By this rationale, the positive differences between the machinery of classic warfare and semi-autonomous AI also sit at the centre of the moral and legal debate. But while AI assistance may be capable of saving more lives, it also generates more possible targets for lethal aggression… at speed.
Yet this next-gen technology is not immune to mistakes. It is considered more vulnerable than its predecessors to cyber-attacks that could compromise systems, communications and whole missions. Moreover, like people, a missile’s sensors and ‘seekers’ can be thrown off by weather conditions and noise, as well as by ‘spoof’ signals from enemies.
Probes into the myriad reasons for tragedies such as the Minab school strike(s) are therefore not just desirable but fundamental to controlling how fast-evolving combat AI is harnessed. Ernesto Damiani writes for the Italian Institute for International Political Studies:
‘In the absence of specific multilateral regulations, the development and use of potentially lethal autonomous attack systems could create for the countries that produce and adopt them humanitarian, legal, and ethical controversies.’
In his 2023 policy brief A New Agenda for Peace, UN Secretary-General António Guterres recommended that member states conclude, by 2026, a legally binding instrument to ban lethal weapon systems that operate without human control or supervision. Negotiations are expected to take place next year…
Here in the UK, the government maintains that existing international humanitarian law (IHL) provides a sufficient framework for control, arguing that a new treaty might prohibit ‘undefined’ and (presumably) as-yet-undeveloped systems. Meanwhile, the MoD has indicated that lethal autonomous weapon systems should be embraced to support compliance with IHL, demonstrating a commitment to research and development that intensifies the need for policymakers and the public to be kept ‘in the loop.’
Another, more accessible AI tool, ChatGPT, informs me that ‘Errors in intelligence or identification — not technical guidance — are often the critical factor when civilians are killed.’
But is one application to be trusted?
TF
** Trends is affiliated with UAE government agencies.


