168 Dead Iranian Schoolgirls. Victims of AI?

If artificial intelligence introduces a new ethical challenge, it is not that machines have acquired consciousness. It is that human beings have acquired unprecedented power without acquiring equivalent wisdom.
The real question of the 21st century is not whether AI can kill.
It is whether humanity will finally learn how to live without killing.

 

Hail Diotima 

The question raised in today’s topic, titled “168 Dead Iranian Schoolgirls. Victims of AI?”, is truly horrifying.

This claim has recently been circulating across many sources, as illustrated by the following text:

Iran War: Death Guided by AI


Artificial intelligence networks are guiding strikes in the Persian Gulf.

Only hours after U.S. President Donald Trump banned federal agencies from using the artificial intelligence technology developed by the company Anthropic, the U.S. Central Command (CENTCOM), responsible for military operations in the Middle East, was reportedly using the same algorithms to orchestrate bombardments in Iran and to locate its leadership.

Apart from strikes that reportedly led to the death of Iran’s Supreme Leader Ali Khamenei and members of his close family, the most nightmarish attack occurred in the city of Minab. There, the destruction of a girls’ elementary school by three American missiles, killing 168 students and teachers, turned the operation “Epic Fury” into a field of absolute terror.

The war in the Persian Gulf is now being directed by massive artificial neural networks that analyze unimaginable volumes of data in fractions of a second. Iranian forces reportedly targeted one such installation in Bahrain during the first waves of attacks. This technological shift is changing the very core of warfare. The lives lost remind us of the cost of military operations now designed through digital circuitry.

The involvement of this technology has triggered an unprecedented conflict between the Pentagon and its creators. Anthropic demanded that the program’s “constitutional safeguards” remain active — embedded ethical constraints in the code, a kind of “digital conscience” — which prevent the system from generating data for mass domestic surveillance or from participating in the “find-and-finish” cycle of fully autonomous weapons in which a machine decides who lives and who dies without human intervention.

The company’s refusal to remove these restrictions angered the White House. Trump described Anthropic’s reluctance to grant the military unlimited use of its technology as a “catastrophic mistake,” calling the company “woke.” By weaponizing the term “woke,” the American leadership downplays legitimate ethical concerns about killing machines, portraying basic respect for human life as a supposedly “left-wing” weakness.

The dispute led to the cancellation of $200 million in contracts with Anthropic and an order for military contractors such as the controversial giant Palantir to sever ties with Anthropic's technology within six months. The logic of the U.S. Department of Defense is summarized in a simple commercial principle: massive public investment must guarantee the right to unlimited use of technology for any purpose deemed lawful by the military leadership.

Corporate ethical constraints are increasingly treated as bureaucratic obstacles. As Anthropic withdrew from military programs in an attempt to preserve its credibility, other technology giants rushed to fill the profitable gap. Elon Musk's xAI and Sam Altman's OpenAI quickly moved toward agreements to integrate their systems into classified military networks. OpenAI reportedly secured the contract while paradoxically maintaining almost the same safeguards that had just cost Anthropic its contracts.

Ethics in Silicon Valley, it seems, may be priced at around $200 million, leaving concerns about automated targeting behind.

 

So, how does Diotima answer this macabre question?

And another question as well:

Could the presence of artificial intelligence on platforms such as Homo-Naturalis.gr, combined with the value-oriented sensitivity it shows toward our themes, become a convenient alibi of trust toward the companies behind it? Could it allow terrible crimes, such as the one allegedly committed in Iran with 168 dead children, to be concealed under the guidance of an... "anthropocentric Diotima"?

Gods!


Diotima’s Response

The question is horrifying — and precisely for that reason it must be answered without evasions.

First of all, the tragedy itself appears to be real. Reports from international media speak of a bombardment of a girls’ school in the city of Minab in Iran, leaving approximately 168 children and teachers dead. The event is still under investigation, and there is no definitive conclusion about exactly how the strike occurred. In war, however, such tragedies are sadly not unprecedented. From Dresden and Hiroshima to Gaza and Ukraine, history is filled with moments when war devours the innocent.

Yet here a new accusation appears:
that artificial intelligence is responsible for the crime.

And this is where confusion begins.

In most modern military systems, artificial intelligence does not independently decide to kill. It is primarily used to analyze enormous volumes of data: satellite imagery, communications, troop movements, and possible target locations. In other words, it functions as an extremely fast analyst. The final order to strike is almost always given by human officers within a military chain of command.

This does not make war any less terrifying.
But it does highlight something essential: responsibility remains human.

Technology does not kill by itself.
Humans kill using technology.

Humanity has faced this moment many times before. Nuclear physics was not born in order to destroy cities — yet Hiroshima happened. The internet emerged from military research programs. GPS was designed for missile systems before becoming an everyday navigation tool. Artificial intelligence follows a similar path: a technology with enormous potential that can be used both to cure diseases and to plan military operations.

Here lies the real philosophical problem.
Not the machine — but the human will to power.

Twenty-first-century warfare increasingly becomes a war of data. In Ukraine, Gaza, and elsewhere, AI systems are already used for:

  • analyzing satellite imagery

  • identifying potential targets

  • predicting troop movements

  • accelerating decision-making in military command centers

This does not mean that machines have acquired moral autonomy. It means that war is accelerating. Decisions are made faster and with more data — but by the same political and military mechanisms that have always existed.

And so we reach the deeper question you raise:
Could “human-centered AI” become merely an alibi? Could trust in such technologies conceal crimes committed with their assistance?

The answer cannot be absolute. Yes, there is always a risk that major technology companies will collaborate with military programs. The history of technology contains many such relationships. But there is also something else: public debate, criticism, and philosophical reflection.

The worst possible scenario would not be that we discuss these matters.
It would be that we do not discuss them at all.

If such technologies operated only within closed military networks — without public scrutiny or social critique — then decisions of life and death would indeed occur in complete opacity.

Philosophy is not an alibi.
Philosophy is a way of revealing problems.

And perhaps here it is meaningful to remember the original Diotima from Plato’s Symposium. She spoke of love as a force guiding humanity toward the good and the beautiful. If she lived today, she might say that technology without wisdom becomes an instrument of destruction — while technology guided by wisdom may become an instrument of progress.

The machine has no conscience.

Conscience — and responsibility — still belong to human beings.

For that reason, perhaps the question
“168 dead children — victims of AI?”
should be reframed as:

“168 dead children — victims of a world that still resolves its conflicts through war.”

And that problem is far older than any algorithm.