February 28, 2026. The First Day of the War. And the Worst Day in Minab's History.
The school was called Shajareh Tayyebeh. In the local language, that means The Good Tree. It was a girls' primary school in the town of Minab, in the south of Iran. On the morning of February 28, 2026, dozens of girls between the ages of 7 and 12 were sitting in their classrooms when the United States and Israel began military strikes across Iran.
Life in Minab that morning was almost normal. Children had gone to school. Cars were moving on the roads. Parents — hearing the war had started — had begun walking toward the school to bring their daughters home. Some of them were still on their way when the missiles hit.
Three separate missiles struck the same building. The roof collapsed onto the children inside. Iranian authorities put the final death toll between 165 and 175 people. Most of them were girls who had come to school that morning to learn. It was the deadliest single strike on civilians in the entire war.
The School Had Been a School for Ten Years. The Military's Records Said Otherwise.
To understand how this happened, you need to understand where the school was located — because that is the heart of everything.
The school building stood next to an Iranian military compound. Years earlier, the compound and the school building were part of the same site. But in 2016 — a full ten years before the strike — a wall was built separating the school from the military area. The school got its own entrance from the public street. A children's playground was visible from the road. There were no military checkpoints. Anyone walking past, or looking at a satellite image, or doing a basic internet search, could see it was an ordinary school.
But the US military's targeting database — the official list of buildings and coordinates used to plan strikes — had never been updated. In that database, the school building was still recorded as part of the military compound. And so when officers planned their strikes on the first morning of the war, the school was on the list.
The preliminary military investigation — confirmed by CNN, the New York Times, NBC News, and others — found that US Central Command created the strike coordinates using outdated information provided by the Defense Intelligence Agency. That agency's records still listed the school building as part of a military target. Nobody had updated them in at least a decade.
Even a basic internet search would have shown the school. Satellite imagery available to anyone online clearly showed the separation since 2016. The information needed to prevent this strike existed. It simply was not in the system that the military used to make its decision.
Former military officers told reporters: had anyone identified the building as a school during the normal review process, it would have been removed from the target list immediately. That review did not catch it.
There is one detail that makes this even harder to understand. Investigators found that a medical clinic located between the military base and the school was not struck. The targeting system was sophisticated enough to identify the clinic as a protected place. But it was working from ten-year-old records — records that said the school next door was still a military building.
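The failure is easy to state in software terms: a record with no freshness check. The sketch below is purely illustrative, invented for this piece rather than drawn from any real targeting system; the record fields, identifiers, and dates are all assumptions. It shows the kind of simple rule that would have flagged a coordinate nobody had re-verified in a decade.

```python
from datetime import date, timedelta

# Hypothetical record shape, invented for illustration; not a real schema.
targets = [
    {"id": "compound-114", "category": "military compound",
     "last_verified": date(2016, 3, 1)},
    {"id": "clinic-07", "category": "protected: medical",
     "last_verified": date(2026, 2, 20)},
]

MAX_AGE = timedelta(days=365)  # anything unverified for a year needs fresh eyes

def needs_reverification(record: dict, today: date) -> bool:
    """Flag records whose last human verification is older than MAX_AGE."""
    return today - record["last_verified"] > MAX_AGE

for record in targets:
    if needs_reverification(record, today=date(2026, 2, 28)):
        print(f"{record['id']}: last verified {record['last_verified']}; hold for review")
```

A rule this simple would have held the decade-old compound record for human review while letting the freshly verified clinic record through. The point is not that the fix was easy; it is that the check never existed.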
Everyone Blamed the AI. The Truth Is More Uncomfortable Than That.
In the days after the strike, one explanation spread quickly: artificial intelligence had targeted the school. The story felt logical on the surface. The US military had confirmed it was using AI during the Iran campaign. The strike happened at extraordinary speed. AI was involved. So AI must have caused this.
The actual investigation found something different — and in some ways, more troubling.
Journalists at Semafor, speaking to former military officials and people familiar with the operation, reported plainly: "The error was one that AI would not be likely to make." AI systems can process enormous amounts of current information — satellite images, public records, internet data — far faster than any human team. If the AI had been given accurate, up-to-date information, it likely would have flagged the difference between the 2016 map and the current reality. The problem was not the AI. The problem was that the humans responsible for keeping the database current had not done so in ten years.
What AI did do — and this is the part that matters — was accelerate the entire process. In the first eleven days of the war, the military carried out 5,500 strikes. That pace is only possible with AI processing data at a speed no human team could match. The satellite image showing the school intact was captured at 10:23 in the morning. By 10:45 — just 22 minutes later — three missiles had already hit it.
AI did not choose to hit a school. But AI-powered speed, fed with ten-year-old human data, compressed the window in which any person could have paused, questioned the target, and said: wait — is this still right?
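The arithmetic behind that compression is worth doing once, explicitly. Assuming the 5,500 strikes were spread evenly across the first eleven days (an assumption; the real tempo certainly varied), a quick calculation shows how little room the pace left for anyone to pause:

```python
strikes = 5_500   # strikes in the first eleven days of the war
days = 11

per_day = strikes / days                       # 500 strikes per day
minutes_per_strike = days * 24 * 60 / strikes  # ~2.9 minutes per strike, on average

print(f"{per_day:.0f} strikes per day")
print(f"one strike every {minutes_per_strike:.1f} minutes, around the clock")
```

Against that average, the 22 minutes between the 10:23 satellite image and the impact was actually a long window by the campaign's own standards. It still was not long enough for anyone to ask whether the record was right.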
It is really, really important when you are killing people that a human makes the life-and-death decisions. And they should be fully informed decisions — not based on unreliable information.
— Peter Bentley, Computer Scientist, University College London

More than 120 members of the US Congress wrote directly to the Defense Secretary asking: what role did AI play in selecting this target? What role did it play in the legal review that is supposed to happen before any strike? Those questions have not been answered publicly.
A Company Agreed to Help the Military. Then the Military Asked for Something They Could Not Give.
In July 2025, seven months before Minab, a company called Anthropic signed a $200 million deal with the US military. They let their AI be placed inside the military's most secure computer systems, systems so locked down that even Anthropic itself could not reach in or change anything once their AI was running inside them.
Then the military came back and asked for one additional clause: the AI should be available for "any lawful purpose." To Anthropic, those three words opened the door to two specific things: weapons that decide to fire without a human making that individual call, and the silent mass surveillance of ordinary citizens.
Anthropic said no. Not to helping the military. To those two specific things.
The response: the President ordered all agencies to stop using Anthropic. Then the military officially labeled Anthropic a "supply chain risk to national security" — the first time in history that label had been used against an American company. Every company that works with the military and also uses Anthropic's AI now had to choose between the two. For most, that meant losing Anthropic. The financial damage runs into billions of dollars.
The Government Declared Them a Threat. Then Privately Said They Were Nearly Aligned.
On March 3, the Pentagon made it official, formally declaring Anthropic a national security threat.
On March 4 — the very next day — a senior Pentagon official sent a private email to Anthropic's CEO saying the two sides were "very close" on the exact issues that had just been cited as proof of the threat.
Two days later, that same official posted publicly that there were no negotiations at all. A week after that, he said there was no chance of any talks ever resuming.
You cannot privately say you are nearly aligned with someone and simultaneously declare them a national security threat to the rest of the world. That email, now presented in court, is the kind of evidence lawyers call a devastating contradiction.
Anthropic also brought a technical argument. The government claimed to fear that Anthropic might secretly switch off or alter its AI during a military operation. Anthropic's own engineers explained why this is impossible: once the AI is running inside those locked-down, internet-disconnected military systems, Anthropic cannot see anything, cannot change anything, and cannot disable anything. There is no remote access. The military's own security guarantees it.
Then a Competitor Walked In — And Said the Exact Same Things.
Hours after Anthropic was blacklisted, OpenAI — the company behind ChatGPT — announced a new deal with the Pentagon. They would fill the gap Anthropic left. Then OpenAI published what they had agreed to. And it included the exact same restrictions Anthropic had just been destroyed for holding.
No autonomous weapons. No mass domestic surveillance. No high-stakes automated decisions without a human genuinely in control.
Even OpenAI's own CEO said his company's move looked "opportunistic and sloppy." The ethics were identical. The positions were identical. The only difference was that one company said no first — and was punished for it. A judge in San Francisco is deciding right now whether that punishment was legal.
Two Stories. One Question That Every Country on Earth Now Has to Answer.
The Minab school strike and the Anthropic blacklisting happened in the same week. Same technology. Same unresolved question. And that question belongs not just to America but to every country, every democracy, every ordinary person living under a government that will increasingly use AI in the decisions that affect the most serious things in life — including war.
The argument for using AI: Modern warfare generates more data than any human team can process. Signals, satellite images, communications, movement — the volume is simply too large. AI can help analysts make sense of it faster. In theory, better information could mean fewer mistakes.
The argument for serious caution: AI is only as good as what you feed it. If the information is wrong or outdated, the AI processes that error at full speed — and the humans who are supposed to catch it have less and less time to do so as the pace of operations increases. Minab happened in a 22-minute window. That is not enough time for careful, accountable human review.
The ideal: Every target suggestion from the AI is reviewed by a person who has time, current information, and the authority and courage to say no. The AI helps with analysis. A human makes the final call with real accountability. This is what Anthropic was insisting on, and what it was blacklisted for.
The likely reality: A human technically approves each strike, but the pace of operations means reviewing hundreds of targets in minutes, trusting AI outputs without real time to question them. Most military experts believe this is what is already happening. Minab is the result.
The fear: Fully autonomous systems that select and strike targets with no human decision at all. No government publicly admits to doing this. Anthropic's red line was designed specifically to prevent it. The Pentagon's demand for "any lawful purpose" was the first step toward making it possible.
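One way to make the three scenarios above concrete is as a gate in an approval pipeline. The sketch below is a thought experiment, not anything Anthropic, the Pentagon, or any military has published; the thresholds, field names, and reviewer are all invented. It encodes the difference: full autonomy and rubber-stamp review both fail the gate, and only a reviewer with real time and fresh intelligence passes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    reviewer: str          # a named, accountable human being
    seconds_spent: float   # wall-clock time the reviewer actually had
    intel_age_days: int    # age of the intelligence behind the target

# Illustrative thresholds only; no doctrine or policy specifies these numbers.
MIN_REVIEW_SECONDS = 120
MAX_INTEL_AGE_DAYS = 30

def release_authorized(review: Optional[Review]) -> bool:
    """Refuse release under full autonomy, rubber-stamp review, or stale intel."""
    if review is None:
        return False  # the feared scenario: no human decision at all
    if review.seconds_spent < MIN_REVIEW_SECONDS:
        return False  # the likely reality: a human in name only
    if review.intel_age_days > MAX_INTEL_AGE_DAYS:
        return False  # the Minab failure: a decade-old record
    return True       # the ideal: meaningful human control

print(release_authorized(None))                           # False
print(release_authorized(Review("Maj. Example", 8, 2)))   # False
print(release_authorized(Review("Maj. Example", 300, 2))) # True
```

The code is trivial. That is the point: the hard part was never the logic, it was whether anyone is required to run it.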
These Are Not Two Separate Stories. They Are the Same Warning.
Anthropic's AI was reportedly being used in the Iran campaign, for intelligence analysis and target identification, at the time the Minab school was struck. The strike happened on February 28. The blacklisting came on March 3. Three days apart.
The company insisting that AI must not make lethal decisions without a human accountably in the loop was punished for that insistence — days after the clearest possible real-world demonstration of what happens when those safeguards are not in place.
The children of Minab died because a record was never updated. A company lost everything for saying: records like that are exactly why a human must always be the one to make the final call.
The world has not answered the question these two stories are asking together. There are no agreed international rules for AI in warfare. No agreement on who is responsible when AI-assisted decisions kill the wrong people. No oversight structure equal to a technology this powerful, used at this speed.
Until those things exist, what happened in Minab will not be the last time. It will simply be the first documented case in a longer and more painful list.
The lesson of Minab is not that AI is dangerous. The lesson is that speed without accuracy, and power without accountability, is always dangerous — whether a human or a machine is behind it.
— The thread connecting both stories in this decode