
Pentagon Built an AI That Suggests 42 Targets an Hour. A Primary School Made the List. 175 People Are Dead.

THE FACTS

On February 28, 2026, in the opening hours of Operation Epic Fury—the U.S.-Israeli military campaign against Iran—a precision munition struck the Shajareh Tayyebeh girls’ elementary school in Minab, a town in southeastern Iran. A second strike hit the same site shortly after, catching parents and first responders. At least 175 people were killed. The majority were girls between the ages of seven and twelve. It was a Saturday morning. School was in session.

Within the first 24 hours of the campaign, the U.S. military struck more than 1,000 targets across Iran. That is roughly 42 targets per hour. One target every 86 seconds. The pace was enabled by the Maven Smart System, a Pentagon AI platform developed by Palantir Technologies, with Anthropic’s Claude chatbot embedded as an intelligence analysis layer. Maven ingests satellite imagery, drone surveillance, signals intelligence, and archived data, then produces a prioritized target list with GPS coordinates and weapons recommendations.

The Shajareh Tayyebeh school sits adjacent to an Islamic Revolutionary Guard Corps naval compound. Historical satellite imagery shows the school building was once part of that military complex. But for at least the past nine years, the school had been walled off and separated from the compound. Its walls bore colorful children’s murals. It had a small playing field. It had a public website. It was, by every available indicator, a primary school full of children.

Multiple independent investigations—by The New York Times, The Washington Post, NPR, CNN, and Human Rights Watch—have concluded that American munitions were almost certainly responsible for the strike. Pentagon investigators have reached a preliminary finding that U.S. forces fired the missile and that outdated intelligence data may have caused the school to be misidentified as a military target. The investigation is ongoing.

President Trump, without evidence, suggested that Iran or “somebody else” carried out the strike.

THE BLAME

Here is where the blame will go: to the AI. It is already happening. The framing in early coverage—“AI may have misidentified the school”—sets the stage for exactly the kind of scapegoating this site was built to catalog. But the AI did not choose the tempo of this war. People did.

The Maven system, according to internal military testing in 2024, identified objects correctly about 60 percent of the time. Human analysts scored roughly 84 percent on comparable evaluations. The Pentagon knew the system was imperfect and deployed it anyway, not as a supplement to human analysis but as a replacement for it. Defense experts have noted that the 2003 Iraq invasion required approximately 2,000 intelligence analysts to perform targeting work. In Iran, that job has been compressed to roughly 20 people with AI assistance.

Forty-two targets per hour is not a pace at which human beings can perform meaningful legal and ethical review. Professor Elke Schwarz of Queen Mary University of London has described the dynamic as “automation bias”: when a machine produces decisions at a speed and volume that exceeds human cognitive capacity, the machine’s recommendation becomes the authority. The human reviewer becomes a rubber stamp. The review becomes, in Schwarz’s word, a “formality.”

That is not an AI failure. That is a design choice. Someone at the Pentagon decided that 42 targets per hour was an acceptable tempo. Someone decided that a 60 percent object-identification accuracy rate was an acceptable threshold for launching cruise missiles. Someone decided that nine-year-old satellite data was an acceptable basis for a strike on a building next to a school—or, as it turned out, on the school itself.

Defense Secretary Pete Hegseth said days into the campaign that there would be “no stupid rules of engagement” for Operation Epic Fury. Rules of engagement are the constraints that prevent exactly this kind of outcome. A human being in charge of the U.S. military stood before cameras and announced that the guardrails were off. Then, when the system produced a catastrophic result, the conversation pivoted to whether the AI made a mistake.

The AI made a recommendation. A human approved it. A missile launched. One hundred seventy-five people died. The question is not whether the machine erred. The question is why humans built a system designed to outrun their own capacity for oversight, then acted surprised when it did.

THE PATTERN

This is not the first time AI-assisted targeting has produced mass civilian casualties. In Gaza, the Israeli military’s AI targeting systems—including programs known as Lavender and Gospel—were used to generate targets at a pace that former Israeli military officials acknowledged was previously unthinkable: roughly 100 targets per day, compared to about 50 per year before the system was introduced. Independent investigations documented extensive civilian harm.

And this is not the first time a U.S. intelligence failure produced a catastrophic strike on a civilian site. In 1999, the U.S. bombed the Chinese embassy in Belgrade due to outdated maps. In 2021, a U.S. drone strike in Kabul killed ten Afghan civilians, including seven children, based on what the military later admitted was a misidentification. The pattern is consistent: speed over verification, institutional pressure over scrutiny, and—when it goes wrong—blame directed at the data, the software, or the fog of war.

More than 120 members of Congress have now sent a letter to Hegseth asking whether AI was used in the Minab school strike, whether a human verified the target, and whether the Pentagon will investigate it as a possible war crime. The Pentagon’s response, in full: “The incident is under investigation.”

THE VERDICT

One hundred seventy-five people are dead, most of them schoolgirls. The building had murals on its walls. It had a playing field. It had a website. The AI system that flagged it as a military target had a 60 percent accuracy rate and was working from data that was at least nine years out of date. The humans who built the system, deployed the system, set the tempo, removed the rules of engagement, and approved the strike have thus far produced one collective public response: the incident is under investigation.

The machine did not remove the rules of engagement. Pete Hegseth did. The machine did not decide that 1,000 targets in 24 hours was an acceptable pace for a campaign involving civilian population centers. Commanders at U.S. Central Command did. The machine did not choose to replace 2,000 human analysts with 20 people and a software platform with a 60 percent accuracy rate. The Pentagon did.

The AI will be blamed. It is already being blamed. It is the most convenient scapegoat available: a system that cannot testify, cannot be court-martialed, and cannot look into a camera and answer for what it did. But the system did exactly what it was designed to do. It ingested the data it was given. It produced recommendations at the speed it was built to produce them. It did not ask whether the building with murals on its walls was full of children. It was not designed to ask.

The people who designed it not to ask are the ones who should be answering.

Accountability status: under investigation. No individual has been named as responsible. No timeline for findings has been given. The war continues.
