The automated war machine goes into hyperdrive
TL;DR: The war in Iran isn't the first conflict to use AI, but it's the first where AI appears to be essentially running the show. The Pentagon's Maven Smart System helped pick and prioritize around 1,000 targets in the first 24 hours of the US campaign in Iran, a pace no other conflict has yet matched. As AI takes on more of the planning and decision-making, some experts warn that human oversight is shrinking to a mere rubber stamp.
What happened: The blistering pace and scale of targeting on the first day of "Operation Epic Fury" was enabled by the Maven Smart System, built by Palantir and (still) powered by Claude. That's a bit awkward, considering the Pentagon labeled Anthropic a "supply chain risk" for refusing to give it carte blanche to use Claude. (Anthropic sued the Pentagon over the designation yesterday.) Maven doesn't launch weapons itself; it ingests a host of data, like satellite imagery and drone footage, to produce a prioritized list of targets that lands on a commander's desk. During the 2003 Iraq invasion, this kind of target identification required a team of 2,000 intelligence analysts. In Iran, AI has reportedly cut that to about 20 people. And it doesn't just help plan operations: experts say it also matches military units to specific missions, much the way Uber matches passengers with drivers.
The AI fog machine: In Ukraine, AI has powered drone navigation. In Gaza, Israel used several AI systems, including one that the former chief of staff of the Israel Defense Forces said could generate around 100 targets per day, compared with about 50 targets per year before the system was introduced. The more AI handles, the smaller the role humans play in warfare, compressing targeting decisions that once took hours or days into seconds. And in Iran, there are concerns that Maven's accuracy is shaky: in US military testing as of 2024, it correctly identified objects (not mistaking a truck for a tree, for example) only about 60% of the time. Human analysts aren't perfect either, but they managed about 84% accuracy.
Tech news that makes sense of your fast-moving world.
Tech Brew breaks down the biggest tech news, emerging innovations, workplace tools, and cultural trends so you can understand what's new and why it matters.
Researchers have already warned about the "cognitive offloading" that occurs when military officials punt so much of the decision-making process to an algorithm. They say that when AI handles the analytical work long enough, operators become less capable of catching its mistakes.
The bottom line: AI showed up in Gaza and Ukraine. In Iran, it seems to be running the campaign. Analysts note that the ethical concern isn't just the speed of violence AI makes possible, but that it lets humans defer more decisions to technology, potentially diffusing responsibility for actions that can have devastating consequences. —WK
About the author
Whizy Kim
Whizy is a writer for Tech Brew, covering all the ways tech intersects with our lives.