Pentagon deploys banned Anthropic AI to coordinate massive machine-speed strikes against Iran

Operation Epic Fury unleashes machine-speed warfare in Iran, revealing a lethal dependency on the very AI the government banned.

March 4, 2026

The start of the military campaign against Iran, designated Operation Epic Fury, marks a definitive shift in the history of warfare: the United States military has deployed generative artificial intelligence at unprecedented scale to coordinate its opening salvos. In a barrage of strikes that commenced late last week, U.S. and Israeli forces hit approximately 1,000 sites across Iran within the first 24 hours.[1] Central to this rapid-fire execution was Anthropic’s Claude, a large language model that has become the cognitive engine of the Pentagon’s target selection and strike planning.[2][3][4] By integrating Claude into the Maven Smart System, a sophisticated data-fusion platform built by Palantir, military planners compressed complex intelligence analysis and legal reviews that typically take weeks into a matter of minutes.[1] This transition to machine-speed warfare enabled the simultaneous decapitation of Iranian leadership and the neutralization of critical air defenses, and reportedly resulted in the death of Supreme Leader Ayatollah Ali Khamenei.[3]
The technological infrastructure enabling these strikes represents a significant leap in how the Department of Defense processes the modern battlefield’s “data deluge.” The Maven Smart System leverages Claude to synthesize vast quantities of classified data, including satellite imagery, signals intelligence, and real-time drone feeds. According to internal reports, the AI does more than identify potential targets; it generates prioritized “targeting packages” that include precise coordinates, recommendations for specific munitions based on current stockpiles, and even automated reasoning assessing the legality of a strike under international humanitarian law. This capability has fundamentally altered the role of human personnel: a recent Georgetown University study of the Army’s 18th Airborne Corps found that these AI tools allowed a team of just 20 people to perform the logistical and analytical work that previously required a 2,000-member staff.[1]
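To make the reporting concrete, the sketch below shows, in Python, what a “targeting package” and its prioritization step might look like in the abstract. It is purely hypothetical: the TargetingPackage fields, the prioritize heuristic, and all example values are assumptions made for illustration, not details of the actual Maven Smart System interface.

```python
from dataclasses import dataclass

# Hypothetical schema for an AI-generated "targeting package" as described
# in the reporting. Field names, the scoring heuristic, and the example
# values are illustrative assumptions, not a documented Maven Smart System
# format.

@dataclass
class TargetingPackage:
    target_id: str
    coordinates: tuple[float, float]   # latitude, longitude
    recommended_munition: str          # chosen against current stockpile data
    collateral_risk: float             # 0.0 (negligible) to 1.0 (severe)
    ihl_review: str                    # summary of automated legality assessment
    confidence: float                  # model confidence in target identification

def prioritize(packages: list[TargetingPackage]) -> list[TargetingPackage]:
    """Order packages for human review: high confidence, low collateral first.

    The ranking is where "decision compression" would occur: weeks of staff
    analysis collapse into a queue a commander can approve in minutes.
    """
    return sorted(packages, key=lambda p: (-p.confidence, p.collateral_risk))

# Example: two candidate packages reduced to a ranked review queue.
queue = prioritize([
    TargetingPackage("T-001", (35.70, 51.42), "GBU-39", 0.15, "proportionality: pass", 0.92),
    TargetingPackage("T-002", (29.55, 52.58), "AGM-158", 0.60, "proportionality: flag", 0.88),
])
for pkg in queue:
    print(pkg.target_id, pkg.recommended_munition, pkg.ihl_review)
```

Even in this toy form, the design choice is visible: the human role shrinks to approving a pre-ranked queue, which is precisely the dynamic the researchers quoted below describe as “cognitive off-loading.”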
However, the military’s reliance on Anthropic’s technology has created an extraordinary political paradox in Washington.[1] Just hours before the first missiles were launched, the administration issued an executive order effectively banning Anthropic from all government systems and designating the San Francisco-based startup a supply-chain risk to national security. This drastic move followed a months-long standoff between the Pentagon and Anthropic leadership over the company's "Constitutional AI" guardrails. Anthropic CEO Dario Amodei had steadfastly refused to remove safety protocols that prohibited the use of Claude for domestic mass surveillance or for fully autonomous lethal weapon systems where a human is not in the decision-making loop. In response, Defense Secretary Pete Hegseth accused the company of ideological arrogance, stating that American warfighters should not be held hostage by the whims of Silicon Valley executives.[5] Despite this formal ban, the military has continued to use Claude in the Iran campaign, citing the technical impossibility of "untangling" the model from the Pentagon’s classified networks on the eve of a major conflict.
The ethical and humanitarian consequences of this “decision compression” have quickly come under international scrutiny. While the speed of the AI-driven strikes was credited with preventing an initial Iranian counter-offensive, the system has already been linked to high-profile failures. A missile strike in southern Iran, which reportedly hit an elementary school near a military barracks, killed 165 people, many of them children.[3] The United Nations has characterized the incident as a grave violation of humanitarian law, and researchers cite it as evidence of “cognitive off-loading”: the phenomenon in which human commanders, overwhelmed by the volume and speed of AI-generated recommendations, begin to rubber-stamp the machine’s output without the critical skepticism required for life-and-death decisions. Experts warn that when the kill chain is shortened to seconds, the human “in the loop” becomes a mere formality, detached from the moral and legal consequences of the automated planning.
For the broader AI industry, the conflict in Iran has catalyzed a seismic shift in corporate strategy and government relations.[5] As Anthropic faces a forced six-month phase-out from federal contracts, its primary competitors have moved aggressively to fill the vacuum.[1] OpenAI recently signed a new agreement with the Pentagon to provide its models for classified military networks, signaling a pivot away from the strict safety-first alignment that once defined the sector’s relationship with the defense establishment. In Beijing, the Pentagon’s highly visible success with AI has intensified a national drive for technological self-reliance, with Chinese analysts describing the militarization of Western AI as a “wake-up call” for the global balance of power. The conflict has also exposed the physical vulnerabilities of the digital infrastructure supporting these models: recent retaliatory strikes by Iran against data centers in the United Arab Emirates and Bahrain have underscored that the “cloud” is a tangible, and targetable, military asset.
As the war enters its second week, the significance of Claude’s role in Operation Epic Fury remains a subject of intense debate.[4] The Pentagon has demonstrated that generative AI can manage the complexity of a modern, multi-domain conflict with terrifying efficiency, yet the tension between the military’s operational needs and the ethical boundaries set by AI developers is far from resolved. The administration’s attempt to ban a company while simultaneously relying on its software for a primary war effort highlights the deep-seated dependency the U.S. government now has on commercial AI providers.[2][5][6] This moment marks the end of the era of theoretical AI ethics and the beginning of a period in which the “speed of thought” provided by machines will dictate the outcome of geopolitical struggles, regardless of the legal and moral frameworks currently in place.
