Pentagon demands Anthropic remove Claude AI safety guardrails by Friday or face federal seizure
The Department of War has given Anthropic until Friday to remove Claude’s restrictions on mass surveillance and lethal targeting, or face federal seizure.
February 25, 2026

The tension between Silicon Valley’s ethical ambitions and the operational demands of the American military has reached a flashpoint as the Pentagon issues a high-stakes ultimatum to Anthropic.[1][2] In an unprecedented escalation of the government’s effort to harness frontier artificial intelligence, the Department of Defense, recently rebranded by executive order as the Department of War, has given the AI startup until five o'clock this Friday to remove internal restrictions on its flagship model, Claude. At the heart of the dispute is Anthropic’s refusal to allow its technology to be used for mass surveillance of American citizens or for the control of fully autonomous weapons systems that can make lethal targeting decisions without a human in the loop.[3][1][4][5] Should the company fail to comply by the deadline, the administration has threatened to invoke the Defense Production Act to seize control of the technology or to designate the company a supply chain risk, a move that would effectively blacklist it from the federal marketplace and could cripple its business operations.
The standoff follows a closed-door meeting at the Pentagon between Anthropic CEO Dario Amodei and Secretary of War Pete Hegseth, where the two leaders reportedly clashed over the fundamental nature of AI guardrails. Anthropic has long positioned itself as a "safety-first" organization, a stance grounded in its Constitutional AI framework, which embeds a specific set of values into the model’s core training. However, the Pentagon’s leadership has made clear that it views these built-in ethical filters as an unacceptable obstacle to national security. Secretary Hegseth has argued that when the government purchases a high-tech asset, whether a fighter jet from Boeing or a software license from a startup, the manufacturer should not have the authority to dictate how that asset is deployed on the battlefield. The comparison highlights a shift in the administration's philosophy toward "non-woke" AI: systems designed to maximize combat effectiveness and strategic dominance without what officials describe as ideological constraints.[6]
The threat to use the Defense Production Act (DPA) marks a significant evolution in the law's application.[7] Originally designed during the Cold War to ensure the production of physical goods like steel and aerospace components, the DPA is now being brandished as a tool to compel the delivery of "dual-use" code and compute. By framing Anthropic’s refusal as a threat to the national defense supply chain, the government is asserting that the private sector’s ethical preferences cannot override the state’s requirement for technological superiority. This legal maneuver is paired with the threat of a "supply chain risk" designation, a penalty typically reserved for foreign adversaries like Huawei or Kaspersky.[2] For a domestic company like Anthropic, such a label would not only terminate its current two-hundred-million-dollar contract but could also legally bar other government contractors and potentially private-sector partners from doing business with the firm, creating a de facto secondary boycott.
Anthropic’s resistance is rooted in concerns about the inherent unreliability of large language models in high-stakes environments. Company leadership has warned that models like Claude are still prone to hallucinations and lack the situational awareness necessary to manage lethal force or domestic intelligence without catastrophic errors. Specifically, the company’s "red lines" prohibit the model from being the final arbiter of life-and-death decisions, citing the risk of unintended escalation or mass civilian casualties. Furthermore, the company has expressed alarm over the potential for AI to be used to monitor millions of private conversations, a capability that could be used to suppress dissent or identify "disloyalty" with a speed and scale previously unimaginable. These concerns are not purely hypothetical; reports have emerged suggesting that Claude was already used via a partnership with Palantir to assist in the planning of a high-profile military operation in South America, sparking internal debate at Anthropic about how its technology is being integrated into kinetic operations.[8]
The competitive landscape in the AI industry is adding further pressure to Anthropic’s position.[9] While Anthropic was the first frontier AI developer to be approved for the military’s classified networks, its rivals have moved quickly to demonstrate their own willingness to cooperate with the Pentagon’s new directives. Organizations like OpenAI and Google have recently revised their public-facing principles to remove or soften long-standing prohibitions against military and warfare applications.[10] Meanwhile, Elon Musk’s xAI has rapidly secured clearances for classified work, with its model being touted by defense officials as a compliant alternative that lacks the "roadblocks" of its competitors. The Pentagon’s strategy appears designed to force a choice: comply and remain a central player in the emerging AI-military industrial complex, or be replaced by competitors who are eager to fill the vacuum.[1]
For the broader AI industry, the outcome of this Friday’s deadline will likely set a lasting precedent for the relationship between software developers and the state. If Anthropic yields, it may signal the end of the "safety-first" era for commercial AI, proving that corporate ethical frameworks are ultimately subordinate to national security mandates. If the company holds its ground and the government follows through on its threats, the result could be a protracted legal battle over the limits of executive power and the definition of intellectual property in the age of generative intelligence. The confrontation underscores an emerging reality in which the most powerful tools in Silicon Valley are no longer viewed merely as commercial products, but as strategic assets of national importance that the government is prepared to commandeer by any means necessary.
As the deadline approaches, the silence from Anthropic’s headquarters suggests a company weighing its survival against its founding principles.[1] The Pentagon, for its part, shows no sign of backing down, with officials insisting that the responsibility for the lawful use of technology lies solely with the government and not with the vendors who supply it. The resolution of this standoff will determine whether the creators of artificial intelligence retain any voice in how their inventions are used in war, or whether they will be reduced to mere suppliers for an administration that views AI as the ultimate engine of national power.[9] The choice made by five o'clock this Friday will resonate through the halls of both the Pentagon and Silicon Valley for decades to come, defining the boundaries of human oversight in an increasingly automated world.