What’s behind the Anthropic-Pentagon feud
The Pentagon’s Ultimatum to Anthropic: Unrestricted Military Use of Its AI, or a Ban From Government Contracts
In a bold move this week, the Pentagon issued Anthropic an ultimatum: grant the U.S. military unrestricted access to its AI technology or risk being banned from all government contracts. The crux of the issue is who controls how artificial intelligence models are used: the Pentagon, or the company’s CEO?
The Pentagon’s AI Contracts
Back in July, the Pentagon awarded Anthropic a lucrative $200 million contract to develop AI capabilities aimed at enhancing U.S. national security. Competitors such as OpenAI, Google, and xAI also secured similar contracts from the Pentagon last year. Anthropic currently stands as the sole AI company with its model deployed on the Pentagon’s classified networks, thanks to a collaboration with data analytics giant Palantir.
A senior Pentagon official told CBS News that Grok, the model from Elon Musk’s xAI, is on track to be used in a classified setting, with other AI companies expected to follow. The Pentagon recently announced plans to accelerate its use of AI, citing the technology’s potential to rapidly process intelligence data and make warfighters more lethal and efficient.
Clash over the Guardrails
The standoff between the Pentagon and Anthropic reportedly stemmed from the military’s use of Anthropic’s AI model, Claude, during the operation to capture former Venezuelan President Nicolás Maduro in January. Anthropic clarified that it had not discussed the specific operational use of Claude with the Department of War.
Anthropic has persistently urged the Pentagon to agree to certain guardrails, including prohibiting the use of Claude for mass surveillance of Americans and ensuring human involvement in final targeting decisions to prevent potential errors. Pentagon officials voiced concerns that these limitations could impede critical actions, like responding to an imminent intercontinental ballistic missile threat.
In response to queries, a senior Pentagon official emphasized that the military’s orders were lawful, rejecting any suggestion of mass surveillance or autonomous weapons deployment. According to Emil Michael, the undersecretary of defense for research and engineering, the guardrails Anthropic has proposed could hinder urgent operational requirements.
What Top Leaders Are Saying
Anthropic’s CEO, Dario Amodei, has been vocal about the risks associated with AI technologies and has centered the company’s ethos around safety and transparency. In a recent extensive essay, Amodei cautioned against the potential misuse of AI, citing the risk of powerful AI surveilling and suppressing dissent among populations.
Amodei advocates for “sensible AI regulation” that mandates transparency on risks posed by AI models and measures taken to mitigate them. In contrast, the Trump administration has favored a lighter regulatory approach, fearing that stringent regulations could stifle innovation and competitiveness in the American AI industry.
Defense Secretary Pete Hegseth underscored that AI models must be mission-relevant and free of ideological constraints, serving military purposes effectively without outside influence.
What’s Next in the Anthropic v. Pentagon Saga
Hegseth has given Anthropic until Friday to comply with the Pentagon’s demands; failing that, the company risks being blacklisted from government contracts. Pentagon officials are weighing whether to invoke the Defense Production Act to compel compliance on national security grounds.
If no agreement is reached, defense officials are also considering labeling Anthropic a “supply chain risk” to push it out of government contracts. The fate of Anthropic hangs in the balance as the standoff between the company and the Pentagon intensifies.