Hegseth declares Anthropic a supply chain risk, restricting military contractors from doing business with AI giant
Defense Secretary Pete Hegseth has labeled artificial intelligence firm Anthropic a “supply chain risk to national security.” The decision follows a heated public conflict over Anthropic’s attempt to impose restrictions on the Pentagon’s use of its AI technology.
In a statement on X, Hegseth declared that any contractor, supplier, or partner doing business with the United States military is prohibited from engaging in any commercial activity with Anthropic. This decision, which is effective immediately, could have far-reaching implications due to the numerous companies that have contracts with the Pentagon.
Hegseth emphasized that America’s warfighters will not be subject to the whims of Big Tech and that the decision to cut ties with Anthropic is final. President Trump also announced that all federal agencies must cease using Anthropic’s services, though the Defense Department and a few other agencies were granted a six-month grace period to transition to alternative technologies.
The rift between Anthropic and the Pentagon escalated over disagreements about the appropriate use of AI in national security and the potential risks of its deployment. Anthropic, the sole AI firm with a model deployed on the Pentagon’s classified networks, has advocated for guardrails to prevent its technology from being used for mass surveillance or for autonomous military operations without human oversight. The Pentagon, for its part, insisted that any agreement allow the use of Anthropic’s Claude model for “all lawful purposes.”
The Pentagon gave Anthropic a deadline of Friday at 5:01 p.m. to reach a resolution or risk losing its lucrative military contracts. As discussions broke down, Pentagon officials accused Anthropic of trying to impose its views on the military. Hegseth criticized Anthropic as “sanctimonious” and accused the company of attempting to coerce the military into compliance.
In response, Anthropic CEO Dario Amodei argued that guardrails are essential because the company’s AI model, Claude, is not reliable enough to power fully autonomous weapons and could raise privacy concerns. Amodei emphasized that the company does not seek to arbitrarily restrict the military’s use of its technology, but believes that AI can pose risks to democratic values in certain scenarios.
Although the Pentagon offered concessions by acknowledging existing laws and policies restricting mass surveillance and autonomous weapons, Anthropic deemed the language inadequate, citing legal loopholes that could be used to bypass those safeguards.
The dispute underscores the complex relationship between AI technology and national security and raises questions about the appropriate boundaries for its application in defense operations. As the Pentagon and Anthropic remain at odds, the future of their partnership and the broader implications for AI in military settings remain uncertain.