Top Stories

Judge temporarily blocks Pentagon’s ‘supply chain risk’ designation for Anthropic

A federal judge has temporarily blocked the Pentagon’s effort to label Anthropic a “supply-chain risk to national security.” U.S. District Judge Rita Lin ruled that the Trump administration unlawfully sought to punish the AI company for publicly criticizing the government’s stance on AI usage.

Judge Lin’s order, issued after court proceedings earlier this week, is scheduled to go into effect in seven days, giving the Trump administration the opportunity to appeal. The ruling asserts that punishing Anthropic for voicing concerns about the government’s contracting decisions violates the company’s First Amendment rights.

Judge Lin found that the measures restricting the government’s use of Anthropic’s AI chatbot, Claude, appeared punitive rather than driven by genuine national security concerns. She noted that the Department of War could simply decline to use Claude if it had concerns about the integrity of the operational chain of command.

Judge Lin also rejected as unfounded the Trump administration’s claim that Anthropic might pose a sabotage threat to the military because of ideological differences, stating that branding an American company a potential adversary for expressing dissent has no basis in law.

The decision effectively reverses Defense Secretary Pete Hegseth’s directive designating Anthropic a supply chain risk, though the Pentagon remains free to pursue other lawful means of phasing Claude out of national security applications.

In response to the ruling, an Anthropic spokesperson expressed gratitude for the court’s swift action and reaffirmed the company’s commitment to collaborating with the government to ensure the safe and effective use of AI technology.

President Trump’s order last month to cease government agencies’ use of Anthropic’s products stemmed from a disagreement over the company’s stance on AI deployment. Anthropic has maintained that its AI technology should not be employed in fully autonomous weapons or mass domestic surveillance scenarios.

The ruling underscores that government actions involving emerging technologies like AI remain bound by constitutional limits, and that security concerns must be addressed through transparent, lawful means rather than retaliation against corporate speech.
