Judge blocks Pentagon from labeling Anthropic AI a “supply chain risk” and halts Trump’s ban on federal use


A judge has blocked the Trump administration from labeling Anthropic a “supply chain risk” and cutting off all federal work with the artificial intelligence firm, an early win for Anthropic in its bitter feud with the government over AI guardrails.

U.S. District Judge Rita Lin on Thursday ruled in favor of Anthropic, which sued the federal government earlier this month for taking actions that it called an “unprecedented and unlawful” attempt to punish the company for First Amendment-protected speech.  

Lin’s ruling in the case prevents the government from enforcing its supply chain risk designation against Anthropic, a move that aimed to stop private government contractors from using the company’s powerful Claude AI model. It also halts an order by President Trump for every federal agency to “IMMEDIATELY CEASE all use of Anthropic’s technology.”

In the ruling, she called the administration’s moves “Orwellian” and said they could “cripple” the company. “At bottom, Anthropic has shown that these broad punitive measures were likely unlawful and that it is suffering irreparable harm from them,” she wrote.

The dispute revolves around Anthropic’s push to bar the military from using Claude for domestic surveillance or to power fully autonomous weapons. The Defense Department has said it needs the ability to use AI for “all lawful purposes.”

The judge wrote that her ruling does not stop the Trump administration from taking “lawful actions” that were allowed beforehand, and that it is free to choose a different AI provider instead of Anthropic.

Lin stayed her order for seven days, giving the government an opportunity to appeal. 

In a statement after the ruling, a spokesperson for Anthropic said, “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

CBS News has reached out to the Pentagon and the Justice Department for comment.

What did the Anthropic ruling say?

In an often-scathing 43-page ruling, Lin wrote that the government’s moves against the company “appear designed to punish Anthropic.” She said the Pentagon can choose to use whatever AI products it wants, but that the government “went further.”

“The record supports an inference that Anthropic is being punished for criticizing the government’s contracting position in the press,” she wrote. “…Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”

She pointed to some officials’ heated comments about Anthropic, including a post by Defense Secretary Pete Hegseth that called the company “sanctimonious” and said it “delivered a master class in arrogance.”

The judge also took issue with the Trump administration’s labeling of Anthropic as a “supply chain risk,” a formal designation that federal law defines as a “risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert” a national security system.

Lin wrote that the government hadn’t shown why Anthropic posed that kind of risk and hadn’t followed the required legal processes for determining that an entity is a supply chain risk.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” Lin said.

She said Anthropic’s due process rights were likely violated because the company didn’t have an opportunity to respond to the government’s moves against it. She said Mr. Trump’s order for federal agencies to stop using Anthropic immediately was essentially a form of “debarment,” or a ban on a company contracting with the government — but usually, firms that face debarment have the ability to oppose that measure.

And she called the government’s actions “arbitrary and capricious,” pointing to cordial contract negotiation emails between Pentagon Chief Technology Officer Emil Michael and Anthropic CEO Dario Amodei even as the military called Anthropic a serious threat.

After the administration took action against Anthropic, Lin noted, federal agencies beyond the Pentagon quickly terminated their use of Claude, endangering the company’s lucrative public sector business. Lin also wrote that, according to Anthropic, some government contractors are worried they could run afoul of the president’s order if they use Claude.

“One of the amicus briefs described these measures as ‘attempted corporate murder,'” Lin wrote. “They might not be murder, but the evidence shows that they would cripple Anthropic.”

Lin also formally blocked enforcement of a social media post by Hegseth directing military contractors to cut off all “commercial activity” with Anthropic — a demand she said appeared to illegally require companies to stop using Claude even for non-military work.

During a hearing in San Francisco earlier this week, Justice Department attorney Eric Hamilton conceded that a supply chain risk designation would only stop government contractors from using Anthropic’s technology for military-related work, not their other business. Anthropic argued that Hegseth’s post still caused damage to the company.

