A federal judge has temporarily blocked the Pentagon from designating Anthropic a "supply chain risk to national security," ruling that the Trump administration unlawfully tried to punish the AI company for publicly criticizing the government's position on using AI.
"The record supports an inference that Anthropic is being punished for criticizing the government's contracting position in the press," U.S. District Judge Rita Lin wrote in Thursday's order. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic unlawful First Amendment retaliation."
The judge's order, which follows court proceedings earlier this week, is set to take effect in seven days, allowing the Trump administration to pursue an appeal.
The Anthropic AI logo is displayed on a mobile phone.
Jonathan Raa/NurPhoto via Getty Images
Lin said the efforts to restrict any government use of Anthropic's AI chatbot, Claude, "appear designed to punish Anthropic."
"These broad measures do not appear to be directed at the government's stated national security interests. If the concern is the integrity of the operational chain of command, the Department of War could simply stop using Claude. Instead, these measures appear designed to punish Anthropic," Lin wrote.
Anthropic "has shown that these broad punitive measures were likely unlawful and that it is suffering irreparable harm from them," she continued.
Lin also rebuked the Trump administration for baselessly claiming that Anthropic might try to sabotage the military on ideological grounds.
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," she wrote.
The order effectively restores the status quo that existed prior to Defense Secretary Pete Hegseth's February directive designating the company a "supply chain risk." Lin acknowledged that the Pentagon can still attempt to phase out Claude from national security applications through other lawful means.

President Donald Trump speaks next to Defense Secretary Pete Hegseth during a cabinet meeting at the White House in Washington, March 26, 2026.
Evelyn Hockstein/Reuters
"It is the Department of War's prerogative to decide what AI product it uses. Everyone, including Anthropic, agrees that the Department of War may permissibly stop using Claude and look for a new AI vendor who will allow 'all lawful uses' of its technology. That is not what this case is about," she wrote. "The question here is whether the government violated the law when it went further."
In a statement, an Anthropic spokesperson said: "We are grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."
Last month, President Donald Trump ordered U.S. government agencies to stop using Anthropic's products, and Hegseth designated the AI company a "supply chain risk," amid a dispute with the company over the use of its technology.
The company has said its artificial intelligence should not be used for fully autonomous weapons, meaning AI rather than humans making final battlefield targeting decisions, or for mass domestic surveillance.