Anthropic sued the Trump administration on Monday, asking the federal courts to overturn the Pentagon’s decision to designate the AI company as a “supply chain risk” for its refusal to allow unrestricted military use of its technology.
Anthropic filed two separate lawsuits on Monday: one in federal court in California and another in the federal appeals court in Washington, D.C.; each challenges different aspects of the Pentagon’s actions against the company.
The Pentagon last week formally designated the San Francisco tech company as a supply chain risk following an unusually public dispute over how its AI chatbot, Claude, could be used in war.
“These actions are unprecedented and illegal,” Anthropic’s lawsuit says. “The Constitution does not allow the government to exercise its enormous power to punish a company for its protected speech. No federal law authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and stop the Executive’s illegal campaign of retaliation.”
The Department of Defense declined to comment Monday, citing a policy of not commenting on disputed matters.
Anthropic said it sought to bar its technology from two uses: mass surveillance of Americans and fully autonomous weapons. The secretary of defense, Pete Hegseth, and other officials publicly insisted that the company accept “all legal uses” of Claude and threatened punishment if it did not comply.
The supply chain risk designation effectively bars Anthropic from defense work, invoking an authority that was designed to keep foreign adversaries from harming national security systems. It is the first time the federal government is known to have used such a designation against an American company.
President Trump also said he would order federal agencies to stop using Claude, although he gave the Pentagon six months to phase out a product that is deeply integrated into classified military systems, including those used in the Iran war.
Anthropic’s lawsuit also names other federal agencies, including the Treasury and State departments, after officials ordered employees to stop using Anthropic’s services.
Even as it fights the Pentagon’s actions, Anthropic has sought to reassure companies and other government agencies that the Trump administration’s penalty is narrow, affecting military contractors only when they use Claude on work for the Department of Defense.
Making that distinction clear is crucial for the company because most of its projected $14 billion in revenue this year comes from companies and government agencies that are using Claude for computer programming and other tasks. More than 500 clients are paying Anthropic at least $1 million a year for Claude, according to a recent investment announcement that had valued the company at $380 billion.
Anthropic said in a statement Monday that “seeking judicial review does not change our long-standing commitment to leveraging AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”