Anthropic Sues Pentagon in California Over "Supply Chain Risk" AI Ban

Anthropic is suing the Pentagon after its AI tools, including the chatbot Claude, were banned for government use. The case is a major legal fight over AI safety rules.

The artificial intelligence company Anthropic has filed lawsuits against the US government, challenging the Department of Defense's designation of the firm as a "supply chain risk." The move by the Pentagon, led by Defense Secretary Pete Hegseth, effectively blocks federal agencies from using Anthropic's AI systems, including its chatbot Claude, which is currently the only AI model cleared for classified networks. Anthropic argues that the designation is unlawful, exceeds statutory authority, and infringes on its rights, particularly under the First Amendment, in a dispute stemming from the company's refusal to grant unrestricted military access to its technology.

Anthropic's legal challenge comes after a protracted conflict over the ethical parameters of AI use in warfare. The company has stated it has two non-negotiable "red lines": it will not allow its technology to be used for the mass surveillance of Americans, nor will it permit its systems to power fully autonomous weapons without human oversight in targeting and firing decisions. In contrast, Secretary Hegseth has asserted the Pentagon's right to utilize AI systems for "any lawful purpose." This fundamental disagreement has led to the current standoff, with federal agencies ordered to cease using Anthropic's services.

The lawsuits, filed in federal courts in California and Washington D.C., seek to reverse the Pentagon's decision and to stay the action while the legal proceedings unfold. Anthropic contends that the "supply chain risk" designation, typically reserved for foreign adversaries, is being misused to target a domestic technology company and effectively blacklists it. The designation requires any entity working with the Pentagon to certify that it does not use Anthropic's models, potentially affecting existing military operations that continued to use the company's technology even after the label was applied.

Anthropic's customers are reportedly exploring alternative AI providers because of the Defense Department's designation. Hegseth has said a full phase-out of Anthropic's services could take up to six months, while asserting that Anthropic seeks "veto power" over decisions he believes belong to the Defense Department. The company's legal filings describe the government's actions as an "unlawful campaign of retaliation" and call judicial intervention a last resort to protect its rights.

Background of the Dispute

The conflict escalated in late February, when reports surfaced that the Department of Defense was pressuring Anthropic to remove safeguards from its AI systems. Anthropic CEO Dario Amodei had previously signaled the company's intent to contest any adverse government actions in court. The Pentagon's designation of Anthropic as a "supply chain risk" appears to be unprecedented; the label is typically used to keep foreign adversaries out of national security systems. The lawsuits also name other federal agencies, including the Departments of Treasury and State, indicating a broader government-wide directive following the Pentagon's action. Rival AI firm OpenAI reportedly secured a new contract with the Pentagon on the same day Anthropic filed its lawsuits.

Frequently Asked Questions

Q: Why did Anthropic sue the US government?
Anthropic sued because the Department of Defense called it a "supply chain risk." This stops US agencies from using Anthropic's AI tools like Claude. Anthropic says this is wrong and illegal.
Q: What does the "supply chain risk" label mean for Anthropic?
This label means federal agencies cannot use Anthropic's AI systems. It's usually for foreign threats, but Anthropic says it's being used unfairly against a US company. This impacts its business with the government.
Q: What are Anthropic's main concerns about AI use?
Anthropic has two main rules: no mass surveillance of Americans, and no fully autonomous weapons that select and fire on targets without human control. The Pentagon insists it can use AI for any lawful purpose, which is the core of the disagreement.
Q: What does Anthropic want the courts to do?
Anthropic wants the courts to cancel the Pentagon's "supply chain risk" decision. It also wants the ban on its AI tools paused while the court case plays out.
Q: How long will it take for the Pentagon to stop using Anthropic's AI?
Defense Secretary Pete Hegseth said it could take up to six months to fully phase out Anthropic's services. Anthropic says the government's action is retaliation for its refusal to remove safety rules.