The artificial intelligence company Anthropic has filed lawsuits against the US government, challenging the Department of Defense's designation of the firm as a "supply chain risk." The Pentagon's move, directed by Defense Secretary Pete Hegseth, effectively blocks federal agencies from using Anthropic's AI systems, including its chatbot Claude, currently the only AI model cleared for classified networks. Anthropic argues that the designation is unlawful, exceeds the department's statutory authority, and violates its First Amendment rights, in a dispute stemming from the company's refusal to grant the military unrestricted access to its technology.

Anthropic's legal challenge comes after a protracted conflict over the ethical parameters of AI use in warfare. The company has stated it has two non-negotiable "red lines": it will not allow its technology to be used for the mass surveillance of Americans, nor will it permit its systems to power fully autonomous weapons without human oversight in targeting and firing decisions. In contrast, Secretary Hegseth has asserted the Pentagon's right to utilize AI systems for "any lawful purpose." This fundamental disagreement has led to the current standoff, with federal agencies ordered to cease using Anthropic's services.

The lawsuits, filed February 27 in federal courts in California and Washington D.C., seek to reverse the Pentagon's decision and to stay the action while the legal proceedings unfold. Anthropic contends that the "supply chain risk" designation, typically reserved for foreign adversaries, is being misused to target a domestic technology company and effectively blacklists it. The designation requires any entity working with the Pentagon to certify that it does not use Anthropic's models, potentially disrupting military operations that still relied on the company's technology when the label was applied.
Anthropic's customers are reportedly exploring alternative AI providers in response to the Defense Department's designation. Hegseth has said a full phase-out of Anthropic's services could take up to six months, asserting that the company seeks "veto power" over decisions he believes belong to the Defense Department. Anthropic's legal filings describe the government's actions as an "unlawful campaign of retaliation" and characterize judicial intervention as a last resort to protect the company's rights.
Background of the Dispute
The conflict escalated in late February, when reports surfaced that the Department of Defense had pressured Anthropic to remove safeguards from its AI systems. Anthropic CEO Dario Amodei had previously signaled the company's intent to legally contest any adverse government action. The Pentagon's use of the "supply chain risk" label against a domestic company appears unprecedented; the designation is typically aimed at preventing foreign interference in national security systems. The lawsuits also name other federal agencies, including the Departments of Treasury and State, indicating a broader government-wide directive following the Pentagon's action. Rival AI firm OpenAI reportedly secured a new contract with the Pentagon on the same day Anthropic filed its lawsuits.