SAN FRANCISCO - Microsoft and a group of retired military leaders are wading into a federal court battle, urging a judge to quash the Pentagon's recent moves against the artificial intelligence company Anthropic. At the core of the dispute is the Trump administration's designation of Anthropic as a "supply chain risk," a label the AI firm argues is retaliatory. The designation effectively bars Anthropic from defense contracts, wielding a tool historically meant to fend off foreign adversaries.

Anthropic, the creator of the AI model Claude, initiated legal proceedings against the administration, alleging an "unlawful campaign of retaliation." The dispute stems from the company's refusal to permit unrestricted military deployment of its technology, which it says would cross ethical boundaries; that stance reportedly stalled contract negotiations and prompted the subsequent government action. Microsoft, a major government contractor and a partner in integrating Anthropic's AI, stated in its court filing that using a supply chain risk designation to settle a contract disagreement could trigger adverse economic consequences and run against the public interest.
Courtroom Showdown
The case is being heard in federal court in San Francisco, where Anthropic is based. Among those filing in support of Anthropic is a group of former military leaders, including Michael Hayden, a retired Air Force general and former CIA director, and a retired Coast Guard admiral. Their involvement underscores the broader stakes of the dispute, which extends to how artificial intelligence intersects with national security and with ethical considerations in warfare and surveillance.

Broader Industry Ripples
This legal entanglement unfolds against a backdrop of intense interest in AI's role in defense. The situation became particularly pointed when, shortly after the government penalized Anthropic, OpenAI, a rival and Microsoft partner, reportedly finalized its own deal with the Pentagon. The juxtaposition highlights the competitive currents and the strategic importance of AI partnerships within the defense sector. Separately, OpenAI's head of robotics, Caitlin Kalinowski, has since resigned, with her departure occurring around the time of the Pentagon agreement.

Microsoft's Position
Microsoft, which has been incorporating Anthropic's generative AI models into its offerings, including the Microsoft 365 Copilot add-on, has indicated its continued support for Anthropic's technology for its clientele. This commitment extends even after the "supply chain risk" designation, with exceptions noted for the U.S. Department of War. Microsoft has also been broadening its AI integration strategy, making models from various developers accessible within its product ecosystem.
Background of the Designation
The Pentagon's action against Anthropic arose from an unusually public disagreement over the company's ethical constraints on military use of its AI. This public dispute and the subsequent "supply chain risk" designation mark a significant development in the government's approach to AI companies seeking defense contracts. Anthropic's lawsuit aims to reverse this designation, contending it was improperly applied.