Microsoft and Ex-Generals Support AI Firm Anthropic in Pentagon Court Case

Microsoft and former military leaders are supporting AI company Anthropic in a court battle against the Pentagon. This case could change how AI is used in defense.

SAN FRANCISCO - A significant bloc, including Microsoft and a cadre of retired military figures, is wading into a federal court battle, urging a judge to quash the Pentagon's recent moves against the artificial intelligence company Anthropic. At the core of the dispute is the Trump administration's designation of Anthropic as a "supply chain risk," a label the AI firm argues is a retaliatory measure. This designation effectively cuts off Anthropic from defense contracts, a tool historically meant to fend off foreign adversaries.

Anthropic, the creator of the AI model Claude, initiated legal proceedings against the administration, alleging an "unlawful campaign of retaliation." The company's defiance stemmed from its refusal to permit unrestricted military deployment of its technology, citing ethical boundaries. This stance reportedly led to stalled contract negotiations and the subsequent government action. Microsoft, a major government contractor and partner in integrating Anthropic's AI, stated in its court filing that using a supply chain risk designation for a contract disagreement could trigger adverse economic consequences, potentially against the public interest.

Courtroom Showdown

The case is being heard in federal court in San Francisco, where Anthropic is based. Among those filing in support of Anthropic is a group of former military leaders, including Michael Hayden, the retired Air Force general and former CIA director, and a retired Coast Guard admiral. Their involvement underscores the broader implications of the dispute, which extends to how artificial intelligence interfaces with national security and ethical considerations in warfare and surveillance.

Broader Industry Ripples

This legal entanglement unfolds against a backdrop of intense interest in AI's role in defense. The situation became particularly pointed when, shortly after the government penalized Anthropic, OpenAI, a rival and Microsoft partner, reportedly finalized its own deal with the Pentagon. This juxtaposition highlights the competitive currents and the strategic importance of AI partnerships within the defense sector. It's worth noting that the head of robotics at OpenAI, Caitlin Kalinowski, has since resigned, with her departure occurring around the time of OpenAI's Pentagon agreement.

Microsoft's Position

Microsoft, which has been incorporating Anthropic's generative AI models into its offerings, including the Microsoft 365 Copilot add-on, has indicated its continued support for Anthropic's technology for its clientele. This commitment extends even after the "supply chain risk" designation, with exceptions noted for the U.S. Department of War. Microsoft has also been broadening its AI integration strategy, making models from various developers accessible within its product ecosystem.

Background of the Designation

The Pentagon's action against Anthropic arose from an unusually public disagreement over the company's ethical constraints on military use of its AI. This public dispute and the subsequent "supply chain risk" designation mark a significant development in the government's approach to AI companies seeking defense contracts. Anthropic's lawsuit aims to reverse this designation, contending it was improperly applied.

Frequently Asked Questions

Q: Why are Microsoft and former military leaders supporting AI firm Anthropic in court?
Microsoft and retired generals are supporting Anthropic in a San Francisco federal court case. They want the judge to remove the Pentagon's label of Anthropic as a 'supply chain risk'.
Q: What does the 'supply chain risk' label mean for Anthropic?
This label blocks Anthropic from receiving U.S. defense contracts. Anthropic says the label is retaliation for its refusal to allow the military to use its AI technology without restrictions.
Q: What is Anthropic's main argument against the Pentagon?
Anthropic argues that the Pentagon's 'supply chain risk' label is part of an 'unlawful campaign of retaliation'. The company believes the government is punishing it for setting ethical limits on how its AI, such as Claude, can be used by the military.
Q: Why is this case important for AI and national security?
The case bears on how artificial intelligence is used in national security and what ethical rules govern AI in warfare. Microsoft, a major government contractor, argues that applying this label to a contract dispute could harm the public interest and have adverse economic consequences.