Pentagon says Anthropic AI can't be used for war missions

The Pentagon says it will not deploy AI models that cannot be used in wartime operations, a stance with significant consequences for companies like Anthropic.

A top Pentagon official has detailed specific instances, described as "holy cow" moments, that precipitated a significant fallout with artificial intelligence firm Anthropic. These events, which involved concerns over AI model shutdowns during critical missions and alleged restrictions on their use in sensitive operations, led to the Department of Defense formally designating Anthropic as a "supply chain risk." This designation effectively bars defense contractors from engaging with the AI company.

The dispute centers on contractual limitations within Anthropic's AI models that, according to Pentagon official Emil Michael, could impede U.S. military operations. Michael cited an incident following a U.S. raid in Venezuela where an executive from an unnamed AI company questioned the use of its software. He also pointed to dozens of restrictions within commercial AI contracts, signed during the Biden administration, that allegedly limit the capabilities of commands covering regions including Iran, China, and South America.

BREAKDOWN IN TALKS OVER AI DEPLOYMENT

The disagreement has reached a critical juncture, with the partnership's future on the line. Anthropic's AI models are currently deployed on the Pentagon's classified networks via a partnership with data analytics firm Palantir. The Pentagon's stance, articulated by Defense Secretary Pete Hegseth, is that it "will not employ AI models that won't allow you to fight wars." Michael has publicly stated that the military seeks to use AI "like any other technology," implying that its application should align with lawful purposes.

"You can't put the rules and the policies of the United States military and the government in the hands of one private company," Michael stated, pushing back against the idea that AI company executives could dictate operational parameters.

Anthropic CEO Dario Amodei, however, has argued that "frontier AI systems are simply not reliable enough to power fully autonomous weapons" and that such systems "cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day." He further indicated that Anthropic would likely challenge the Pentagon's designation in court. Michael responded to Amodei's statements by calling him a "liar" with a "God-complex," asserting that Amodei's agenda is to "personally control the US Military" and risks national safety.

BACKGROUND TO THE DISPUTE

The conflict appears to be partly ideological, with Michael suggesting Anthropic's executives are "afraid of the power of AI." The Trump administration, meanwhile, has advocated against stringent AI regulations, arguing they could hinder innovation and the competitiveness of the American AI industry, while also warning against what it terms "woke" AI models. Amodei, for his part, has publicly advocated for "sensible AI regulation." The Pentagon is reportedly exploring alternatives, including models from OpenAI.

Frequently Asked Questions

Q: Why did the Pentagon stop using Anthropic AI for war missions?
A: A top Pentagon official said Anthropic's models carry contractual restrictions that prevent their use in critical military missions, which led the Department of Defense to designate the company a "supply chain risk."
Q: What does 'supply chain risk' mean for Anthropic?
A: Defense contractors that work with the Pentagon are effectively barred from doing business with Anthropic, a significant blow to the company's military work.
Q: What specific problems did the Pentagon find with Anthropic AI?
A: Officials cited concerns over model shutdowns during critical missions and contractual restrictions limiting use in regions such as Iran and China. Michael also described an executive from an unnamed AI company questioning the use of its software after a U.S. raid in Venezuela.
Q: What does Anthropic say about its AI models?
A: CEO Dario Amodei has argued that frontier AI systems are not reliable enough to power fully autonomous weapons and cannot exercise the judgment of highly trained troops. He has indicated the company will likely challenge the Pentagon's designation in court.
Q: What happens next between the Pentagon and Anthropic?
A: The Pentagon is exploring alternatives, including models from OpenAI, while Anthropic prepares to challenge the designation in court. The future of the partnership remains uncertain.