A top Pentagon official has detailed specific instances, described as "holy cow" moments, that precipitated a significant falling-out with artificial intelligence firm Anthropic. These events, which involved concerns over AI model shutdowns during critical missions and alleged restrictions on their use in sensitive operations, led the Department of Defense to formally designate Anthropic a 'supply chain risk.' The designation effectively bars defense contractors from engaging with the AI company.
The dispute centers on contractual limitations within Anthropic's AI models that, according to Pentagon official Emil Michael, could impede U.S. military operations. Michael cited an incident following a U.S. raid in Venezuela where an executive from an unnamed AI company questioned the use of its software. He also mentioned dozens of restrictions within commercial AI contracts, signed during the Biden administration, that allegedly limit command capabilities over regions including Iran, China, and South America.
BREAKDOWN IN TALKS OVER AI DEPLOYMENT
The disagreement has reached a critical juncture, with the partnership's future on the line. Anthropic's AI models are currently deployed on the Pentagon's classified networks via a partnership with data analytics firm Palantir. The Pentagon's stance, articulated by Defense Secretary Pete Hegseth, is that it "will not employ AI models that won't allow you to fight wars." Michael has publicly stated that the military seeks to use AI "like any other technology," implying that its application should align with lawful purposes.
"You can't put the rules and the policies of the United States military and the government in the hands of one private company," Michael stated, pushing back against the idea that AI company executives could dictate operational parameters.
Anthropic CEO Dario Amodei, however, has argued that "frontier AI systems are simply not reliable enough to power fully autonomous weapons" and that such systems "cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day." He also indicated that Anthropic would likely challenge the Pentagon's designation in court. Michael responded by calling Amodei a "liar" with a "God-complex," asserting that his agenda is to "personally control the US Military" and that it puts national safety at risk.
BACKGROUND TO THE DISPUTE
The conflict appears to be partly ideological, with Michael suggesting Anthropic's executives are "afraid of the power of AI." The Trump administration, meanwhile, has advocated against stringent AI regulations, arguing they could hinder innovation and the competitiveness of the American AI industry, while also warning against what they term "woke" AI models. Michael has publicly advocated for "sensible AI regulation." The Pentagon is reportedly exploring alternatives, including models from OpenAI.