The U.S. Department of Defense has initiated an investigation into the circumstances surrounding a recent targeting incident in Iran, specifically concerning the use of artificial intelligence systems. This probe follows a viral video alleging that an AI known as Claude may have been involved in identifying or selecting a school as a target.

The Pentagon's investigation aims to understand the extent to which AI, and potentially Claude specifically, was integrated into the decision-making process for military operations. The inquiry comes amid reports that the U.S. military has been using Claude, developed by Anthropic, for various tasks in its campaign in Iran, including intelligence gathering, target selection, and battlefield simulations, even after the Trump administration reportedly severed ties with Anthropic following a dispute.

AI in the Crosshairs
A viral video circulating online has placed Claude, an AI model from Anthropic, at the center of controversy, alleging that it may have been involved in the targeting of a school in Iran. The video has spurred the Pentagon to review its use of artificial intelligence, and the investigation is expected to examine how AI systems are employed within the military's operational chains. While the Pentagon has confirmed the review, it has not stated whether AI played a direct role in the targeting of the Shajareh Tayyebeh girls' school.

Sources familiar with the matter have indicated that the U.S. military has indeed been using Claude for its operations in Iran, a deployment that reportedly continues. The specifics of Claude's application by the Pentagon remain somewhat opaque. Michael, the Pentagon's chief technology officer, has stated that Claude is used for synthesizing documents and improving logistics and supply chains, among other functions.

A Complex Relationship with AI
The reported use of Claude by the U.S. military in Iran is notable given a recent dispute between the Pentagon and Anthropic. Reports suggest that despite a government-wide ban announced after the disagreement, Claude's capabilities were still being leveraged. This has raised questions about human oversight and the broader implications of integrating AI into warfare. Because Claude's functionalities may be difficult to replace, the Pentagon could face a gap in its AI capabilities, potentially lasting three months or more, as it seeks alternative platforms.
Broader Questions of AI and Warfare
This controversy is not an isolated incident but rather a symptom of a wider, ongoing debate about the increasing role of artificial intelligence in modern conflict. The allegations and subsequent investigation highlight critical questions regarding accountability, the potential for autonomous weapons, and the ethical considerations of deploying AI in sensitive targeting scenarios. The situation underscores a significant shift in how the U.S. military approaches AI integration, with potential consequences for defense strategies and international relations.
Background:
Reports emerged that the U.S. military had been employing Claude for critical aspects of its operations in Iran, including intelligence work, target selection, and battlefield simulations. This occurred even after former President Trump reportedly announced a severing of ties with Anthropic, the company behind Claude, over disputes about the AI's guardrails, which included restrictions against its use for mass surveillance or fully autonomous weapons.
Rival companies such as OpenAI, with its ChatGPT models, have reportedly stepped in to fill the potential void, with CEO Sam Altman stating that an agreement is in place with the Pentagon for use on classified networks. The U.S. military's reliance on advanced AI tools, described as having been leveraged to strike over 1,000 targets in the first 24 hours of its attack on Iran, suggests a deep integration of these technologies into its operational tempo.