
Why the Pentagon and Anthropic are clashing over military use of AI

A contract dispute has exposed a deeper fight over who sets the rules for artificial intelligence on future battlefields.

February 19, 2026 / 12:48 IST
Snapshot
  • Pentagon may deem Anthropic a supply chain risk over AI safeguards
  • Anthropic opposes AI use in mass surveillance, autonomous weapons
  • Ongoing talks aim to control AI deployment rules

For months, the US Department of Defense and Anthropic, the San Francisco-based AI company behind the chatbot Claude, had been quietly negotiating how the Pentagon could use AI on classified systems. Those talks broke into the open this week after reports that the Defense Department was considering labelling Anthropic a “supply chain risk”, a designation that would effectively bar it from military work.

The possibility caught Anthropic by surprise. Inside the company, executives scrambled to understand why a negotiation over safeguards had escalated into something that could sever ties altogether, the New York Times reported.

The core disagreement

At the heart of the conflict is not a technical issue but a philosophical one. Anthropic has pushed for limits on how its AI models can be used, particularly opposing domestic mass surveillance and fully autonomous weapons systems without humans involved in decision-making.

Defense officials, however, bristled at the idea that a private company would try to dictate how the US military deploys technology. To them, Anthropic’s demands looked like resistance to the Pentagon’s authority, and, in some cases, ideological interference in national security decisions.

That clash reflects a broader tension in the Trump administration, which has promoted rapid expansion of AI use and rolled back restrictions it sees as slowing American technological dominance.

Why Anthropic stands apart

Anthropic has long taken a more cautious stance on AI than many of its competitors. Its chief executive, Dario Amodei, has publicly warned about catastrophic risks from advanced AI and has argued for strict guardrails.

Internally, the company bars its models from being used to facilitate violence. Publicly, Amodei has written that using AI for mass surveillance or propaganda is illegitimate and that autonomous weapons could be turned inward by governments against their own citizens.

Those views are popular in parts of Silicon Valley but sit uneasily with a military establishment preparing for technology-driven warfare.

How the Pentagon reacted

Tensions worsened after reports that Anthropic employees raised questions about whether Claude had been used in a US operation involving Venezuela’s president. Defense officials interpreted that as an attempt to second-guess military actions after the fact.

Pentagon leaders, including US Defense Secretary Pete Hegseth, have made clear they expect contractors to support warfighters without imposing additional constraints beyond the law. In January, Hegseth issued a memo urging AI firms to remove usage restrictions, prompting Anthropic to seek renegotiation instead.

Why this matters beyond one contract

Anthropic’s technology is deeply embedded in Pentagon workflows. Claude has been the most widely used AI system inside the Defense Department and the only one operating on classified networks, thanks to its integration with Palantir’s platforms.

Replacing it would not be easy or quick. Other major AI providers currently operate only on unclassified systems.

More broadly, the dispute highlights how politicised AI has become. What once looked like a technical conversation about safeguards is now a proxy fight over power, ideology and who gets to define acceptable risk.

What comes next

Despite the public sparring, talks between the Pentagon and Anthropic are still ongoing. Analysts warn that a complete rupture would serve neither side. The military wants advanced tools. Anthropic wants influence over how those tools are used.

The outcome could set a precedent for how much control AI companies retain once their systems become part of national defence infrastructure. In that sense, this is less about one chatbot and more about the rules of engagement for artificial intelligence itself.

MC World Desk


