
From contract partner to security risk: The Anthropic–Pentagon dispute explained

A high-stakes dispute between Anthropic and the US Department of War has escalated into a federal ban, with President Donald Trump ordering agencies to stop using the company’s AI tools. At the centre of the clash are two red lines: mass surveillance and fully autonomous weapons.

February 28, 2026 / 09:56 IST
Snapshot
  • Anthropic denies Pentagon AI use for surveillance or lethal force
  • Trump bans federal agencies from using Anthropic products
  • Dispute highlights who controls AI use: companies or government

The past two weeks have exposed a fundamental fault line in the AI era: who decides how powerful AI systems are used, the companies that build them or the government that deploys them?

Here is what the dispute is about and why it has escalated so dramatically.

The core disagreement

Anthropic, led by CEO Dario Amodei, has refused to allow its AI models to be used for two purposes: mass domestic surveillance of Americans, and fully autonomous weapons that select and strike targets without human input.

Anthropic’s position is not anti-military. The company already partners with the Department of War. But it insists that two safeguards must remain in place: no mass surveillance and mandatory human control over lethal force.

Amodei has argued that today’s AI systems are not reliable enough to safely power autonomous weapons or large-scale automated surveillance. In his words, the company prefers to continue serving the military “with our two requested safeguards in place.”

What the Pentagon wants

The Department of War, led by Defense Secretary Pete Hegseth, argues that it should be able to use Anthropic’s technology for any lawful purpose.

The Pentagon’s public stance is straightforward: the military does not want to be constrained by a vendor’s internal policies. Its request, as spokesperson Sean Parnell put it, is to “allow the Pentagon to use Anthropic’s model for all lawful purposes.”

Importantly, US policy does not categorically ban autonomous weapons. A 2023 DoD directive allows AI systems to select and engage targets without human intervention if certain review standards are met. That legal flexibility is precisely what concerns Anthropic.

From the Pentagon’s perspective, limiting “lawful use” could jeopardise operational readiness. From Anthropic’s perspective, lawful does not necessarily mean safe.

Why Trump stepped in

President Donald Trump escalated the dispute by directing federal agencies to cease use of Anthropic products, allowing a six-month phase-out period.

“We don’t need it, we don’t want it, and will not do business with them again,” Trump wrote on Truth Social.

While the President’s post did not initially mention supply chain risk designation, Secretary Hegseth followed up by formally declaring Anthropic a “Supply-Chain Risk to National Security.” That designation effectively blocks contractors, suppliers or partners doing business with the US military from engaging commercially with Anthropic.

In practical terms, it sidelines the company from federal defence ecosystems.

What “Supply Chain Risk” means

Labeling a firm a supply chain risk is serious. It signals that the government believes reliance on that company could undermine national security or operational stability.

In this case, the logic is political and operational. The Pentagon argues that if Anthropic can restrict how its AI is used, it becomes an unreliable defence supplier. If a contractor cannot guarantee full access to its technology for lawful missions, the military may consider it too risky to depend on.

Anthropic, however, has pointed out what it sees as a contradiction: being called both essential to national security and a security risk at the same time.

The industry context

The conflict is particularly striking because other AI companies are moving in the opposite direction.

OpenAI has struck a deal with the US Department of War to deploy its models within classified networks, while embedding its own safeguards. Google and Anthropic also received DoD contract awards last July.

That divergence matters. If OpenAI agrees to defence deployments under negotiated terms, while Anthropic holds firm on specific prohibitions, Washington may shift spending accordingly.

What is really at stake?

At its core, this is not just about one contract. It is about governance. If the Pentagon prevails, AI companies may have limited ability to enforce ethical usage policies once national security is invoked. If Anthropic prevails, private firms could set meaningful constraints on how military AI is deployed.

The dispute also reflects a broader ideological tension. Hegseth has publicly criticised what he calls “woke AI,” framing the issue as one of military readiness versus corporate values.

For now, Anthropic has not backed down. Amodei has said that if the Department chooses to offboard the company, it will support a smooth transition to another provider.

The larger question is whether AI companies can realistically maintain red lines once their models become embedded in national defence systems. That answer may shape the future of AI policy far beyond this single confrontation.


MC Tech Desk


