
The US Department of Defense is close to cutting business ties with artificial intelligence company Anthropic and could designate it a “supply chain risk”, a move that would effectively bar contractors working with the US military from using the company’s AI tools, according to a report published by Axios.
Defense Secretary Pete Hegseth is said to be considering the step following months of tense negotiations over how Anthropic’s AI model, Claude, can be used within military systems.
A senior Pentagon official told Axios, “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.” The report noted that such penalties are typically reserved for foreign adversaries.
Chief Pentagon spokesman Sean Parnell told Axios, “The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people.”
Why this matters
Anthropic’s AI model Claude is currently the only AI system available within the US military’s classified networks and is widely regarded as a leader in enterprise applications. Pentagon officials have reportedly praised its capabilities, and the software was used during the Maduro raid in January, according to Axios.
However, negotiations between the Pentagon and Anthropic have stalled over the scope of permissible military use. Anthropic CEO Dario Amodei has taken a cautious approach, seeking safeguards to ensure the company’s tools are not used for mass surveillance of Americans or for developing fully autonomous weapons without human oversight.
The Pentagon, on the other hand, argues that such restrictions are too limiting and has insisted that AI tools be available for “all lawful purposes.” Similar negotiations are underway with OpenAI, Google and xAI.
An Anthropic spokesperson told Axios, “We are having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right.” The spokesperson also reiterated the company’s commitment to national security work, noting that Claude was the first frontier AI model deployed on classified US networks.
High stakes for AI and defense
If Anthropic were designated a “supply chain risk”, companies contracting with the Pentagon would be required to certify that they do not use Claude in their own operations. This could have far-reaching implications, as Anthropic recently said that eight of the ten largest US companies use its AI tools.
The Pentagon contract under threat is reportedly valued at up to $200 million — a small portion of Anthropic’s estimated $14 billion annual revenue — but the broader reputational and strategic impact could be significant.
According to Axios, senior US officials are confident that rival AI firms will agree to the Pentagon’s “all lawful use” standard. However, sources familiar with the discussions indicated that negotiations remain fluid.
The standoff highlights growing tensions globally over how advanced AI systems should be deployed in national security settings — and where the line should be drawn between innovation, military use and civil liberties.