
What sounds like a line from a political thriller is now playing out in Washington. The US Secretary of War Pete Hegseth has issued a blunt ultimatum to AI startup Anthropic. The company has been told it has until Friday, February 27, to agree to the Trump administration’s conditions for military use of its AI model or risk losing its Pentagon contract altogether.
For Anthropic, the stakes are unusually high. Beyond losing a defence contract, the company could be labelled a “supply chain risk,” a designation that would effectively bar any Pentagon-linked organisation from using its technology. Such a tag is normally reserved for foreign adversaries like China’s Huawei.
What is Anthropic and why does the Pentagon use it?
Anthropic was founded in 2021 by former OpenAI executives and is led by Dario Amodei. The company is best known for building Claude, a large language model that competes with ChatGPT and other AI systems.
What sets Anthropic apart is its emphasis on safety. It describes itself as a Public Benefit Corporation committed to the “responsible development and maintenance of advanced AI for the long-term benefit of humanity.”
At present, Anthropic is the only AI company whose model is deployed on the Pentagon’s classified networks, through a partnership with Palantir. A senior Pentagon official told CBS News that Elon Musk’s xAI and other firms are also preparing for classified use.
What triggered the clash?
Tensions rose after reports said Claude was used during a US operation in January aimed at capturing Venezuelan leader Nicolas Maduro. Following those reports, an Anthropic spokesperson said the company “has not discussed the use of Claude for specific operations with the Department of War”.
Anthropic has also sought written assurances that its AI would not be used for mass surveillance of Americans and would not make final targeting decisions in military operations without human oversight.
A source familiar with the issue said, “Claude is not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure, without human judgement.”
These safeguards put the company at odds with Hegseth, who has argued that AI providers must give the military full freedom to use their systems for lawful operations.
Politics and ‘woke AI’
Anthropic’s stance has also drawn political fire. In October, Trump’s top AI adviser David Sacks accused the company of “running a sophisticated regulatory capture strategy based on fear-mongering.”
Hegseth has echoed that sentiment. In January, he criticised ideological limits on military AI, saying, “Department of War AI will not be woke. It will work for us. We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”
How far has the Pentagon gone?
The dispute escalated on February 24 when the Pentagon warned Anthropic that its contract would be cancelled if it did not comply.
According to sources, Hegseth told Amodei that the contract would be terminated by February 27 unless Anthropic agreed to the Pentagon’s terms. A senior official said the Department of War could also invoke the Defense Production Act and label the company a supply chain risk.
Axios reported that Hegseth has already asked Boeing and Lockheed Martin about their exposure to Claude, a move seen as an early step toward such a designation.
What would this mean for Anthropic?
Losing the Pentagon contract would cost Anthropic about $200 million, a relatively small amount for a company whose revenue reportedly reached $14 billion as of February.
The bigger risk is reputational and commercial. According to The New York Times, a supply chain risk label could force Anthropic to make its product available for free and severely weaken its market position.
It would also give rivals an advantage. The Pentagon already works with xAI and Google, both of which have dropped restrictions on defence-related AI use.
What happens next?
After the February 24 meeting, Anthropic said it had continued “good-faith conversations” with Pentagon officials but did not address the threat to invoke the Defense Production Act.
At the same time, the company appears to be softening its stance. Anthropic recently updated its Responsible Scaling Policy, saying, “The policy environment has shifted toward prioritising AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level.”
Amodei also acknowledged commercial pressure in an interview with podcaster Dwarkesh Patel.
Owen Daniels of Georgetown University’s Center for Security and Emerging Technology told the AP that Anthropic’s position is risky. “Anthropic’s peers, including Meta, Google and xAI, have been willing to comply with the department’s policy on using models for all lawful applications. So the company’s bargaining power here is limited, and it risks losing influence in the department’s push to adopt AI.”
As the deadline approaches, the standoff is shaping up as a key test of whether safety-first AI can survive in an era of military urgency and political pressure.