
Anthropic CEO Dario Amodei has said the company will challenge the United States government’s decision to designate it a supply-chain risk to national security, calling the move legally questionable and narrower in scope than widely perceived.
In a statement addressing the situation, Amodei said Anthropic had received a formal letter from the US Department of War confirming the designation. The company, he said, does not believe the action is legally sound and intends to contest it in court.
Amodei argued that the designation applies only in a limited context. According to the letter, the restriction affects the use of Anthropic’s Claude AI models only when they are directly part of contracts with the Department of War, rather than all government or commercial uses of the technology.
He said the scope reflects the narrow nature of the statute cited by the government, which requires authorities to use the least restrictive means necessary to protect supply chains. Even contractors working with the Department of War, he added, are not restricted from using Claude or maintaining business relationships with Anthropic for work unrelated to specific defence contracts.
The CEO also indicated that discussions with the Department had been progressing constructively before the dispute escalated. According to Amodei, the company had been exploring ways to continue supporting the military while maintaining its two key restrictions: prohibiting the use of its AI in fully autonomous weapons and in mass domestic surveillance.
Anthropic had previously secured a $200 million contract with the Department of War, becoming the first major AI model provider to deploy systems within the US government’s classified networks. Amodei said the company had been proud to support a range of defence-related applications, including intelligence analysis, modelling and simulation, operational planning, and cyber operations.
He reiterated that Anthropic does not believe private companies should participate in operational military decision-making. Those decisions, he said, belong to the armed forces, while the company’s concerns remain limited to broader categories of AI use.
Amodei also apologised for the tone of an internal company post that was recently leaked to the press. He said Anthropic did not leak the message and had no interest in escalating tensions with the government.
The post, he explained, had been written during a particularly turbulent moment for the company, shortly after the US president announced Anthropic would be removed from federal systems, the defence secretary publicly confirmed the supply-chain risk designation, and OpenAI revealed a new deal with the Pentagon.
Amodei said the memo was written quickly under pressure and no longer reflects his considered view of the situation, noting that it was already several days out of date.
Despite the dispute, he emphasised that Anthropic’s immediate priority is ensuring that US military personnel and national security experts are not deprived of tools they rely on during active combat operations.
The company will continue providing its models to the Department of War and the broader national security community at nominal cost, with engineering support, for as long as the transition period requires and as long as it is permitted to do so.
Amodei concluded that Anthropic and the Department of War ultimately share the same broader objective: strengthening US national security and accelerating the responsible adoption of AI across government systems.