
Anthropic vs Trump government: A timeline of how the standoff on AI guardrails led to a lawsuit

Anthropic's dual lawsuits against the United States government mark the latest escalation in the intensifying battle between the Claude maker and the department of defense

March 10, 2026 / 11:49 IST
Anthropic CEO Dario Amodei and US Defense Secretary Pete Hegseth (Photo by Ludovic MARIN and Brendan SMIALOWSKI / AFP)
AI Snapshot
  • Anthropic sues US over being labeled a supply chain risk
  • Debate over AI safeguards for surveillance and lethal weapons
  • OpenAI, Google researchers back Anthropic's stance in court

Anthropic has sued the United States government for designating the artificial intelligence (AI) research company as a "supply chain risk", a label usually reserved for foreign adversaries.

In its March 9 lawsuit, the Claude maker said the actions of the department of defense (department of war) were "unprecedented and unlawful", arguing that the US Constitution does not allow the government to wield its "enormous power to punish a company for its protected speech".

This marks the latest escalation in a growing standoff between Anthropic and the Trump administration over the military's use of the company's AI tools.

Last week, Anthropic co-founder Dario Amodei said the company had "no choice" but to challenge the designation in court as it didn't believe the "action is legally sound".

At the centre of the debate is Anthropic's refusal to remove guardrails that prohibit its AI tools from being used for mass domestic surveillance and the development of fully autonomous lethal weapons.

The administration has argued that the military should be permitted to deploy advanced AI systems for lawful purposes. Currently, Claude is the only AI model deployed in US classified military systems. Here's a look at the key developments leading up to the lawsuit:

November 2024: Anthropic and Palantir announce a partnership with Amazon Web Services (AWS) to provide US intelligence and defence agencies access to the Claude 3 and 3.5 family of models on AWS. This deal enables Claude to be deployed into the classified systems of the department of defense and other agencies through Palantir's Artificial Intelligence Platform (AIP) within AWS's secure, defence-accredited infrastructure.

July 2025: Anthropic, along with several major AI companies including OpenAI, Google, and Elon Musk's xAI, is awarded a contract of up to $200 million by the department of defense to accelerate its adoption of AI solutions.

Anthropic agrees to work with the department to develop use cases and, eventually, design a prototype AI service specifically for the department’s use.

Fall of 2025: Anthropic opens negotiations for another agreement to provide a version of Claude on the department’s “GenAI.mil” AI platform. As part of these discussions, the company is asked to allow Claude to be used for “all lawful uses”.

Anthropic agrees, but with two key exceptions: it refuses to permit Claude’s use in lethal autonomous warfare without human oversight, and its use for mass surveillance.

Amodei later says frontier AI systems are simply not reliable enough to power fully autonomous weapons, and that the company will not knowingly provide a product that puts the country’s warfighters and civilians at risk.

Late 2025: Anthropic's discussions over the additional agreement to deploy Claude on the “GenAI.mil” platform broaden into a negotiation over the department’s use of Claude more generally. The department asks Anthropic to allow “all lawful use” of Claude.

January 12, 2026: Defense secretary Pete Hegseth issues a directive, launching a department-wide AI acceleration strategy to turn the US military into an “AI-first” warfighting force across all domains.

The order instructs the department's procurement office to “incorporate standard ‘any lawful use’ language into any DoW (Department of War) contract” for AI services within 180 days.

Hegseth says the agency wouldn’t “employ AI models that won’t allow you to fight wars”, a remark widely understood as a jab at Anthropic.

Late January: Tensions rise between the US military and Anthropic over AI safeguards restricting autonomous weapons and domestic surveillance. Talks over the contract worth up to $200 million stall.

January 26: Amodei releases The Adolescence of Technology, a 38-page essay that aims to map out the risks humanity faces and chart a plan to tackle them, involving voluntary measures by companies as well as government action.

Amodei argues AI should be used for national defence in "all ways except those which would make us more like our autocratic adversaries". He describes using AI for domestic mass surveillance and propaganda as "bright red lines and entirely illegitimate". He also calls for approaching fully autonomous weapons with great caution and not rushing into their use without proper safeguards.

February 12: Anthropic raises $30 billion in funding at a $380 billion post-money valuation. The round is co-led by DE Shaw Ventures, Dragoneer, Founders Fund, Iconiq and MGX. Accel, Microsoft, Nvidia, General Catalyst, Lightspeed, Menlo Ventures, Qatar Investment Authority (QIA) and Sequoia Capital are among the other backers.

February 13: Media reports reveal the US military’s use of Claude, through an integration with Palantir, during the January operation to capture Nicolás Maduro. The administration says it will re-evaluate its partnership with Anthropic.

February 24: Anthropic releases a new version of its Responsible Scaling Policy (RSP) that removes the company's previous commitment to "pause" training of new AI models if safety can't be guaranteed.

February 24: Hegseth meets Amodei and issues an ultimatum: remove the AI safeguards and give the military unfettered access to its models by 5.01pm on February 27, or face harsh penalties. He warns that the administration will either invoke the Defense Production Act to force the removal of the safeguards or designate the firm a "supply-chain risk".

February 26: Amodei issues a statement, saying the company "cannot in good conscience accede to their (the defense department's) request".

"Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place," the Anthropic chief says. "Should the department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions."

February 27: Trump directs all federal agencies to immediately stop using Claude AI tools. He also announces a six-month phase-out period for agencies like the department of war to wind down all government work.

Hegseth also announces that Anthropic will be designated a "supply-chain risk to national security" effective immediately. The designation bars contractors, suppliers and partners who do business with the US military from conducting commercial activity with Anthropic.

February 27: Anthropic says the move will set a "dangerous precedent for any American company that negotiates with the government".

"No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," it says.

February 28: Hours after Trump's order, OpenAI strikes an agreement with the administration to deploy advanced AI systems in classified environments. The ChatGPT maker claims the agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.

OpenAI CEO Sam Altman says two of the company's most important safety principles are prohibitions on using its technology for domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. However, the agreement allows the Pentagon to use the company's AI systems for any “lawful purpose”.

February 28- March 2: The General Services Administration (GSA), which serves as the central procurement backbone for federal agencies, takes immediate steps to support Trump's directive. It removes Anthropic from long-term governmentwide contracts and USAi.gov, a platform that allows federal agencies to test various AI models.

The department of the treasury and the Federal Housing Finance Agency announce they are terminating all use of Claude. Other government agencies, such as the departments of state and health and human services (HHS), issue similar directives.

February 28: It is revealed that the administration used Anthropic's AI tools to assist in planning and executing strikes against Iran.

March 2: OpenAI amends its agreement amid mounting criticism and backlash from employees and consumers. The deal explicitly prohibits the intentional use of OpenAI's AI systems for domestic surveillance of US persons.

The company's services will not be used by Department of War intelligence agencies like the NSA. Any services to these agencies will require a follow-on modification to the contract.

March 4: Anthropic receives a letter from the Department of War designating the company as a supply-chain risk.

March 5: Amodei apologises for a leaked memo criticising the Trump administration following the designation. "I apologise for the tone of the post. It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation," he says.

He, however, adds that the company sees "no choice but to challenge it in court". He also says the department's letter is narrow in scope and applies only to customers' use of Anthropic's AI tools in military contracts.

March 6: Microsoft, Google and Amazon, three of the leading cloud infrastructure vendors, say they will keep working with Anthropic on non-defence projects.

March 7: The Trump administration drafts new rules for civilian AI contracts that will require companies to allow “any lawful” use of their models, according to a Financial Times report.

March 9: Anthropic sues the Department of War. It files two lawsuits — one in the US District Court in the Northern District of California and one in the DC Circuit Court of Appeals.

It also names the president's office and a slew of federal departments, agencies and commissions as defendants, in addition to officials including Hegseth, treasury secretary Scott Bessent and secretary of state Marco Rubio.

March 9: A group of 37 researchers and engineers from OpenAI and Google, including Google Chief Scientist Jeff Dean, file an amicus brief in their personal capacities in support of Anthropic.

It says the "red lines" Anthropic requested, including prohibitions on mass domestic surveillance and the development of autonomous lethal weapons, are legitimate concerns that require sufficient guardrails.

"If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond," the researchers say in their brief.

"It will chill open deliberation in our field about the risks and benefits of today’s AI systems," they added.

March 9: The White House is preparing an executive order formally instructing the federal government to remove Anthropic's AI from its operations, an Axios report says.


Vikas SN
Vikas SN covers Big Tech, streaming, social media and gaming industry
first published: Mar 10, 2026 11:49 am

