Moneycontrol

Why defense tech companies are asking employees to stop using Anthropic’s Claude AI chatbot

Defense tech companies are restricting the use of Anthropic’s Claude AI chatbot among employees as regulatory concerns and government guidance raise questions about the role of AI tools in military and defense projects.

March 05, 2026 / 14:07 IST
Snapshot
  • US labels Anthropic a supply-chain risk for defense projects
  • Defense firms halt use of Claude AI to ensure compliance
  • Lockheed Martin and others are removing Claude from their systems

Defense technology companies are asking employees to stop using the Claude AI chatbot developed by Anthropic after the company was labeled a potential supply-chain risk by the U.S. government. The decision has prompted several defense contractors to distance themselves from the technology, especially those working on projects with the U.S. Department of Defense.

Government designation triggered the move


The issue began after the Trump administration designated Anthropic a supply-chain risk. Under such a classification, companies that work with the U.S. government, particularly on defense projects, must review or remove technologies linked to the flagged vendor.

Once the designation was announced, firms connected to the U.S. Department of Defense began advising employees to stop using Claude for work-related tasks. Defense companies typically follow strict compliance rules, and any tool linked to a restricted vendor can create legal and contractual complications.