Moneycontrol

Anthropic bans nuclear, chemical weapons chats with Claude AI

Amid growing scrutiny of AI safety, Anthropic has updated its usage policy for Claude, expanding restrictions on dangerous applications and reinforcing safeguards against misuse.

August 16, 2025 / 16:51 IST

Anthropic has revised its usage policy for Claude, its family of AI chatbots, in a bid to address increasing concerns around safety and misuse. The update broadens the scope of prohibited applications, particularly around weapons development, and introduces stricter measures against potential cyber threats.

Previously, Anthropic’s rules barred users from leveraging Claude to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.” The new version makes the language more explicit, directly banning the use of Claude to develop high-yield explosives, as well as chemical, biological, radiological, and nuclear (CBRN) weapons.

The policy shift comes just months after the company rolled out “AI Safety Level 3” protections in May, alongside the launch of its Claude Opus 4 model. These safeguards are designed to make Claude more resistant to jailbreak attempts while reducing the risk that it could be manipulated into assisting with the creation of CBRN weapons.

Anthropic is also drawing attention to the risks posed by its more advanced, agentic AI tools. Features like Computer Use — which allows Claude to directly operate a user’s machine — and Claude Code, which embeds the chatbot into a developer’s terminal, create new avenues for abuse. “These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks,” the company wrote.