Anthropic has revised its usage policy for Claude, its family of AI chatbots, in a bid to address increasing concerns around safety and misuse. The update broadens the scope of prohibited applications, particularly around weapons development, and introduces stricter measures against potential cyber threats.
Previously, Anthropic’s rules barred users from leveraging Claude to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.” The new version makes the language more explicit, directly banning the use of Claude to develop high-yield explosives, as well as chemical, biological, radiological, and nuclear (CBRN) weapons.
The policy shift comes just months after the company rolled out “AI Safety Level 3” protections in May, alongside the launch of its Claude Opus 4 model. These safeguards are designed to make Claude more resistant to jailbreak attempts while reducing the risk that it could be manipulated into assisting with the creation of CBRN weapons.
Anthropic is also drawing attention to the risks posed by its more advanced, agentic AI tools. Features like Computer Use — which allows Claude to directly operate a user’s machine — and Claude Code, which embeds the chatbot into a developer’s terminal, create new avenues for abuse. “These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks,” the company wrote.
The tightened policy underscores how AI companies are under mounting pressure to ensure their models cannot be exploited for harmful purposes. By explicitly naming some of the world’s most dangerous weapons and flagging the cyber risks of agentic AI, Anthropic is signalling that it wants to stay ahead of both regulators and malicious actors.