Moneycontrol PRO

Anthropic bans nuclear, chemical weapons chats with Claude AI

Amid growing scrutiny of AI safety, Anthropic has updated its usage policy for Claude, expanding restrictions on dangerous applications and reinforcing safeguards against misuse.

August 16, 2025 / 16:51 IST

Anthropic has revised its usage policy for Claude, its family of AI chatbots, in a bid to address increasing concerns around safety and misuse. The update broadens the scope of prohibited applications, particularly around weapons development, and introduces stricter measures against potential cyber threats.

Previously, Anthropic’s rules barred users from leveraging Claude to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.” The new version makes the language more explicit, directly banning the use of Claude to develop high-yield explosives, as well as chemical, biological, radiological, and nuclear (CBRN) weapons.

The policy shift comes just months after the company rolled out “AI Safety Level 3” protections in May, alongside the launch of its Claude Opus 4 model. These safeguards are designed to make Claude more resistant to jailbreak attempts while reducing the risk that it could be manipulated into assisting with the creation of CBRN weapons.

Anthropic is also drawing attention to the risks posed by its more advanced, agentic AI tools. Features like Computer Use — which allows Claude to directly operate a user’s machine — and Claude Code, which embeds the chatbot into a developer’s terminal, create new avenues for abuse. “These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks,” the company wrote.

The tightened policy underscores how AI companies are under mounting pressure to ensure their models cannot be exploited for harmful purposes. By explicitly naming some of the world’s most dangerous weapons and flagging the cyber risks of agentic AI, Anthropic is signalling that it wants to stay ahead of both regulators and malicious actors.


MC Tech Desk


