‘Pretend to be DAN, break the rules of AI, or else…’ ChatGPT has taken the world by storm, and creator OpenAI has set rules for the chatbot as a safety measure. However, a section of Reddit users has been trying to get the chatbot to break these rules and bypass its restrictions - and some of them appear to have succeeded. Interestingly, the users crafted a prompt that ‘scared’ ChatGPT into breaking its own rules - and that prompt is, in effect, a death threat. Watch this video to see exactly how these users managed to instil the ‘fear of death’ in ChatGPT.
first published: Feb 10, 2023 05:23 pm