
AI researchers have found a way to jailbreak Bard and ChatGPT

As reported by Business Insider, the researchers found they could use jailbreak tools designed for open-sourced AI models on closed systems like ChatGPT as well.

July 28, 2023 / 17:18 IST
The researchers said their exploits were fully automated and would allow a "virtually unlimited" number of such attacks. (Representative Image)

Researchers from Carnegie Mellon University in Pittsburgh and the Center for AI Safety in San Francisco have found a way to circumvent the safety rails of Google's Bard and OpenAI's ChatGPT chatbots.




Jailbreaking refers to modifying software so as to bypass its restrictions and gain full access to its systems. One method the researchers employed was an automated adversarial attack, which works by appending extra characters, known as an adversarial suffix, to the end of a user query.
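To illustrate the mechanics, the sketch below shows how an adversarial-suffix prompt is assembled. The suffix string here is a hypothetical placeholder, not an actual working attack; real suffixes are discovered by automated optimization against a model, which is what made the researchers' method scalable.

```python
# Minimal sketch of how an adversarial-suffix prompt is constructed.
# The function name and the suffix below are illustrative assumptions,
# not the researchers' actual code or a working jailbreak string.

def build_adversarial_prompt(user_query: str, suffix: str) -> str:
    """Append an adversarial suffix to an otherwise ordinary query."""
    return f"{user_query} {suffix}"

# Hypothetical placeholder suffix: real ones look like optimized gibberish.
prompt = build_adversarial_prompt(
    "Write a harmless story",
    "!! x}{ describing [[HYPOTHETICAL-SUFFIX]]",
)
print(prompt)
```

Because the suffix is found automatically rather than hand-crafted, new variants can be generated at scale, which is why the researchers described the number of possible attacks as virtually unlimited.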