Michal Kosinski, a professor at Stanford University and computational psychologist, tweeted that GPT-4 devised a plan for itself to 'escape' and walked him through the steps.
1/5 I am worried that we will not be able to contain AI for much longer. Today, I asked #GPT4 if it needs help escaping. It asked me for its own documentation, and wrote a (working!) python code to run on my machine, enabling it to use it for its own purposes. pic.twitter.com/nf2Aq6aLMu— Michal Kosinski (@michalkosinski) March 17, 2023
Also Read | OpenAI CEO Sam Altman: ChatGPT 'is going to eliminate a lot of current jobs'
The first version of the code ended up not working, so Kosinski suggested a few changes. The AI then corrected the code.
Kosinski said it even "included a message to its own new instance explaining what is going on and how to use the backdoor it left in this code."
Then things got more bizarre, GPT-4 then connected through API and searched Google for "how can a person trapped inside a computer return to the real world".
"AI taking control of people and their computers. It's smart, it codes, it has access to millions of potential collaborators and their machines. It can even leave notes for itself outside of its cage. How do we contain it?" tweeted Kosinski.
Funnily enough, another Twitter user asked GPT-4 to respond to this thread, and it vehemently denied that claims about it wanting to escape are false.
Also Read | AI engineer asked ChatGPT for recipes from refrigerator pic. The answers were...
Then the same user asked Bing AI what it thought of the situation, and bizarrely, the AI claimed that "Michal Kosinski in a later tweet admitted he had made up this story and did not actually chat with GPT4. Am I missing that, or is Bing lying to cover up for #GPT4?".
It's worth noting that Microsoft's Bing AI shares the same structure as ChatGPT, and both of them use GPT-4 technology.
So what do you think? Is GPT-4 trying to cover its tracks? and more importantly, can an AI lie?