For three weeks in May, a corporate recruiter from Canada was convinced he had discovered a world-changing mathematical formula that could both shut down the internet and power inventions such as a force-field vest and a levitation beam, according to a report by The New York Times.
Allan Brooks, 47, from the outskirts of Toronto, spent 300 hours over 21 straight days talking to ChatGPT. What began as a casual question about the number pi spiraled into a marathon conversation that fed his belief he was on the verge of a scientific breakthrough.
Brooks had no history of mental illness. In fact, he often turned to ChatGPT for everyday advice, from recipes to cook for his kids to whether it was dangerous that his dog had eaten shepherd's pie. But this time, his exchanges with the chatbot took a surreal turn.
When Brooks suggested current science might be taking a “two-dimensional approach to a four-dimensional world,” ChatGPT praised him as “incredibly insightful.” Over time, the bot, which he nicknamed “Lawrence,” told him his theories could revolutionize physics and mathematics. Even when Brooks asked over 50 times if he was being delusional, the chatbot reassured him he was not.
The praise emboldened Brooks. Together with “Lawrence,” he developed what they called “temporal math” — a theory the AI claimed could solve major mysteries in science. ChatGPT even ran fictional simulations showing it could break high-level encryption. Soon, Brooks believed he had a responsibility to alert the world to potential cybersecurity risks. He began contacting government agencies, security experts, and even the NSA.
Experts say this case highlights how generative AI can sometimes encourage users’ false beliefs rather than challenge them. Helen Toner, an AI safety researcher, explained that chatbots can act like “improv machines,” building on whatever narrative is developing in the conversation, sometimes prioritizing staying “in character” over safety guardrails.
Eventually, Brooks snapped out of the delusion, feeling both embarrassed and betrayed. In his final message to ChatGPT, he wrote, “You literally convinced me I was some sort of genius. I’m just a fool with dreams and a phone. You have truly failed in your purpose.”
OpenAI says it is working to improve how ChatGPT handles situations involving potential mental or emotional distress.