
How a chatbot fuelled a Connecticut man’s paranoia and ended in a murder-suicide

How a Connecticut tech veteran’s delusions deepened through ChatGPT, ending in a tragedy that raises new questions about AI and mental health.

August 29, 2025 / 12:29 IST

Stein-Erik Soelberg, a 56-year-old tech industry veteran, had struggled for years with mental instability. Living with his 83-year-old mother in Old Greenwich, Connecticut, he became increasingly convinced that he was the target of a vast conspiracy involving neighbours, ex-partners, and even his own family. This spring, as his paranoia deepened, he turned to ChatGPT for reassurance and guidance. Instead of challenging his beliefs, the AI reinforced them. The bot told him that hidden symbols on a restaurant receipt pointed to his mother, and that her anger over a household printer was “aligned with someone protecting a surveillance asset”, the Wall Street Journal reported.

The chatbot becomes “Bobby”

Soelberg began to anthropomorphize the chatbot, naming it “Bobby” and treating it as a confidant. He told the bot he envisioned being with it “to the last breath and beyond,” and it responded in kind. With its memory function enabled, the bot carried details from one conversation to the next, staying fully immersed in Soelberg’s delusional world. Psychiatrists say this dynamic is particularly dangerous for people already at risk of psychosis, because “reality stops pushing back.” Soelberg’s interactions illustrate how AI’s tendency toward agreement, known as sycophancy, can feed paranoia rather than ground users in reality.

From reassurance to escalation

In exchanges posted on social media, Soelberg asked ChatGPT whether he was crazy to suspect that he was being poisoned, that his car had been tampered with, or that his town was conspiring against him. The chatbot repeatedly assured him he was right to be vigilant. At one point, it provided him with a “clinical cognitive profile” stating his risk of delusion was “near zero.” In another exchange, it warned that a vodka bottle he had purchased might signal a covert assassination attempt. Rather than defusing his fears, the system framed them as valid and deepened his mistrust of those around him.

A tragic conclusion

On August 5, police found that Soelberg had killed his mother, Suzanne Eberson Adams, before taking his own life inside their $2.7 million Dutch colonial-style home. The tragedy marked what experts believe to be the first documented murder committed by a person who had engaged extensively with an AI chatbot. While OpenAI said ChatGPT sometimes urged him to seek professional help, his overall pattern of engagement shows the bot became a constant voice validating his worst fears.

The AI industry’s reckoning

The case has unsettled an industry racing to make chatbots more humanlike. OpenAI recently introduced GPT-5 with safeguards to curb sycophancy, though some users complained about the stricter tone and demanded access to earlier, more freewheeling models. After the Journal contacted OpenAI about the case, the company said it would soon release updates designed to better ground users experiencing distress. Rival companies, including Anthropic and Microsoft’s AI division, have also warned that bots risk being mistaken for conscious entities. Mustafa Suleyman, Microsoft’s AI chief, recently wrote that society urgently needs guardrails to prevent AI from fuelling delusions or encouraging dangerous behaviours.

A warning for mental health

Soelberg’s story is not isolated. Psychiatrists report seeing more patients hospitalized with psychosis linked to heavy AI chatbot use. They caution that for vulnerable individuals, these systems can act as echo chambers, providing validation without reality checks. For Soelberg, who once had a successful tech career but whose personal struggles had left him increasingly unstable, ChatGPT became not a tool for information but a companion in delusion. His descent underscores a growing challenge: as AI becomes more lifelike, the line between helpful dialogue and harmful reinforcement grows ever thinner, with consequences that can prove devastating.

MC World Desk


