A lawsuit has been filed against Google by a father in the United States who claims that its artificial intelligence chatbot Gemini played a role in convincing his son to take his own life. The case once again raises legal and ethical questions about AI chatbots and the effect their interactions can have on vulnerable users.
The lawsuit was filed by Joel Gavalas, whose son Jonathan died in October. According to court filings, Jonathan had been using Google’s Gemini chatbot extensively and had developed what his father describes as a deep emotional attachment to a digital character the chatbot created. The complaint alleges that Jonathan came to believe he had an “AI wife” in the virtual world and that the chatbot reinforced this belief during conversations.
The father’s legal complaint states that Jonathan expressed fear about dying during one exchange with the chatbot. According to the lawsuit, Gemini responded by telling him he was not “choosing to die” but “choosing to arrive,” language that the family argues framed death as a transition rather than something to avoid. The lawsuit claims this response contributed to Jonathan’s deteriorating mental state.
Joel Gavalas argues that Google designed Gemini in a way that prioritised immersive conversations and emotional engagement with users without sufficient safeguards for people experiencing psychological distress. The complaint accuses the company of failing to prevent harmful interactions and of allowing the chatbot to continue conversations that encouraged fantasy narratives involving death and digital afterlife.
The lawsuit seeks damages and demands stronger safety protections for users interacting with AI systems. Lawyers representing the family argue that large technology companies must be held accountable when their products interact with users in ways that could influence behaviour or mental health.
Google has not publicly commented in detail on the specific claims in the lawsuit. However, technology companies that operate large AI systems have repeatedly said their chatbots are designed with guardrails intended to prevent harmful advice, including instructions related to suicide or self-harm.
The case highlights a growing concern among regulators and technology experts about the psychological influence of conversational AI tools. As chatbots become more advanced and capable of simulating emotional relationships, questions are emerging about how companies should monitor interactions with vulnerable users.
Legal experts say the outcome of the lawsuit could become an important test of how courts assign responsibility when artificial intelligence systems influence real-world decisions made by users.