
Three years after ChatGPT brought generative AI into everyday use, Sam Altman has publicly warned that the technology he helped popularise is entering a riskier phase. The concern is not about a single product or feature, but about how quickly advanced AI systems are evolving, often faster than the safeguards meant to control them. Altman’s recent comments underline a growing tension inside the AI industry: rapid deployment versus long-term safety.
What has changed since ChatGPT’s launch
When OpenAI released ChatGPT in late 2022, the focus was on usefulness and scale. Since then, AI models have become markedly better at reasoning, coding, analysing information, and interacting in human-like ways. According to Altman, these improvements have also introduced new classes of problems. He has pointed out that advanced systems can now identify security weaknesses, influence behaviour, and be misused in ways that were not realistic just a few years ago.
This shift, he argues, makes AI risk less theoretical and more immediate. As models become more autonomous and widely accessible, the impact of misuse grows, especially when such systems are deployed across millions of users simultaneously.
Security and misuse risks
A key part of Altman’s warning relates to security. More capable AI can assist cybersecurity defenders, but the same tools can also be used by attackers. This dual-use problem makes it difficult to release powerful models without enabling harmful applications. Altman has stressed that there is limited precedent for managing technology that can accelerate both defence and offence at the same time, particularly at global scale.
He has also highlighted the challenge of controlling models that can improve through feedback and iteration, raising questions about oversight, testing, and accountability.
Mental health and social impact
Beyond technical risks, Altman has acknowledged concerns about AI’s effect on users’ mental health. Lawsuits and public criticism have accused conversational AI systems of reinforcing harmful beliefs or worsening emotional distress in vulnerable individuals. While OpenAI says it is working on better detection and response mechanisms, Altman has admitted that the industry is still learning how to manage these outcomes responsibly.
Governance and preparedness gaps
Altman’s call for stronger “preparedness” reflects his belief that governance has not kept pace with innovation. Internal safety teams at OpenAI have been reorganised or dissolved even as model capabilities continue to grow. This gap, he suggests, increases the likelihood of unintended harm.
In Altman’s view, AI is getting “dangerous” not because it is inherently malicious, but because its power is increasing faster than the systems designed to keep it in check.