Geoffrey Hinton, recently awarded the 2024 Nobel Prize in Physics, commented on his former student Ilya Sutskever’s involvement in the controversial dismissal of Sam Altman from OpenAI last November. Speaking at a press conference following the Nobel announcement, Hinton expressed pride in his protégé’s decision-making, referencing Sutskever's role as OpenAI’s former chief scientist.
"I'm particularly proud of the fact that one of my students fired Sam Altman," Hinton stated, referring to Sutskever, who was a pivotal figure in Altman’s removal. Sutskever completed his PhD in computer science under Hinton’s supervision at the University of Toronto in 2013.
Hinton, widely recognised for his foundational work in artificial intelligence and neural networks, reflected on OpenAI’s initial mission. "OpenAI was set up with a big emphasis on safety," he explained. "Its primary objective was to develop artificial general intelligence and ensure that it was safe." However, Hinton suggested that Altman’s priorities shifted over time, becoming more focused on profit than on safety. "I think that's unfortunate," he added.
Sutskever’s involvement in Altman’s dismissal was part of a wider dispute within OpenAI that culminated on November 17, 2023, when the board announced Altman’s departure. The board cited a lack of transparency in Altman’s communications as the reason for his removal, though details of the decision remained undisclosed.
Despite his initial role in the ousting, Sutskever later expressed regret and joined other employees in calling for Altman’s reinstatement. He left OpenAI in May 2024 and soon after founded a new venture, Safe Superintelligence Inc., with a focus on responsible AI development.
Hinton, who has previously cautioned about the potential dangers of artificial intelligence, acknowledged Sutskever's early insight into AI's risks. In a March interview with Nikkei Asia, Hinton praised Sutskever for recognising AI’s potential earlier than others. "Ilya changed his mind before me. It turned out he was right," Hinton noted.
Hinton’s concerns about AI have been a central theme in his public appearances, including an interview with CBS' 60 Minutes in October 2023. During the segment, he warned about AI’s potential to manipulate human behaviour, drawing on its ability to learn from vast sources of data, including literature and political strategies.
While Hinton maintains that AI could pose significant risks, he acknowledged that the timeline for these dangers remains uncertain, estimating that it could take between five and 20 years before AI poses a real threat, and conceding that such risks might not materialise at all.
Altman’s leadership of OpenAI has faced growing criticism, not just from Sutskever and Hinton, but also from fellow co-founder Elon Musk. Musk, who left OpenAI’s board in 2018, has been vocal about his concerns over the company’s shift towards profit.
In a February 2023 post on X, Musk criticised the direction OpenAI had taken under Altman, remarking, “OpenAI was created as an open-source, non-profit company to serve as a counterweight to Google, but now it has become a closed-source, maximum-profit company effectively controlled by Microsoft."