Ilya Sutskever, a co-founder and former chief scientist of OpenAI who played a role in the board's decision to oust Sam Altman as CEO in November 2023, has co-founded a new artificial intelligence (AI) company called Safe Superintelligence (SSI).
The startup aims to develop superintelligent machines, that is, machines more intelligent than humans, in a way that ensures safety.
“I am starting a new company,” Sutskever said in a post on X. “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”
On a minimalist website, co-founders Sutskever, former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy said, "Building safe superintelligence (SSI) is the most important technical problem of our time."
Artificial superintelligence is a hypothetical software-based AI system whose intellectual scope exceeds human intelligence. At the most fundamental level, such a superintelligent AI would have cognitive functions and reasoning skills more advanced than those of any human.
SSI has offices in Palo Alto and Tel Aviv, where it is currently recruiting technical talent. Sutskever, however, did not disclose who the company's investors are or whether it has raised any funds.
"We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else," the company said.
Sutskever has long advocated for tackling the complexities of AI safety. In a 2023 blog post co-authored with Jan Leike, co-lead of OpenAI's Superalignment team, the duo predicted that AI with intelligence surpassing humans could emerge within the next decade. In the post they underscored the urgency of researching methods to manage and restrict potentially unfriendly AI.
OpenAI's Superalignment team, which worked on methods to steer and control highly capable AI systems, was dissolved shortly after Sutskever and Leike left the company. Leike now heads a team at rival AI startup Anthropic.
The company on its website said, "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
In an interview with Bloomberg, Sutskever said, “This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then.”
“It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race,” he added.
Sutskever quit OpenAI in May this year. Last year, reports suggested that he had concerns about the safety of artificial general intelligence (AGI) and the rapid pace of advancement at OpenAI, leading to tensions with Altman.
On November 17, 2023, Sutskever and fellow board members removed Altman from his CEO position. However, by November 22, 2023, Altman was reinstated as CEO after the board's decision was overturned. Sutskever was removed from the company's board after Altman returned.
