
After OpenAI exit, Ilya Sutskever makes AI safety play with Safe Superintelligence

On a minimalist website, co-founders Sutskever, former Y Combinator partner Daniel Gross, and ex-OpenAI engineer Daniel Levy said, "Building safe superintelligence (SSI) is the most important technical problem of our time."

June 20, 2024 / 11:09 IST
Ilya Sutskever has long advocated for tackling the complexities of AI safety.

Ilya Sutskever, the co-founder and former chief scientist at OpenAI, who played a role in the board's decision to oust Sam Altman as CEO in November 2023, has co-founded a new artificial intelligence (AI) company called Safe Superintelligence (SSI).

The startup aims to develop superintelligent machines—machines more intelligent than humans—in a manner that ensures safety.


“I am starting a new company,” Sutskever said in a post on X. “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”