OpenAI CEO Sam Altman and former chief scientist Ilya Sutskever parted ways abruptly and dramatically, and now Sutskever is starting a new project. Last November, Sutskever had been part of a boardroom revolt that nearly drove Altman out of the company.
He has now teamed up with Daniel Levy, a former OpenAI colleague, and Daniel Gross, who previously led AI efforts at Apple, to form Safe Superintelligence Inc. (SSI). The name reflects the company’s primary goal: creating artificial intelligence that is both safe and useful.
The new company is forthright about its aims. In a statement posted on its website, the founders write: “SSI is our mission, our name, and our entire product roadmap.” They call the development of safe superintelligence “the most important technical problem of our time.”
Here is why it matters: once machines achieve Artificial General Intelligence (AGI), intelligence on par with humans, many experts believe they will only continue to grow more capable. Sutskever and his team are focused on Artificial Superintelligence (ASI), a hypothetical future stage beyond AGI. Their company wants to guarantee that this superintelligence is developed safely and ethically.
Sutskever has long been interested in safe superintelligence. Prominent computer scientists, including Geoffrey Hinton, have warned that ASI could pose an existential threat to humankind. Indeed, one of Sutskever’s main goals at OpenAI was to ensure that precautions were taken for the good of humanity.
His departure from OpenAI in May was itself dramatic. Six months earlier, in a botched power struggle, he and independent board members Tasha McCauley, Adam D’Angelo, and Helen Toner had tried to remove CEO Sam Altman. Board chairman Greg Brockman resigned in protest, and Altman was reinstated within days.