The startup Safe Superintelligence was founded by Ilya Sutskever, a co-founder and former chief scientist of OpenAI; Daniel Levy, a former OpenAI researcher; and Daniel Gross, who previously led artificial intelligence projects at Apple. Little is known about the company's activities to date beyond a brief statement on its website that it was established with a single purpose: to create safe superintelligence.
The company has been operating since June of last year and has not yet generated any revenue, yet it has already raised $1 billion in investment. According to its website, generating revenue is not even a short-term goal; for now, the company is focused solely on research, with offices in Palo Alto and Tel Aviv.
Ilya Sutskever is a widely recognized artificial intelligence researcher, best known for his award-winning work in deep learning. Born in the Soviet Union, he emigrated to Israel and later to Canada. As a researcher at Google, he co-developed the sequence-to-sequence model, which became a cornerstone of machine translation. At OpenAI, he helped create the GPT (Generative Pre-trained Transformer) family of models.
He left OpenAI after his relationship with Sam Altman deteriorated. The disagreement stemmed from Sutskever's belief that Altman was pushing progress too far, too fast. He played a role in Altman's brief ousting in late 2023 but later publicly expressed regret and stepped down from the board. Sutskever stayed on at OpenAI as a researcher for a while longer, though his relationship with Altman remained strained. His eventual departure may have been driven by those ongoing disagreements, and his new company may reflect his ambition to pursue safe superintelligence free from commercial pressures, an ideal that was a core principle of OpenAI when it was founded in 2015.