Summary:
Ilya Sutskever co-founded Safe Superintelligence (SSI) to develop safe superintelligent AI.
SSI is reportedly set for a valuation of over $30 billion, up sharply from $5 billion.
The company aims to build safe superintelligence but has no market-ready product yet.
Greenoaks Capital Partners is said to be leading a $500 million investment round.
Sutskever's new venture follows his contentious exit from OpenAI amid safety concerns.
The Vision Behind Safe Superintelligence
OpenAI co-founder Ilya Sutskever has embarked on a new venture with his startup, Safe Superintelligence (SSI), which aims to develop AI that surpasses human intelligence while remaining safe. Despite its ambitious goal, the company currently has no revenue.
Backstory and Funding
Founded in June 2024 by Sutskever, Daniel Gross, and Daniel Levy, SSI has caught the attention of investors. Reports suggest that Greenoaks Capital Partners, a San Francisco-based venture capital firm, is leading a funding round with an investment of $500 million. The deal would lift SSI's valuation from $5 billion in September 2024 to more than $30 billion.
Comparison with Industry Giants
This new valuation would place SSI among the most valuable private AI companies, alongside Anthropic and Perplexity, which are valued at roughly $60 billion and $9 billion, respectively. Unlike those competitors, however, SSI does not yet have a market-ready product.
Company’s Objectives
The company’s mission is singular: to build a safe superintelligence. Its approach treats safety and capabilities as intertwined technical problems, pursuing revolutionary advances while keeping safety ahead of capabilities. As its website puts it, "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead."
Sutskever's Journey
Sutskever's move to SSI follows his tenure as chief scientist at OpenAI, where he co-led the company's superalignment effort. After the board's dismissal and swift reinstatement of CEO Sam Altman in November 2023, Sutskever expressed regret over his role in the decision to fire Altman, stepped down from OpenAI’s board, and left the company in May 2024.
Safety Concerns in AI Development
The departure of superalignment co-lead Jan Leike from OpenAI was likewise tied to safety concerns, underscoring the ongoing debate over balancing innovation and safety in AI development. Sutskever's new venture appears to be a response to these pressures, focused on a future in which AI can be both powerful and safe.