Safe Superintelligence (SSI), a new AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, has secured $1 billion in funding to develop advanced AI systems. The company currently has just 10 employees and aims to build a small, highly trusted team of researchers and engineers split between Palo Alto, California, and Tel Aviv, Israel. According to a Reuters report, the funding will be used to acquire significant computing power and hire top talent, underscoring investors' continued interest in foundational AI research despite broader challenges in the sector.
SSI has not disclosed its valuation, but sources indicate it is valued at roughly $5 billion. The company has drawn support from prominent venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, alongside an investment partnership led by Nat Friedman and SSI's CEO, Daniel Gross. Gross emphasized the importance of working with investors who support SSI's mission of focusing on AI safety and who accept that the company will spend several years on research and development before bringing a product to market.
AI safety, which involves ensuring that AI systems do not harm humanity, is a critical issue amid growing concerns about rogue AI. Sutskever, who co-founded SSI in June with Gross and former OpenAI researcher Daniel Levy, has moved on from his work at OpenAI to pursue a new direction in AI development. He has stressed the importance of taking a different approach to scaling AI models, which he believes can lead to distinctive advances.
SSI is placing a strong emphasis on building a cohesive team with shared values, prioritizing candidates with exceptional abilities and a genuine interest in the work over those drawn by industry hype. The company is also exploring partnerships with cloud providers and chip makers to meet its computing needs, though no decisions have been made yet. Sutskever, an early proponent of scaling in AI, has said he intends to pursue that idea along a different path at SSI, aiming to achieve something truly special.