Sutskever's SSI leads new, safety-first superintelligence quest.
After leaving OpenAI, Ilya Sutskever's SSI has raised billions to pursue a single goal, safe superintelligence, prioritizing ethics over revenue.
July 3, 2025

Ilya Sutskever, a pivotal figure in the artificial intelligence revolution and co-founder of OpenAI, has sent a clear and confident message to the technology world regarding his new venture, Safe Superintelligence Inc. (SSI). Following the departure of co-founder Daniel Gross, Sutskever has taken the helm as CEO, declaring, "We have the compute, we have the team, and we know what to do."[1][2][3] This statement of intent comes amidst a backdrop of high-stakes personnel changes and rebuffed acquisition offers, signaling SSI's unwavering focus on its singular and ambitious goal: the creation of a safe superintelligence. The company, launched in June 2024, is deliberately positioning itself apart from the commercial AI race, prioritizing long-term safety over short-term product cycles and revenue.[4][5][6] With substantial funding and a team of top-tier talent, SSI is poised to become a formidable force in shaping the future of advanced AI.[7][8][5]
The founding of SSI by Sutskever, alongside Daniel Gross and Daniel Levy, marked a significant moment in the AI industry.[9][7][10] It emerged from a philosophical rift within the AI community, particularly concerns that commercial pressures at companies like OpenAI were overshadowing the critical importance of safety.[9][11][12] Sutskever’s departure from OpenAI in May 2024, following a contentious board dispute, underscored his commitment to a safety-first approach.[9][11] SSI was born with a clear mandate: to be the "world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence."[4][13][14] This mission is embedded in the company's name and its entire operational roadmap.[13] The company established a dual presence in Palo Alto, California, and Tel Aviv, Israel, two major hubs for AI talent, to recruit a small, elite team of researchers and engineers.[15][9][5] This lean structure is intentional, designed to avoid the distractions of management overhead and product cycles that can divert focus from the core mission.[4][13]
SSI's approach is a direct challenge to the prevailing paradigm in AI development. Instead of building powerful capabilities first and then attempting to retrofit safety measures, SSI is treating safety and capabilities as intertwined technical problems to be solved in tandem.[4][11][13] The company plans to advance its AI's capabilities as rapidly as possible while ensuring that its safety measures always remain ahead.[13] This philosophy has resonated with investors, who have poured billions into the pre-product startup. In September 2024, SSI announced it had raised $1 billion from prominent venture capital firms including Andreessen Horowitz, Sequoia Capital, and DST Global.[9][7] By March 2025, a further funding round reportedly pushed the company's valuation to a staggering $32 billion, a sixfold increase in less than a year, despite having no revenue and a small team.[9][8][5] This extraordinary financial backing, which includes investments from major tech players like Alphabet and Nvidia, demonstrates immense confidence in Sutskever's vision and the technical prowess of his team.[8][5]
The implications of SSI's mission extend far beyond its own research labs. The company’s existence serves as a powerful statement about the future direction of AI. By creating an organization solely dedicated to the safe development of superintelligence, Sutskever has elevated the importance of AI alignment and safety to the highest priority.[11][12] This could influence how other AI labs structure their research agendas and may catalyze a broader shift in the industry toward more responsible innovation.[11][5] SSI's deliberate decision to remain independent, even reportedly rebuffing an acquisition offer from Meta, reinforces its commitment to its long-term vision, insulated from the short-term commercial pressures that Sutskever sought to escape.[9][1][3] The company is also making strategic technological choices, opting to develop its models on Google's Tensor Processing Units (TPUs), a departure from the more common use of Nvidia's GPUs.[8] This partnership with Google Cloud, which reportedly makes SSI its most significant external TPU customer, gives the company access to cutting-edge hardware to power its ambitious research.[8][16]
In conclusion, Ilya Sutskever's leadership at Safe Superintelligence Inc. represents a potential inflection point for the artificial intelligence industry. His confident assertion that the company has all the necessary components for success—compute power, a skilled team, and a clear strategy—is backed by unprecedented levels of investment and a deliberate, safety-focused approach.[1][2] SSI's singular mission to create a safe superintelligence, free from the immediate demands of commercialization, sets it apart in a field often characterized by a race to market.[5][6] The company's journey will be closely watched: its success or failure could profoundly shape not only the trajectory of AI technology but also the global conversation around AI governance and the effort to ensure that machines far smarter than humans ultimately serve the best interests of humanity.[11][5][17]