Who is Ilya Sutskever?

Ilya Sutskever is a towering figure in artificial intelligence, renowned for his pioneering work in deep learning. As a co-founder of OpenAI and its former Chief Scientist, he played a critical role in developing landmark AI models such as GPT-2, GPT-3, and DALL-E. His contributions have profoundly influenced the field, making him one of its most respected voices.

The OpenAI Saga

Ilya was at the heart of a major upheaval at OpenAI in November 2023, when the company's board abruptly removed CEO Sam Altman, only to reinstate him days later. The conflict, never fully explained publicly, is widely believed to have been driven by concerns over AI safety, a core issue for Ilya. In May 2024, Ilya resigned from OpenAI, fueling widespread speculation and curiosity about his next steps.

The Emergence of Safe Superintelligence

Ilya broke his silence in June 2024 by announcing the formation of Safe Superintelligence Inc. (SSI). The new venture aims to address one of the most critical challenges in AI: safely developing superintelligent systems, AI whose capabilities surpass those of humans in every domain. The founding announcement closed with a direct appeal to prospective researchers:

"If that's you, we offer an opportunity to do your life's work and help solve the most important technical challenge of our age. Now is the time. Join us."

Ilya Sutskever, Daniel Gross, Daniel Levy
June 19, 2024

The Mission of SSI

SSI’s mission is clear: to ensure that the rise of superintelligent AI is a force for good and does not harm humanity. The initiative is dedicated to creating superintelligent systems that are aligned with human values and safety principles. According to Ilya, “At the most basic level, safe superintelligence should have the property that it will not harm humanity on a large scale. After this, we can say we would like it to be a force for good.”

Why SSI Matters

The creation of SSI is a significant development in the AI landscape. Ilya's departure from OpenAI and his new venture underscore growing concern about the safety and ethical implications of advanced AI. SSI plans to operate with a focused, undistracted approach reminiscent of OpenAI's early days, insulating safety, security, and progress from short-term commercial pressures.

The Road Ahead

Although SSI is still in its early stages, its formation marks a pivotal moment in AI research. The commitment to building superintelligence safely suggests that significant developments, and possibly major funding announcements, are on the horizon. Given Ilya's influence and expertise, SSI is well positioned to play a crucial role in shaping the future of AI.

Conclusion

Ilya Sutskever’s return to the forefront of AI with Safe Superintelligence is a bold and timely move. As the world inches closer to the reality of superintelligent AI, initiatives like SSI will be essential in ensuring these advancements benefit humanity as a whole.

References

Safe Superintelligence Inc. (ssi.inc)

TED Talk in which Ilya discusses the exciting but perilous journey to AGI (October 2023)