
Ilya Sutskever Reveals Next Project: Building Safe Superintelligence with Startup Safe Superintelligence Inc.

Ilya Sutskever, the former chief scientist at OpenAI, has announced his next project after leaving the company. Alongside his former OpenAI colleague Daniel Levy and Daniel Gross, the former AI lead at Apple and co-founder of Cue, Sutskever is now working on a startup called Safe Superintelligence Inc. (SSI). The goal of the new venture is to build safe superintelligence, which the founders describe as “the most important technical problem of our time.” They believe safety and capabilities should be approached together, tackled through revolutionary engineering and scientific breakthroughs. The team plans to advance capabilities rapidly while ensuring that safety always remains a priority.

Superintelligence refers to a hypothetical agent whose intelligence far surpasses that of the smartest human. The concept builds on Sutskever’s previous work at OpenAI, where he co-led the superalignment team responsible for designing methods to control powerful AI systems. After Sutskever’s departure, however, the group was disbanded, a decision criticized by Jan Leike, one of its former leads.

SSI aims to pursue safe superintelligence with a singular focus, goal, and product. The founders are enthusiastic about the company’s potential and are inviting others who share their vision to join the team. They emphasize the importance of working in a small, dedicated, and trustworthy team that can achieve remarkable results.

It’s worth noting that Sutskever’s tenure at OpenAI was eventful. He played a significant role in the ousting of CEO Sam Altman in November 2023, an action he later said he regretted. Now, with Safe Superintelligence Inc., Sutskever is dedicating himself to the challenge of building safe superintelligence head-on.

Overall, this new startup represents an exciting endeavor in the field of AI. By focusing on the development of safe superintelligence, SSI aims to address one of the most critical technical problems of our time while prioritizing safety and innovation. With the combined expertise of Sutskever, Levy, and Gross, this startup has the potential to make significant contributions to the field of AI and shape the future of intelligent systems.
