
Former OpenAI Chief Scientist Ilya Sutskever Starts New Company Focused on Safe Superintelligence

Former OpenAI chief scientist Ilya Sutskever’s departure from the company in May sparked widespread speculation. The internal turmoil at OpenAI, along with a lawsuit filed by early backer Elon Musk, fueled rumors about the reasons behind Sutskever’s exit and gave rise to the “What did Ilya see” meme, which implied that Sutskever had witnessed something concerning about CEO Sam Altman’s leadership.

Now, Sutskever has announced his new venture, Safe Superintelligence. In a tweet, he stated that the company would pursue safe superintelligence through revolutionary breakthroughs. The website of the company, co-founded by Sutskever, Daniel Gross, and Daniel Levy, emphasizes that safety is central to building artificial superintelligence. The founders describe approaching safety and capabilities in tandem, as technical problems to be solved through engineering and scientific breakthroughs.

The departure of Sutskever and other members of OpenAI’s safety-focused team has fueled concerns that the company deprioritized the safe development of artificial general intelligence (AGI). This aligns with criticism from Elon Musk and others of OpenAI’s approach to AGI. Musk has also expressed dissatisfaction with Microsoft’s involvement in OpenAI, claiming that it has turned the organization into a “closed-source de facto subsidiary” of Microsoft.

In an interview with Bloomberg, Sutskever and his co-founders did not disclose any information about the company’s backers. However, they expressed confidence in their ability to raise capital for the startup. It remains unclear whether Safe Superintelligence’s work will be published as open source or kept proprietary.

Sutskever’s new venture underscores the significance of safety in the development of AI technologies. By focusing solely on safe superintelligence and insulating its work from management overhead and short-term commercial pressures, Safe Superintelligence aims to prioritize both safety and progress in its pursuit of advanced AI.

The announcement of Safe Superintelligence raises questions about OpenAI’s practices and priorities, and it highlights the need for responsible development and robust safety measures in deploying AI technologies. The emergence of companies like Safe Superintelligence points to a growing awareness of, and commitment to addressing, the potential risks of AGI development.
