OpenAI’s Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, faced significant challenges in obtaining the compute it needed to do its work. Despite being promised 20% of the company’s compute resources, the team’s requests were often denied, hindering its progress. This issue, among others, led several team members, including co-lead Jan Leike, to resign from the company.
Leike, a former DeepMind researcher who was involved in developing ChatGPT and other models, explained his reasons for resigning publicly. He believed OpenAI’s core priorities were misaligned with the preparations needed for future generations of AI. Leike emphasized the importance of focusing on security, monitoring, safety, alignment, and societal impact to ensure that AI development proceeds responsibly, and he warned that OpenAI was not on the right trajectory to address these critical areas.
Building smarter-than-human machines is, as Leike put it, an inherently dangerous endeavor, and OpenAI shoulders an enormous responsibility on behalf of humanity. His departure and his concerns about resource allocation prompted questions about OpenAI’s commitment to its stated mission.
OpenAI formed the Superalignment team in July 2023 with the goal of solving the core technical challenges of controlling superintelligent AI within four years. Led by Leike and OpenAI co-founder and chief scientist Ilya Sutskever, the team included scientists and engineers from OpenAI and other organizations. It aimed to contribute research on AI safety and to collaborate with the broader AI industry through initiatives such as research grants.
While the Superalignment team did publish safety research and distribute grants, its efforts were increasingly overshadowed by a steady stream of product launches that consumed company leadership’s attention. The team found itself fighting for the upfront investment it considered crucial to achieving OpenAI’s mission.
Leike highlighted that safety culture and processes had taken a backseat to product development in recent years. Additionally, Sutskever’s conflict with OpenAI CEO Sam Altman, whose brief ouster by the board last November Sutskever had supported, added to the distractions within the company. Sutskever had played a vital role in the Superalignment team, not only contributing research but also acting as a bridge between it and other divisions within OpenAI.
After Leike’s and Sutskever’s departures, John Schulman, another OpenAI co-founder, assumed responsibility for the work the Superalignment team had been doing. Instead of a dedicated team, however, that work will now be carried out by a loosely associated group of researchers embedded in divisions throughout the company. The integration is meant to foster closer collaboration, but it raises concerns that safety will no longer receive the same sustained priority as OpenAI develops its AI.
The recent departures from the Superalignment team and the reshaping of safety research at OpenAI have sparked discussions about the company’s commitment to safety and its ability to meet the challenges of developing superintelligent AI. It remains to be seen how OpenAI will navigate these concerns while pursuing its mission to develop AI that benefits all of humanity.