OpenAI’s Co-Head of Safety Team Resigns Over Priorities, Sparks Concerns About AGI Safety

OpenAI’s co-head of the “superalignment” team, Jan Leike, recently resigned from the company, citing disagreements with leadership over its core priorities. Leike said that OpenAI’s safety culture and processes had taken a backseat to developing shiny products in recent years, and that his team had struggled to get the resources it needed for safety work. In response to Leike’s claims, OpenAI CEO Sam Altman and president and co-founder Greg Brockman addressed the issue.

Altman acknowledged that OpenAI has more work to do on prioritizing safety and said the company is committed to doing it. Brockman and Altman made three points in their response. First, they emphasized that OpenAI has raised awareness of artificial general intelligence (AGI) to better prepare the world for its implications, highlighting their efforts in scaling up deep learning, analyzing its implications, and calling for international governance of AGI.

Second, Brockman and Altman said the company is actively building foundations for the safe deployment of AI technologies. They cited the example of GPT-4, released in March 2023, and pointed to continuous improvements in model behavior and abuse monitoring based on lessons learned from deployment.

Third, Brockman and Altman acknowledged that the future will be more challenging than the past. As OpenAI releases new models, they said, safety work will need to be elevated, and they pointed to OpenAI’s Preparedness Framework, which is intended to predict and mitigate catastrophic risks.

Looking ahead, Brockman and Altman discussed how OpenAI’s models will be integrated into the world and how more people will interact with them. They believe this can be done safely but stressed the need for foundational work: a tight feedback loop, rigorous testing, world-class security, and harmony between safety and capabilities.

OpenAI’s leaders expressed their commitment to researching and working with governments and stakeholders on safety. They acknowledged that there is no proven playbook for navigating the path to AGI and emphasized the importance of empirical understanding and feedback.

The resignations of Jan Leike and OpenAI’s chief scientist Ilya Sutskever have sparked speculation about what top leaders at OpenAI may know. The negative reaction to Brockman and Altman’s statement suggests it did little to dispel that speculation. Despite these challenges, OpenAI is moving forward with its next release, GPT-4o, a model with voice-assistant capabilities.

The situation at OpenAI highlights the ongoing tension between prioritizing safety in AI development and the push for rapid technological advancement. While OpenAI has worked to raise awareness about AGI and to develop safe deployment strategies, the resignations and criticisms from Leike and Sutskever indicate that there is still work to be done in balancing these priorities. Moving forward, it will be crucial for OpenAI to address these concerns and continue collaborating with stakeholders to ensure safety in AI development.