Former OpenAI researchers Daniel Kokotajlo and William Saunders have said they are disappointed, but not surprised, by OpenAI’s decision to oppose SB 1047, the California bill aimed at preventing AI disasters. Both had previously raised safety concerns at the company and criticized OpenAI for engaging in a “reckless” race for dominance.
In a letter shared with Politico, Kokotajlo and Saunders highlight the contradiction between OpenAI’s earlier calls for AI regulation and its opposition to this bill. They urge California Governor Gavin Newsom to sign it, arguing that with appropriate regulation OpenAI can still fulfill its mission of building artificial general intelligence (AGI) safely.
OpenAI’s rival Anthropic, by contrast, has expressed support for the bill while raising specific concerns and requesting amendments, some of which have already been incorporated. Anthropic’s CEO, Dario Amodei, wrote to Governor Newsom that the current bill’s benefits likely outweigh its costs, though he stopped short of a full endorsement.
This divide reveals a fundamental difference in the two companies’ approaches to AI regulation. While OpenAI appears hesitant to embrace rules that could erode its competitive advantage, Anthropic acknowledges the importance of regulation and is willing to work with policymakers to ensure AI is developed safely.
The debate around AI regulation is not limited to California. Governments and organizations worldwide are grappling with the challenges posed by the rapid advancement of AI technology. The fear of AI disasters, such as the development of autonomous weapons or the loss of control over superintelligent AI, has fueled the need for regulatory frameworks.
However, finding the right balance between fostering innovation and ensuring safety is a complex task: overregulation could stifle technological progress, while inadequate regulation could lead to unforeseen consequences. Policymakers must therefore collaborate with AI experts and industry leaders to develop effective, adaptable regulations that address the unique challenges of AI.
The backing of SB 1047 by former OpenAI researchers, and the qualified support from rivals such as Anthropic, indicates a growing recognition within the AI community of the importance of responsible development and regulation. As AI technology continues to advance, organizations will need to prioritize safety and ethical considerations to prevent potential AI disasters.
In conclusion, OpenAI’s opposition to SB 1047 has drawn criticism from former researchers who had already raised safety concerns, while rival Anthropic supports the bill with suggested amendments. The debate over AI regulation underscores the need for a balanced approach that protects both innovation and safety, and for policymakers and industry leaders to collaborate on regulations that mitigate potential risks and foster responsible AI development.