Controversial California Bill to Prevent AI Disasters Heads to Governor’s Desk

The future of artificial intelligence (AI) regulation in California hangs in the balance as Governor Gavin Newsom contemplates whether to sign SB 1047 into law. The bill, introduced by state senator Scott Wiener, aims to prevent AI disasters by holding AI model developers liable for any catastrophic events caused by their technology. It would also grant California’s attorney general the power to sue AI companies for hefty penalties. While the bill has its supporters, including Elon Musk and Microsoft’s former chief AI officer Sophia Velastegui, it has faced significant opposition from the tech industry and AI researchers who fear it could stifle innovation. Newsom must carefully weigh these concerns against the need for AI regulation and the potential risks associated with unchecked AI development.

Why Newsom might sign it

Senator Wiener argues that Silicon Valley needs to shoulder more liability for the harms its technology can cause, and Governor Newsom may be motivated to hold Big Tech accountable. Elon Musk, who has long warned about the risks of AI, has expressed cautious optimism about SB 1047, recognizing the need for regulation in the industry. Sophia Velastegui, Microsoft's former chief AI officer, also considers the bill a good compromise and has suggested establishing an office of responsible AI to ensure accountability. The startup Anthropic, while not taking an official stance on the bill, has acknowledged that recent amendments improved it and said its benefits likely outweigh its costs.

Why Newsom might veto it

The tech industry has vehemently opposed SB 1047, arguing that it would set a precedent by shifting liability from the applications built on AI models to the underlying infrastructure. Andreessen Horowitz general partner Martin Casado contends that this shift could have a chilling effect on AI innovation in California. The U.S. Chamber of Commerce and other trade groups have also urged Newsom to veto the bill, citing AI's foundational role in America's economic growth. Newsom may be wary of jeopardizing the booming AI industry's contribution to the state's economy and could choose to delay regulation or leave it to Congress.

If SB 1047 becomes law

If Governor Newsom signs SB 1047 into law, its provisions will phase in over several years. By January 1, 2025, tech companies will be required to write safety reports for their AI models, and California's attorney general will be able to seek an injunction halting the training or operation of AI models deemed dangerous. In 2026, a Board of Frontier Models will be created to collect safety reports and make recommendations to the attorney general. AI model developers will need to hire auditors to assess their safety practices, creating a new industry for AI safety compliance, and the attorney general will be able to sue developers whose tools are used in catastrophic events. By 2027, the Board of Frontier Models could begin issuing guidance on how to safely train and operate AI models.

If SB 1047 gets vetoed

Should Governor Newsom veto SB 1047, federal regulators would likely take the lead on regulating AI models. OpenAI and Anthropic have already laid groundwork for federal oversight by granting the AI Safety Institute, a federal body housed within NIST, early access to their advanced AI models. OpenAI has also endorsed a bill that would let the AI Safety Institute set standards for AI models at the national level. While federal regulation may take longer to materialize, many in the tech industry view it as a lighter-touch approach than California's rules. A strong partnership between Silicon Valley and the federal government has also historically benefited both parties.