
California’s Bill to Prevent AI Disasters Faces Opposition, Receives Amendments

California’s bill to prevent AI disasters, SB 1047, has undergone significant changes after facing opposition from various parties in Silicon Valley. Several of the amendments were proposed by AI firm Anthropic and other critics of the bill, and they were incorporated as the legislation passed through California’s Appropriations Committee.

One of the key changes to SB 1047 is that it no longer allows California’s attorney general to sue AI companies for negligent safety practices before a catastrophic event occurs. Instead, the attorney general can seek injunctive relief and still has the option to sue if an AI model does cause a catastrophic event. This amendment addresses the concerns raised by Anthropic.

Additionally, the bill no longer creates the Frontier Model Division (FMD), a new government agency that was part of the original legislation. The bill still establishes the Board of Frontier Models, which will be housed within the existing Government Operations Agency, and the board has been expanded from five members to nine. Its responsibilities include setting compute thresholds for covered models, issuing safety guidance, and implementing regulations for auditors.

Senator Scott Wiener, who authored the bill, also changed how AI labs must certify their safety test results. Instead of submitting certifications under penalty of perjury, labs are now only required to submit public statements outlining their safety practices, removing the threat of criminal liability.

Furthermore, SB 1047’s language on assuring AI model safety has been relaxed. Developers must now exercise “reasonable care” to ensure that their AI models do not pose a significant risk of causing a catastrophe, replacing the previous, stricter “reasonable assurance” standard.

The amendments also provide protection for open-source fine-tuned models. If someone spends less than $10 million on fine-tuning a covered model, they are explicitly excluded from being considered a developer under SB 1047. The responsibility for the model’s safety still lies with the original developer.

These changes were made in an effort to appease opponents of SB 1047, including U.S. congressmen, AI researchers, Big Tech companies, and venture capitalists. By incorporating these amendments, the bill is expected to be less controversial and more likely to gain support from the AI industry. While Governor Newsom has not publicly commented on SB 1047, he has previously expressed his commitment to California’s AI innovation.

However, these amendments are unlikely to satisfy critics who fundamentally disagree with the concept of holding developers liable for the dangers of their AI models. Despite the changes, SB 1047 remains a bill that places responsibility on developers in the event of AI disasters.

SB 1047 will now proceed to the California Assembly floor for a final vote. If it passes there, the recent amendments mean it must return to the Senate for another vote. Should it clear both chambers, the bill will head to Governor Newsom, who can either veto it or sign it into law.
