California’s Controversial AI Bill: Safeguarding Against Future Disasters or Stifling Innovation?

California’s SB 1047 aims to prevent large AI systems from causing catastrophic harm and sets out safety obligations and liabilities for the companies that develop them. The bill defines “critical harms” as the use of an AI model to create weapons capable of causing mass casualties or to orchestrate cyberattacks causing at least $500 million in damages. Its rules would apply only to the world’s largest AI models: those that cost at least $100 million to train and use roughly 10^26 floating-point operations of computing power during training. Covered companies would be required to implement safety protocols and testing procedures to prevent misuse of their models, with oversight from a new California agency, the Board of Frontier Models, which would certify models and monitor compliance.

Proponents argue the bill is necessary to prevent harm before it occurs and to learn from past policy failures. They include California State Senator Scott Wiener, AI researchers Geoffrey Hinton and Yoshua Bengio, and the Center for AI Safety.

Opponents, including venture capitalists, big tech trade groups, and influential AI academics, counter that the bill’s thresholds and requirements would burden startups and stifle innovation. They argue it would harm the AI ecosystem, particularly open source models, and that AI should instead be regulated at the federal level.

The bill is currently awaiting either a signature or a veto from California Governor Gavin Newsom.