California Governor Gavin Newsom has recently signed some of the toughest laws in the United States to regulate the artificial intelligence (AI) sector. These laws aim to address the risks associated with AI, particularly in the areas of deepfakes and the use of AI-generated content in the media industry. The new regulations highlight California’s commitment to harnessing transformative technologies while also considering the potential risks they pose.
One of the key laws, AB 2655, requires large online platforms like Facebook and X to remove or label AI deepfakes related to elections. This measure aims to prevent the dissemination of misleading or manipulated content that could influence the outcome of elections. It also mandates the creation of channels for reporting such content. Candidates and elected officials can seek injunctive relief if a platform fails to comply with this law. This is a significant step in combating the spread of AI deepfakes and safeguarding the democratic process.
Another important law, AB 2355, focuses on AI-generated political advertisements. Under this regulation, political ads created with AI technology must carry a disclosure. This means that public figures such as former President Donald Trump could no longer post AI deepfakes of celebrities appearing to endorse them without proper disclosure. The Federal Communications Commission (FCC) has proposed a similar disclosure requirement at the national level, underscoring the importance of transparency in political messaging.
The remaining two laws signed by Governor Newsom pertain to the media industry. AB 2602 mandates that studios obtain permission from actors before creating AI-generated replicas of their voices or likenesses. This protects the rights of actors and ensures that their identities are not exploited without consent. Meanwhile, AB 1836 prohibits studios from creating digital replicas of deceased performers without permission from their estates. This law addresses the ethical concerns surrounding the use of AI to recreate the likeness of deceased individuals.
These new regulations have been welcomed by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), the largest actors union in the United States. SAG-AFTRA has been advocating for stricter standards in the media industry to protect actors’ rights and prevent the unauthorized use of their images or voices through AI technology.
While Governor Newsom has taken significant steps to regulate the AI sector, other AI-related bills remain under consideration. One particularly contentious bill, SB 1047, has been sent to Newsom’s desk for final approval. Opponents worry that it could have a chilling effect on the open-source community. Newsom voiced his own reservations about the bill’s potential impact during a conversation with Salesforce CEO Marc Benioff, and he has two weeks to decide whether to sign or veto it.
The signing of these laws by Governor Newsom reflects the growing recognition of the importance of AI regulation in California. As the home to many leading AI companies, the state is taking proactive measures to harness the benefits of AI while mitigating its risks. These laws serve as a model for other states and countries grappling with similar challenges associated with AI and its potential misuse.
In conclusion, California’s new laws regulating the AI sector demonstrate a commitment to striking a balance between technological advancement and ethical considerations. By addressing the risks posed by AI deepfakes and AI-generated content in the media industry, these laws aim to protect the integrity of elections, safeguard actors’ rights, and uphold ethical standards. As AI continues to evolve, policymakers must stay informed and adapt regulations to ensure the responsible and accountable use of this powerful technology.