The Deepfake Problem: Who Is Responsible and How Can It Be Solved?

Deepfakes, highly realistic synthetic forgeries, have become a significant concern in the age of generative AI. The most damaging variant, often called deepfake porn, consists of manipulated sexual images, frequently of public figures, created with generative AI tools and widely distributed through popular websites and social media platforms, sometimes spreading as trending content. The harm extends beyond the online sphere into the real lives of users, including young people. Who bears responsibility is a matter of debate, with little agreement on who should be held accountable for nonconsensual synthetic forgeries.

The Cyber Civil Rights Initiative, an advocacy and research organization, defines sexually explicit digital forgeries as manipulated photos or videos that falsely depict a person nude or engaged in sexual conduct. While such forgeries can be created with a variety of tools, generative AI is most commonly associated with deepfakes. The term "deepfake" itself covers a wide range of manipulated visual or auditory likenesses, which may or may not be sexually explicit. Depending on consent and intent, a deepfake may be benign or harmful, may be restricted at the point of creation or after distribution, and may be outlawed outright or expose its maker to criminal and civil liability.

Companies take differing approaches to deepfakes. Among AI developers, Anthropic takes a safety-first stance and prohibits using its tools to generate sexually explicit content. OpenAI has banned AI-generated pornography and deepfakes, although it has explored allowing NSFW outputs in age-appropriate contexts. GitHub, a platform for developers, treats the creation and promotion of nonconsensual explicit imagery as a policy violation.

Among platforms and distributors, Apple has reinforced anti-pornography policies in its advertising and App Store rules. Google has implemented policies to curb access to and dissemination of nonconsensual synthetic content in its search results and advertising, and YouTube, as a content host, moderates explicit content and allows removal of content that features a synthetic version of a person without their consent. Microsoft maintains strict policies against creating or sharing sexually intimate images without permission. Meta, the parent company of Facebook, has faced scrutiny for failing to curb explicit synthetic forgeries, while Instagram, Snapchat, TikTok, and Twitter also have policies addressing explicit content and synthetic imagery.

The deepfake problem is hard to solve because the forgeries are difficult to detect and there is no consensus on who is responsible. Government leaders are pursuing legislation, but the complexity of deepfakes makes them difficult to combat through corporate policies alone. Tech and social media companies are balancing their responsibility to users against the push for innovation, and as generative AI tools proliferate, the lines between the companies that enable synthetic content and those that distribute it are increasingly blurred. Continued effort and collaboration will be needed to address the challenges deepfakes pose and to protect users from the harms of nonconsensual synthetic forgeries.