Google Play’s New Guidelines Crack Down on Generative AI Apps Producing Graphic and Offensive Content

Google has recently implemented stricter guidelines for generative AI apps on its Google Play store. The guidelines aim to curb the spread of apps that offer offensive and potentially harmful content, such as deepfake “undressing” apps and tools that create graphic material. The updated policy requires developers to build in safeguards against prohibited content, including content that exploits or abuses children, deceives users, or enables dishonest behavior.

Under the new guidelines, developers of generative AI apps must also provide in-app flagging and reporting mechanisms so users can report inappropriate content, and they are required to rigorously test their AI models. The rules apply to apps that produce AI-generated content from text, voice, or image prompts, including chatbots, image generators, and audio-spoofing apps.

These guidelines do not apply to apps that merely host AI-generated content or that offer AI productivity tools such as summarization features; the focus is specifically on apps that generate potentially offensive or harmful content.

Google’s decision to implement these guidelines comes after the company demoted AI-generated pornography in its search rankings and banned advertising for websites that create or promote deepfake pornography. That action was taken in response to the increasing prevalence of nonconsensual deepfake pornography, particularly involving celebrities. Websites featuring nonconsensual deepfakes frequently drew complaints from victims seeking takedowns under the Digital Millennium Copyright Act (DMCA).

The issue of deepfakes and the nonconsensual use of people’s likenesses has been a growing concern within the AI industry. OpenAI and Google DeepMind employees recently penned an open letter highlighting the risks of manipulation and misinformation associated with AI advancements. This letter emphasized the need for regulation in order to mitigate these risks.

Google’s app store regulations align with a White House AI directive issued to tech companies, urging them to take stronger actions against deepfakes. Specifically, Google was called upon to address apps that contribute to image-based sexual abuse. This move by Google demonstrates its commitment to curbing the spread of deepfakes and protecting users from potential harm.

Individuals who have been targeted by deepfake abuse can take steps to seek support and protection. Knowing what resources are available and reporting any instances of nonconsensual deepfake content are important first steps.