Advertising

Google’s New Guidelines for AI Apps on Google Play: Preventing Inappropriate Content and Protecting User Safety

Google is taking action to address the issue of inappropriate and prohibited content in AI apps distributed through Google Play. The company has released new guidelines for developers to ensure that their apps do not generate restricted content, such as sexual content and violence. Developers are also required to include a feature that allows users to flag offensive content they come across.

One particular area that Google is cracking down on is the marketing materials used by these apps. If an app’s advertising suggests that it is capable of undressing people or creating nonconsensual nude images, the app may be banned from Google Play, regardless of whether it can actually perform these actions or not.

This move by Google comes in response to a rise in AI undressing apps that have been promoted on social media platforms. These apps claim to use AI to generate deepfake nudes, and some even use popular figures like Kim Kardashian to market their services. Although Apple and Google have already removed these apps from their stores, the problem remains widespread.

The consequences of these apps go beyond just inappropriate content. Schools across the U.S. are reporting cases of students using AI deepfake nudes for bullying and harassment. In one extreme case, a racist AI deepfake of a school principal led to an arrest in Baltimore. The issue has even reached middle schools, further highlighting the urgent need for action.

Google’s new guidelines aim to prevent AI-generated content that can be harmful or inappropriate from appearing on Google Play. The company emphasizes the importance of its existing AI-Generated Content Policy, which outlines the requirements for app approval. AI apps must not generate any restricted content and must provide users with a way to flag offensive material. Developers are also responsible for monitoring and prioritizing user feedback on inappropriate content.

Furthermore, developers are prohibited from advertising that their app breaks any of Google Play’s rules. If an app promotes an inappropriate use case, it risks being removed from the app store.

To ensure the safety and integrity of their apps, developers are advised to protect against prompts that could manipulate their AI features to create offensive content. Google provides a closed testing feature that allows developers to share early versions of their apps with users for feedback. It is strongly recommended that developers not only test their apps before launch but also document these tests, as Google may request to review them in the future.

In addition to the guidelines, Google is offering resources and best practices such as the “People + AI Guidebook” to support developers in building AI apps. These initiatives demonstrate Google’s commitment to addressing the issue of inappropriate and harmful content in AI apps and promoting a safer and more responsible AI ecosystem.