YouTube’s Policy Change Allows Users to Request Removal of AI-Generated Content

The Rise of AI-Generated Content and YouTube’s Response

As the use of artificial intelligence (AI) in generating content becomes more prevalent, companies like Meta and YouTube are grappling with the implications for their platforms. In June, YouTube quietly implemented a policy change that allows individuals to request the removal of AI-generated or synthetic content that simulates their face or voice. This move is an expansion of YouTube’s responsible AI agenda, which was first introduced in November.

Rather than requiring such content to be flagged as misleading, as is typical with deepfakes, YouTube wants affected individuals to request its removal directly as a privacy violation. According to YouTube’s updated Help documentation, takedown requests must be first-party claims, with a few exceptions, such as when the person depicted is a minor, lacks access to a computer, or is deceased.

However, submitting a takedown request does not guarantee that the content will come down. YouTube makes its own judgment based on several factors: whether the content is disclosed as synthetic or made with AI, whether it uniquely identifies a person, and whether it could be considered parody, satire, or otherwise of public interest. The platform also weighs whether the AI content depicts a public figure engaging in sensitive behavior or endorsing a product or political candidate, a particular concern during an election year.

YouTube gives the content uploader 48 hours to respond to the complaint. If the content is removed within that window, the complaint is closed; otherwise, YouTube initiates a review. Notably, removal means taking the video off the site entirely, and it may also involve removing the individual’s name and personal information from the video’s title, description, and tags. Uploaders can blur faces in their videos, but simply making a video private does not satisfy a removal request, since it can be set back to public at any time.

YouTube did not widely publicize this policy change, but it did introduce a tool in Creator Studio that allows creators to disclose when realistic-looking content is generated using altered or synthetic media, including generative AI. The platform also began testing a feature that allows users to add crowdsourced notes providing additional context on videos, such as whether they are parodies or if they are misleading in some way.

YouTube itself is not against the use of AI and has experimented with generative AI of its own. However, the company has emphasized that labeling content as AI-generated does not automatically shield it from removal; it must still comply with YouTube’s Community Guidelines.

When it comes to privacy complaints regarding AI material, YouTube does not immediately penalize the original content creator. A company representative clarified on the YouTube Community site that privacy violations are separate from Community Guidelines strikes, and receiving a privacy complaint does not automatically result in a strike.

In summary, the rise of AI-generated content has prompted YouTube to let individuals request the removal of synthetic content that simulates their face or voice. The platform requires first-party claims for most takedowns and weighs several factors before making a judgment. It also gives uploaders a window to respond to complaints and makes clear that simply hiding a video does not satisfy a privacy request. YouTube remains supportive of AI but stresses that AI-generated content must still adhere to its Community Guidelines.