
Snapchat Introduces Watermarking Technology to Combat AI Misinformation and Deepfakes

Snapchat has joined other major tech companies in using watermarking technology to combat AI misinformation and deepfakes. The company will now stamp a translucent watermark featuring Snapchat’s ghost logo on AI-generated images created with its AI tools, such as the extend tool and the Dreams feature. The watermark appears once an image is exported or downloaded from the app, and users who receive AI-generated images may see the ghost logo alongside the app’s “sparkle” AI icon.
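Snapchat has not published implementation details, but the basic mechanics of stamping a translucent logo on an exported image are easy to illustrate. The sketch below uses Python’s Pillow library; the file paths, the 15% sizing, and the corner placement are illustrative assumptions, not Snapchat’s actual pipeline.

```python
from PIL import Image

def stamp_watermark(image_path, logo_path, output_path, opacity=0.5, margin=16):
    """Overlay a translucent logo in the bottom-right corner of an image."""
    base = Image.open(image_path).convert("RGBA")
    logo = Image.open(logo_path).convert("RGBA")

    # Scale the logo to roughly 15% of the base image's width.
    target_w = max(1, int(base.width * 0.15))
    scale = target_w / logo.width
    logo = logo.resize((target_w, max(1, int(logo.height * scale))))

    # Reduce the logo's alpha channel so it reads as a translucent mark.
    alpha = logo.getchannel("A").point(lambda a: int(a * opacity))
    logo.putalpha(alpha)

    # Composite the logo into the bottom-right corner of the base image.
    position = (base.width - logo.width - margin, base.height - logo.height - margin)
    base.alpha_composite(logo, dest=position)
    base.convert("RGB").save(output_path, "JPEG")

# Example (hypothetical filenames):
# stamp_watermark("dream.png", "ghost_logo.png", "dream_export.jpg")
```

A production pipeline would also need to handle video as well as still images, and to pick an opacity and placement that remain legible after the compression applied on export.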

Snapchat already marks AI-generated content in several other ways. Images created with the Dreams feature are accompanied by a “context card” that explains the feature and generative AI, and conversations with Snapchat’s My AI chatbot and the extend tool use “contextual” icons such as the sparkle symbol. The platform also takes extra precautions to vet political ads and other content for misleading use of AI-generated material.

Snapchat’s watermarks serve as a transparency tool, telling viewers that an image was made with AI on the platform. Other tech giants have implemented similar measures: OpenAI announced in February that it would add metadata watermarks to images generated by its DALL-E 3 system, Google introduced SynthID, a tool that adds invisible watermarks to AI images, in August, and YouTube has taken steps to penalize users who fail to use its labelling system for digitally altered content. AI watchdogs, however, caution that watermarking technology is not a foolproof solution.
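Metadata-based approaches like OpenAI’s attach provenance information inside the file rather than drawing on the pixels. As a rough, simplified stand-in (real C2PA manifests are cryptographically signed and far richer), the Pillow sketch below writes and reads a provenance note as PNG text metadata; the key names and generator string are made up for illustration.

```python
from PIL import Image, PngImagePlugin

def tag_provenance(src_path, dst_path, generator="example-image-model"):
    """Embed a simple provenance note as PNG text chunks.

    A toy stand-in for signed provenance metadata; the keys used here
    ("ai-generated", "generator") are illustrative, not a standard.
    """
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai-generated", "true")
    meta.add_text("generator", generator)
    img.save(dst_path, "PNG", pnginfo=meta)

def read_provenance(path):
    """Return whatever text metadata the PNG carries (empty if stripped)."""
    return dict(Image.open(path).text)

# Example (hypothetical filenames):
# tag_provenance("generated.png", "generated_tagged.png")
# print(read_provenance("generated_tagged.png"))
```

Because this approach does not touch the pixels it leaves the image visually unchanged, whereas pixel-level schemes such as SynthID embed the signal in the image data itself.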

These watermarking technologies increase transparency, but they have clear limits: visible marks can be cropped out, and embedded metadata can be stripped when an image is screenshotted or re-encoded. Deepfakes and AI misinformation therefore remain an ongoing challenge, and Snapchat acknowledges this by committing to continued AI literacy efforts. The company has a generative AI FAQ available on its Support Site and encourages Snapchatters to report any content that may be incorrect, harmful, or misleading.

In conclusion, Snapchat’s adoption of watermarking technology is part of a broader industry trend aimed at addressing AI misinformation and deepfakes. While the use of watermarks is a step in the right direction, it is clear that more comprehensive measures are needed to combat the challenges posed by AI-generated content. Continued efforts in AI literacy and user reporting will play crucial roles in ensuring the platform remains a safe space for users.
