Platform
All You Need To Know About Snapchat’s New Generative AI Watermarking
Snapchat has unveiled new transparency measures for AI-generated content: the popular messaging app will soon watermark images created or edited with its generative AI tools, such as Dreams and AI Snaps.
According to the company, a small ghost logo will appear beside Snapchat’s widely recognized “sparkle” icon when users export or save an AI-generated image to their camera roll. The watermark tells viewers that the image was produced with Snapchat’s AI tools rather than capturing reality.
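Snapchat has not published how its export pipeline applies the mark, but the general technique of stamping a small logo onto an image at save time can be sketched with simple per-pixel alpha blending. Everything below (the `stamp_watermark` function, the toy image data) is illustrative, not Snapchat's implementation:

```python
# Illustrative sketch only: Snapchat's actual watermarking pipeline is not
# public. This shows the general idea of blending a small semi-transparent
# logo into a corner of an image at export time.

def stamp_watermark(image, logo, margin=2):
    """Blend an RGBA `logo` into the bottom-right corner of an RGB `image`.

    `image` is a list of rows of (r, g, b) tuples; `logo` is a list of
    rows of (r, g, b, a) tuples with alpha in the range 0..255.
    """
    img_h, img_w = len(image), len(image[0])
    logo_h, logo_w = len(logo), len(logo[0])
    # Top-left corner where the logo will be placed.
    y0 = img_h - logo_h - margin
    x0 = img_w - logo_w - margin
    out = [row[:] for row in image]  # copy so the input stays untouched
    for dy in range(logo_h):
        for dx in range(logo_w):
            r, g, b, a = logo[dy][dx]
            br, bg, bb = out[y0 + dy][x0 + dx]
            alpha = a / 255
            # Standard "over" alpha compositing, per channel.
            out[y0 + dy][x0 + dx] = (
                round(r * alpha + br * (1 - alpha)),
                round(g * alpha + bg * (1 - alpha)),
                round(b * alpha + bb * (1 - alpha)),
            )
    return out

# Usage: stamp a 2x2 opaque white "logo" onto an 8x8 black image.
black = [[(0, 0, 0)] * 8 for _ in range(8)]
white_logo = [[(255, 255, 255, 255)] * 2 for _ in range(2)]
marked = stamp_watermark(black, white_logo)
```

In a real pipeline the same idea would run on full-resolution pixel buffers (e.g. via an imaging library) rather than nested lists, but the compositing math is identical.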
“We believe Snapchatters should be informed about the types of technologies they’re using, whether they’re creating fun visuals or learning through text-based conversations with My AI,” Snap states.
The company says it has taken a thoughtful approach to deploying generative AI responsibly and transparently. It already uses contextual icons and labels to denote when a Snapchat feature utilizes AI under the hood. Additionally, the app vets all political ads through rigorous human review to detect misuse of AI for spreading misinformation.
Beyond watermarking, Snapchat employs multiple safeguards around its generative AI products:
- Red-Teaming: Snapchat has partnered with security firm HackerOne to rigorously test its AI models and features, spending over 2,500 hours identifying and resolving potential safety risks.
- Safety Filtering: All prompts for AI Lenses undergo a review process to detect and remove problematic language before launching to users.
- Inclusive Testing: To minimize biased outputs, Snapchat tests its AI features across diverse demographic groups and works to ensure equitable access.
While committed to generative AI’s potential for enriching self-expression and learning, Snapchat acknowledges that the technology is evolving quickly and can still get things wrong. “Mistakes may still occur. Snapchatters are able to report content, and we appreciate this feedback,” the company states.
Snapchat also provides guidelines encouraging creative, responsible use of its generative AI tools in line with its terms of service and community guidelines. This includes not sharing private information and not assuming AI outputs depict truth or reality.