TikTok Just Flipped the Script On Deepfakes, Starts Auto-Labeling AI Content
TikTok announced it is beginning to automatically label AI-generated content (AIGC) when it is uploaded from certain other platforms. The social video app has partnered with the Coalition for Content Provenance and Authenticity (C2PA) to implement its Content Credentials technology, making TikTok the first video-sharing platform to adopt this AIGC labeling solution.
Content Credentials attaches metadata to digital content, allowing platforms like TikTok to recognize and label AIGC instantly. This capability is rolling out initially for images and videos, with audio-only content support coming soon.
Over the coming months, TikTok states it will also start attaching Content Credentials to content created on its own platform. This persistent metadata allows anyone to verify if a piece of content is AI-generated using C2PA’s tools, including details on when, where, and how it was made or edited.
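The labeling flow described above boils down to a simple check: if an upload carries a Content Credentials manifest whose provenance assertions mark it as AI-generated, apply a label. The sketch below models that decision in Python. This is not TikTok's implementation or the real C2PA SDK; the manifest structure and key names (`assertions`, `digital_source_type`) are illustrative assumptions, loosely echoing the IPTC source-type vocabulary that C2PA draws on.

```python
from typing import Optional

def should_label_as_aigc(manifest: Optional[dict]) -> bool:
    """Return True when a (hypothetical) C2PA-style manifest marks content as AIGC.

    A real C2PA manifest is cryptographically signed metadata embedded in the
    file; here it is modeled as a plain dict for illustration.
    """
    if manifest is None:
        return False  # no Content Credentials attached: nothing to read
    # Manifests carry assertions about how the content was made or edited.
    # "trainedAlgorithmicMedia" loosely mirrors the IPTC term for AI-generated
    # media; treat the exact field names here as assumptions, not the spec.
    assertions = manifest.get("assertions", [])
    return any(
        a.get("digital_source_type") == "trainedAlgorithmicMedia"
        for a in assertions
    )

# Example: an AI-generated upload vs. an ordinary camera capture.
ai_upload = {"assertions": [{"digital_source_type": "trainedAlgorithmicMedia"}]}
camera_photo = {"assertions": [{"digital_source_type": "digitalCapture"}]}

print(should_label_as_aigc(ai_upload))     # True  -> gets an "AI-generated" label
print(should_label_as_aigc(camera_photo))  # False
print(should_label_as_aigc(None))          # False -> no metadata, no auto-label
```

The key property, and the reason the metadata is described as persistent, is that the decision needs nothing beyond the file itself: any platform or C2PA tool reading the same manifest reaches the same conclusion.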
To drive broader industry adoption of Content Credentials, TikTok has joined the Adobe-led Content Authenticity Initiative (CAI). “With TikTok’s vast community globally, we are thrilled to welcome them to C2PA and CAI as they provide more transparency and authenticity on the platform,” Dana Rao, General Counsel and Chief Trust Officer at Adobe, said in the announcement.
TikTok recognizes that while labeling supports responsible AIGC usage, labels alone may confuse viewers without proper context. To address this, the platform has joined forces with MediaWise, a Poynter Institute program. Together, they will release 12 videos throughout 2024 that teach users media literacy skills and explain how AIGC labels add context to content.
MediaWise Director Alex Mahadevan comments in the announcement, “Five years after launching our Teen Fact-Checking Network on TikTok, we’re thrilled to empower even more people to separate fact from fiction online.”
TikTok’s policies prohibit harmfully misleading AIGC. The platform claims it continues investing in detection models, expert consultations, and multi-stakeholder partnerships to combat deceptive AI use.