Industry Insights

Jul 15, 2024

Defense Against the Onslaught of Social Media Deepfakes


In a year of crucial elections, global conflicts, and surging AI fraud, social media platforms carry the heavy responsibility of moderating endless streams of content while responding to unpredictable advances in AI technology.

In this climate, major platforms have agreed to a voluntary accord for responding to AI-generated deepfakes, committing to attempt detection of AI content created or distributed via their platforms. While the accord is not binding, we are encouraged to see these companies share their approaches and commit to a united response in the face of the deepfake onslaught.

Labeling Social Media Deepfakes

So far, most of these methods are focused on labeling. Meta has recently expanded its AI flagging policy to keep up with the evolution of generative AI tools and to ensure that all AI-manipulated material on its platforms carries a cautionary label informing users when they're interacting with synthetic content. These labels are determined both by Meta's own detection systems (the details of which have not been disclosed) and by users' voluntary disclosures. (Most recently, such an approach very publicly backfired.)

TikTok continues to ban deepfakes of private persons from its platform, and asks users to clearly label all AI-manipulated uploads with a “sticker or caption, such as ‘synthetic’, ‘fake’, ‘not real’, or ‘altered’.” Yet despite the company's swift takedowns of flagged deepfakes, this content (including the trend of “resurrecting” deceased individuals, even children) continues to spread.

Detecting Social Media Deepfakes

Reality Defender’s platforms and detection models focus on subtle inconsistencies within the content itself that are indicative of manipulation or synthetic generation: visual artifacts, unnatural voice patterns in audio recordings, and distortions in facial expressions. Our detection tools examine only the suspicious media in question, capturing irregularities indicative of AI-fueled deception and eliminating reliance on labels that can be manipulated.
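To make the idea of content-level artifact detection concrete, here is a deliberately simplified sketch, not Reality Defender's actual method. One well-known class of visual artifacts comes from generative upsampling, which can leave excess energy in the high-frequency band of an image's spectrum. The function below (a hypothetical toy heuristic using only NumPy) measures that ratio; real detectors combine many far stronger signals.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray) -> float:
    """Toy heuristic: fraction of spectral energy outside the low-frequency band.

    Illustrative only -- some generative pipelines leave periodic upsampling
    artifacts that appear as excess high-frequency energy. Production deepfake
    detectors rely on far richer, learned features.
    """
    # Shift the 2D FFT so low frequencies sit at the center of the spectrum.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    # Treat the central quarter of the shifted spectrum as "low frequency".
    low = spectrum[cy - h // 4 : cy + h // 4, cx - w // 4 : cx + w // 4]
    total = spectrum.sum()
    return float((total - low.sum()) / total)

# A smooth natural-looking gradient concentrates energy at low frequencies,
# while pure noise spreads energy evenly, yielding a much higher ratio.
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = np.random.default_rng(0).standard_normal((64, 64))
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy))
```

A score like this would only ever be one weak feature among many; the point is that the signal comes from the pixels themselves, not from any label attached to the upload.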

As platforms band together to find the best way forward, we will continue to work closely with our partners to incorporate our award-winning turnkey detection models into moderation workflows built to flag AI-manipulated content at the highest levels of creation and dissemination. The burden of detection should never fall on individual users scrutinizing materials on their own, or on labels that are susceptible to tampering. Companies need larger, at-scale solutions that catch deepfakes before they can reach social media users at all.
