The proliferation of deepfake technology poses a severe threat to online truth and trust. As deepfakes grow more sophisticated, manual human detection will become essentially impossible without advanced scanning software that analyzes content at the point of creation and distribution. Yet even as this relatively new technology spreads across the internet, stories continue to surface of everyday users and experts alike falling for manipulated content.
The question remains: how rare are these instances of humans falling for deepfakes and AI-generated content? And can it really happen to anyone?
Can Humans Really Be Tricked by Deepfakes?
Unfortunately, the answer is a straightforward yes. According to a study published in Scientific Reports in August 2023, as many as 50% of respondents were unable to distinguish a deepfake video from real footage. Off-the-shelf software available to everyday users can produce sophisticated deepfakes that are distributed online instantly, before their veracity can be confirmed or contested. One reason deepfakes spread so quickly beyond the reach of their creators, bots, and trolls is that regular users believe in their authenticity and pass them along.
A troubling phenomenon documented in a study published in iScience, and replicated in similar research, is that consumers are overconfident in their ability to spot deepfakes: participants were tricked by the synthetic videos they were shown yet remained confident they could tell the real from the fake. This overconfidence disappeared only when participants were offered financial incentives for correct answers.
Another unsettling phenomenon detected by these studies is detection bias. The concept of a “liar's dividend” (Chesney and Citron, 2019) proposes that as people become more skeptical of media, they may begin to doubt authentic content, too. The iScience study found that when participants were told deepfake videos were certain to appear among the videos they were about to watch, they were far more likely to label an authentic video as fake.
Generative AI will only grow more sophisticated and more successful at creating synthetic media indistinguishable from the authentic. At Reality Defender, we believe that awareness of deepfakes has become a crucial part of every person’s media literacy. But as numerous instances of successful deepfake fraud and political disinformation have already shown, it is unreasonable to expect humans to reliably recognize manipulated content in the wild. Our ability to distinguish truth from lies will depend on state-of-the-art deepfake detection tools deployed at the highest levels of content creation, distribution, and moderation.