As AI-fueled attacks on governments, businesses, individuals, and digital platforms continue to escalate, it has become clear that deepfakes pose a long-term threat — both material, in the form of fraud and disinformation, and existential, endangering our very sense of shared reality. The increasing saturation of online spaces with AI-generated content hasn’t gone unnoticed: 81% of Americans fear that misinformation from deepfakes and voice clones is negatively affecting the integrity of their elections, and 70% of industry leaders believe that AI-generated fraud attacks will significantly impact their organizations.
In the span of just a few years, deepfakes have gone from an obscure term, associated with a reprehensible Reddit forum distributing deepfake adult materials, to being acknowledged by the public and private sectors as one of the biggest emerging digital threats of the current era. The more generative AI permeates our culture — “generative AI” saw a spike of almost 700% in Google searches from 2022 to 2023, and we have no reason to believe the number is any smaller this year — the more aware users will become of its nefarious side, and of the tools that can create convincing deepfakes, whether for malicious, harmless, or even helpful ends.
In short, while the true scope and impact of deepfakes on our world is yet to be seen, the evidence at hand suggests the phenomenon will continue to grow at an alarming pace. We can be certain of one thing: deepfakes are becoming more convincing and sophisticated by the day. With each update, the generative AI tools that create synthetic content become more adept, gobbling up ever more data to learn how to produce the most effective forgeries. There is no doubt in my mind that a day will come when these tools will be capable of creating deepfakes that trick 100% of the humans who encounter them, 100% of the time, no matter our expertise or preparation.
Deepfakes Are Made to Trick the Human Senses
Current exposure to deepfakes is widespread. A new study found that 80% of respondents have encountered deepfake images, 64% have seen deepfake videos, and 48% have heard deepfake audio. Social media platforms were the primary source of these encounters, highlighting the pervasive nature of deepfakes in digital spaces. The study also found that 71% of respondents feel negatively about deepfakes, associating them primarily with fraudulent activities and disinformation.
At the same time, another study revealed that only 9% of people over the age of 16 are confident in their ability to identify a deepfake. While children aged 8-15 were more confident at 20%, that number is still quite small, and we have yet to see a study that convincingly shows the ability to recognize deepfakes varies across age groups. For comparison, the Trend study found that 1 in 5 respondents (or 20%) believe they “just know” when they see a deepfake. Such confidence is particularly worrying when studies have overwhelmingly found our ability to spot deepfakes is extremely limited, and it underscores Reality Defender’s approach to deepfakes: the burden of identifying them should never fall on the individual user.
Detection, Labeling, and Moderation Keep Deepfakes at Bay
In the end, awareness of what deepfakes are and how they function should be part of everyone’s media literacy, but the proliferation of sophisticated AI-generated content will require more than educational efforts. A multi-pronged approach is needed to ensure that deepfakes don’t swallow up our sense of reality and make us suspect every single piece of media or text we encounter online. A combination of provenance methods, which embed watermarks and metadata in AI-generated content to mark its synthetic origin, and inference methods, which analyze the media itself for signs of manipulation, will allow us to identify deepfakes reliably and in real time.
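To make the provenance side concrete, here is a minimal sketch in Python. The `ai_generated` metadata key is purely illustrative and not part of any real standard; production provenance schemes such as C2PA attach cryptographically signed manifests rather than a simple text flag.

```python
# Minimal sketch of a provenance check. The "ai_generated" key is
# illustrative only; real standards (e.g., C2PA) embed signed
# manifests that must be cryptographically verified.
from PIL import Image


def has_synthetic_label(path: str) -> bool:
    """Return True if the image metadata carries a synthetic-content marker."""
    with Image.open(path) as img:
        # img.info exposes format-specific metadata (e.g., PNG text chunks).
        return img.info.get("ai_generated") == "true"
```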
These inference methods include Reality Defender’s cutting-edge deepfake detection models, which don’t rely on an objective source of truth for comparison but seek markers of AI manipulation directly within the media itself. In concert with effective detection, the adoption of clear labels marking deepfake content on online platforms, strong content moderation and removal policies, and laws that deter the malicious use of deepfakes will ensure we can mitigate this threat and focus on the more positive, promising aspects of the AI revolution.
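As a sketch of how these layers might fit together, the snippet below shows the shape of the decision flow, assuming a hypothetical `InferenceDetector` placeholder and reusing the illustrative metadata check from above. Reality Defender’s actual models and APIs are not public, and nothing here represents them.

```python
# Illustrative decision flow layering provenance over inference.
# `InferenceDetector` is a hypothetical placeholder, not a real API,
# and the "ai_generated" key repeats the illustrative check above.
from dataclasses import dataclass

from PIL import Image


def has_synthetic_label(path: str) -> bool:
    with Image.open(path) as img:
        return img.info.get("ai_generated") == "true"


@dataclass
class InferenceDetector:
    """Stand-in for a model that scores media for manipulation artifacts."""
    threshold: float = 0.5

    def score(self, path: str) -> float:
        return 0.0  # a real model would return a learned probability


def moderation_label(path: str, detector: InferenceDetector) -> str:
    # Layer 1: honor explicit provenance labels when present.
    if has_synthetic_label(path):
        return "AI-generated (labeled at source)"
    # Layer 2: fall back to inference when no label exists.
    if detector.score(path) >= detector.threshold:
        return "likely deepfake (flag for review)"
    return "no manipulation detected"
```

The ordering is the point of the sketch: provenance labels are cheap and authoritative when present, while inference serves as the catch-all for the unlabeled media that makes up most of what circulates online.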