The term “deepfake” was coined in 2017 by a Reddit user of the same name, who founded and moderated a subreddit where users exchanged deepfake pornography they had created using photos of celebrities and open-source face-swapping technology. Although the unsavory forum has since been deleted, the word deepfake has persisted as the label for a type of AI-generated media.
While the origin of the word is clear, the history of what we consider “deepfakes” is more complicated.
The Origin of Deepfakes
The concept of deepfakes (or deepfaking) can be traced back to efforts beginning in the 1990s, when researchers used CGI in attempts to create realistic images of humans. The technology gained traction in the 2010s, when the availability of large datasets, advances in machine learning, and more powerful computing resources drove major progress in the field.
A true point of no return for deepfakes came in 2014, when Ian Goodfellow and his team unveiled a breakthrough in deep learning: the generative adversarial network (GAN). In a GAN, two neural networks are trained against each other, with a generator producing synthetic samples and a discriminator trying to tell them apart from real data, until the generator's output becomes difficult to distinguish from the real thing. GANs would eventually enable the next generation of highly sophisticated image, video, and audio deepfakes.
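The adversarial dynamic behind GANs can be illustrated with a deliberately tiny sketch, far simpler than any real deepfake model. Below, the "generator" is a single parameter theta producing fake scalars, the "discriminator" is a logistic classifier, and both are updated with hand-derived gradients; all names and values here are illustrative, not from any actual GAN implementation.

```python
import math
import random

def sigmoid(u):
    # Numerically stable logistic function.
    if u >= 0:
        return 1.0 / (1.0 + math.exp(-u))
    e = math.exp(u)
    return e / (1.0 + e)

def train_toy_gan(steps=4000, batch=64, lr=0.02, seed=0):
    """Toy 1-D GAN: generator learns to mimic samples from N(4, 1)."""
    rng = random.Random(seed)
    real_mean = 4.0          # the "real data" distribution: N(4, 1)
    theta = 0.0              # generator parameter: fake sample = theta + noise
    w, b = 0.0, 0.0          # discriminator: D(x) = sigmoid(w*x + b)
    for _ in range(steps):
        reals = [rng.gauss(real_mean, 1.0) for _ in range(batch)]
        fakes = [theta + rng.gauss(0.0, 1.0) for _ in range(batch)]
        # Discriminator ascent on log D(real) + log(1 - D(fake)).
        gw = gb = 0.0
        for x in reals:
            s = sigmoid(w * x + b)
            gw += (1.0 - s) * x
            gb += (1.0 - s)
        for x in fakes:
            s = sigmoid(w * x + b)
            gw -= s * x
            gb -= s
        w += lr * gw / (2 * batch)
        b += lr * gb / (2 * batch)
        # Generator ascent on log D(fake): push fakes toward
        # whatever the discriminator currently labels "real".
        gt = 0.0
        for x in fakes:
            s = sigmoid(w * x + b)
            gt += (1.0 - s) * w
        theta += lr * gt / batch
    return theta

final_theta = train_toy_gan()
```

After training, theta drifts toward the real distribution's mean of 4.0, mirroring in miniature how a GAN generator is pulled toward the real data distribution by the discriminator's feedback.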
Continued Development
No history of deepfakes would be complete without acknowledging the contributions of the average internet user. Open-source deepfake creation tools have been tested and refined by legions of hobbyists, who have used them for benign entertainment (memes, swapping out actors’ faces in classic movies) as well as more sinister, appalling goals, like the creation of deepfake pornography. It was the participation and driving interest of everyday users, beginning in 2017, that brought the technology to where it is now. This continued democratization of such powerful tools demonstrates the dire need for adequate countermeasures to detect and isolate deepfakes before they can be used for malicious purposes.
In 2018, experts began to express concern over the rapid evolution of deepfake technology and the implications of its rise and availability. Later that year, major tech platforms began to roll out policies meant to moderate the use of deepfakes on their platforms. (This was also the year that Reality Defender’s original non-profit entity began, later evolving into the deepfake detection company you see today.)
In 2019, several countries, including the United States, began to explore legislative measures to regulate the creation and distribution of deepfakes. Such developments continue to this day, albeit with mixed results.
The period of uncertainty and delayed response continues as the technology evolves. Companies, public institutions, and news and media platforms can stay ahead of the curve by integrating deepfake detection technology as an essential part of their existing security measures. Deepfakes will only increase in complexity and scope, as will their use in fraud attempts and disinformation campaigns. This is why Reality Defender has existed since 2018: to detect deepfakes, stop disinformation, and turn the clock back on the harms deepfakes have caused for years.