Industry Insights

Feb 8, 2024

How the 23andMe Hack Mirrors the Deepfake Problem


The massive data breach at the DNA genetic testing company 23andMe led to the theft of ancestry data belonging to nearly seven million users, or roughly half of the company’s customer base. Though reports suggest genetic data was not among the stolen information, the incident has raised alarm over the possibility of cybercriminals stealing DNA information from ancestry-testing companies en masse.

The theft and sale of genetic data are among the most shocking violations of individual privacy imaginable. Such data could be used for discrimination, surveillance, misuse in law enforcement, fraud, unauthorized genetic manipulation, targeted marketing, and countless other forms of unethical monetization. An individual’s genetic sequence is arguably the most fundamentally sensitive piece of personal data, and its pilfering for nefarious purposes is nearly unthinkable. It is also something that cannot meaningfully be changed, forever remaining wholly unique to each individual.

DNA isn’t the only identifying feature specific to each individual. Biometric and likeness data use the distinctive characteristics of a person’s face, voice, and other traits to create one-of-one digital fingerprints. Accelerated by the COVID-19 pandemic, contactless biometrics are projected to become an $82 billion industry by 2030. Authentication, access, and payment across the world’s commonplace industries, including banking, travel and ID verification, and e-commerce, are increasingly driven by rapid advances in facial and voice recognition.

The proliferation of biometrics used to verify identity also creates opportunities for exploitation by cybercriminals leveraging the growing capabilities of generative AI and deepfakes. With the mass of sensitive biometric data voluntarily shared by users of social media and other platforms, and with the growth of biometric databases that will become targets for future hacking and theft, malicious actors can hijack photos, videos, and audio recordings of individuals and use them to create convincing deepfakes that mimic each user’s unique biometric signature. Attackers can then deploy these deepfake likenesses to fool facial recognition tools. By cloning an individual’s voice, fraudsters can similarly bypass voice-based verification, enabling account breaches, data and asset seizures, and identity theft, to name just a few of the resulting types of fraud.

This hijacking of biometric data is not only a serious violation of individual privacy; the practical implications of how such data can be abused grow even more dire as generative AI’s abilities advance. As with DNA, this data is unique to each individual and all but impossible to change, leaving victims forever vulnerable after falling prey to such attacks. To put it simply: once it’s out there, it’s out there for good.

How Deepfake Detection Can Help

To protect security infrastructures from such attacks, and to disincentivize cybercriminals and hackers from stealing biometric data in the first place, companies and organizations must include deepfake detection in their workflows. Reality Defender’s multi-modal and multi-model approach to deepfake detection provides essential real-time scanning tools that can be easily integrated into any platform that already uses biometrics but remains vulnerable to misused or faked biometric data. Our best-in-class detection API analyzes images, audio, and video concurrently and at scale, all while providing detailed reporting and actionable results.
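For teams evaluating how such a scan might slot into an existing biometric workflow, the sketch below shows one possible integration pattern in Python. It is purely illustrative: the endpoint URL, request fields, response format, and score threshold are assumptions made for this example, not Reality Defender’s actual API, which has its own documentation, credentials, and response schema.

```python
# Hypothetical sketch: gating a biometric login on a deepfake scan.
# The endpoint, field names, and threshold below are illustrative
# placeholders, not a real vendor API.
import requests

DETECTION_API_URL = "https://api.example.com/v1/scan"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential


def is_likely_authentic(media_path: str, media_type: str) -> bool:
    """Send a captured selfie or voice sample for deepfake analysis
    before it reaches the biometric matcher. Returns True only if the
    scan reports the media as likely authentic."""
    with open(media_path, "rb") as media:
        response = requests.post(
            DETECTION_API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": media},
            data={"type": media_type},  # e.g. "image", "audio", "video"
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()
    # Assume the service returns a manipulation score in [0, 1];
    # the 0.5 cutoff here is an arbitrary illustration.
    return result.get("manipulation_score", 1.0) < 0.5


# Example: only run face matching if the capture passes the deepfake scan.
if is_likely_authentic("login_selfie.jpg", "image"):
    print("Proceed to biometric matching")
else:
    print("Flag session for manual review")
```

The key design point is ordering: the deepfake scan runs before (or alongside) the biometric matcher, so suspected synthetic media is flagged or rejected before it can be used to pass an authentication check.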

Our detection capabilities employ the latest generative AI models, as well as theoretical models that anticipate future techniques creators of malicious deepfakes could adopt down the line. By marrying present and future technologies, Reality Defender offers clients continuous security built by the world’s top machine learning experts, and assists our clients in their efforts to protect customers, data, and assets from the threat of biometric breaches. With cutting-edge deepfake detection tools, there is no need to lose trust in the biometric security technologies that are essential in every facet of modern life. With deepfake detection, some things can still remain private.
