In its new report on managing AI risks in the financial sector, the U.S. Treasury Department issued a dire warning about the cybersecurity of banks and other institutions facing the onslaught of deepfake-related fraud. The aptly titled report, “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector,” highlights the finance industry’s vulnerability to social engineering, particularly the use of deepfakes and LLMs for targeted phishing, business email compromise, and synthetic identity fraud. The Department notes how nefarious actors create convincing voice, video, image, and document deepfakes to impersonate customers or management, bypassing security systems and know-your-customer (KYC) measures to hijack data, open fake accounts, and steal money through fraudulent transactions.
Based on this report alone, it is easy to conclude that financial institutions have run out of time to prepare for the rapid evolution of AI-fueled cybercrime.
At Reality Defender, we continue to study the new ways fraudsters use voice cloning and other deepfake methods to defeat banks’ security measures. Last year, in the U.S. alone, voice deepfakes and other synthetic identity attacks accounted for 33% of fraud events reported by businesses. Such schemes may cost U.S. taxpayers as much as $1 trillion in 2024, with enterprises facing similar attacks projected to lose billions more over the same period. Fraudsters exploit the human inability to reliably detect deepfakes, using social engineering to flood the call centers of the world’s largest banks with synthesized speech and trick operators into granting access to data or initiating transactions. Some of our most valued clients have been contending with these deepfake assaults on their infrastructure for years.
An Unreasonable Ask
While studying the risks of its newest Voice Engine technology, OpenAI recently advised financial institutions to phase out “voice based authentication as a security measure for accessing bank accounts and other sensitive information,” citing the technology’s deep potential for misuse by cybercriminals. Unfortunately, given that companies have spent billions of dollars on voice authentication tools as part of their basic customer verification pipelines, this advice is neither realistic nor likely to be implemented. It also does nothing to address the issue of deepfake phone calls.
One of our clients, a multinational tier-one bank, provides financial services to millions of customers across tens of billions of transactions per day. The bank’s agents routinely encounter thousands of fraudulent claims over the phone, and that number is rising. Given the importance of human-to-human interaction in building customer relationships, service phone calls are unlikely to be phased out.
In this reality, we continue to work with this client, and many others, to implement practical deepfake detection solutions that leverage AI to catch voice clones in real time, allowing our clients to flag deepfake phone calls before they do real damage. These deployments have succeeded because our turnkey API is platform-agnostic by design: it slots into any established workflow, regardless of the particular systems our clients use.
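To illustrate what such an integration might look like in a call-center workflow, here is a minimal sketch in Python. The endpoint URL, field names, and flagging threshold below are illustrative assumptions for the sake of example, not Reality Defender’s actual API surface.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint and credentials -- illustrative only,
# not Reality Defender's actual API.
DETECTION_URL = "https://api.example.com/v1/voice/analyze"
API_KEY = "YOUR_API_KEY"

# Assumed score above which a call is flagged as a likely voice clone.
FLAG_THRESHOLD = 0.8


def analyze_call_audio(audio_chunk: bytes) -> float:
    """Send a chunk of live call audio to the detection service and
    return an assumed deepfake-likelihood score between 0.0 and 1.0."""
    response = requests.post(
        DETECTION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"audio": ("chunk.wav", audio_chunk, "audio/wav")},
        timeout=5,  # keep latency low enough for a live call
    )
    response.raise_for_status()
    return response.json()["score"]  # hypothetical response field


def handle_live_call(audio_chunk: bytes) -> None:
    """Alert the agent if the score crosses the flagging threshold."""
    score = analyze_call_audio(audio_chunk)
    if score >= FLAG_THRESHOLD:
        print(f"ALERT: possible voice clone (score={score:.2f}); escalate.")
    else:
        print(f"Audio appears genuine (score={score:.2f}).")
```

The design choice the sketch reflects is the one described above: because the detection step is an ordinary HTTP call on raw audio, it can sit alongside any telephony or IVR stack without requiring changes to the systems a bank already runs.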
The dangers of deepfake fraud need not translate into massive losses for the economy, customers, and companies, nor do they require the discontinuation of expensive security technologies. The burden on companies to protect their infrastructure from AI-fueled attacks is heavy and growing, but the good news is that we can leverage the same evolution of AI to stop such attacks, preparing our financial system for the formidable challenges of the new century.