Newsletter

Jun 12, 2023

Welcome to the Deepfake Election

This post was featured in the Reality Defender Newsletter. To receive news, updates, and more on deepfakes and Generative AI in your inbox, subscribe to the Reality Defender Newsletter today.

Deepfake of the Week

The 2024 election is over a year away, yet the use of deepfakes and AI-generated content in political attack ads is only increasing. 

Last week, Republican primary candidate Ron DeSantis’ social media team released an attack ad targeting former President Donald Trump, featuring AI-generated images of Trump embracing former NIAID Director and Chief Medical Advisor to the President, Dr. Anthony Fauci.

What Did Reality Defender’s Platform Say?

What Did the Reality Defender Team Say?

Reality Defender Co-Founder and CEO Ben Colman spoke with Newsweek about the DeSantis deepfake and the rise of deepfakes in political attack ads in June alone.

As we inch closer to the primaries and then to the general election, the use of deepfakes in these ads will only increase, creating more confusion and spreading misinformation to the millions who view them. Social media platforms that host this content are not equipped with proactive deepfake detection to catch it before it’s viewed, nor do they have any concrete policies on the distribution of this content to their viewers.

The only way to stop this before it grows to become an even greater problem is with bipartisan legislative action mandating proactive deepfake detection and concrete anti-disinformation policies.

Read More

Meta Goes AI

Meta aims to unleash AI-powered chatbots and text-to-content creation tools across its platforms. This would give one of the largest user bases on the internet (second only to Google’s) free access to AI content-generation tools and the ability to share what they create.

The company will continue to focus on its social media and metaverse products as well. But with no proactive deepfake or generative content detection used for moderation on Meta platforms, time will tell whether these new tools are used for their intended purposes (creating new stickers, modifying family photos) or become just another avenue for spreading disinformation.

OpenAI Floats U.S. Collaboration with China

OpenAI CEO Sam Altman, during a conference in Beijing, emphasized the need for American and Chinese researchers to collaborate to mitigate the risks associated with AI, despite escalating competition between the two nations. While China (once seen as the leader in the field) continues to make significant strides in AI, it still relies on U.S. innovation and lags behind in breakthroughs. Altman indicated that OpenAI could open-source more models in the future to foster research, while maintaining a balance to prevent misuse of the technology.

The FBI Issues PSA on Deepfake Pornography

The FBI issued a public service announcement to raise awareness about the escalating problem of deepfakes used to create explicit content for “sextortion” and harassment purposes. You can read the PSA in full here, along with the FBI’s recommendations on how to combat, protect against, and monitor for this fast-growing form of deepfake-led abuse. 

Deepfake News:

  • A deepfake of Putin declaring martial law also made the rounds this week, confusing Russian citizens after it reportedly aired on television and radio networks in the country. (MSNBC)
  • Teen Vogue looks at how victims of deepfake pornography are advocating for federal legislation to protect them, as current laws lag behind. (Teen Vogue)
  • Deezer, the streaming audio platform that once built an AI-powered method of splitting songs into their respective “stems” (vocal, guitar, bass, and drum files), is now detecting and deleting AI-generated songs from its platform. (The Line of Best Fit)
  • Adobe is so confident in its Firefly AI (which is not trained on copyrighted materials) that it will legally compensate businesses for damages incurred if they are sued for copyright infringement over a Firefly-created image. (Fast Company)

AI Legislation News:

  • EU tech chief Margrethe Vestager expects a draft code of conduct on artificial intelligence to be drawn up within weeks, providing industry guidelines for safeguarding AI use while new laws are developed. (Reuters)
  • The British Labour Party is considering a proposal to restrict the development of AI technologies in the UK to licensed developers. (The Guardian)
  • China is addressing AI head-on by setting issue-specific regulations. Some are in line with proposed regulations in Western nations, while others require that AI-generated results “reflect the core values of socialism.” (MIT Technology Review)

More AI News:

  • Charlie Brooker, creator and writer of the Netflix series Black Mirror, revealed he had ChatGPT write an episode of the show and found it to be “[expletive].” (Variety)
  • Radio host Mark Walters filed a defamation lawsuit against OpenAI, alleging that ChatGPT falsely implicated him in a legal case it was asked to summarize, harming his reputation. (The Fader)
  • CNET is rethinking how it uses AI and updating past stories written using LLMs. (The Verge)
  • Marc Andreessen of Andreessen Horowitz has a new article out about how “AI will save the world.” (a16z)
  • Wired has an in-depth analysis of Andreessen’s article and what it gets wrong about the recent spate of developments in AI. (Wired)
  • WordPress, which powers around 40% of all websites, now has AI capabilities that will help users write blog posts. (Engadget)
  • Cohere, the LLM provider used by Jasper, Spotify, and other enterprises, raised a $270 million Series C round. (VentureBeat)
  • As the Writers Guild of America continues to strike and speak out against the potential use of AI in the writers’ room, HBO head Casey Bloys sees no place for AI in the creative process. (Variety)
  • A research paper (yet to be peer reviewed) takes a look at the environmental toll of the AI boom. (The Guardian)
  • The New Yorker has a pretty hilarious parody of the deluge of recent letters warning against the dangers of unchecked AI growth. (The New Yorker)
  • Correction: Last week’s newsletter erroneously indicated that a Texas judge banned ChatGPT from his courtroom. The judge will allow ChatGPT to be used in the courtroom, but only if its output is first reviewed by a human.

Thank you for reading the Reality Defender Newsletter. If you have any questions about Reality Defender, or if you would like to see anything in future issues, please reach out to us here.
