Newsletter

May 9, 2023

Spot the Deepfake, The White House Takes on AI, and Google’s New Model


This post was featured in the Reality Defender Newsletter. To receive news, updates, and more on deepfakes and Generative AI in your inbox, subscribe to the Reality Defender Newsletter today.

Deepfake of the Week

All three of these images were posted to Twitter this week, and users suspected each of being a deepfake. One was posted by Amnesty International, one by a historical photo account, and one is a video still from a random user.

Scroll down if you want the answer, but look at each photo and see if you can spot the deepfake without state-of-the-art deepfake detection technology.

What does Reality Defender's platform say?

They're all deepfakes. Pardon the trick question.

The first image was lifted from a now-deleted tweet by Amnesty International, which came under fire for using a deepfake to mark the second anniversary of protests in Colombia.

The second image, purportedly of rude boys in Brixton circa 1969, was posted to a historical image account, whose owner later claimed he had no idea such technology existed.

The third image is a still from a deepfake video purporting to show, in the present day, a girl who went missing in Poland over a decade ago. That claim has since been proven tragically false.

What does this mean?

Twitter, like the vast majority of social media platforms, puts the onus on users to detect deepfakes. These platforms ask users to flag, report, and/or note the potential use of generative AI in content that can do great harm and spread misinformation. Users' only tools are their own eyes, and the platforms have invested nothing in deepfake detection that would proactively remove this content before it is seen and trusted by millions. By the time these photos and videos were flagged, millions had already accepted them as genuine and moved on. The damage was done.

TikTok is the latest platform to rely on user-generated flagging for deepfakes. Reality Defender Co-Founder and CEO Ben Colman recently wrote about this misguided approach, and why a user-guided solution is not much of a solution at all.

You can read Ben's post about user-detected deepfakes on TikTok here.

AI Goes to Washington

The Biden Administration took steps this week to address the multitude of issues stemming from generative AI, starting with the announcement of a $140 million investment in addressing AI-related risks. Executives from OpenAI, Microsoft, and Alphabet also came to Washington to discuss developments in AI with President Biden. Finally, the White House announced its cooperation with a team of AI experts on "the largest red teaming exercise ever for any group of AI models" at this year's DEF CON.

Google's New AI: Coming Tomorrow

Tomorrow is Google's I/O conference, where the company will reportedly announce its new LLM, as well as the rollout of already-announced AI features across all Google products, per CNBC. We'll do a deep dive on the Google announcements and what they mean for generative content detection on Reality Defender in next week's edition.

How a Tier One Social Media Company Rooted Out Fake Users With Reality Defender

Social Media Case Study

In an attempt to root out fake and “bot” users, a tier-one social media platform partnered with Reality Defender to analyze profile images and see how far bad actors and scammers went in their attempts to defraud and deceive millions of users.

Download Case Study

Deepfake News

  • Spotify cracked down on deepfaked songs posted to its platform, all from a single source. (The Fader)
  • A deepfake generator is changing the race of Asian subjects input into its models. (Vice)
  • You can now buy deepfakes through Tencent for $145. (The Register)
  • Rep. Clarke of NY has introduced a bill requiring political ads to disclose when they use AI-generated content. (The Washington Post)
  • Grimes' voice-as-a-service is now available. (Grimes AI-1 Voiceprint)
  • An Israeli music company has deepfaked the voices of deceased artists for a new song. (Variety)
  • Katerina Cizek and shirin anlen argue in an op-ed that deepfake labeling could be a slippery slope. (Wired)
  • Sharing deepfaked pornography could soon be illegal in the U.S. (ABC News)

AI News

  • Even Snoop Dogg is weighing the risks of AI. (Ars Technica)
  • Google is using AI to build new hearing aids. (Google)
  • AI should not be feared, says noted computer scientist Jürgen Schmidhuber. (The Guardian)
  • ...but tech billionaires will benefit from AI and not humanity, says noted activist and writer Naomi Klein. (The Guardian)
  • The WGA is on strike, and one of their battles is against the use of AI in the script writing process. (Engadget)

Note: If you are a WGA member or know a member of the WGA, please get in touch with us here. We would like to talk to you about something that may be of assistance during the ongoing strike (and afterwards).

Thank you for reading the Reality Defender Newsletter. If you have any questions about Reality Defender, or if you would like to see anything in future issues, please reach out to us here.
