Industry Insights

Mar 12, 2024

What the Royal Family Photo Says About the Status of Trust in Media


Over the weekend, while the Reality Defender team and I were in Austin for South by Southwest, a new photo of Catherine, Princess of Wales appeared on Instagram. As the first photo of Catherine in quite some time (months after she stepped away from public life for a surgery), this seemingly innocent image of the princess and her children, all smiling for the camera, prompted plenty of discussion both on and offline.

Within seconds of being posted, rumormongers and online commenters pounced on the photo, with many claiming it to be a deepfake. After circulating in publications, the photo was retracted due to "manipulations," adding further fuel to claims that it had been deepfaked.

As the head of a platform that does not delve into why specific media is manipulated (only whether it is likely real), as well as an American who could not be more removed from Royal gossip, I'm not in a position to examine the intent of those claiming the photo was deepfaked. (Based on our team's own analysis, it is a real photo that was clearly in-painted, or "Photoshopped," and rather poorly.)

Yet the photo says everything about the current state of trust, on and offline.

A World of Questioning Everything

As more people become aware of deepfakes, but seemingly not of their current capabilities (or lack thereof), it's only natural that the validity of a photo of a heavily scrutinized public figure would be questioned by even the most media-literate among us. Because moderation and flagging of deepfaked and/or manipulated materials more or less does not exist at a tangible level on major platforms, there's no source to flock to for a quick and certain answer. (That is, unless you or your organization have implemented robust deepfake detection.)

We received significantly more inquiries than normal about this photo within the first hour of its appearance, including from experts who were themselves wholly uncertain. As a bad Photoshop job, it is nothing new; countless laughable airbrushings and edits existed for decades before we became a company. Yet with the advent of deepfakes and convincingly real generative AI, everyone, experts included, is now infinitely more distrusting of the media they consume and its origin.

This is precisely the world we do not want to keep heading towards. We built Reality Defender because we want people to be able to trust what they see, and not have to question whether something is real (or, in this case, real with a bit of a touch-up) or wholly fabricated. We decidedly do not offer consumer solutions because we do not want AI-generated manipulations to be yet another worry people must constantly consider. Instead, we work with the largest governments, institutions, enterprises, and, yes, some platforms to provide deepfake detection at the highest points of entry, with the greatest potential coverage for the greatest number of users and consumers.

If content platforms are not required to act, or continue to largely avoid detecting deepfakes and generative AI, we will keep living in this world, where even a famous family photo is immediately called into question as wholly generated and fake. Progress is being made, ever so slightly, by both legislators and these platforms, but we have a long way to go; hopefully incidents like this one will move the needle toward requiring and implementing deepfake detection. Not doing so would move us toward a world where everyone overanalyzes everything they see, everywhere and always: the total absence of trust.

We trust in ourselves and our peers to make it so we never get there.
