Industry Insights

Nov 20, 2023

AI Safety Is Not an Afterthought

Friday’s news of changing leadership at OpenAI may be the most newsworthy development in AI this year — even when factoring in actual advancements in AI.

As the story unfolds with twists and turns every hour, and as the AI world speculates wildly, there are a few things we do know:

This event (and everything after it), as well as Meta’s dissolution of its Responsible AI team (see below), more or less proves that those making generative AI tools should have nothing to do with leading the protection against them. If safety can be called into question, jettisoned from internal rosters entirely, or tossed aside in favor of unchecked model usage and company growth, then that safety was a half-hearted commitment at best from the start.

Those serious about AI safety will treat it as non-negotiable, not as an afterthought. At Reality Defender, AI safety is core to our very existence, not a hindrance or a nuisance. We ask those considering implementing and using AI models to take equal measures in protecting against them, ensuring there is always a counterbalance shielding users and society as a whole from the unfathomable dangers advanced AI could bring.

We don’t know how this story will play out in the coming hours and days. I wrote this early Monday morning, and by the time you read it, it could already be horribly dated.

My only hope is that, in the end, safety is valued above all.
