
A British widow lost half a million pounds. An LA woman lost her life savings. A Florida couple lost 45,000 dollars. The common thread in all of these cases? Someone stole a face they recognized and used it to take everything.

None of the real people involved gave permission. Most did not even know it was happening until it was too late.

And the faces being stolen are not always famous ones.

Anyone Can Become a Target

There is a version of this story that feels distant because it mostly involves celebrities. Scarlett Johansson. Elon Musk. Jason Momoa. Tom Hanks. Each of them has had to deal with AI-generated versions of themselves promoting things they never endorsed, saying things they never said, appearing in content they never approved.

Tom Hanks has publicly warned people multiple times about fake ads that use his AI-generated likeness to sell miracle cures and wonder drugs. He has no control over those ads and no easy legal recourse. All he can do is warn people.

But here is what the headlines miss: the tools that created those celebrity fakes are available to anyone. The person with a modest following, a popular online course, a regional business, a professional reputation built over years. Any of them can become a target. And the smaller you are, the less infrastructure you have to fight back. The attack surface is expanding too: even iPhones saw a 741% surge in deepfake injection attacks in 2025.

How Fast This Actually Happens

The speed is what makes deepfake fraud so disorienting. According to Cybernews, which analyzed the AI Incident Database for 2025, 179 out of 346 recorded AI incidents that year involved deepfakes, including voice, video, or image impersonation. And that is only what was recorded and reported.

Once a fake spreads, containment is nearly impossible. According to research from identity.com, AI-generated videos and audio clips often travel across platforms faster than official responses can keep up. The damage to reputation can be long-lasting even after content is proven false.

In early 2025, celebrities were targeted 47 times by AI-generated impersonations, an 81 percent increase compared to all of 2024. That number is going in one direction.

The Voice Problem Is Even Scarier Than the Face Problem

Most people think about deepfakes in terms of video: a fake face doing something. But voice cloning may actually be the more immediate threat for most people.

Research from Keepnet Labs shows that scammers now need as little as three seconds of audio to create a voice clone with an 85 percent voice match to the original speaker. Three seconds. That is genuinely terrifying when you think about how much audio exists of most professionals online. Podcast appearances. YouTube videos. Webinars. Every second of it is potentially training data.

In February 2024, a finance worker at the architecture firm Arup was tricked into wiring 25 million dollars after joining a deepfaked video conference call. He thought he was talking to real colleagues and the real CFO. He was not.

The Emotional Cost Nobody Measures

Beyond the financial losses, there is something harder to quantify happening when someone's digital identity gets hijacked. People describe it as a violation. The sense that something deeply personal and carefully built has been weaponized against them or against others in their name.

I think about what it would feel like to have years of trust-building undone overnight because someone used a version of your face to scam your audience. To have people who follow you, who trust you, become victims of something you never did. That is not just a financial problem. That is a profound personal one.

Protection Has to Be Proactive

The consistent theme across every story like this is that by the time people knew something was wrong, the damage was already done. Experts agree: preventing AI misuse requires measures that operate before unauthorized content circulates widely. Once a deepfake or voice clone spreads, the damage is often difficult to reverse.

That is the entire premise behind the InsureMyAvatar community. We are gathering the people who understand this problem before the industry has built the solution. Because the conversation your business or your personal brand needs to be having is not "what do we do if this happens." It is "what are we doing now to make sure we are not caught flat-footed when it does."