He Looked Right at His Boss's Face. It Wasn't His Boss.

The employee at Arup's Hong Kong office did everything right. When an email arrived claiming to be from the company's CFO in London, requesting a confidential money transfer, he got suspicious. Good instinct. He didn't just wire the money.

Instead, he joined a video call to verify.

The CFO was there. So were several of his colleagues. Everyone looked right. Everyone sounded right. He watched their faces. He heard their voices. His skepticism faded. He made 15 transfers to five different bank accounts in Hong Kong.

He sent $25.6 million to people he had never actually spoken to in his life.

Every single person on that call was a deepfake.

This Was Not a Glitch. It Was a Plan.

Arup is not some startup that cut corners on security. It's an 18,500-person British engineering firm that helped design the Sydney Opera House and the Bird's Nest Olympic stadium. It has offices in 34 countries. It has a global chief information officer who knew exactly what was happening in the cybersecurity world.

And still. $25.6 million, gone.

What made this attack different from anything that came before it is how it defeated the one defense we all instinctively reach for when something feels off: our own eyes. The finance worker's first instinct was to be suspicious of the email. That was exactly the right response. So the attackers didn't try to write a more convincing email. They just changed the game entirely. They put him in a room full of faces he recognized.

Rob Greig, Arup's Chief Information Officer, described it to the World Economic Forum this way: 'What happened at Arup, I would call it technology-enhanced social engineering. It wasn't even a cyberattack in the purest sense. None of our systems were compromised and there was no data affected. People were deceived into believing they were carrying out genuine transactions.'

No systems hacked. No passwords stolen. No code exploited. Just faces. And trust.

How They Pulled It Off

The attackers didn't need anything exotic to build the deepfakes. They had access to the same thing you have access to right now: publicly available video footage. Corporate announcements, recorded conference calls, LinkedIn videos, media interviews. The CFO had a public presence. So did the colleagues. Enough video and audio to train a model that could impersonate them in real time.

The scam reportedly started months before the call, as the attackers gathered footage and built their fakes. By the time the video conference happened, they had convincing, real-time digital duplicates of multiple people inside the organization.

The employee made his first transfer during the call. Then another. Then another. Fifteen in total before it was over. He realized something was wrong only later that day, when he followed up with the firm's actual UK headquarters, which had no idea what he was talking about.

As of early 2026, the investigation is still ongoing. No one has been charged. The money has not been recovered.

The Number That Should Keep You Up at Night

The Arup case was one of the first large-scale corporate deepfake frauds to go public. It will not be the last. Analysts at Deloitte projected that AI-enabled fraud losses in the United States alone could reach $40 billion by 2027. A significant portion of that is expected to come from deepfake-related schemes targeting individuals and businesses.

The technology is not standing still. The fakes that fooled a finance professional in 2024 are less convincing than what's available today. The cost to produce them is dropping. The tools are more accessible. The gap between what a professional fraudster can create and what an amateur can create is closing fast.

Arup's CIO said it plainly in his World Economic Forum interview: 'My understanding is that this happens more frequently than a lot of people realize.'

It does. And most of it never makes the news.

The Question Nobody Is Asking Yet

Here is the thing that gets me about the Arup case. The attack didn't require compromising any systems. There was no data breach. No ransomware. Nothing that traditional cybersecurity tools are designed to detect or prevent.

What it required was a convincing copy of a human being.

We have spent decades building systems to verify that we are who we say we are online. Passwords. Two-factor authentication. Biometrics. CAPTCHA. All of it is built on the assumption that the thing being verified is the real thing. That the face is the face. That the voice is the voice.

That assumption is now in question.

So what happens when your face, your voice, and your professional presence become something that can be replicated and weaponized without your knowledge? When someone can build a working digital version of you using nothing but your public content, then use it to defraud the people who trust you most?

That's not a hypothetical. It's what happened to the CFO at Arup. He didn't do anything wrong. His identity was used as a weapon. He wasn't even in the room.

Why This Matters Beyond Corporate Finance

You might be thinking this is a big-company problem. That it takes a sophisticated criminal organization to pull off a $25 million deepfake fraud. That the average person isn't a target.

That's how these things always start. Phishing emails were once the exclusive tool of sophisticated hackers. Now your aunt gets them every week. Technology democratizes. What costs serious resources today costs almost nothing in three years.

The same trajectory is underway with deepfake identity fraud. The cases making headlines now involve corporations and celebrities because those targets generate the biggest returns. But the tools being refined on those targets will eventually be pointed at anyone with a digital presence, a professional reputation, or people in their lives who trust their face and their voice.

That's most of us.

What Arup Learned. What We All Need to Learn.

Arup's CIO didn't crawl under a rock after this happened. He went public. He talked to the World Economic Forum. He wanted the experience to serve as a warning.

One of the most interesting practical tips to come out of the incident is also one of the simplest. On a video call, if something feels off, ask the person to turn sideways. Real-time deepfakes struggle with profile angles. The rendering degrades when the face turns away from the camera. A scammer who can fool you head-on may not be able to fool you when you ask them to look left.

Other recommendations that have emerged from the security community: establish verbal code words with your team for verifying high-stakes decisions made over video. Never approve a financial transaction based solely on a video call. Call back on a number you already have, not one provided in the meeting. Add friction to the process on purpose.

But those are organizational fixes. They don't address the underlying problem, which is that your digital identity, meaning your likeness, your voice, and your professional persona, now exists in a form that others can copy and deploy.

That problem doesn't have a complete solution yet. But awareness is the first step. And the Arup case, as painful as it was, gave the world something valuable: proof that this is real, it's here, and it's happening to people and organizations that thought they were prepared.

A Final Thought

I've spent a lot of time thinking about deepfakes. I wrote a book about them. And the thing I keep coming back to is this: the technology itself is neutral. It has legitimate uses in film, in education, in accessibility tools for people who have lost their voice. The problem is not the tool. The problem is that we haven't built the guardrails yet.

We didn't build seatbelts before cars. We didn't regulate financial instruments before they crashed economies. We tend to respond to the damage after it's done. The Arup case is an early data point in what is going to be a much longer conversation about identity, trust, and what it means to be verifiably, provably you in a world where your likeness can be cloned.

That conversation is starting now. I'd rather you be part of it before you need to be.

Sources

CNN Business: Arup confirmed as victim of $25 million deepfake scam

World Economic Forum: Lessons learned from a $25M deepfake attack (Rob Greig, Arup CIO)

Fortune: A deepfake CFO tricked British design firm Arup in $25 million fraud