You Didn’t Get Hacked. You Got Fooled.

Most business owners think about cyber risk in a very specific way. Did someone break into our systems? Did we get hit with ransomware? Did something go down?

That’s been the model for years. Someone gets in, something breaks, and you deal with it.

But something has changed, and it’s easy to miss. In a lot of cases now, nobody is breaking in at all. Your systems are working exactly as designed, and your people are doing exactly what they’ve been trained to do. They’re just acting on instructions that aren’t real.

How Work Actually Gets Done

I was talking with a business owner recently, and he said something that stuck with me. “If my controller gets a call from me, they’re going to act on it. That’s their job.”

He’s right, and that’s how most businesses operate. You build a team, you build trust, and you expect people to move when something comes from leadership. If everything required constant verification, nothing would get done. Speed matters, and trust is what allows that speed to exist in the first place.

That model has worked for a long time, which is why this shift is so easy to underestimate.

What Changed

The part that changed isn’t your business. It’s the environment around it.

AI can now recreate a voice from a short audio clip and generate a video that looks and sounds like someone on your leadership team. It can do it quickly, and it can do it using content that already exists online. That means recognition, something your business has always relied on, is no longer a reliable control.

Now someone on your team gets a call or a message, or joins a meeting, and everything about it feels right. The voice matches, the tone matches, and the context makes sense. There is nothing obvious that would cause them to stop and question it.

But it isn’t real.

A Real Example

Earlier this year, an employee joined what appeared to be a normal internal video call. The people on the screen looked like coworkers, and the person leading the meeting looked and sounded like the CFO. During the call, the “CFO” asked for funds to be moved as part of a confidential transaction.

Over time, about 25 million dollars was transferred.

Every person on that call was a deepfake.

That’s not a system failure. That’s a breakdown in how trust is being used inside the business.

Where This Shows Up

I’ve spent a long time working with companies in this space, and one thing hasn’t changed. Most mid-market businesses run on speed and trust. When something feels urgent and appears to come from the right person, your team responds the way you’ve trained them to respond.

They move.

That’s what makes your business effective. It’s also what creates exposure in this environment. Because now the question isn’t whether someone hacked your systems. The question is whether your team just followed instructions from someone who wasn’t real.

That’s a different kind of risk, and most organizations aren’t set up to deal with it yet.

Why This Is Different

Most companies already know their security isn’t perfect. Passwords get reused, people take shortcuts, and leadership is usually the worst offender, not the best. Not because they don’t care, but because they have more important things to focus on.

So what fills the gap is recognition. If something sounds right and looks right, people act.

That used to be good enough. It isn’t anymore.

AI didn’t break your firewall. It broke the assumption that recognition equals reality. Once that assumption goes, a lot of your everyday decision-making becomes exposed in ways that are hard to see until something goes wrong.

What Actually Helps

When companies try to respond to this, they usually go in the same direction. More tools, more software, more layers.

In my experience, that’s not where the real answer is.

The real question is much simpler. When something important is happening, can you confirm it’s actually the right person? Not just that the login worked or the email looks legitimate, but that the person behind the request is who you think it is.

That’s the gap these attacks are exploiting.

A Practical Shift I’ve Been Watching

I’ll share one thing I’ve been looking at, and I’ll be clear about this up front. I don’t have any financial interest in it. I’m not an investor, and I don’t get paid to talk about it.

What caught my attention is a different way of thinking about identity. Instead of focusing on logins and passwords, it focuses on validating the human before something important happens.

That’s what platforms like iVALT are trying to solve.

Before a high-risk action takes place (a payment, an approval, or access to something sensitive), the system verifies that it’s actually the right person. Not just that the credentials were entered correctly, but that the person behind the request matches who they’re supposed to be in that moment.

That closes the exact gap that deepfakes and impersonation are exploiting.

One example that stuck with me came from their CTO. He lives at elevation in the mountains and configured his identity rules so that if an approval ever comes in outside a very specific time and location window, it’s treated as suspicious by default.

That’s not checking a password.

That’s validating a real human pattern.

It’s a very different way to think about control.
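To make the idea concrete, here is a minimal sketch of what a rule like the CTO’s might look like in code. This is purely illustrative: the rule, the coordinates, and the function names are my own assumptions, not iVALT’s actual product or API. The point is simply that an approval is trusted only when it matches a real human pattern, and anything else is suspicious by default.

```python
from dataclasses import dataclass
from datetime import time
from math import radians, sin, cos, asin, sqrt

@dataclass
class IdentityRule:
    start: time        # earliest acceptable local time for an approval
    end: time          # latest acceptable local time
    home_lat: float    # expected latitude
    home_lon: float    # expected longitude
    radius_km: float   # how far from the expected spot is still acceptable

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def approval_is_trusted(rule, local_time, lat, lon):
    # Trust requires BOTH the expected time window and the expected place;
    # everything outside that pattern is flagged, not waved through.
    in_window = rule.start <= local_time <= rule.end
    nearby = distance_km(rule.home_lat, rule.home_lon, lat, lon) <= rule.radius_km
    return in_window and nearby

# Hypothetical example: an executive who only approves payments
# from home, in the morning. Coordinates here are made up (Denver).
rule = IdentityRule(time(7, 0), time(11, 0), 39.7392, -104.9903, 25.0)
print(approval_is_trusted(rule, time(9, 30), 39.74, -104.99))   # True: right time, right place
print(approval_is_trusted(rule, time(2, 0), 48.8566, 2.3522))   # False: wrong time, wrong place
```

A stolen password, or a convincing deepfake on a video call, fails this check automatically, because the attacker is almost never operating from the right place at the right time.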

Where I’d Start

If I were in your seat, I wouldn’t try to solve this everywhere at once. I’d start with the areas where mistakes actually matter. Wire transfers, large payments, sensitive documents, and key hires are usually the right place to begin.

From there, it comes down to a simple question. Are we trusting what looks right, or do we actually know it’s right?

If the answer is that you’re trusting it, then you’ve got exposure. Not technical exposure, but business exposure.


The One Question That Matters

You don’t need to become an expert in deepfake technology to deal with this. You just need one question that cuts through everything when your team is talking about risk.

When it really matters, are we relying on what feels familiar, or do we have a way to prove the person on the other end?

Because those are not the same thing anymore.

And if your business is still running on what feels right, that’s not a control.

P.S. We’ve spent years trying to stop people from getting in. Now we have to start asking… who are we letting in?

Mike Fitzpatrick

Founder & CEO, NCX Group

www.ncxgroup.com
