Several times a day around the world, a manager asks a team member to complete a task while on a video call. But is the person delegating the work really who they claim to be? Or could it be a deepfake? Instead of simply following orders, employees must now ask whether they are falling prey to deception.

Earlier this year, a finance employee joined a video meeting with someone who bore a striking resemblance to their CFO. After the meeting, they dutifully carried out their boss’s instructions and transferred 200 million Hong Kong dollars, roughly $25 million.

However, it turned out that it wasn’t actually their boss, but an AI-generated video simulation known as a deepfake. Later, after consulting with the head office of their multinational corporation, the staff member realized their grave mistake. They had fallen victim to a deepfake scam that duped the company out of $25 million.

Enterprises are frequently targeted by deepfakes

The term deepfake refers to AI-generated content (video, image, audio or text) that presents false or altered information, such as Taylor Swift endorsing cookware or the well-known fake Tom Cruise videos. Even the recent hurricanes in the U.S. spawned numerous deepfake visuals, including fabricated images of a flooded Disney World and heartbreaking AI-generated depictions of people stranded with their pets in floodwaters.

While deepfakes, also called synthetic media, aimed at individuals typically seek to manipulate people, cybercriminals targeting corporations are after financial gain. According to the CISA Contextualizing Deepfake Threats to Organizations information sheet, threats directed at businesses tend to fall into one of three categories: executive impersonation for brand manipulation, impersonation for financial gain, or impersonation for gaining access.

The recent incident in Hong Kong, however, wasn’t merely one employee’s oversight. Deepfake scams are increasingly common in the corporate world. A recent Medius survey found that a majority (53%) of financial professionals have been targeted by attempted deepfake scams. Even more alarming, over 43% admitted to ultimately falling for the attack.

Are deepfake attacks being underreported?

The key word in the Medius research is “admitted.” It raises an important question: are people failing to report deepfake attacks out of embarrassment? Quite possibly. After the fact, it becomes obvious to everyone else that the content was fake, and it’s hard to admit you were taken in by an AI-generated image. But underreporting only compounds the embarrassment and makes it easier for cybercriminals to evade capture.

Many victims disregard their reservations, doubts and questions. Why do people, even those educated about deepfakes, set their doubts aside and choose to believe an image is real? That’s the million-dollar question (or rather, the $25 million question) we must answer to prevent costly and damaging deepfake fraud in the future.

Research published in Sage Journals examined who is most prone to falling for deepfakes and found no consistent pattern based on age or gender, though older individuals may be more susceptible to these schemes and have a harder time spotting them. The researchers also observed that while awareness is a good start, it appears to do little to keep people from being fooled by deepfakes.

Computational neuroscientist Tijl Grootswagers of Western Sydney University, however, seems to have pinpointed why identifying a deepfake is so hard: it’s an entirely new skill for each of us. While we’ve learned to question news stories and bias, questioning whether the person we see on screen is real runs counter to how our minds work. Grootswagers told Science Magazine, “In our lives, we never have to think about who is a real or a fake person. It’s not a task we’ve been trained on.”

Intriguingly, Grootswagers found that our brains are good at detection even without our conscious involvement. When people viewed a deepfake image, it triggered a distinct electrical signal in the brain’s visual cortex compared to a genuine image or video. Why remains unclear: perhaps the signal never reaches our consciousness because of interference from other brain areas, or perhaps, because spotting fakes is such a new task, we simply don’t recognize the indicators that an image is counterfeit.

This suggests that we each need to train our brains to consider the possibility that any image or video we encounter might be a deepfake. By asking that question before reacting to content, we may begin to pick up on the brain signals that flag fakes before the damage is done. Most importantly, if we do fall victim to a deepfake, particularly at work, we must report every instance. Only then can experts and authorities make headway in curbing the production and spread of deepfakes.