Deepfake Drama: The small screen plot that poses a big financial risk

What have the popular UK soap opera ‘Coronation Street’ and the financial services sector got in common? Until recently, not much. That was until the former introduced a plot line based on a rather disturbing trend.

A key character was left mortified when a video of ‘her’ appeared online, seemingly participating in rather compromising activity. Only it wasn’t her. It was a deepfake: a video created using artificial intelligence (AI) to simulate a person’s likeness so accurately that it appears real.

Sadly, deepfakes are not confined to the realm of TV and movie fiction. They have become financial fraud ‘fact’.

Faking it with frequency


As Nina Schick, author of ‘Deepfakes and the Infocalypse’, explains so well, modern fakery has landed us in a world of real trouble.


With increasing frequency, AI and machine learning tools are applied to datasets of images, audio and video to depict individuals saying or doing things they haven’t. In a financial services setting, more specifically, that means requesting or authorising things they haven’t.

In 2020, for instance, a voice replication deepfake – used as part of a wider scam – was so convincing that it led to a bank manager in Hong Kong releasing approximately $35 million to fraudsters, believing that he was dealing with a business director he’d spoken with many times before.

A clearly concerning example. Even more worrying, however, is the fact that the problem is not restricted to ‘grand scams’ such as this.

Indeed, with digital authentication becoming mainstream practice for financial services (in line with heightened customer expectations for seamless online service provision), the bigger issue is more mundane in nature. It’s the use of deepfake tech as part of common, everyday transactions.

Supporting ‘everyday’ scams


Deepfakes can ‘raise the dead’ – lending credibility to scammers who use the details of deceased individuals to make insurance claims or access accounts, a practice more commonly referred to as ‘ghost fraud’.


They can be used to convince bank employees checking against passport ID that they are dealing with the right person and not, in fact, an ID thief looking to open an account to support criminal activity or to run up debt with no intention of paying it back.

They can also be used to supercharge what has already become a major issue for the industry – synthetic ID fraud. This is when fraudsters combine fake, real and/or stolen ID elements to create a person who doesn’t actually exist, usually in order to take out credit cards or carry out transactions that build a credit score, making access to further financial services more likely.

Synthetic ID fraud is already one of the toughest financial crimes to detect, costing online lenders in the region of $6 billion a year. Adding seemingly credible imagery to that mix via deepfake technology is only going to make things more challenging. Given that the internet is awash with free tutorials on how to achieve this, it’s little wonder that the industry is concerned.

Fraud prevention – there’s safety in numbers


So, the big question. What can banks, insurers and finance providers do to protect themselves against the threat of deepfake fraud?


The answer is certainly not to do nothing: to avoid the risk of cybercrime by putting digital transformation plans on hold and waiting for the perfect answer to emerge. In an age of ‘digital-first’ customers, any institution adopting this stance will be placing itself at an immediate competitive disadvantage.

A far more productive approach is to employ a multi-layered defence strategy.

Why? Because the enemy of the deepfake is data. Lots of it. The types of fraud mentioned earlier are much more likely to prevail when verification routes are siloed and stunted, i.e. when only a small number of corroborating sources are sought and cross-referenced to verify that an ID is real and has existed over time.

By contrast, even the best deepfake will quickly be unpicked when a wider range of authoritative data sources are consulted. It’s simply too costly and difficult for fraudsters to apply stolen ID or synthetic profiles across a variety of reference points from numerous sources.

Indeed, the financial benefits achieved by deepfakes decrease significantly when greater levels of time, effort and sophistication are required to perpetrate the crime. Use of multiple authoritative data sources to verify ID is therefore likely to also serve as a deterrent.
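To make the ‘safety in numbers’ idea a little more concrete, here is a minimal sketch in Python of what corroborating an identity claim across several independent data sources might look like. The fields, source names, check functions and matching threshold are purely illustrative assumptions, not a description of any particular product or dataset.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class IdentityClaim:
    """The details an applicant presents (illustrative fields only)."""
    full_name: str
    date_of_birth: str
    address: str
    document_number: str

# Hypothetical verification sources. In practice each would be a call to an
# authoritative dataset (credit file, electoral roll, device intelligence, etc.).
SourceCheck = Callable[[IdentityClaim], bool]

def corroborate(claim: IdentityClaim,
                sources: Dict[str, SourceCheck],
                min_matches: int = 3) -> bool:
    """Return True only if enough independent sources corroborate the claim.

    A deepfake may spoof a single selfie or document check, but it is far
    harder to conjure a consistent footprint across many unrelated datasets.
    """
    matches = [name for name, check in sources.items() if check(claim)]
    return len(matches) >= min_matches
```

The exact number of sources and the matching rules would, of course, depend on the institution’s own risk appetite; the point is simply that each additional, independent reference point raises the cost of the fraud.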

Fight fire with fire


It’s also important to remember that fraudsters aren’t the only ones with access to AI. There’s a strong case here for fighting fire with fire. We are, in effect, in an arms race – AI for good vs AI for bad.

For instance, many banks are already introducing AI-based software to detect deepfake videos and images as they make biometric verification part of onboarding and ongoing service access.
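To illustrate where such detection might sit in an onboarding journey, the sketch below assumes a pre-trained scoring function is available that returns the probability a single video frame is synthetic. The function name, thresholds and routing rules are illustrative assumptions rather than any specific vendor’s tooling.

```python
from statistics import mean
from typing import Callable, Sequence

# Assumed interface: a detector that returns the probability (0..1) that a
# single video frame is synthetic. Any in-house or third-party model could sit here.
FrameScorer = Callable[[bytes], float]

def screen_onboarding_video(frames: Sequence[bytes],
                            score_frame: FrameScorer,
                            reject_threshold: float = 0.7) -> str:
    """Score sampled frames from a liveness/selfie video and route the case.

    Rather than a hard yes/no, borderline scores are referred for manual
    review so genuine customers are not blocked by a single noisy frame.
    """
    scores = [score_frame(frame) for frame in frames]
    avg = mean(scores)
    if avg >= reject_threshold:
        return "reject: likely synthetic media"
    if avg >= reject_threshold * 0.6:
        return "refer: manual review"
    return "pass: proceed with onboarding checks"
```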

Sticking with the idea of ‘safety in numbers’, organisations can also leverage AI and predictive analytics solutions – solutions trained on ‘normal’ customer activity and syndicated counter fraud data – to detect potentially suspicious account activity and to support ongoing due diligence checks.
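By way of illustration only, the sketch below uses scikit-learn’s IsolationForest, trained on features of historical ‘normal’ account activity, to flag a transaction that looks out of character. The feature set, the simulated data and the contamination setting are assumptions made for the example, not a recipe for any particular solution.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per transaction: amount, hour of day, account age in
# days, and number of new payees added in the last 30 days.
rng = np.random.default_rng(0)
normal_activity = np.column_stack([
    rng.gamma(2.0, 50.0, 5000),     # typical amounts
    rng.integers(8, 22, 5000),      # daytime activity
    rng.integers(200, 4000, 5000),  # established accounts
    rng.poisson(0.2, 5000),         # few new payees
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# A large transfer from a very young account with many newly added payees.
suspect = np.array([[4800.0, 3, 12, 6]])
label = model.predict(suspect)  # -1 = anomalous, 1 = normal
print("flag for review" if label[0] == -1 else "looks normal")
```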

Talk to the experts


There’s no right or wrong solution, particularly given that there is still much to learn and understand about the threats deepfakes pose. It will be interesting to see where global legislation on the use of deepfake technology goes.


What we can do is leave the drama to the soap operas, and instead focus on the practical tools available in the here and now that can help build the type of multi-layered defence needed to guard against this threat.

And that’s something we can help with. At Synectics Solutions, we might not be experts in deepfake tech, but we are experts in making sure our customers can leverage data-driven insight to identify and tackle emerging threats. Talk to us today about our range of solutions for fraud detection and ID verification.
