Our latest National SIRA research shows synthetic identity fraud has entered a new phase of machine-driven growth that will challenge even the most sophisticated fraud controls.
Not just higher-quality fake identities, but systems that can create, submit, test and recycle entirely fabricated IDs at a pace no human network could sustain.
There are moments in fraud where something fundamentally shifts. Not because criminals invent something from nothing, but because they adapt to the world we build – and find the weak points with alarming effectiveness.
Synthetic identity fraud is one of those shifts, best understood as a symptom of a deeper change.
Digital-first, app-only customer journeys widened the gap between claiming an identity and proving one. Fraudsters moved quickly to exploit that space, using synthetic profiles to pass verification in journeys where speed is not always matched by adequate checks.
Then the AI boom poured fuel on the issue.
At inception, most synthetic IDs are too thin to obtain high-risk financial products. That’s where the real criminal gains lie and always have done.
Historically, criminals bridged this gap with blunt, risky methods. One was the so-called “Day of the Jackal” approach: hijacking the identity of a deceased child. Mercifully, these cases are rare. They’re simply too complex to execute, expensive to maintain, and costly when the identity is eventually burned.
Another established tactic – adopted from recreational sectors – is “ramp-on” activity: using lower-scrutiny products to build the necessary financial footprint before moving into higher-value targets, like credit cards.
This kind of staged progression is possible because UK identity verification remains heavily reliant on triangulating partial data sources – a model long recognised as effective for inclusion, but vulnerable when manipulated at scale*.
At Synectics, we’re seeing synthetic identities become a routine part of tactic-stacking across sectors. Several creation methods are now on particularly worrying trajectories, and we believe will place substantial pressure on financial services in the coming year. These include:
Of these, AI-enabled creation is moving fastest. Evidence increasingly points to synthetic “factories” that generate, test and refine false personas at machine speed, often without direct human involvement.
These synthetic or artificial IDs then appear in ramp-on attempts. We’re currently tracking an MO in which synthetic IDs are used in home insurance applications via aggregator sites to build history before moving into higher-value products.
If these layered synthetic identities succeed and begin to look legitimate, financial services faces a world in which the ability to reliably know your customer starts to break down. Trust fractures, and fraud exposure multiplies.
A significant part of that success is AI-enabled distribution: bots and agents that can submit, test and recycle applications at a pace no human could ever match.
To move this fast and at such volume, it’s our belief that a worrying portion of the synthetic identity fraud landscape is highly operationalised.
It’s also clear that – armed with personas that have few or no links to a real person – organised gangs and hyper-sophisticated opportunists are probing controls, adapting to plausibility bands, and quickly trying again if they fail.
Clean synthetic identities, capable of passing checks, can be and are being created. With that comes an opportunity to commit fraud that our data shows criminals rarely pass up: repeated attempts, across multiple organisations, using variants of the same underlying identity.
Recent analysis of cross-organisational fraud patterns shows that:
Taken together, these figures point to a form of repeat offending that would be extremely difficult to sustain using traditional impersonation alone. Identities are being reused, adapted and redeployed at a speed and scale that implies automation.
Set against the rapid rise of AI-generated personas and industrialised identity creation, the connection is hard to ignore, and shows a threat far beyond “one-off” application fraud.
Fraud strategy needs to evolve in the face of synthetic identities. Onboarding checks remain critical, but they are no longer sufficient on their own. In practice, that means three things.
Synthetics thrive in the gaps between products and providers. Shared intelligence and cross-sector collaboration are the only ways to close them.
* Consult Hyperion, “Data: The Key to Inclusive Digital Identity” (2022)