The next phase of synthetic identity fraud revealed: tactics and trajectory
Our latest National SIRA research shows synthetic identity fraud has entered a new phase of machine-driven growth that will challenge even the most sophisticated fraud controls.
Not just higher-quality fake identities, but systems that can create, submit, test and recycle entirely fabricated IDs at a pace no human network could sustain.
- The Evolution: How digital-first journeys and autonomous AI turned synthetic identity fraud into a machine-scale threat
- The Anatomy: How synthetic personas are built and matured through public-register abuse, data reuse, and AI generation
- Industrialised Fraud: A look inside the “synthetic factory” and how automation is used not just to create identities, but to hit organisations with mass fraudulent application waves
- The New Signal: Exclusive Synectics intelligence reveals strong links between synthetic IDs and repeat, cross-finance offending
- Shifting Control: Why closing the gap requires lifecycle context, real-time visibility, and shared intelligence across the entire financial ecosystem
From digital onboarding to machine-scale identity fraud
There are moments in fraud where something fundamentally shifts. Not because criminals invent something from nothing, but because they adapt to the world we build - and find the weak points with alarming effectiveness.
Synthetic identity fraud is one of those shifts, best understood as a symptom of a deeper change.
Digital-first, app-only customer journeys widened the gap between claiming an identity and proving one. Fraudsters moved quickly to exploit that space, using synthetic profiles to pass verification where speed is not always supported by adequate checks.
Then the AI boom poured fuel on the issue.
- It made identity fabrication easier, cheaper and more convincing - from shallowfake documents to deepfake-assisted biometrics.
- As a result, what is emerging now looks less like traditional “synthetics” and more like fully “artificial identities”, with a compounding growth trajectory.
- But the growth figures - as high as 152% year-on-year in some financial products - are only one part of the story.
How 2026’s artificial identities are built and made credible
At inception, most synthetic IDs are too thin to obtain high-risk financial products - yet that is where the real criminal gains lie, and always have.
Historically, criminals bridged this gap with blunt, risky methods. One was the so-called “Day of the Jackal” approach: hijacking the identity of a deceased child. Mercifully, these cases are rare. They’re simply too complex to execute, too expensive to maintain, and too costly when the identity is eventually burned.
Another established tactic - adopted from recreational sectors - is “ramp-on” activity: using lower-scrutiny products to build the necessary financial footprint before moving into higher-value targets, like credit cards.
This kind of staged progression is possible because UK identity verification remains heavily reliant on triangulating partial data sources - a model long recognised as effective for inclusion, but vulnerable when manipulated at scale*.
Inside the synthetic identity production model
At Synectics, we’re seeing synthetic identities become a routine part of tactic-stacking across sectors. Several creation methods are now on particularly worrying trajectories, and we believe will place substantial pressure on financial services in the coming year. These include:
- Public-register hijacking (such as an MO we’ve flagged where data is lifted from Companies House)
- The reuse of legitimate personal data from historic breaches
- And the rapid emergence of fully AI-generated identities
The latter is moving fastest. Evidence increasingly points to AI-enabled synthetic “factories” that generate, test and refine false personas at machine speed, often without direct human involvement.
These synthetic or artificial IDs then appear in ramp-on attempts. We’re currently tracking an MO in which synthetic IDs are used in home insurance applications via aggregator sites to build history before moving into higher-value products.
If these layered synthetic identities succeed, and begin to look legitimate, financial services faces a world in which the ability to reliably know your customer starts to break down. Trust fractures, and fraud exposure multiplies.
Why automated, repeatable schemes are the new identity fraud norm
A significant part of that success is AI-enabled distribution: bots and agents that can submit, test and recycle applications at a pace no human could ever match.
To move this fast and at such volume, it’s our belief that a worrying portion of the synthetic identity fraud landscape is highly operationalised.
It’s also clear that - armed with personas that have few or no links to a real person - organised gangs and hyper-sophisticated opportunists are probing controls, adapting to plausibility bands, and quickly trying again if they fail.
Clean synthetic identities, capable of passing checks, can be and are being created. With that comes an opportunity to commit fraud that our data shows criminals rarely pass up: repeated attempts, across multiple organisations, using variants of the same underlying identity.
Recent analysis of cross-organisational fraud patterns shows that:
- In 2025, 35% of individuals linked to fraud in National SIRA appear more than once, up from 29% in 2024.
- Repeat offenders are also hitting 50% more organisations than they were five years ago.
- Across financial institutions, 50% of false-ID reports link to individuals who have submitted at least one other false identity.
Taken together, these figures point to a form of repeat offending that would be extremely difficult to sustain using traditional impersonation alone. Identities are being reused, adapted and redeployed at a speed and scale that implies automation.
Set against the rapid rise of AI-generated personas and industrialised identity creation, the connection is hard to ignore, and shows a threat far beyond “one-off” application fraud.
The only effective response is context – in every direction.
Fraud strategy needs to evolve in the face of synthetic identities. Onboarding checks remain critical, but they are no longer sufficient on their own. In practice, that means three things.
- Stronger context at the point of application. Organisations need visibility beyond their own data, including the ability to screen against populations of known and previously burned synthetic identities still circulating in the market.
- Context across the customer lifecycle. Dynamic identity verification should track how an identity behaves over time, where else it appears, and whether other organisations are seeing similar signals.
- Real-time visibility inside the customer book. Confirmed or suspected synthetic identities cannot be left to surface months later. They need to be flagged the moment risk materialises, through continuous monitoring and alerting tools.
Synthetics thrive in the gaps between products and providers. Shared intelligence and cross-sector collaboration are the only way to close them.
* Consult Hyperion, “Data: The Key to Inclusive Digital Identity” (2022)