Ethical AI in fraud detection: the good, the fad and avoiding the ugly

Artificial Intelligence (AI) has been identified by the UK Government as one of five technologies critical to the UK’s ambition of becoming a science and technology superpower by 2030.

 

Global interest in the field supports this bold stance. In Spring 2023, worldwide Google searches for the term “artificial intelligence” hit an all-time high, with analytics showing that people were around four times more likely to search for the technology than they had been just a few months earlier.


This is no ‘fad’ when it comes to detecting fraud

 

Spikes of this nature are often dismissed as an indicator of ‘fad’ status - and there is undoubtedly some truth in this. AI is not a ‘solve all’. But to downplay its potential in specific sectors, and as an empowering and enabling technology, is equally misguided - not least because the technology already has proven capabilities.

 

When it comes to helping financial service providers find, predict, and prevent fraud and financial crime, for example, AI is already yielding impressive results.

 

In fact, for the last few years we’ve been using predictive analytics - a form of AI - to deliver Precision. This tool blends data matching with syndicated data intelligence, link analysis, and a combination of supervised and unsupervised machine-learning techniques. It helps establish ‘normal’ customer behaviours, and risk-scores potential new customers against them to generate insights and alerts that help ‘prevent fraud at the gate’.
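
To illustrate the general idea - and only the general idea, as Precision’s internals are not public - here is a minimal Python sketch of how an unsupervised anomaly signal and a supervised fraud classifier might be blended into a single risk score. The feature data, model choices and 50/50 weighting are hypothetical assumptions for the example.

    # Illustrative sketch only: blending an unsupervised anomaly signal with
    # a supervised fraud classifier. Not Precision's actual design.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, IsolationForest

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(5000, 20))      # stand-in application features
    y_train = rng.integers(0, 2, size=5000)    # 1 = known fraud outcome
    X_new = rng.normal(size=(10, 20))          # incoming applications

    # Unsupervised: how unlike 'normal' behaviour is each application?
    iso = IsolationForest(random_state=0).fit(X_train)
    anomaly = -iso.score_samples(X_new)        # higher = more anomalous

    # Supervised: probability of fraud learned from labelled history.
    clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    fraud_prob = clf.predict_proba(X_new)[:, 1]

    # Blend both signals into a single 1-1000 risk score.
    anomaly_norm = (anomaly - anomaly.min()) / (np.ptp(anomaly) + 1e-9)
    risk_score = np.clip((0.5 * fraud_prob + 0.5 * anomaly_norm) * 999 + 1, 1, 1000)
    print(risk_score.astype(int))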


The critical companion to AI – ethical use of data

 

AI is an incredibly powerful technology, and must therefore be used responsibly. At Synectics Solutions we process vast volumes of personal data on behalf of our customers, and as such we take this responsibility very seriously. This is data that can and should be used to de-risk decisions that promote financial inclusion and customer service excellence.

 

We are continuously certified to ISO 27001 and Cyber Essentials, and are a designated Specified Anti-Fraud Organisation under the Serious Crime Act 2007. Our in-house data science team operates within these standards and accreditations.

 

When it comes to ethics specifically, we agree with Michelle Donelan MP, Secretary of State for Science, Innovation and Technology, that the development and deployment of AI can “present ethical challenges which do not always have clear answers”, and that “a framework to ensure risks are identified and addressed” is needed - in the form of regulation that doesn’t stymie innovation.

 

Indeed, we closely follow the work of the Centre for Data Ethics and Innovation – the UK Government body tasked with developing a governance regime for data-driven technologies.

 

But we are also not prepared to wait for that framework to ensure that AI – and the data that feeds it – is used ethically in the field of financial services fraud detection and prevention. There are things we can do, and are doing, today.

 

With that in mind, here are just some of the measures we employ so that our customers can feel confident that their use of AI is for good.

 

The five pillars of ethical AI deployment – for our customers and theirs

 

1) Breadth of data: critical context comes from broader sources.

 

When it comes to AI, limiting data sources can be detrimental to outcomes. The more information a system has to learn from, the more accurate it is likely to be.

 

The source data used to train our Precision AI models comes predominantly from our SIRA product – the UK’s largest database of cross-sector customer risk intelligence, containing well over 300 million records from 160+ contributing members – combined with available data sets from each specific customer’s client base.

 

Using syndicated data as our foundation delivers a level of diversity that supports accurate modelling and avoids the unfair-bias pitfalls that can arise with models built on more restricted portfolios. This is especially useful where customer demographics may be changing - for instance, if a bank is launching a new product to a new target group.

 

There are also third-party data sources that we can integrate, depending on a customer’s specific needs, including socio-demographic data, device intelligence and other pertinent industry sources.


2) Relevance of data: pre-implementation checks.

 

We may not be responsible for the data that banks, building societies, insurers and other businesses hold on their customers. But we do have a responsibility to ensure that the data we process on behalf of our clients is used appropriately and legally, for the purpose for which it was originally collected.

 

This means that while breadth of source material is vital, every Precision implementation for a client is carried out manually by a team of engineers and data scientists - led by a dedicated ‘model owner’ with overall responsibility for the final solution - who are able to spot elements of the data, or inferences made by algorithms, that may unjustly influence the final outcome of an investigation into an application.

 

We do this by creating an initial list of features to feed into the modelling process, looking at all the available customer data and performing various preparation tasks. We then assess the usefulness of each feature in the model - what we refer to as validating enquiry-level feature importance - to check that no individual feature is unethically or incorrectly influencing the model, thereby removing bias. A minimum of around 100 features (hundreds in non-real-time use cases) is always in place to ensure that no one feature dominates.

 

Crucially, this process can also highlight features that have no effect on the overall score; where this is the case, they can be left out of the feature set. The final set of features - which we always ensure is validated by a member of our team who has not been involved in the model build - should be balanced, with a cumulative effect on the score that does not favour any individual feature, or group of features, in an extreme way.
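
As a purely illustrative sketch of what a dominance check of this kind could look like, the Python below trains a stand-in model on around 100 features and flags any feature taking an outsized share of permutation importance. The permutation-importance approach and the 10% review threshold are assumptions for the example, not a description of our exact validation method.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 100))           # ~100 features, as described above
    y = rng.integers(0, 2, size=2000)

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=5, random_state=0)

    # Share of total importance per feature; flag anything with an outsized share.
    imp = np.clip(result.importances_mean, 0, None)
    share = imp / (imp.sum() + 1e-9)
    DOMINANCE_THRESHOLD = 0.10                 # hypothetical review trigger
    for idx in np.flatnonzero(share > DOMINANCE_THRESHOLD):
        print(f"feature_{idx} contributes {share[idx]:.1%} - refer to model owner")

    # Features with no measurable effect can be left out of the final set.
    droppable = np.flatnonzero(result.importances_mean <= 0)
    print(f"{droppable.size} features have no effect on the score")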


3) Recalibration of data: to remove ‘outdated’ bias.

 

The risk of bias doesn’t disappear with these initial checks. Data submitted by applicants changes over time - salaries change, people move house, and so on. More importantly, the way certain types of fraud are carried out changes, and this is reflected in the characteristics of the data submitted.

 

We therefore carry out at least weekly checks on the performance of the machine learning models we deploy, to identify trends and anomalies that might indicate a problem with applying that model (as it stands) to the data being submitted.
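
One common way to implement a check of this kind - offered here purely as an assumed illustration, not as our exact method - is the Population Stability Index (PSI), which compares the score distribution seen at model build with the scores produced in the latest week:

    import numpy as np

    def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """Population Stability Index between two score samples."""
        edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range scores
        e = np.histogram(expected, edges)[0] / len(expected)
        a = np.histogram(actual, edges)[0] / len(actual)
        e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
        return float(np.sum((a - e) * np.log(a / e)))

    rng = np.random.default_rng(2)
    baseline = rng.normal(500, 150, size=20000)    # stand-in scores at model build
    this_week = rng.normal(540, 150, size=3000)    # scores from this week's enquiries

    drift = psi(baseline, this_week)
    if drift > 0.2:                                # conventional 'significant shift' cut-off
        print(f"PSI={drift:.3f}: investigate; consider recalibrating the model")
    else:
        print(f"PSI={drift:.3f}: score distribution looks stable")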

 

Approximately every six months, each model is recalibrated: more recent data is fed into the modelling process so that the model can re-learn the relevant characteristics. This means the same application, seen by the old model and the new, would likely return different scores - simply because the characteristics that interested the original model may no longer interest the later one, based on what it has learned about applicants’ behaviour from the more recent data.


4) Validation of data: retaining the ‘human factor’.

 

Humans are fallible, especially when they don’t have all the facts. AI counters this fallibility by enabling financial service providers to take much more into account when assessing risk. The volume of data that can be mined and used contextually to reach a decision is far greater than could ever be achieved manually.

 

That said, human checks and balances have an important role to play in keeping processes ethical and compliant. And this doesn’t simply refer to the manual verifications we run while testing and implementing models. The ‘human factor’ is also crucial to the decision-making process.

 

For instance, Precision’s predictive analytics and AI capabilities identify patterns and behaviours that indicate risk, allocating risk scores (ranging from 1 to 1,000) to the incoming data. These scores allow users to group risk profiles and apply their own knowledge to make quick, informed decisions about the ‘best next action’ - for example, whether further investigation is necessary, or what specific policy or service pricing to offer based on the risk indicated.
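
For illustration only, here is a hypothetical sketch of how a 1-to-1,000 score might be banded to support that ‘best next action’ decision. The band boundaries and action labels are invented for the example; in practice they would reflect each client’s own policies.

    def next_action(risk_score: int) -> str:
        """Map a 1-1000 risk score to a suggested action for human review."""
        if not 1 <= risk_score <= 1000:
            raise ValueError("risk score must be between 1 and 1000")
        if risk_score < 300:
            return "accept: standard terms"
        if risk_score < 700:
            return "refer: price for risk or request further evidence"
        return "investigate: hold application pending manual review"

    for score in (120, 480, 910):
        print(score, "->", next_action(score))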

 

In this sense, AI is not in any way replacing human expertise; it is enabling it. But one final pillar of ethical protection is important at this stage: transparency.


5) Transparency of data: know the reasons why.

 

Operators receiving an AI-generated risk score may still have the final say in what happens next – but how do they know that the risk score provided is accurate or deserved?

 

The answer is transparency. As well as receiving a full list of the features we have selected and validated as part of the modelling algorithm in place (as outlined earlier), we also ‘show our workings’ to explain how a specific risk-score decision has been reached.

 

By clicking on the score, users can see the top six reasons for that score in relation to a given enquiry. This provides additional rationale and allows clients to better understand the true risk of each enquiry, rather than basing this on generalised trends. These reasons are also used within our feedback and monitoring processes to highlight features that might be overpowered or biased, alongside the actual data values potentially affected. Similarly, we highlight the top six features in terms of impact on the model as a whole.
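
The sketch below illustrates the general principle of such ‘reason codes’ using a simple linear model, where each feature’s contribution to an individual score can be decomposed exactly and the six largest surfaced. Precision’s actual explanation method is not described here, and the feature names are placeholders.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    feature_names = [f"feature_{i}" for i in range(20)]   # placeholder names
    X = rng.normal(size=(5000, 20))
    y = rng.integers(0, 2, size=5000)

    model = LogisticRegression(max_iter=1000).fit(X, y)

    enquiry = X[0]
    # Each feature's contribution to this enquiry's log-odds, versus the average case.
    contrib = model.coef_[0] * (enquiry - X.mean(axis=0))

    top_six = np.argsort(np.abs(contrib))[::-1][:6]
    print("Top six reasons for this enquiry's score:")
    for idx in top_six:
        print(f"  {feature_names[idx]}: {contrib[idx]:+.3f}")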

 

Breaking down each specific calculation involved in producing a score would negate the time-saving benefits of employing AI models in the first place. But by flagging the vital factors that have played a role in reaching a specific outcome, we help demystify - and continually improve - the process.


More to come...

 

There is a lot of work still to do in terms of AI governance, and we will most certainly continue to adapt our solutions to help customers meet any regulatory frameworks that may be implemented. But for now, we continue to focus on employing robust, best-practice processes that keep the businesses we serve ahead of the curve and confident in the technology they use.
