How confident are you in your machine-learning decisions? (Part 3)

The use of predictive modelling and machine learning in fraud prevention is nothing new. Most financial services organisations and insurance providers use machine-learning technologies every second of every day to automate the decision to accept, refer or decline new customers.

Machine learning enables an organisation to make decisions quickly, without manual intervention, whilst protecting it from evolving threats. Ironically, these models pose a threat of their own: they are inherently susceptible to model degradation, which can lead to rising false positives and false negatives if they are not regularly recalibrated or used intelligently alongside other technologies. We cover this risk, and how to mitigate it, in this post.


The challenge of “explainability”

A recent report by the Economist Intelligence Unit (EIU) highlighted that banks see AI as a top priority for technology investment, and that executives believe the correct application of AI will separate winning banks from losing ones. The stakes are clearly high. The same report identified data bias, “black box” risk and a lack of human oversight as the main governance challenges for banks using AI.

The theme of governance in the use of AI is also becoming more widely discussed amongst regulators and looks set to remain a hot topic. The ability to explain how and why a particular decision was made is critical.


Digital transformation must be ethical and inclusive

The global COVID-19 pandemic has accelerated digital transformation across most sectors. Organisations realise that they must provide an exceptional digital-first service in order to attract and retain customers.

This pace of change is not without challenges. As organisations race to complete their digital transformation initiatives, compromises are sometimes made. It is important to ensure that these compromises do not undermine the ethics of the decisions being made. Organisations must remain confident in the accuracy of their automated decisions and be able to justify them to regulators.


Open, honest and transparent

Justifying to regulators why a person is accepted, referred or declined for a financial product such as a mortgage can be incredibly difficult, especially if the logic and reasoning are locked away behind a “black box” view of risk.

A lack of transparency isn’t the only risk. Bias and model degradation can also undermine fair and accurate decision-making. Bias is known to creep into almost any machine-learning model and can lead to discrimination against certain individuals and against marginalised or vulnerable groups.

Model degradation affects all machine-learning models, and its impact grows as time passes. It occurs when the factors and characteristics the original model was trained on drift away from current reality, because behaviours or approaches have shifted.

The result can be damaging to an organisation. No longer can you be confident in the automated decisions you are making. A minority of genuine customers will be denied access to financial services products, whilst fraudsters’ efforts to exploit fraud defences will be more consistently rewarded.

It will always be absolutely necessary to appraise the data and recalibrate your predictive models at regular intervals. This will ensure that you make fair and accurate decisions based on machine learning, and that you can confidently explain them to a regulator if required.
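To make the recalibration point concrete, the sketch below shows one common way to monitor for drift: comparing the distribution of model scores (or of an input feature) at training time against what is seen in production, using the Population Stability Index. The data, thresholds and function names are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of drift monitoring with the Population Stability Index (PSI).
# The data, thresholds and function names are illustrative assumptions only.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two samples of the same variable; a larger PSI means more drift."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)

    # Bin edges come from the distribution the model was originally trained on,
    # widened slightly so production values outside that range are still counted.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Guard against empty bins before taking logarithms.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Placeholder data: scores at model-build time vs. scores seen this month.
training_scores = np.random.beta(2, 8, size=5000)
production_scores = np.random.beta(3, 7, size=5000)

psi = population_stability_index(training_scores, production_scores)
if psi > 0.25:      # a commonly used rule of thumb, not a hard regulatory threshold
    print(f"PSI={psi:.3f}: significant drift - schedule a recalibration")
elif psi > 0.10:
    print(f"PSI={psi:.3f}: moderate drift - monitor closely")
else:
    print(f"PSI={psi:.3f}: distributions look stable")
```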


Sometimes more is more

There is no silver bullet. On the contrary, it is our view that a single solution in isolation will not provide the best possible fraud defences. A robust system needs to be multi-layered: it should take advantage of data matching against syndicated data intelligence, link analysis, and a combination of supervised and unsupervised machine-learning technologies.

Fraudsters don’t stand still; they are constantly evolving their typologies. This is why you need many weapons in your arsenal to help identify the changing, emerging or anomalous behaviours indicative of fraud.
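As a purely illustrative example of what “multi-layered” can mean in practice, the sketch below combines a supervised model score, an unsupervised anomaly score, a syndicated-data match and a link-analysis flag into a single accept, refer or decline outcome. The signal names and thresholds are hypothetical assumptions, not a description of any specific product.

```python
# A hypothetical sketch of a layered decision. The signal names and thresholds
# are illustrative assumptions, not a description of any specific product.
from dataclasses import dataclass

@dataclass
class Signals:
    supervised_score: float        # 0-1 fraud probability from a trained model
    anomaly_score: float           # 0-1 "how unusual is this?" from an unsupervised model
    shared_intelligence_hit: bool  # match against syndicated fraud data
    linked_to_known_fraud: bool    # link analysis across applications, devices, addresses

def decide(s: Signals) -> str:
    # Hard evidence from data matching or link analysis outweighs model scores.
    if s.shared_intelligence_hit or s.linked_to_known_fraud:
        return "decline"
    # Either model being confident on its own sends the case to an investigator.
    if s.supervised_score > 0.8 or s.anomaly_score > 0.9:
        return "refer"
    # A milder combination of signals still warrants a second look.
    if s.supervised_score > 0.5 and s.anomaly_score > 0.6:
        return "refer"
    return "accept"

print(decide(Signals(0.12, 0.95, False, False)))  # refer: anomalous, but no known-fraud link
print(decide(Signals(0.05, 0.20, False, False)))  # accept
```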


Have confidence in your machine learning decisions

Our latest predictive models utilise anomaly detection, an unsupervised machine-learning technique that allows your defences to adapt to emerging and anomalous behaviour. This keeps you proactively protected against new and emerging frauds whilst ensuring you can be more confident that the decisions you are making are accurate and fair.
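For readers who want to see the general technique, the sketch below runs an off-the-shelf unsupervised anomaly detector (scikit-learn’s IsolationForest) over invented application features. It illustrates how a model trained with no fraud labels can still flag never-before-seen behaviour; it is not a description of how our models are built.

```python
# A minimal sketch of unsupervised anomaly detection. scikit-learn's
# IsolationForest is used purely to illustrate the technique, and the
# application features below are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical applications: [declared income (k), requested amount (k), applicant age]
typical_applications = rng.normal(loc=[45, 150, 40], scale=[10, 40, 10], size=(2000, 3))

# Fit with no fraud labels at all: the model simply learns what "typical" looks like.
detector = IsolationForest(contamination=0.01, random_state=0).fit(typical_applications)

new_applications = np.array([
    [48.0, 160.0, 38.0],   # looks ordinary
    [12.0, 900.0, 19.0],   # a combination of income, loan size and age never seen before
])

# predict() returns +1 for inliers and -1 for anomalies;
# decision_function() gives a continuous score (lower = more anomalous).
labels = detector.predict(new_applications)
scores = detector.decision_function(new_applications)

for row, label, score in zip(new_applications, labels, scores):
    flag = "ANOMALY - refer for review" if label == -1 else "typical"
    print(f"{row} score={score:+.3f} -> {flag}")
```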

All of our predictive models augment the Precision score with additional transparency behind the rationale. As well as the Precision score itself, we include the reasons why a record has received that score, both for explainability and to give investigators a steer. This is particularly useful as regulators take a far more stringent view.
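The idea of returning reasons alongside a score can be illustrated with a simple, hypothetical example: for a linear model, each feature’s contribution to the score can be ranked and the largest contributors reported as reason codes. This is only a sketch of the general explainability pattern, using made-up features and data; it is not how the Precision score itself is calculated.

```python
# A hypothetical sketch of attaching reason codes to a score. A simple linear
# model is used so that each feature's contribution to the log-odds can be
# ranked; the features and data are made up for illustration, and this is not
# how the Precision score itself is computed.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["applications_last_30d", "device_seen_before", "address_mismatch"]

# Tiny made-up training set; 1 = confirmed fraud.
X = np.array([[1, 1, 0],
              [0, 1, 0],
              [2, 1, 0],
              [6, 0, 1],
              [5, 0, 1],
              [7, 0, 1]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

def score_with_reasons(record, top_n=2):
    """Return the fraud score plus the features pushing it upwards the most."""
    score = model.predict_proba([record])[0, 1]
    contributions = model.coef_[0] * np.asarray(record, dtype=float)
    ranked = sorted(zip(feature_names, contributions), key=lambda kv: kv[1], reverse=True)
    reasons = [name for name, contribution in ranked[:top_n] if contribution > 0]
    return score, reasons

score, reasons = score_with_reasons([6, 0, 1])
print(f"score={score:.2f}, reasons={reasons}")
```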


Time to connect

We can help you create the best fraud detection and prevention solution. Please get in touch so we can discuss your requirements.