Does the argument about AI decision bias misplace blame?

It has been more than a year since AI’s commercial breakout, and its biases continue to surprise. Although businesses, legislators and consumers are increasingly united in their understanding of AI biases, many technologies remain far from foolproof.

This is clear from the occasional odd headline and cautious think-piece, which correctly warn that AI is just as capable of affirming existing biases as it is of uncovering new information.

However, many organisations that have built AI into their risk evaluation and decision-making processes have found the opposite to be true: embracing AI has the potential to reduce bias in decisioning, but only when certain approaches are taken.

Here, we explain why AI-supported decisioning is something to embrace, and the essential features of ethical models.

 

  • Finance is always on the side of fairness
  • Is every fraud strategy at risk of bias?
  • How do fraud teams move forward with AI?
  • Where to get help with AI decisioning

 

Always on the side of fairness.

Financial Services and Insurance providers are keenly focused on preventing bias.

Access to their products and services can significantly affect a customer’s life. This is precisely why the FCA’s Consumer Duty requires firms to evidence the fair and equitable treatment of customers, from speed of decisioning to preferential products and deals.

Today, most firms ensure fairness (and compliance) by checking against confirmed fraud and risk intelligence databases: always at the point of application, and ideally at regular intervals via an on-book screening programme.
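
To make that two-stage pattern concrete, here is a minimal sketch in Python. Everything in it is illustrative: the Applicant record, the CONFIRMED_FRAUD_IDS stand-in for a risk intelligence database, and the 90-day cycle are assumptions, not a real screening API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Applicant:
    applicant_id: str
    last_screened: date | None = None

# Stand-in for a confirmed fraud / risk intelligence database (e.g. a syndicate feed).
CONFIRMED_FRAUD_IDS = {"APP-0042"}

def screen_at_application(applicant: Applicant, today: date) -> str:
    """Point-of-application check against confirmed intelligence."""
    if applicant.applicant_id in CONFIRMED_FRAUD_IDS:
        return "refer"  # confirmed match: route to investigation
    applicant.last_screened = today
    return "accept"

def due_for_on_book_screen(applicant: Applicant, today: date, cycle_days: int = 90) -> bool:
    """On-book screening: re-check customers whose last screen is older than the cycle."""
    if applicant.last_screened is None:
        return True
    return (today - applicant.last_screened) > timedelta(days=cycle_days)
```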

 

Is every fraud strategy at risk of bias?

Effective as “pre-AI” measures are, the potential for unintended bias is ever present, for three key reasons:

 

1. A snapshot may not be 100% accurate.

 

Due to economic change, “normal” consumer behaviour is evolving quickly. Confirmed fraud aside, this means a point-in-time data snapshot is increasingly challenging to risk assess; a consumer may appear to exceed an organisation’s risk appetite and, as a result, be excluded from preferential products.

In this context, an AI co-pilot – which continually learns about and adjusts the influence of risk markers – becomes a compelling option.
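
As a rough illustration of what “continually adjusts the influence of risk markers” can mean in practice, here is a minimal online-learning sketch: a logistic risk score whose marker weights are nudged by each confirmed outcome. The marker names and learning rate are illustrative assumptions, not a description of any particular vendor’s model.

```python
import math

# Hypothetical risk markers; weights start neutral and are adjusted as confirmed
# outcomes arrive, so each marker's influence tracks current behaviour.
weights = {"new_device": 0.0, "address_mismatch": 0.0, "rapid_applications": 0.0}
LEARNING_RATE = 0.05  # illustrative value

def risk_score(markers: dict[str, float]) -> float:
    """Logistic score in (0, 1) from the current marker weights."""
    z = sum(weights[name] * value for name, value in markers.items())
    return 1.0 / (1.0 + math.exp(-z))

def learn_from_outcome(markers: dict[str, float], confirmed_fraud: bool) -> None:
    """One stochastic-gradient step: nudge each marker's weight toward the confirmed
    outcome, so markers that stop predicting fraud gradually lose influence."""
    error = (1.0 if confirmed_fraud else 0.0) - risk_score(markers)
    for name, value in markers.items():
        weights[name] += LEARNING_RATE * error * value
```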

 

2. Even the best intentions can be biased.

Presented with more grey areas (see above), Financial Services and Insurance organisations may undertake more investigations to ensure fair decisioning.

But even the best-informed and most balanced investigator can experience bias. In these cases, the bias is usually unconscious, but the outcome for the consumer is the same.

It is important to note that ineffective AI models can reflect the biases of their human designers. But, when ethically modelled and maintained, AI-powered predictive analytics can help overcome the decision biases inherent to human nature.

 
3. The difference between proactive and predictive.

Consortium fraud checks will always be a critical layer in counter-fraud strategies. The intelligence within approved syndicates is indisputably factual, and therefore essential to fair decisioning.

That said, in isolation consortium data cannot accurately, or ethically, predict the future. As a result, some consumers may be treated unfairly. For example, they may be required to undertake additional verification steps for longer than necessary, or lose access to new product offers.



Only AI-powered predictive analytics can deliver long-term risk insights. But predictive modelling can only be trusted if it is fed with authoritative consortium data.
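
One way to encode that “facts first, prediction second” ordering is sketched below; the 0.7 risk-appetite threshold and the decision labels are illustrative assumptions.

```python
def decide(consortium_hit: bool, predicted_risk: float, risk_appetite: float = 0.7) -> str:
    """Confirmed consortium intelligence always takes precedence; the predictive
    score only fills the gap where no factual match exists."""
    if consortium_hit:
        return "refer"   # factual, explainable basis for referral
    if predicted_risk > risk_appetite:
        return "review"  # a prediction alone triggers review, never an outright decline
    return "accept"
```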

 

So, how do fraud teams move forward with AI?

It is evident that bias is not exclusive to AI decisioning. In fact, AI modelling could solve many of our current challenges with equitable decisioning.

However, to protect consumers and business integrity, a proportional, vigilant response is key. This can be achieved by only implementing AI decisioning built on the following principles:

  • Use syndicated data as the foundation: Unbiased AI-powered predictive analytics and decisioning depend on broad, deep context captured in real time. Our National SIRA risk intelligence syndicate delivers a level of data diversity that mitigates the potential biases of models built on more restricted portfolios.

  • Regularly recalibrate for proportional influence: Depending on external factors and your organisation’s risk appetite, the proportional influence of risk markers contributing to an overall risk score may fluctuate. Regularly recalibrating, by introducing recent data and refining rules, ensures that model characteristics “match reality”, preventing skewed decisioning (see the sketch after this list).

  • Always include the human factor: Although people are equally capable of bias, human checks and balances have an important role to play in keeping processes ethical and compliant. It is vital that AI-powered decisioning is not allowed to run on autopilot but instead becomes a co-pilot to human knowledge, as sketched below.
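
To illustrate the last two principles, here is a minimal sketch under stated assumptions: a hypothetical grey band routes borderline scores to a human investigator, and a crude drift check flags the model for refitting when its average predicted risk no longer matches the fraud rate actually observed. All thresholds and names are illustrative.

```python
from statistics import fmean

RISK_APPETITE = 0.70       # illustrative threshold set by the organisation
GREY_BAND = (0.50, 0.70)   # scores in this band go to a person, not the machine

def triage(score: float) -> str:
    """Human-in-the-loop routing: the model proposes, a person decides the edge cases."""
    if score >= RISK_APPETITE:
        return "refer_to_investigator"
    if GREY_BAND[0] <= score < GREY_BAND[1]:
        return "manual_review"  # co-pilot, not autopilot
    return "auto_accept"

def needs_recalibration(recent_scores: list[float], observed_fraud_rate: float,
                        tolerance: float = 0.05) -> bool:
    """Crude recalibration trigger: if average predicted risk drifts away from the
    fraud rate observed in recent data, flag the model for refitting."""
    return abs(fmean(recent_scores) - observed_fraud_rate) > tolerance
```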

 

Any strategy, process or outcome could become skewed if broader context, objectivity and accountability are not hard-coded into a counter-fraud approach. And, given the complexity of risk evaluation, paired with a demand for faster, fairer outcomes, an AI co-pilot seems vital to the continued success of fraud teams.

Do you need help using AI decisioning in your fraud strategy? Contact a Synectics Fraud Strategy Consultant to arrange a chat at your convenience.
