ALTERNATIVE INTELLIGENCE

AI RECIPE FOR SUCCESS – What does responsible adoption of AI really mean in the context of the Consumer Duty?

18/01/2024

The FCA has stated that firms should “have scope to innovate whilst protecting consumers”[1] – but what does this mean in the context of rapidly developing AI technology? In particular, does the higher standard imposed by the new Consumer Duty – which requires firms to ensure they deliver good outcomes for their retail customers – foster innovation or hinder advancement in this area?

We consider some of the possible benefits and risks of AI for both consumers and firms, and the framework for this that is currently being developed by UK regulators.

The interplay between rapidly developing AI tools and the as yet largely unclear regulatory position presents a challenge for financial institutions looking to harness the new technology to their advantage.

How can AI help firms comply with the Consumer Duty?

In our view, AI has the potential to improve firms’ compliance with the requirements of the Consumer Duty. For example, firms could use it to improve their operational efficiency: AI can help firms categorise and organise customer data and feedback, and do so quickly. This data can be used to improve customer-facing processes, saving time and money and improving the customer experience. It may also help firms comply with the Consumer Duty by feeding into customer modelling – allowing a better understanding of specific consumer characteristics and needs – so that consumers benefit from tailored products and offerings.

Similarly, AI tools are already being used for the quicker and more effective detection of fraud and other financial crime, notably through the increasing use (and sophistication) of systems designed to detect suspicious activity.
However, many of these apparent benefits bring corresponding risks. The FCA has spoken openly about the “responsible adoption” of AI – but what does this really mean?

Bad outcomes for consumers – where does risk lie?

We already know that the use of AI within firms is increasing and is predicted to grow exponentially over the next two years. Some of the key areas where firms with retail customers will need to be attentive, or risk bad consumer outcomes, are:

  • Data Quality. The output of any AI system is only as good as the data which feeds it. Discrimination and bias caused by low-quality data input may result in bad outcomes for consumers.
  • Outsourcing and model risk management. Firms must have a plan to keep track of their outsourced functions and the use of externally provided algorithmic systems. We predict that oversight of outsourced systems (and AI use by third parties) is an area that the regulators will be focused on – and, in particular, how responsibility is apportioned in the context of the Consumer Duty.
  • Bad actors. The use of AI to cause harm in the financial services sector means we can expect an increase in consumer scams causing individual loss, such as those exploiting audio deepfake technology, scam calls and biometric theft. Firms themselves may also be at risk from AI-powered cyber-attacks. The sophistication of the systems firms use to detect threats and protect consumers will need to develop at pace (and at expense).
  • Operational resilience. Increased reliance on AI for key business functions will itself become a key risk. Firms that fail to update their business continuity plans may expose their customers to disruption if their AI-dependent systems fail.

So, what is on the legislative horizon?

Well, probably nothing legislative – at least for the time being. The FCA describes its current approach to consumer protection and AI as based on “a combination of the FCA’s Principles for Businesses, other high-level detailed rules, and guidance, including the Consumer Duty”[2]. The UK government has been clear that it does not intend to introduce any AI-specific statute. Instead, it will focus on principles-based guidance which the UK financial regulators can adapt and implement.
Most recently, a steer on the proposed approach to AI regulation came with the publication of the collective response paper of the FCA, PRA and Bank of England on AI and Machine Learning in late October 2023 [3]. The response paper summarises feedback on the proposals put forward and, although it does not set out any concrete views or policy proposals from the regulators themselves, we expect it signals the direction of their future approach.

CONCLUSION

AI will be a valuable tool for firms seeking to transform their offerings to consumers and deliver positive outcomes. Being mindful of the risks highlighted above, and paying attention to the direction of regulatory scrutiny, will help ensure firms are well positioned to take advantage of the AI wave.

MEET THE AUTHORS

David Rundle

Partner, London

Siân Cowan

Senior Associate, London