The Reliance on Third Parties
A third of all current AI use cases deployed by respondents to the regulators’ AI survey are third-party implementations, up from 17% in the regulators’ equivalent 2022 survey. Risk and compliance was the business area with the second highest percentage of third-party implementations (64%), narrowly behind usage in HR (65%). The regulators anticipate that the use of third-party implementations will increase as AI models become more complex and outsourcing costs decrease.
This likely increase in reliance on third-party providers is compounded by evidence suggesting that firms have an inadequate understanding of how outsourced systems operate and are trained. For example, almost 50% of respondents to the Bank of England’s survey reported having only a partial understanding of the AI technologies they use, admitting “a lack of complete understanding” where those technologies are developed by third parties rather than in-house.
This is problematic given the regulatory requirements governing the oversight of outsourced functions, which include the use of third-party AI tools. Such oversight cannot be properly exercised where the host firm does not understand the system. Firms will need to consider how the fundamental differences between AI and the algorithmic models that preceded it affect the discharge of their responsibilities. Whilst the typical safeguards contained in an outsourcing agreement, such as provisions stipulating audit rights and business continuity arrangements, may mitigate emergent risks in this area, a new suite of protective solutions and governance arrangements will be required.
This need is particularly apparent in respect of data. AI-enabled tools that rely on poor quality or incomplete data will, necessarily, produce poor quality outcomes. Data governance and standards have never been more important and will be a fast-developing field in the context of AI models.
Similarly, firms will need to implement measures to protect against third-party models generating biased results, a likely focus of regulatory concern. The FCA has cautioned that firms using AI-enabled tools should consider whether they may lead to worse outcomes for some groups of consumers, in breach of the Consumer Duty, because these technologies can embed or amplify bias. Bias can occur at any point from the creation of the algorithm to its deployment. Incorrect problem-framing or reliance on datasets that are not representative of the firm’s customer demographic will result in an inherently biased learning process and discriminatory outputs. Use of third-party models that rely on market-wide transfer learning in risk detection (i.e. leveraging pre-existing models trained on large financial datasets) may exacerbate this problem.
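By way of illustration only, the minimal sketch below shows one way a firm might monitor a third-party model’s outputs for group-level disparities of the kind the FCA has flagged. The group labels, the sample decisions and the 0.8 ratio threshold (a common “four-fifths” heuristic) are illustrative assumptions, not a regulatory standard or a description of any particular provider’s tooling.

```python
# Illustrative sketch only: check whether a third-party model's decisions
# produce materially different outcomes across customer groups.
# Data, group names and the 0.8 threshold are assumptions for illustration.
from collections import defaultdict

def group_outcome_rates(records):
    """records: iterable of (group, approved) pairs -> {group: approval rate}."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` x the best group's rate."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Hypothetical decisions drawn from a third-party model's output log.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
rates = group_outcome_rates(decisions)
print(rates, flag_disparity(rates))
```

In practice, any such check would sit alongside, not replace, the firm’s broader data governance and Consumer Duty monitoring.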
The use of third-party AI models may also present an accountability challenge, if not deficit, especially where the developers and providers sit outside the regulatory perimeter. Failures connected with the operation of financial crime systems and controls have already been a significant focus of regulatory enforcement action; the Final Notices issued by the FCA to Metro Bank Plc and Starling Bank Ltd in late 2024 are stark examples. However, no related action has been taken against Senior Managers, which may speak to the evidential challenges presented by such cases. Whilst establishing individual accountability may become more difficult in respect of AI systems with a significant third-party component, regulators may closely scrutinise firms’ standards of governance and oversight.
Firms will need staff with the requisite expertise and training to ensure that the models, and their development, can be audited and overseen effectively. They will also need to assess the extent to which “humans in the loop” should be incorporated into the operation of the model, i.e. where the appropriate balance between efficiency and protection lies. Should failings emerge, unravelling what went wrong and who was ultimately responsible will be challenging, given the complexity of the models and the number of parties feeding into the systems. Firms will need to be able to explain, in a meaningful way, the machine learning model and any resulting decisions, especially if they lead to consumer harm. Firms will also need to design and implement feedback mechanisms to detect and prevent model drift, enabling a prompt response to bias or new threats, as illustrated in the sketch below. These are significant questions, which will require input at Board level.
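As a purely illustrative sketch of what such a feedback mechanism might involve, the example below compares the distribution of a model’s recent scores against a reference baseline using a population stability index (PSI), one common drift measure in financial services model monitoring. The bin count, the 0.2 alert threshold and the function names are assumptions for illustration, not a prescribed approach.

```python
# Illustrative sketch only: a simple drift check comparing recent model
# scores against a reference baseline via a population stability index.
# Bin count and the 0.2 alert threshold are illustrative assumptions.
import math

def psi(reference, recent, bins=10):
    """Population stability index between two samples of scores in [0, 1]."""
    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            counts[min(int(s * bins), bins - 1)] += 1
        total = len(scores)
        # Floor at a tiny value to avoid division by zero in sparse bins.
        return [max(c / total, 1e-6) for c in counts]
    ref_p, rec_p = proportions(reference), proportions(recent)
    return sum((rp - fp) * math.log(rp / fp) for rp, fp in zip(rec_p, ref_p))

def drift_detected(reference, recent, alert_threshold=0.2):
    """Return True where drift exceeds the agreed threshold, prompting escalation."""
    return psi(reference, recent) > alert_threshold
```

In practice, an alert from a check of this kind would feed into the firm’s governance and escalation arrangements, including engagement with the third-party provider, rather than triggering automated remediation.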
Where an unacceptable accountability deficit emerges, the Regulator may shift its focus from the governance arrangements within firms and instead cast the regulatory perimeter wider, bringing third-party providers of AI financial crime systems within the regulatory fold. This has already been done in respect of critical third-party providers, which supply material services to the financial sector. That approach may be rolled out in other areas in which the industry’s dependence on third-party providers for complex AI systems becomes entrenched.