What is the UK’s current position on AI regulation?
Almost a year ago, in February 2024, the previous Conservative government published its written response to the feedback received on its March 2023 White Paper consultation on the approach to regulating AI. That response reiterated that the UK would not pursue specific legislative change in response to the explosion in AI innovation, preferring a principles-based approach built around an overarching framework of five cross-sectoral principles for “trustworthy AI”. Key regulators were asked to publish their individual strategic approaches to managing the risks presented by AI by 30 April 2024. We considered the financial regulators’ responses in our previous Emerging Themes article AI in Principle, but, in short, they concurred with the government’s approach, concluding that compliance with the AI principles could be achieved within the myriad of existing financial services legislation and regulation with just a bit more work.
The curve-ball of Lord Chris Holmes’ Private Member’s Artificial Intelligence (Regulation) Bill, introduced in December 2023, reached the House of Commons but fell by the wayside when Parliament was prorogued last May and has not (yet) been resurrected. However, a new Private Member’s Bill to regulate the use of AI systems in public sector “decision-making” processes was tabled in September 2024 and is now at committee stage.
So, the position remains that there are no overarching regulations governing AI in the UK. However, there are some AI-applicable provisions in existing legislation, including those in the Data Protection Act 2018 that currently restrict solely automated decision-making, although the Data (Use and Access) Bill introduced in October 2024 seeks to relax those rules, which would benefit AI systems.
Is that position set to change?
The Labour government used its first King’s Speech on 17 July 2024 to announce that it would “establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models” and, in the background briefing notes, to “harness the power of artificial intelligence as we look to strengthen safety frameworks”. That promise was repeated in October 2024 by Technology Secretary Peter Kyle, who stated that the UK government would bring forward legislation to “safeguard against the risks of artificial intelligence” within the next year. Before the election, Kyle had indicated that Labour would introduce a statutory regime requiring AI companies to share test data, but suggested that any requirements would be targeted at high-risk systems.
The scope of the government’s AI ambitions has now been laid out in its January 2025 AI Opportunities Action Plan. This sets out a vision for the UK to shape the AI revolution by investing in the foundations of AI (improved data infrastructure, investment in compute resources and AI talent, and the establishment of AI growth zones), driving adoption of AI in the public sector, and making the UK an attractive location for AI investment. The Plan continues to walk a careful tightrope between supporting AI innovation and investment in the UK, noting that “the UK’s current pro-innovation approach to regulation is a source of strength relative to other more regulated jurisdictions and we should be careful to preserve this”, and protecting UK citizens, specifically stating that the safe development and adoption of AI should be achieved without “blocking the path towards AI’s transformative potential”. This suggests that we should not expect wholesale AI regulation to be introduced this year, but rather more targeted interventions to support text and data mining and to provide clarity as to how advanced frontier AI models will be regulated.
When it comes to financial services specifically, the financial regulators have not (publicly at least) changed their stance since April 2024. That is not to say, however, that they have been idle in this important area. The new Action Plan suggests more funding for regulators to scale up their AI capabilities, together with requirements for them to focus on enabling safe AI innovation and to publish annual reports on how they are enabling AI-driven innovation and growth in their sector. The UK’s new strategy also envisages the appointment of an AI sector champion for financial services, to be identified in Summer 2025, who will develop AI adoption plans for the sector.
In October 2024, the FCA launched its new AI Lab, which aims to help financial services firms navigate the challenges of AI and support them as they develop new AI models and solutions. As part of this, the regulator offers its AI Input Zone, an online feedback platform through which stakeholders can have their say on the future of AI in UK financial services.
In November 2024, the Bank of England also announced its AI Consortium to provide a platform for “public-private engagement to gather input from stakeholders on the capabilities, development, deployment and use of artificial intelligence (AI) in UK financial services”. Although the Consortium will not have decision-making capacity or be obliged to act upon its discussions, it aims to (i) identify use cases for AI in UK financial services; (ii) discuss the benefits, risks and challenges of AI in relation to firms and the wider financial system; and (iii) inform the Bank’s ongoing approach to the safe adoption of AI. This valuable stakeholder engagement will inform the UK’s approach to AI regulation.
How does the UK’s position compare internationally?
It is safe to say that, internationally, the position remains disparate and fragmented.