Can the UK’s legal system hold AI models to account, or must it evolve first?

“The Royal Courts of Justice” by Mike Peel, licensed under CC BY-SA 4.0. Source: Wikimedia Commons. 

In the United Kingdom, legislative and regulatory bodies adopt a principles-based approach to oversight. This means that each legal person must individually assess the extent to which these principles apply to its operations and implement measures accordingly. However, as will become clear in what follows, AI may eventually compel the UK to make an exception to this tradition. After outlining existing frameworks of accountability, this article examines AI use cases in the financial sector, followed by a brief survey of regulatory approaches in the UK, United States, and European Union. Even though the examples are drawn from finance, the relevance of AI governance extends beyond this sector.

LEGAL ACCOUNTABILITY IN THE UK

As derived from several key statutes, legal accountability in the UK presupposes human decision-making. Even though companies are treated as separate legal entities, the duties to manage and justify a company’s conduct fall primarily upon its directors. The Companies Act 2006 describes these duties and requires, for instance, the exercise of independent judgment (s.173(1)) and of reasonable care, skill, and diligence (s.174(1)). Furthermore, it imposes a general obligation to promote the company’s success in good faith (s.172(1)). These duties are owed only to the company, not to external bodies or individuals (s.170(1)). In practice, however, statutory obligations to report on the company’s affairs and performance to regulators and shareholders make these duties the foundation of corporate accountability.

In relation to the preparation of strategic reports, the Companies Act requires a ‘fair review of the company’s business’ and a ‘description of the principal risks and uncertainties facing the company.’ (s.414C(2)(a)–(b)) However, these requirements assume that the risks in question can be both understood and explained. If advanced AI models become involved in advising on or even taking operational decisions pertaining to the company’s affairs, unidentified algorithmic bias or risks stemming from training data may limit the completeness and reliability of strategic reports. This has led legal scholars to propose new auditing and governance measures, such as reviewing algorithms through ethical auditing frameworks. If taken further and made a regulatory requirement, such frameworks could shift the focus of accountability and explainability from outcomes to design. The legal quality of an action advised by an algorithm would then be assessed not after the event but beforehand, through the parameters within which the algorithm operates. Purely technical regulatory oversight could thus unintentionally place harmful decisions beyond legal recourse.
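
To make the idea of design-based review more concrete, the sketch below shows what a minimal pre-deployment audit check might look like in practice. It is a simplified illustration only: the sample data, group labels and the 0.8 disparate-impact threshold are assumptions, not features of any framework currently proposed or required.

```python
# Minimal sketch of a design-stage (pre-deployment) audit check.
# The records, group labels and the 0.8 threshold are illustrative
# assumptions, not part of any statutory or regulatory framework.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {group: approvals[group] / totals[group] for group in totals}

def passes_disparate_impact(decisions, threshold=0.8):
    """Flag the model before deployment if any group's approval rate
    falls below `threshold` times the highest group's rate."""
    rates = approval_rates(decisions)
    highest = max(rates.values())
    return all(rate >= threshold * highest for rate in rates.values())

# Hypothetical validation outcomes produced by a candidate model.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(approval_rates(sample))            # approx. {'A': 0.67, 'B': 0.33}
print(passes_disparate_impact(sample))   # False -> model fails the audit
```

Under such a regime, accountability would attach to whether a check of this kind was defined and passed before deployment, rather than to the individual decisions the model later produces.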

AI USE CASES IN FINANCE

The fact that many of the UK’s largest banks have already begun integrating AI into their internal operations illustrates the topicality of the issues raised above. For instance, Lloyds Banking Group has announced that over 800 different AI models are in production within the organisation. Among them is an algorithm that reportedly reduces the income verification stage in mortgage applications ‘from days to seconds’. It is important to emphasise that firms are currently not required to publish detailed accounts of how their algorithms operate or the kinds of data they use. Rather, the Financial Conduct Authority (FCA) — mandated under the Financial Services and Markets Act 2000 to regulate financial services firms with respect to consumer protection, effective competition, and market integrity (s.1B(3)(a)–(c)) — expects AI systems to be integrated into existing frameworks of risk management and compliance. This is reflected in Lloyds Bank’s Responsible AI programme. Other UK banks currently experimenting with AI models include HSBC and NatWest. While the former reports using algorithms in over 600 different use cases, most notably fraud detection, cyber security and risk assessment, the latter has announced a collaboration with OpenAI to improve its consumer-facing digital assistant.

Even though use cases like those above are carefully constructed, the explainability of AI models is widely considered to be one of the most significant regulatory challenges. This becomes especially clear when noting that algorithmic sophistication and opacity tend to be positively correlated. Seeking to advance the discussion on the degree to which AI usage should be explained to customers, the FCA published a case study on consumer credit decision-making on 24 February 2025. The authors find that excessive information can in fact overburden consumers and restrict their ability to challenge errors in their credit application outcomes. Since the paper examines only credit decisions, they advocate testing different forms of explainability in different contexts.
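
One way to picture this trade-off is a ‘reason-based’ explanation that surfaces only the handful of factors weighing most heavily against an application, rather than a full account of the model. The sketch below assumes a hypothetical linear scoring model; the feature names, weights and approval threshold are invented for illustration and are not drawn from the FCA study.

```python
# Illustrative "reason code" explanation for a declined credit
# application: only the features that pushed the score down the most
# are reported back, not the whole model. All values are hypothetical.
FEATURE_WEIGHTS = {
    "income": 0.4,
    "existing_debt": -0.6,
    "missed_payments": -0.9,
    "account_age_years": 0.2,
}
APPROVAL_THRESHOLD = 1.0

def score(applicant):
    return sum(FEATURE_WEIGHTS[f] * applicant[f] for f in FEATURE_WEIGHTS)

def top_reasons(applicant, n=2):
    """Return the n features with the most negative contributions."""
    contributions = {f: FEATURE_WEIGHTS[f] * applicant[f] for f in FEATURE_WEIGHTS}
    ranked = sorted(contributions.items(), key=lambda item: item[1])
    return [name for name, value in ranked[:n] if value < 0]

applicant = {"income": 2.0, "existing_debt": 1.5,
             "missed_payments": 1.0, "account_age_years": 3.0}
if score(applicant) < APPROVAL_THRESHOLD:
    print("Declined. Main factors:", top_reasons(applicant))
else:
    print("Approved.")
```

How many such reasons to surface, and how to phrase them, is precisely the kind of design question that further testing of explainability in different contexts would need to settle.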

In the context of strategic or operational decision-making, the integration of AI systems may gradually redefine the scope of directors’ statutory obligation to promote the success of the company through independent judgment. This is illustrated by Banco Santander’s global ambition to become a fully “AI-native” bank, in which algorithms are integrated into all areas of the business, especially core functions such as operations, product management, credit, service and marketing. Consequently, AI may induce a growing divergence between legally assigned duties and the practical responsibilities involved in fulfilling them.

EXISTING REGULATORY APPROACHES IN THE UK, US, AND EU

In line with the duties of directors discussed above, financial regulation in the UK is principles-based. Rather than prescribing specific rules, the FCA imposes broad obligations such as that a firm must ‘take reasonable care to organise and control its affairs responsibly and effectively, with adequate risk management systems.’ (PRIN 2.1.1R – Principle 3) The Prudential Regulation Authority (PRA) — mandated under the Financial Services and Markets Act 2000 to promote the ‘safety and soundness of PRA-authorised persons’ (s.2B(2)), namely, companies or organisations whose failure could threaten the stability of the financial system — sets out principles for model risk management more explicitly. For instance, it expects firms to identify and manage ‘risks associated with the use of artificial intelligence […] to the extent that it applies to the use of models more generally’ (SS1/23). Both the FCA’s and the PRA’s principles are technology-neutral, and neither authority enforces any AI-specific regulation at present. Therefore, each firm must individually interpret the degree to which existing principles constrain AI usage and implement governance measures accordingly. However, this approach may prove unsustainable as the growing complexity and opacity of algorithms make the associated risks increasingly difficult to assess. Furthermore, firms are currently incentivised to interpret AI-related risks narrowly for competitive reasons. A standardised, FCA- or PRA-supervised framework could, in this respect, ensure more substantial oversight across the financial sector.
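
As a rough illustration of what a standardised framework might ask firms to record, the sketch below defines a single entry in a hypothetical model register. The fields and risk tiers are assumptions made for the purpose of illustration; neither SS1/23 nor the FCA prescribes such a schema.

```python
# Sketch of one record in a hypothetical, standardised model register.
# Field names and risk tiers are illustrative assumptions (Python 3.10+).
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class ModelRegisterEntry:
    model_id: str
    business_use: str                 # e.g. "income verification"
    owner: str                        # accountable function or individual
    risk_tier: RiskTier
    training_data_sources: list[str] = field(default_factory=list)
    last_validation_date: str | None = None
    known_limitations: list[str] = field(default_factory=list)

entry = ModelRegisterEntry(
    model_id="INC-VER-001",
    business_use="income verification in mortgage applications",
    owner="Retail Credit Risk",
    risk_tier=RiskTier.HIGH,
    training_data_sources=["internal payroll submissions (hypothetical)"],
    known_limitations=["unvalidated for self-employed applicants (hypothetical)"],
)
print(entry.model_id, entry.risk_tier.value)
```

A register of this kind would not resolve the explainability problem, but it would give supervisors a consistent basis for comparing how different firms identify and manage model risk.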

In the United States, AI regulation resembles the UK’s approach insofar as oversight is guided by governance principles. These principles, however, do not constitute codified law; they are recommendations issued by the National Institute of Standards and Technology. In practice, federal AI policy remains fragmented, prompting individual states to introduce legislation for their respective jurisdictions.

The EU AI Act, adopted in 2024 and set to be fully in force by 2 August 2026, contrasts sharply with the approaches taken in the UK and the US. It applies to all sectors, not only financial services, and classifies AI systems according to different levels of risk depending on their intended purpose and field of application (Art. 6; Annex III). While systems for some use cases are prohibited entirely (Art. 5), those classified as high-risk must be registered in a central database (Art. 49; Art. 71) and undergo a conformity assessment to demonstrate compliance with regulatory standards before being placed on the market or put into service (Art. 16(f); Art. 43(1)).
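
The Act’s risk-based logic can be summarised schematically: the intended purpose determines the tier, and high-risk systems must clear registration and conformity assessment before deployment. The sketch below is a deliberately simplified illustration; the purpose-to-tier mapping is an assumption and does not restate the Act’s actual categories.

```python
# Simplified sketch of risk-tiered gating: purpose -> tier -> preconditions.
# The mapping and checks are illustrative, not a restatement of the AI Act.
from enum import Enum

class Tier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    MINIMAL_RISK = "minimal-risk"

PURPOSE_TIERS = {  # hypothetical mapping from intended purpose to tier
    "social scoring by public authorities": Tier.PROHIBITED,
    "creditworthiness assessment": Tier.HIGH_RISK,
    "spam filtering": Tier.MINIMAL_RISK,
}

def may_deploy(purpose: str, registered: bool, conformity_passed: bool) -> bool:
    tier = PURPOSE_TIERS.get(purpose, Tier.MINIMAL_RISK)
    if tier is Tier.PROHIBITED:
        return False
    if tier is Tier.HIGH_RISK:
        # Registration and a passed conformity assessment are preconditions.
        return registered and conformity_passed
    return True

print(may_deploy("creditworthiness assessment",
                 registered=True, conformity_passed=False))  # False
```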

Finally, it must be emphasised that no degree of human diligence can fully prevent or explain unanticipated consequences of decisions taken or advised by AI. However, the fact that principles-based regulation grants firms significant freedom in how they interpret and manage risk creates room for AI-related risk to be addressed narrowly and inconsistently. Therefore, the UK may eventually have to make an exception and pivot towards a more centralised, design-based governance model for algorithms. Even though this might conflict with a pro-innovation stance by introducing an additional bureaucratic burden on firms — as will be the case with the EU AI Act — it may ultimately represent the only viable means of ensuring a desirable degree of oversight within and across different sectors.
