BoE sharpens focus on AI governance and testing in financial services

The Bank of England in the City of London

The Bank of England and its Prudential Regulation Authority have stepped up engagement with regulated financial firms on the adoption and oversight of artificial intelligence, highlighting both industry support for current frameworks and emerging concerns about model governance, risk management and implementation practices.

The outcomes of a series of 2025 roundtables and discussions, including sessions dedicated to AI and machine learning model risk, are shaping evolving expectations for responsible use within banks, insurers and other firms.

Across three roundtables held in late 2025 under the PRA’s auspices, representatives from a range of regulated sectors, including challenger banks, UK-focused larger banks, global systemically important banks and insurers, came together to discuss the opportunities and challenges presented by AI adoption.

Observers from the Financial Conduct Authority and HM Treasury also participated, reflecting cross-regulatory interest in how firms are bringing AI into core functions.

Participants broadly expressed support for the PRA’s current regulatory framework as it relates to AI, noting that the regulator’s principles-based, outcomes-focused policy and supervisory statements provided firms with sufficient space to innovate while upholding sound risk practices.

Supervisory Statement SS1/23 on Model Risk Management was singled out by several attendees as a pragmatic enabler for responsible AI adoption.

Yet the discussions also surfaced constraints and practical hurdles. Firms remarked that second-line risk functions continue to approach the use of AI with caution, suggesting that existing model risk management approaches may not be sustainable as AI and more autonomous systems proliferate.

This echoes broader industry feedback that governance frameworks are struggling to keep pace with rapidly evolving technology.

In separate PRA-hosted sessions in October 2025, chief risk officers and senior model risk professionals from 21 regulated entities engaged with supervisors on the adoption of AI and ML technologies in the context of implementing the supervisory expectations in SS1/23.

These conversations focused on how firms are applying model governance, validation, explainability and oversight of third-party AI or outsourced models, reinforcing the regulator’s focus on governance and risk controls rather than on AI simply as an innovation tool.

The message to technology teams, QA professionals and testing functions is clear: effective model risk management and rigorous testing practices must keep pace with AI adoption.

That means not only ensuring that AI systems deliver expected outcomes, but also documenting governance frameworks, validation results and control structures that support responsible use and are ready for supervisory examination.

Governance and risk constraints

Specific governance and risk constraints were prominent in the discussions. Firms noted that traditional approaches to model risk, often built around static, well-understood statistical models, may not be sustainable in contexts where AI and agentic systems change behaviour rapidly and introduce opacity or uncertainty.

This creates an imperative for QA teams to develop new validation strategies for algorithmic performance, transparency and robustness in the face of evolving data inputs and decision logic.
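One way QA teams might probe robustness of this kind is with a simple perturbation test: feed a model small random variations of each input and measure how often its decision stays the same. The sketch below is illustrative only; the `stability_rate` helper and the toy approval model are hypothetical, not drawn from any PRA guidance.

```python
import random

def stability_rate(model, inputs, noise=0.01, trials=20, seed=0):
    """Fraction of inputs whose model decision is unchanged under small
    random input perturbations. A minimal robustness probe, not a
    regulator-mandated test."""
    rng = random.Random(seed)  # fixed seed so validation runs are repeatable
    stable = 0
    for x in inputs:
        base = model(x)
        # Decision is "stable" if every perturbed input yields the same output.
        if all(model(x + rng.uniform(-noise, noise)) == base
               for _ in range(trials)):
            stable += 1
    return stable / len(inputs)

# Toy model: approves whenever the score exceeds a threshold (illustrative).
approve = lambda score: score > 0.5

# Inputs near the decision boundary (0.501) are likely to flip under noise.
rate = stability_rate(approve, [0.1, 0.3, 0.501, 0.9])
```

Inputs far from the decision boundary pass; those sitting just above it tend to flip, flagging exactly the cases a validator would want to examine.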

Despite the cautious industry tone, many participants did not yet see the need for detailed AI specific regulatory guidance or prescriptive rules, instead favouring supervisory sharing of observations on good practice and opportunities to define what responsible adoption looks like.

This reflects a wider theme in regulatory engagement that balances enabling innovation with safeguarding operational resilience and governance.

For QA and software testing teams in banks and financial firms, these discussions suggest several areas of heightened supervisory interest: governance around model selection and validation; explainability and auditability of AI outputs; alignment of risk management frameworks with technology lifecycles; and the capability to monitor models for drift, bias or failure over time.
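Monitoring for drift, in particular, can start from well-known statistical heuristics. The sketch below computes the Population Stability Index (PSI) between a model’s score distribution at validation time and in production, a common drift check; the binning scheme, the 0.2 alert threshold and the sample data are assumptions for illustration, not supervisory requirements.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a model score.
    A common heuristic reads PSI > 0.2 as a significant distribution shift.
    Illustrative sketch; bin count and floor value are assumptions."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def shares(sample):
        counts = [0] * bins
        for v in sample:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # Small floor avoids log(0) when a bin is empty in one sample.
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]        # scores at validation time
drifted  = [0.5 + i / 200 for i in range(100)]  # production scores, shifted up

identical_psi = psi(baseline, baseline)  # no drift
drift_psi = psi(baseline, drifted)       # pronounced upward shift
```

In practice, a check like this would run on a schedule, with breaches above the chosen threshold routed into the firm’s model risk escalation process and the results retained as evidence for supervisory review.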

In practice, quality assurance now encompasses not just software correctness, but model performance, resilience and adherence to risk principles under regulatory expectations.

The PRA’s engagement with firms indicates that as AI becomes embedded deeper in front, middle and back office functions, governance, testing and control frameworks will be central to responsible adoption.

Effective quality assurance in this environment involves more than validating outputs: it is about documenting robust governance practices, evidencing controls and preparing for supervisory scrutiny focused on how AI systems are managed, tested and governed in real-world operational settings.
