Banking’s AI shift becomes a regulatory stability test amid tougher QA controls

Tao Zhang has been the BIS Chief Representative for Asia and the Pacific since September 2022.

Artificial intelligence is no longer being treated as a side experiment in banking. Regulators and supervisors are increasingly framing AI as a core operational and prudential risk issue, one that demands the same level of governance, validation, and resilience testing as credit models, liquidity systems, or payments infrastructure.

That message was made clear recently when the Bank for International Settlements highlighted the growing financial stability implications of AI across banking and digital finance.

Tao Zhang, BIS Chief Representative for Asia and the Pacific, said AI was already spreading rapidly through the financial system.

“AI is being adopted across the financial sector … to process large volumes of data, support credit underwriting, detect fraud, manage risks and automate back-office functions,” Zhang told a conference in Hong Kong, pointing to the accelerating reach of AI into core banking activity.

For QA and software testing teams, the regulatory significance is that AI is no longer viewed only as an innovation layer. It is increasingly part of the operational core, and therefore part of the stability perimeter.

Zhang warned that advances in large language models and generative AI were extending AI’s role beyond traditional analytics into new supervisory and decision-support territory, including “customer interaction, internal analysis and supervisory processes.”

From a financial stability perspective, the concern is not simply whether AI works in isolation, but how it behaves at scale, across interconnected institutions, and under stress.

“AI and digital finance … may affect financial stability through multiple channels,” Zhang stressed.

He highlighted the way shared models, common infrastructures, and operational dependencies could amplify shocks across the system.

That framing is critical for banking QA: AI risk is not just about accuracy or automation efficiency, but about correlated outcomes, systemic dependencies, and resilience under disruption.

European regulatory moves

The BIS warnings are now being mirrored directly in European supervisory priorities, where AI governance, digital strategy, and ICT risk management have been elevated to core expectations for the 2026–2028 supervisory cycle, as set out in a policy document published only weeks ago.

European banking supervisors stressed that banks’ digital and AI strategies must “effectively reflect opportunities and risks stemming from the related applications and set up robust governance and risk controls to manage the underlying risks.”

In other words, supervisors are no longer satisfied with banks simply adopting AI tools. They expect structured governance, risk controls, and testing discipline around those deployments, particularly as generative AI becomes more embedded in business workflows.

Supervisors also signalled that AI assessments will shift from broad monitoring to more targeted reviews of specific use cases, including generative AI.

They said they would engage banks to better understand where AI is being deployed and assess its impact “from a micro-prudential risk perspective.”

For QA and software testing teams, this implies that AI validation is becoming inseparable from supervisory expectations around model governance, ICT resilience, and operational continuity.

Supervisors also warned that reporting and risk data aggregation frameworks must support “sound governance and effective decision-making”, a reminder that AI systems can mask deeper weaknesses if controls and data quality are not robust.

‘No longer just a tech issue’

Industry practitioners have started interpreting this regulatory shift as a fundamental change in how AI risk will be treated. And not just in Europe.

Amit Ranjan, Chief Manager at Punjab National Bank in India, said AI systems were increasingly being used across banking decision-making.

“AI systems are increasingly being used in credit decisioning, fraud monitoring, liquidity forecasting and customer risk profiling,” Ranjan wrote on LinkedIn this week.

The regulatory consequence is that supervisors are now pushing these systems into the same governance perimeter as other material financial models.

Ranjan noted that supervisors expect AI to fall under model risk governance frameworks because “any AI-driven decision influencing financial outcomes is treated as a ‘material model’.”

That expectation has major implications for QA teams. It means AI testing is no longer just functional testing of software outputs. It becomes part of prudential model validation, including fairness, robustness, explainability, and scenario performance.
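
To make that concrete, a robustness check of this kind can be expressed as an automated test. The Python sketch below perturbs model inputs slightly and asserts that credit scores remain stable; the scikit-learn-style `predict_proba` interface, the noise level, and the 0.05 tolerance are illustrative assumptions, not supervisory requirements.

```python
# Minimal sketch of a perturbation-robustness test for a credit model.
# The model object, data shape and thresholds are hypothetical.
import numpy as np

def test_scores_stable_under_small_perturbations(model, applicants):
    """Plausible input noise should not materially move credit scores."""
    rng = np.random.default_rng(seed=42)
    base_scores = model.predict_proba(applicants)[:, 1]

    # Apply ~1% relative noise to the numeric feature matrix.
    noisy = applicants * (1 + rng.normal(0, 0.01, applicants.shape))
    noisy_scores = model.predict_proba(noisy)[:, 1]

    # The 0.05 tolerance stands in for a bank's own risk-appetite limit.
    max_shift = np.max(np.abs(noisy_scores - base_scores))
    assert max_shift < 0.05, f"score moved {max_shift:.3f} under 1% noise"
```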

Ranjan also captured the broader supervisory direction in blunt terms. “AI failure may no longer be seen as a technology issue,” he warned. “It could be treated as a prudential risk to financial stability.”

That is the regulatory pivot: AI risk is moving from IT governance into the financial stability framework.

Next phase

The next phase of AI oversight will likely focus less on whether AI can improve efficiency, and more on whether AI can create new systemic vulnerabilities.

Ranjan said boards should begin asking deeper forward-looking questions: “Can AI-driven decisions create correlated risks across portfolios? Are third-party AI dependencies captured in risk appetite frameworks? Should AI performance degradation be included in stress scenarios?”

These are QA and resilience questions as much as governance ones. They imply that AI testing must include degradation testing, dependency mapping, third-party assurance, and scenario-based validation.
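
A degradation test follows the same pattern: evaluate the model on a stressed data slice and assert that performance stays within a stated tolerance. The sketch below assumes a scikit-learn-style model and labelled baseline and drifted datasets; the five-point AUC tolerance is illustrative, not a regulatory figure.

```python
# Illustrative degradation-scenario test: compare model discrimination
# on baseline data against a drifted or stale stress slice.
# Dataset objects (with .features / .labels) are hypothetical.
from sklearn.metrics import roc_auc_score

def test_performance_under_data_drift(model, baseline, drifted):
    base_auc = roc_auc_score(
        baseline.labels, model.predict_proba(baseline.features)[:, 1])
    drift_auc = roc_auc_score(
        drifted.labels, model.predict_proba(drifted.features)[:, 1])

    # A five-point AUC drop is an assumed limit; real limits belong in
    # the institution's model risk appetite statement.
    assert base_auc - drift_auc < 0.05, (
        f"AUC degraded from {base_auc:.3f} to {drift_auc:.3f} under drift")
```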

The risk is not only that one AI model fails, but that many institutions rely on similar algorithms, data pipelines, or vendor platforms, producing synchronised behaviour under stress.

Ranjan’s warning is explicitly systemic: “The next systemic risk may not originate from markets, but from synchronised algorithms making similar decisions at scale.”

AI investment pressure

The regulatory shift is occurring alongside major economic pressure.

Analysts at JPMorgan Chase this week suggested that the pace of AI spending required in financial services could force smaller banks into mergers, as competitive demands favour institutions that can afford large-scale AI investment.

Competitive pressures are driving banks to embed AI deeper into compliance, treasury, risk, and payments infrastructure, accelerating the move from pilot deployments to production systems.

The market evidence is that AI is becoming a baseline expectation, not a differentiator, which means regulators are treating AI governance as a baseline safety requirement, not an optional innovation control.

Some banks are already reporting industrial-scale deployment.

TD Bank Group implemented 75 AI use cases in 2025, with CEO Raymond Chun saying: “These use cases span from transforming loan underwriting to creating intelligent leads to deepening relationships to meet more of our clients’ needs.”

Bank of America, meanwhile, described AI being used inside payments operations, saying its tool allows employees to pose “simple to complex” client queries and get answers within seconds.

Regulatory direction

For QA, testing, and digital resilience leaders in banking, the regulatory direction is clear. AI must be treated as a governed, tested, resilience-critical component of banking operations, not as a black-box accelerator bolted onto legacy processes.

The BIS is warning about systemic stability channels, while European supervisors are embedding AI governance inside ICT risk and operational resilience priorities.

And industry leaders like Amit Ranjan are signalling that AI systems influencing financial outcomes are now “material models” in the supervisory sense.

The practical testing implications are substantial: AI validation must extend beyond accuracy into robustness, explainability, and stress behaviour.

Digital resilience testing must include AI degradation scenarios and third-party model dependencies, while operational risk frameworks must capture AI as part of critical service continuity, not just IT change management.
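
Third-party dependency testing can be automated in the same spirit: simulate an outage of a vendor-hosted model and assert that the service degrades to a documented fallback rather than failing outright. Everything in the sketch below (the VendorTimeout exception, the scoring function, the rule-based fallback) is a hypothetical stand-in for a bank's own components.

```python
# Hedged sketch of a third-party AI dependency test: when the vendor
# scoring service is unavailable, the bank's service should fall back
# to a conservative rule-based path, not raise an unhandled error.

class VendorTimeout(Exception):
    """Stands in for a real client-library timeout exception."""

def score_transaction(txn, vendor_client, fallback_rules):
    try:
        return vendor_client.score(txn), "vendor"
    except VendorTimeout:
        return fallback_rules.score(txn), "fallback"

def test_fallback_when_vendor_model_is_down():
    class DownVendor:
        def score(self, txn):
            raise VendorTimeout()

    class StaticRules:
        def score(self, txn):
            return 0.5  # deliberately conservative rule-based score

    score, path = score_transaction({"amount": 100}, DownVendor(), StaticRules())
    assert path == "fallback" and score == 0.5
```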

In this emerging environment, QA teams are no longer simply supporting AI adoption. They are becoming a frontline control function in ensuring that AI strengthens, rather than destabilises, the financial system.

