
From cyber-threats to model failures and third-party vulnerabilities, the software-risk landscape facing banks is shifting faster than most QA teams can keep up with. The accelerating use of artificial intelligence, particularly generative AI, is reshaping everything from customer onboarding to fraud detection.
But as financial institutions integrate increasingly complex systems, regulators are sounding the alarm: without stronger testing, governance and oversight, software vulnerabilities could evolve into systemic risks.
It is against this backdrop that the Financial Stability Board (FSB) has published its latest assessment of AI adoption, testing and integration in finance.
The FSB is the international body created by the G20 in 2009, in the aftermath of the global financial crisis, to monitor emerging threats to financial stability. It brings together central banks, finance ministries and regulatory authorities from major economies, setting common standards and coordinating global policy responses. It is hosted by the Bank for International Settlements (BIS) in Basel, Switzerland.
While it does not write binding rules, the FSB shapes the direction of regulation, providing early warnings, risk analyses and frameworks that national regulators often adopt or reference.
AI brings new vulnerabilities
In its report, the FSB states that AI “is reshaping the financial sector, driving efficiency and innovation.” It highlights how AI is streamlining operations and supporting “more personalised financial products and services.”
But the report cautions that these benefits sit alongside a widening set of software-related vulnerabilities. Among these are third-party dependencies and service-provider concentration; cyber risks; and model risk, data quality and governance.
The FSB warns that institutions are increasingly reliant on a small group of technology and AI vendors, especially for generative-AI capabilities, and that this reliance may lead to correlated exposures across the financial sector.
The report notes that “FIs appear to be cautiously adopting GenAI with apparently limited use for critical functions … but … third-party service providers play a critical role in FIs’ development and deployment of effective GenAI applications.”
This concentration risk means a disruption or failure at a major AI provider could have cascading effects across multiple firms simultaneously.
The FSB also highlights the potential for malicious use of AI, stating that GenAI may “increase financial fraud and the ability of malicious actors to generate and spread disinformation in financial markets.”
The risk picture sharpens for QA
For QA and software-testing teams inside banks and financial-services firms, the FSB’s analysis reads like a blueprint of where scrutiny will intensify.
The vulnerabilities it identifies, from inadequate model governance and data-quality issues to over-reliance on vendors, point squarely at areas where robust testing frameworks are essential.
The report also underscores the challenge of inconsistent language and regulatory definitions across jurisdictions, stating that “definitions of AI vary widely.”
This inconsistency complicates compliance and creates blind spots, making internal clarity and documentation even more critical for testing and risk teams.
The threat of “common mode failure”, where multiple firms suffer the same software or AI model breakdown due to shared dependencies, is also rising. This places renewed emphasis on scenario-based resilience testing, adversarial model testing, vendor-risk audits, and end-to-end validation across the entire digital supply chain.
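To make that concrete, here is a minimal sketch of what a scenario-based resilience test might look like in practice. The FSB report does not prescribe any particular technique, and the `FraudScorer` service, its vendor client and the rule-based fallback below are all illustrative assumptions; the pattern simply shows a test that simulates an outage at a third-party AI provider and checks that the firm's system degrades gracefully rather than failing outright.

```python
# Hypothetical example: a resilience test that simulates an outage at a
# third-party AI vendor and verifies the dependent service degrades
# gracefully instead of failing outright. All names are illustrative.

class VendorOutage(Exception):
    """Raised when the (simulated) external AI provider is unreachable."""


class FraudScorer:
    """Scores transactions via an external model, with a rule-based fallback."""

    def __init__(self, vendor_client):
        self.vendor_client = vendor_client

    def score(self, transaction: dict) -> dict:
        try:
            # Primary path: the third-party GenAI/ML vendor.
            score = self.vendor_client.score(transaction)
            return {"score": score, "source": "vendor"}
        except VendorOutage:
            # Fallback path: a simple in-house rule, so the business
            # function survives a vendor failure (degraded, not down).
            score = 0.9 if transaction["amount"] > 10_000 else 0.1
            return {"score": score, "source": "fallback"}


class FailingVendor:
    """Test double that simulates a total outage at the AI provider."""

    def score(self, transaction: dict) -> float:
        raise VendorOutage("simulated provider outage")


def test_fraud_scoring_survives_vendor_outage():
    scorer = FraudScorer(vendor_client=FailingVendor())
    result = scorer.score({"amount": 25_000, "currency": "EUR"})
    # The scenario passes only if the service kept working through
    # the outage, using its documented fallback path.
    assert result["source"] == "fallback"
    assert result["score"] == 0.9


if __name__ == "__main__":
    test_fraud_scoring_survives_vendor_outage()
    print("resilience scenario passed: vendor outage handled by fallback")
```

A fuller programme would run similar scenarios for partial degradation, latency spikes and correlated failures across several services that share the same provider, which is precisely the common-mode exposure the FSB flags.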
What regulators expect next
The FSB recommends that national authorities expand their monitoring of AI adoption by using the indicators it outlines, drawing on supervisory engagement, industry surveys and improved data collection.
It also calls for greater international coordination to close data gaps, harmonise definitions and align oversight where possible.
For financial institutions, this signals a future in which AI systems, and the software environments around them, will be examined not just for performance or explainability but for systemic and operational resilience. Expectations for evidence-based testing, risk controls and governance will intensify.
The message for QA and testing teams is clear: AI-driven systems can no longer be treated as innovative add-ons. They are rapidly becoming core infrastructure, and the risks they introduce are systemic, not peripheral.
As the FSB’s report makes clear, the next phase of AI oversight will focus on resilience, transparency, vendor concentration, model stability and cyber-security. Banks that strengthen their testing and governance frameworks now will be far better positioned as regulators begin translating these warnings into supervisory action across global markets.