
The second phase of the EU Artificial Intelligence Act officially came into force on August 2, marking a pivotal shift from policy to enforcement, and placing new obligations on developers and deployers of general-purpose AI systems (GPAI).
For QA and software testing teams across financial services, the message is clear: compliance is now a technical and operational imperative.
This latest phase focuses on transparency, requiring all GPAI providers to maintain technical documentation, adopt copyright-compliant data policies, and publish summaries of the datasets used to train AI models.
But the implications run deeper, especially in heavily regulated industries like banking and insurance, where AI models are rapidly being deployed in risk assessments, fraud detection, credit scoring, and customer service.
The EU AI Act, first proposed by the European Commission in 2021, is widely considered the world’s first comprehensive regulatory framework for artificial intelligence.
Following final approval by the Council of the European Union in May 2024, the Act was published in the EU’s Official Journal in July that year.
With a sprawling 50,000-word text spanning 180 recitals, 113 Articles, and 13 annexes, the legislation divides AI systems into four risk categories and introduces a strict compliance regime for those deemed high-risk, a category that includes many AI applications used in financial services.
“The framework is a holistic set of risk-based rules applicable to all players in the AI ecosystem,” said Elisabetta Righini, a Brussels-based partner at Latham & Watkins. Like GDPR before it, the AI Act is expected to influence global practices far beyond the EU’s borders.
Tendü Yoğurtçu, CTO of Precisely, stressed the growing importance of strong data foundations to support compliance.
“Vendors and providers must demonstrate that their test data practices are transparent and aligned with regulatory expectations. To achieve this, organisations need to ensure their test data is AI-ready by investing in trusted data foundations that support traceability, accuracy, and compliance at scale.”
She warned that siloed, inconsistent, or outdated data can compromise AI reliability and bias mitigation, both major concerns under the new rules.
“Organisations must break down these silos and have an integrated view of all relevant data across on-premises, cloud, and hybrid environments,” she stressed.
Meanwhile, Daryl Elfield, a London-based partner at KPMG specialising in IT and quality engineering, underlined the importance of adapting QA and software testing protocols in line with the legislation.
“AI systems deemed high-risk will need to comply with strict standards concerning risk management, data quality, transparency, human oversight, and robustness,” he said. “Providers of AI systems, including those that develop AI for internal use, will be affected.”

Paul Mowat, founder of Infinity Tech Consulting, added that financial services firms in particular must prepare for rigorous testing and documentation.
“A conformity assessment will assess how safe, accurate, and robust those systems are,” he explained. “Significant investment will be needed to comply with product governance, risk management, and internal audit capabilities.”
According to Mowat, accountability is the central expectation: “AI systems must be designed with intervention in mind so a human can override the system if needed.”
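One way to read that expectation in code is a human-in-the-loop gate: clear-cut cases are automated, borderline ones are routed to a reviewer, and a human can always replace the automated outcome. The sketch below is illustrative only; the class and function names are assumptions for the example, not taken from any framework or from the Act itself.

```python
# Minimal sketch of a human-override ("human-in-the-loop") decision pattern.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    score: float                        # model output, e.g. a credit-risk score
    approved: bool                      # automated outcome
    needs_review: bool                  # flagged for a human before taking effect
    overridden_by: Optional[str] = None # set when a human replaces the outcome

def decide(score: float, threshold: float = 0.5, review_band: float = 0.1) -> Decision:
    """Automate clear-cut cases; route borderline scores to a human reviewer."""
    borderline = abs(score - threshold) < review_band
    return Decision(score=score, approved=score >= threshold, needs_review=borderline)

def human_override(decision: Decision, approved: bool, reviewer: str) -> Decision:
    """A named reviewer can always override the automated decision."""
    decision.approved = approved
    decision.needs_review = False
    decision.overridden_by = reviewer
    return decision
```

In this shape, the audit trail (who overrode what, and why the case was flagged) falls out of the data structure, which is the kind of traceability the documentation requirements point at.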
Opportunity for software testers
Elfield noted that ongoing monitoring is now mandatory for high-risk AI applications. “If an application falls under this category, then regular testing must take place to ensure accuracy, reliability, and security,” he said. This includes tracking system failures or breaches and implementing rapid remediation protocols.
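In practice, that kind of ongoing monitoring often reduces to a recurring check of live predictions against a quality floor, with an alert when the floor is breached. A minimal sketch, with assumed function names and an assumed threshold:

```python
# Illustrative recurring accuracy check for a deployed model.
# The 0.95 floor is an assumption for the example, not a regulatory figure.

def accuracy(predictions, labels):
    """Fraction of predictions matching their ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def monitor_batch(predictions, labels, min_accuracy=0.95):
    """Return (ok, measured accuracy); callers log or page on a failed check."""
    acc = accuracy(predictions, labels)
    return acc >= min_accuracy, acc
```

A real deployment would run this on a schedule against labelled samples and feed failures into the remediation protocols Elfield describes.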
Both Elfield and Mowat agreed that the AI Act presents a major opportunity for software testers, particularly in tackling the “black box” challenge of opaque AI decision-making. New responsibilities will involve validating models for fairness, bias, and explainability, often requiring retraining teams and deploying advanced audit techniques.
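To make the fairness piece concrete: one widely used check is the demographic parity gap, the difference in positive-outcome rates between two groups. The sketch below shows that check as a simple QA-style gate; it is one common metric among several, not a method prescribed by the Act, and the names and 0.1 tolerance are illustrative.

```python
# Hedged sketch of a bias check: the demographic parity gap is the absolute
# difference in positive-outcome (e.g. loan approval) rates between groups.
# The max_gap tolerance is an assumed example value.

def positive_rate(outcomes):
    """Share of positive outcomes (1s) in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates; 0 means parity."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

def check_parity(group_a, group_b, max_gap=0.1):
    """Return (ok, gap) so a test suite can fail the build when bias exceeds tolerance."""
    gap = demographic_parity_gap(group_a, group_b)
    return gap <= max_gap, gap
```

Wired into a test suite, a check like this turns “validate for bias” from a policy statement into a repeatable, documented pass/fail result.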

“AI requirements must be validated through comprehensive testing across functional, performance, security, and stress layers, but also in algorithmic integrity,” Mowat said.
For financial institutions, trust is paramount. “Trustworthy AI is critical for both regulatory compliance and public acceptance,” said Elfield.
He urged banks to establish governance frameworks that ensure AI is “reliable, unbiased, and explainable,” backed by quality management systems and robust human oversight.
Mowat added that banks will also need to explain their AI risk posture to regulators in detail. “This is a broad area requiring collaboration with legal teams, understanding suppliers, knowing how data is managed, and maintaining detailed documentation at every step.”
Phase three of the AI Act is scheduled for August 2026 and will bring further governance obligations, particularly for public sector applications.
But for private firms, including those in finance, the current phase already sets a demanding precedent, one that positions QA teams as central to ensuring both technical soundness and regulatory alignment.
Yoğurtçu believes the stakes will only rise: “By taking these steps, organisations can build trust in their AI outcomes and demonstrate compliance with the EU AI Act.”