AI testing surges, but many QA teams remain stuck in early maturity

BrowserStack co-founders Nakul Aggarwal and Ritesh Arora (right)

AI in software testing is rapidly becoming a default expectation across modern engineering organisations, but the gap between adoption and real operational maturity is widening, particularly for complex, regulated industries such as banking, insurance, and capital markets.

For QA and software testing teams inside financial services firms, the pressure is intensifying on multiple fronts: accelerating release cycles, rising dependency on third parties, increasing scrutiny from regulators, and the growing complexity of AI-enabled systems themselves.

In that environment, AI-driven testing is often framed as the next major productivity leap. Yet many teams are discovering that simply introducing AI tools does not automatically translate into stronger assurance, better governance, or scalable automation.

A new industry report highlights this tension, showing that while AI has become central to modern testing strategies, most organisations remain stuck in early stages of maturity, constrained by fragmented workflows, uneven integration, and unresolved challenges around operational readiness.

Integration is the real bottleneck

BrowserStack this week released its State of AI in Software Testing 2026 report, based on insights from more than 250 software testing leaders worldwide.

The findings point to what the company describes as a widening gap between AI adoption and operational maturity.

“Too many teams think adopting AI is the finish line, when it’s really the starting point,” said Nakul Aggarwal, co-founder and CTO of BrowserStack. “The real work is integrating it into everyday workflows, training teams well, and building systems that scale. That separates meaningful progress from surface-level automation.”

The report found that 94 per cent of teams now use AI in testing in some form, but only 12 per cent have reached full autonomy, suggesting that most organisations are still operating in hybrid environments where AI supports discrete tasks rather than driving end-to-end testing execution.


Nakul Aggarwal

“Too many teams think adopting AI is the finish line, when it’s really the starting point.”

– Nakul Aggarwal

For banks and insurers, that distinction matters. Financial services QA teams operate under strict resilience, security, and compliance expectations, where automation must be predictable, auditable, and deeply embedded into delivery pipelines.

Surface-level AI adoption, without workflow integration and governance, can introduce new forms of risk, including blind spots in coverage, inconsistent decision-making, and poorly controlled automation outcomes.

Integration emerged as the most commonly cited barrier: 37 per cent of teams said integrating AI tools into existing workflows is their primary challenge, outweighing concerns around cost and skills.

That reflects a reality familiar to many large financial institutions, where testing ecosystems are often built on legacy platforms, distributed delivery teams, and complex tooling landscapes that resist rapid transformation.

Maturity remains uneven

Despite those barriers, investment is accelerating. BrowserStack found that 88 per cent of teams plan to increase AI testing budgets by more than 10 per cent next year, with nearly one in four expecting increases above 25 per cent.

Yet the report suggests that spending alone does not guarantee maturity, particularly if foundational issues such as training, integration, and scalable processes remain unresolved.

The report also highlights that returns are real but uneven. Sixty-four per cent of organisations reported ROI exceeding 51 per cent from AI testing, and those using AI for four or more years were 83 per cent more likely to see returns over 100 per cent.

That suggests the biggest gains accrue not from experimentation, but from sustained operational embedding over multiple years.

Teams are most commonly using AI for test case generation, test data creation, and automated maintenance — areas that can reduce manual effort and accelerate releases.

But for financial services firms, these use cases also raise important governance questions around data integrity, reproducibility, explainability, and control over automated changes in regulated environments.

The report’s broader message is that AI in testing is no longer a question of adoption, but of execution discipline. For banks, insurers, and other financial services companies, the next phase will depend on closing integration gaps, building internal capability, and ensuring AI-enabled QA strengthens, rather than fragments, resilience and assurance.

As Aggarwal put it, adopting AI is only the starting point. The real challenge is building systems that scale.
