The role of AI in software engineering is no longer an experiment; it is the norm. A new report from Google Cloud’s DevOps Research and Assessment (DORA) team shows that 90% of technology professionals now use AI in their daily workflows, with developers and testers spending a median of two hours a day on AI-assisted tasks.
For banks and financial services firms, however, the findings carry a sharp warning: while AI accelerates development, it also destabilises software delivery unless testing and QA practices keep pace.
The 2025 State of AI-Assisted Software Development report draws on nearly 5,000 survey responses and over 100 hours of interviews, and its central conclusion is stark: “AI is an amplifier.”
According to the report, AI magnifies the strengths of high-performing teams but exposes weaknesses in organisations struggling with legacy bottlenecks, compliance burdens, or inefficient processes.
Nathen Harvey, who leads the DORA team at Google Cloud, underscored how pervasive AI has become. “It’s almost to the point where we could have asked these technologists, ‘are you using a computer at work?’” he reportedly said.
Harvey pointed out that the research revealed a paradox: widespread adoption co-exists with scepticism. Thirty percent of respondents reported “a little” or “no trust at all” in AI-generated code, yet more than 80% said AI increased their productivity.
He added that programming fundamentals remain essential. “One of the most surprising insights from the study was that programming syntax memorization increased in perceived importance for the engineers surveyed,” Harvey explained.
Expanding the role of engineers
Also discussing the report, Ryan J. Salva, Google’s senior director of product management, argued that AI is changing what it means to be a software engineer.
“I expect a lot more people not just to participate in software development, but to get closer to the actual deployment of the software itself,” Salva told BI.

He pointed to shifts in product management: “Specification writing … is still an important aspect of the job, but product managers can now use AI to go a step further and build prototypes themselves quickly for demoing and testing.”
Salva also stressed that coding literacy still matters. “You are going to be entirely unsuccessful if you cannot read the language, at the very least. There are dozens, if not hundreds, of programming languages out there. One needs to be able to read the book.”
For QA leaders in banks, the most important of the DORA findings concerns stability. The report found that AI adoption boosts throughput, with teams shipping faster, but that it “still increases delivery instability,” leaving systems vulnerable to errors and failures.
The report further highlights seven team archetypes, from “harmonious high-achievers” to those mired in a “legacy bottleneck.”
For many financial institutions weighed down by regulatory and architectural constraints, the risk is that AI accelerates development in unstable environments, amplifying fragility rather than resilience.
The authors concluded: “Successful AI adoption is a systems problem, not a tools problem.” They argued that banks must strengthen their internal platforms, workflows, and validation processes if they want AI to create sustainable gains in productivity.
Looking ahead
Finally, Gene Kim, co-author of The Phoenix Project and long-time DORA collaborator, drew a parallel between the DevOps revolution a decade ago and today’s AI shift.

“Yes, I’ve seen and experienced how using AI can lead to problems, everything from silently deleted tests, obviously broken functionality, and even deleted production data. But I’ve also seen AI used to massively improve outcomes,” Kim wrote in the report’s foreword.
He added: “We concluded that when AI dramatically accelerates software development, our control systems must also speed up. In other words, a decade of DORA research has likely already shown the entire software development industry practices must evolve.”
For banks and insurers, where compliance with frameworks like DORA and the EU AI Act is non-negotiable, the findings are clear. AI adoption is unavoidable, but without robust, continuous testing and resilient QA pipelines, it risks multiplying instability.
As Harvey concluded: “Once an application has been released, users’ feedback will encourage, or force, you to make improvements. We need to have the right capabilities and conditions in place that allow these teams to drive successful outcomes in a sustainable manner.”