Internet giant Google has published its annual DevOps Research and Assessment (DORA) report, which contains findings that may leave some within the QA space and the wider software development community scratching their heads.
While artificial intelligence in software testing is being rolled out at an unprecedented rate, with banks, finance firms and other companies rushing to embrace the technology, the DORA report found that AI also appears to be slowing the rate at which software is developed, delivered and deployed.
In fact, there has been a 1.5% drop in software delivery throughput compared to last year, and a record 7.2% decrease in delivery stability.

Google’s developer advocate and head of the DORA project, Nathen Harvey, said it is not entirely clear what is causing these declines, but he did stress that “it is most probable that code written by artificial intelligence platforms needs to be fixed before applications are deployed in a production environment.”
This observation is underpinned by the finding that close to four in ten developers have little or no trust in code that has been purely generated by AI.
“AI has positive impacts on many important individual and organizational factors which foster the conditions for high software delivery performance,” the report states.
“But, AI does not appear to be a panacea,” Harvey and his team wrote. “Respondents reported expectations that AI will have net-negative impacts on their careers, the environment, and society, as a whole.”
The future of AI does look promising, however.
Harvey, who is based in Annapolis, Maryland, stressed that as AI tools and solutions become more comprehensive and advanced, and as QA engineering methodologies are better understood, he expects a range of software delivery and testing issues to be ironed out.
“DORA metrics are both leading and lagging indicators of the impact DevOps is having.”
– Nathen Harvey
In the meantime, to Harvey, this means that software teams and QA professionals have no choice but to continue to review and test AI-generated code.
This does not mean, however, that organisations should abandon AI, he stressed. On the contrary, in many cases AI provides a real benefit, Harvey stated.
In fact, the DORA report stated that an overwhelming majority, more than three in four, of software development professionals now rely on AI for at least one daily professional task.
The report singled out specific productivity gains in documentation quality (up 7.5%), code quality (up 3.4%) and code review speed (up 3.1%).
Google’s annual DORA findings are eagerly watched within the QA space and the wider software development community as they track a range of DevOps metrics, including change lead times, deployment frequency, change failure rates and failed deployment recovery time.
“DORA metrics are both leading and lagging indicators of the impact DevOps is having on businesses,” stressed Harvey.
“Each organization should consider them within the context of their own business goals, but, in general, keeping track of these metrics will enable DevOps teams to reduce burnout while at the same time improving productivity,” he added.
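For readers who want to see what tracking these metrics involves, the sketch below computes all four from a list of deployment records. It is a minimal illustration only; the record format and the figures in it are hypothetical, not part of DORA’s own tooling.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit_time, deploy_time, failed, recovery_time).
deployments = [
    (datetime(2024, 9, 2, 9), datetime(2024, 9, 2, 15), False, None),
    (datetime(2024, 9, 3, 10), datetime(2024, 9, 4, 11), True, timedelta(hours=2)),
    (datetime(2024, 9, 5, 8), datetime(2024, 9, 5, 12), False, None),
]
days_observed = 7  # length of the observation window, in days

# Change lead time: average time from commit to deployment.
lead_times = [deploy - commit for commit, deploy, _, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment frequency: deployments per day over the window.
deploy_frequency = len(deployments) / days_observed

# Change failure rate: share of deployments that caused a failure.
failures = [d for d in deployments if d[2]]
change_failure_rate = len(failures) / len(deployments)

# Failed deployment recovery time: average time to restore service.
avg_recovery = sum((d[3] for d in failures), timedelta()) / len(failures)

print(f"Avg lead time:       {avg_lead_time}")
print(f"Deploy frequency:    {deploy_frequency:.2f}/day")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Avg recovery time:   {avg_recovery}")
```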
Early days
Google’s latest DORA report echoes comments made by a range of industry observers in recent months, who have said that no matter how big the buzz around generative AI gets, the software quality and testing industry is still in the very early stages of adoption.
In fact, ask most software testers or QA managers about the real impact of AI today and many will say they haven’t seen a significant one yet – at least not in the form of dollars and cents.
Prashant Mohan, senior director of product management at SmartBear, who leads the company’s strategy and product direction for its testing portfolio, shares that view.

“Amidst all the buzz and excitement that many feel about the potential for generative AI in our industry, big uncertainties and anxieties continue to loom,” Mohan said.
“It seems like only yesterday that we finally put the debate to rest around, ‘No, test automation will not replace every manual tester’s job,’ and now we’re already mired in debating whether AI is going to take all of our jobs, including those held by automation engineers who previously felt irreplaceable,” he said.
To Mohan, it’s clear that testers’ reception of AI-driven changes in test automation tooling and processes will continue to reflect both scepticism and promise for the foreseeable future.
There’s valid scepticism of AI stemming from two notable concerns, argues Mohan. First, testers have been promised “revolutionary” AI solutions that will solve all their problems, and autonomously complete all of their testing with only the push of a button.
“Sound familiar? Many vendors have made the same claims about automated testing for more than a decade,” said Mohan, an engineer with an MBA who joined SmartBear over eight years ago.
“Yet, many of these promises haven’t quite delivered, leaving practitioners understandably doubtful,” he continued.
“AI will never be a silver bullet solution for all testing challenges, and every testing and QA team knows this.”
– Prashant Mohan
Often, the reality of AI in testing has not matched the hype, making it challenging to see its practical value in everyday testing scenarios.
“AI will never be a silver bullet solution for all testing challenges, and every testing and QA team knows this,” Mohan stated.
Second, it is not hard to understand why many testers today are apprehensive about AI and the potential threat it poses to their job.
He pointed out that manual testers have been down this road before, but now automation engineers are also seeing demos, reading research, and hearing the rumours of AI completing tasks at unprecedented speeds that could render ‘technical expertise’ redundant.
“It’s unsettling for anyone to feel like their relevance is in jeopardy,” Mohan stressed. “And, while the reality is that generative AI is going to disrupt many industries, there’s another reality that bears stating and repeating until it’s accepted by all as just as true: AI should be seen as a tool that significantly enhances the tester’s role rather than replaces it.”
Zooming in on generative AI: not unlike test automation in many regards, it is a powerful tool to be leveraged by testers.
It can significantly accelerate the completion of tasks that provide no extra value or ROI when performed by a human, so that humans have more time (or, in some cases, any time at all) for the tasks that are best suited to humans and that machines perform worse.
“Lastly, the most innovative solutions to complex problems in testing will always intersect at the intelligent use of modern technologies and the indispensable human touch,” Mohan said.
“As both these notions swirl, let’s talk about where exactly AI is having a positive impact and delivering immediate value for testers right now.”
Test automation
The increasing demands placed on development and testing teams to deliver high quality software at unprecedented speeds continue to fuel a drive toward identifying areas for workflow improvement.
“Test automation plays a pivotal role in this pursuit,” Mohan noted.
“Testing broadly is highly pattern-based at scale. Once you identify a set of regression test cases, you perform the same actions over and over,” he added.
“Any pattern-based work in which you execute repetitive actions consistently, and which could be performed by a robot, is a natural fit for AI.”
Mohan explained: “Imagine a test case written in natural language within Jira, where many testers spend their time. With a simple click, AI can fix grammar (think Grammarly for testing), make the test case legible, and break it into manageable steps, enabling new testers who have never seen your application or test case before to execute it.”
After review and any needed adjustments, the test can be automated with another click, removing the need for scripting or recording.
“You’ve eliminated the task of creating an automation test! Even better, all this works seamlessly within a tester’s existing workflow, and there is no need to learn new processes or tools,” Mohan stressed.
“This realisation was my own personal ‘a-ha’ moment of seeing AI’s potential in testing, and exploring where it can further assist, not replace, the talented, valued users I speak with all the time and strive to support.”
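Mohan is describing SmartBear’s own tooling, but the general shape of the flow is easy to sketch. The snippet below is a hypothetical, vendor-agnostic illustration using the OpenAI Python SDK; the model name, prompt and raw test case are all assumptions, not SmartBear’s implementation.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any LLM client would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A deliberately messy, hypothetical test case as it might sit in a Jira ticket.
RAW_TEST_CASE = "login to app, chek that dashbord loads an user can se their acount balance"

def tidy_test_case(raw: str) -> str:
    """Ask the model to fix grammar and split the case into numbered, executable steps."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Rewrite the test case with correct grammar, then break it "
                        "into short, numbered, executable steps, each with an "
                        "expected result."},
            {"role": "user", "content": raw},
        ],
    )
    return response.choices[0].message.content

print(tidy_test_case(RAW_TEST_CASE))
```

The same pattern – a raw test case in, legible numbered steps out – would work with any LLM client.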
‘Flakiness’
Another notable advancement for AI in testing is in reducing test flakiness, effectively addressing the instability of automated tests when there are changes in the application under test or its environment. “AI is helping ensure that tests remain intact and effective in these scenarios,” Mohan said.
Through “self-healing” tests, teams today are maintaining large test suites effectively, even as software systems undergo continuous, iterative development and updates.
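Commercial self-healing relies on machine-learned models of the application’s UI, but the underlying fallback idea can be shown in a few lines. The Selenium sketch below is a simplified, hypothetical illustration: it tries a ranked list of locators for the same element and “heals” by falling back when the primary one breaks.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

# Ranked locator candidates for one element; if the primary ID is renamed
# in a new build, the test "heals" by falling back to the next candidate.
SUBMIT_LOCATORS = [
    (By.ID, "submit-btn"),
    (By.CSS_SELECTOR, "button[type='submit']"),
    (By.XPATH, "//button[normalize-space()='Submit']"),
]

def find_with_healing(driver, locators):
    """Try each locator in order, logging when a fallback was needed."""
    for index, (by, value) in enumerate(locators):
        try:
            element = driver.find_element(by, value)
            if index > 0:
                print(f"healed: primary locator failed, matched via {value!r}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"no locator matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")  # hypothetical page under test
find_with_healing(driver, SUBMIT_LOCATORS).click()
driver.quit()
```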
Additionally, AI-powered visual testing allows testers to quickly compare images and identify discrepancies following code changes.
“With AI, testers can automate this manual, time-consuming task and catch issues that easily go unnoticed when only a human eye is hunting for them,” Mohan said.
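AI-powered visual testing tools typically apply perceptual or ML-based comparison, but the baseline mechanic they build on is a pixel-level diff of two screenshots. The Pillow sketch below illustrates only that mechanic; the file names are placeholders.

```python
from PIL import Image, ImageChops

# Compare a stored baseline screenshot against one from the latest build.
# Assumes both screenshots have identical dimensions.
baseline = Image.open("baseline.png").convert("RGB")
current = Image.open("current.png").convert("RGB")

diff = ImageChops.difference(baseline, current)
bbox = diff.getbbox()  # bounding box of all changed pixels, or None

if bbox is None:
    print("screens match pixel-for-pixel")
else:
    changed = diff.crop(bbox)
    # Fraction of pixels in the changed region that differ at all.
    nonzero = sum(1 for px in changed.getdata() if px != (0, 0, 0))
    ratio = nonzero / (changed.width * changed.height)
    print(f"difference region {bbox}, {ratio:.1%} of it changed")
```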
While testers are beginning to utilise these AI-powered capabilities, no company can claim to have legitimately ‘perfected’ an AI solution, he added.
“Despite this, testers are experiencing incremental benefits, as AI accelerates the execution of tedious tasks, highlighting its increasing role in expediting workflows and driving efficiency.”
While ‘larger’ problems across the software development lifecycle might not be magically solved through AI, cutting down the time it takes to complete repetitive, often mundane tasks is certainly a noteworthy step in the right direction, Mohan shared.
Obstacles in QA

Another industry voice who argues that, despite the impressive technological advances, the presence of AI “has not always been positive” is Margarita Simonova, the founder and CEO of ILoveMyQA.com.
“AI does not just casually learn how to perform its various functions; it requires a vast amount of carefully selected data to be fed to it. All of this data has to be curated by QA professionals,” she stressed.
“They must ensure that the data being fed to the system is accurate, otherwise the AI model will be trained incorrectly.” Then there are what Simonova calls “mysterious methods.”
“In general, many people still do not fully understand how AI works. Even AI experts don’t always understand how the system is learning,” she noted.
“On top of that, the companies that make AI often try to keep their methods secret. Not fully understanding how AI works can become a big issue.”
“To keep up with AI, a QA professional needs to be constantly on their toes.”
– Margarita Simonova
Finally, AI is still rapidly evolving. “To keep up with AI, a QA professional needs to be constantly on their toes, continually learning about the latest trends,” Simonova highlighted.
“This includes learning about the latest AI models and how they have changed from their predecessors,” she continued.
“This means that despite being a tool to help QA professionals more efficiently do their jobs, AI also takes a lot of work to master and keep learning about,” Simonova concluded.
Complex role
Simonova’s sentiments are echoed by another industry veteran. Joseph Sorrentino, president of Lean Quality Systems, a quality assurance consulting firm, thinks the integration of AI into QA processes brings both significant opportunities and serious risks, particularly in safety-critical environments where lives and reputations are on the line.
The software development life cycle (SDLC) and product development life cycle are essential frameworks in QA, guiding the development, testing and deployment of products and software, Sorrentino argued.
“Each phase—from requirements gathering to design, implementation, testing, and maintenance—requires meticulous attention to detail and adherence to safety standards,” he wrote in a recent QM analysis.
Therefore, AI presents both opportunities and challenges at each stage. “AI can help identify and prioritize requirements by analysing large datasets,” Sorrentino pointed out.
“However, the inherent complexity and unpredictability of AI models raise concerns about whether these requirements can be met consistently, especially in safety-critical applications,” he was quick to add.
In terms of design, AI tools can optimise design processes, but the lack of transparency in AI decision-making poses a significant risk.
“Designers must ensure that AI-generated solutions are fully understood and meet all safety and regulatory requirements,” Sorrentino warned.
“AI models, particularly large language models (LLMs), are prone to ‘hallucinations’.”
– Joseph Sorrentino
When it comes to implementation, integrating AI into QA processes can improve efficiency, but it also introduces the risk of errors.
“AI models, particularly large language models (LLMs), are prone to ‘hallucinations’—generating plausible but incorrect information—which could compromise the integrity of the product,” Sorrentino explained.
And then there is obviously the vital process of testing. “AI can assist in automating and accelerating testing processes, but its lack of reliability in producing consistent, repeatable results is a major drawback,” he pointed out.

An extension of this process is maintenance. “AI can help monitor systems in real-time, predicting potential failures before they occur,” he continued.
“However, the ability to trace and validate AI-driven decisions remains a challenge, making it difficult to ensure ongoing compliance with safety standards,” Sorrentino noted.
Sorrentino argued that different roles within an organisation view AI through various lenses, each with unique concerns and responsibilities.
Firstly, there are the CEOs and COOs. “For senior executives, AI is often seen as a tool for increasing efficiency and profit margins,” he said.
However, the reputational risks associated with AI failures, particularly in safety-critical industries, must be carefully weighed against potential gains, Sorrentino highlighted.
Meanwhile, QA managers are tasked with maintaining the highest standards of safety and compliance.
“They must navigate the challenges of integrating AI into existing QA processes while ensuring that these systems do not undermine the rigorous standards that their industries demand,” he pointed out.
In addition, there are the engineers and technicians. “Those on the ground implementing AI in QA processes face practical challenges, such as ensuring that AI tools are correctly configured and do not introduce errors,” Sorrentino explained.
“Human oversight remains critical, especially when AI is deployed in environments where safety is paramount,” he concluded.