A new wave of intelligent automation is reshaping how enterprises think about process efficiency, and software testing and QA teams are right in the middle of it.
At a recent roundtable titled ‘Smarter, Faster, Riskier? Rethinking Automation in the Age of Agentic AI,’ AI and automation experts gathered to unpack what agentic AI really means for enterprise workflows, and what it demands in terms of quality assurance, governance, and risk management.
The panel, moderated by Blueprint CEO Dan Shimmerman, featured Douglas Heintzman, chief executive of Syncura; Dr. Jenya Doudareva, AI Governance Lead at Canada Life; and Dr. Pramila Nathan, a GenAI strategist and AI transformation advisor to a range of banks and several Fortune 500 companies.
Their message to engineers and QA testing professionals within the finance space was clear: agentic AI offers powerful capabilities, but without proper oversight, it can introduce significant complexity and operational risk.

Unlike traditional rule-based automation or RPA, agentic AI involves autonomous agents that are goal-directed, capable of sequencing tasks, reasoning across systems, and even adjusting their own goals.
These agents don’t just follow scripts: they operate independently within a defined scope. This marks a major shift in how software automation is deployed and validated.
Dr. Nathan offered a real-world example: a network of AI agents used for HR succession planning, where each agent builds on the insights of the others, analysing profiles, making career path suggestions, collecting feedback, and adapting recommendations over time.
Such complexity poses fresh challenges for testing environments, where reproducibility, traceability, and validation are essential.
“Agentic AI isn’t just about automation anymore, it’s about collaboration,” Dr. Nathan said. “These systems make decisions that impact people and businesses. Testing them requires a whole new level of scrutiny.”
Risks multiply as autonomy increases
The panelists were clear: agentic AI is not a panacea. It is computationally expensive, often opaque, and, if implemented without discipline, can magnify existing weaknesses in data quality, logic design, and infrastructure.
“There’s a temptation to throw agentic AI at everything,” warned Heintzman. “But many of these models still hallucinate logic chains or fabricate reasoning. That’s a huge issue when decisions need to be explainable, like in financial services or healthcare.”
Dr. Doudareva added that agentic AI doesn’t create new risks so much as amplify old ones. “If your data pipelines are weak or your business rules are poorly defined, agentic AI will only expose those flaws faster, and on a larger scale,” she noted.
For QA teams, this means test coverage must expand beyond input-output validation. It requires evaluating agent behavior across unpredictable scenarios, assessing dynamic goal alignment, and monitoring model drift over time.
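Monitoring model drift, for instance, can begin with comparing an agent's current output distribution against a validated baseline. The sketch below is illustrative only (it was not presented at the panel): a minimal population stability index, a drift metric commonly used in financial-services model monitoring. The function name, bin count, and thresholds are assumptions for illustration.

```python
import math

def population_stability_index(baseline, current, bins=10):
    """Compare two score distributions; higher PSI means more drift.

    A common rule of thumb: PSI < 0.1 is stable, > 0.25 warrants review.
    """
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def histogram(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty bins so the log term below never sees zero
        return [(c + 0.5) / (len(values) + 0.5 * bins) for c in counts]

    b, c = histogram(baseline), histogram(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Identical distributions -> PSI near zero; shifted -> noticeably higher
stable = population_stability_index([0.1, 0.2, 0.3] * 50, [0.1, 0.2, 0.3] * 50)
drifted = population_stability_index([0.1, 0.2, 0.3] * 50, [0.7, 0.8, 0.9] * 50)
print(stable < 0.1 < drifted)  # prints True
```

In practice a check like this would run continuously against logged agent outputs, alerting the QA team when behaviour departs from the validated baseline rather than waiting for a failed end-to-end test.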

According to the panel, the best applications of agentic AI are in domains that require contextual awareness and adaptive reasoning, areas where RPA or deterministic automation fall short. Examples include:
- fraud detection in finance, where response strategies must adapt to evolving patterns;
- triage systems in healthcare, prioritising cases based on dynamic patient data;
- contract compliance in legal operations, interpreting ambiguous language;
- predictive maintenance in logistics, balancing machine learning and decision logic;
- HR development, offering personalised, evolving employee guidance.
“Agentic AI is not replacing humans,” said Shimmerman. “It is augmenting them, especially in the high-cognitive-load, low-reward decisions we’re already struggling to scale.”
Importantly, the panel emphasised that agentic AI is not a replacement for robotic process automation. Simple, stable processes still benefit from RPA’s speed, predictability, and cost-efficiency.
“What we’re seeing is not a binary switch,” said Dr. Doudareva. “It is a continuum. You begin with RPA, integrate machine learning where needed, and only bring in agentic AI for the cases that require judgment and adaptability.”
Heintzman likened it to having the right tools for the job: “Don’t use a jackhammer to hang a picture. Likewise, don’t apply agentic AI to spell-check a document.”
Oversight
One of the strongest takeaways for QA and engineering professionals was that governance must be baked into every deployment.
As AI agents make increasingly autonomous decisions, teams must establish clear accountability, validation protocols, and ethical safeguards.
“There is no agentic AI without responsible AI,” said Dr. Nathan. She described implementing transparent “glass-box” models in high-risk environments and building feedback loops with business stakeholders to audit decisions in real time. Without this level of oversight, even well-meaning automation can go off the rails.
For software testers, this means embracing their evolving role not just as bug-catchers, but as safety engineers, ensuring AI systems behave reliably, fairly, and in alignment with business goals.
Looking ahead, the panelists forecast deeper integration of agentic AI into core business functions, greater regulatory scrutiny, and the rise of “digital twins”: virtual agents that mimic human preferences and workflows. This has profound implications for testability and risk management.
Heintzman described a vision of ecosystem-wide automation where systems collaborate across company boundaries, sharing data, reasoning, and decisions on a secure infrastructure.
For QA leaders, that means preparing for multi-agent systems that must be tested not just in isolation, but across interconnected workflows.
The final message from the panel was clear: agentic AI is a powerful tool, but it must be deployed strategically.
QA and automation leaders should identify real pain points, not hypothetical ones; assess their automation maturity honestly; build strong governance and testing protocols from the start; and choose agentic AI only where it adds unique value.
“Start small, start smart,” advised Shimmerman. “And never forget, if you can’t validate it, you probably shouldn’t automate it.”