AstraZeneca’s Vishali Khiroya on AI governance and agentic testing

UK-based Vishali Khiroya

At this week’s QA Financial Healthcare & Insurance Forum in London, AstraZeneca’s Director of Testing Strategy, Development and Automation, Vishali Khiroya, offered a detailed look at how one of the world’s largest pharmaceutical firms is redefining its testing strategy for an AI-driven future.

In conversation with QA Financial’s Michiel Willems, she explains the development of AstraZeneca’s internal ChatGPT-style tool, the regulatory lens her teams apply to AI, and why she believes the role of testers will fundamentally change by 2030.

Building an internal ChatGPT for regulated testing

Discussing AstraZeneca’s decision to develop its own large-language model, Khiroya says the initiative aligns directly with the company’s long-term digital roadmap.

“I think a lot of this is in line with our bold ambition to kind of drive AstraZeneca’s kind of IT 2030 strategy, which is a lot of what we talk about currently.”

She describes an organisation determined to move at the speed of technological change. “We all know we’re living in a time where technology is driven by AI. We’re going at pace with this.”

Security considerations were central. Public AI tools were off the table. “From a security perspective, OpenAI poses risks to us as an organisation in terms of using our own data within it,” she explains. The answer was to create a private model. AstraZeneca’s teams developed “AstraZeneca’s own version of ChatGPT… within our AstraZeneca kind of ecosystem.”


“Whatever tech you might be talking about, it is the testing of those that we internally do as well.”

– Vishali Khiroya

For Khiroya’s testing organisation, this created new potential for transformation. “It opens up a whole world of how can we use it within our testing processes.” Her work focuses on embedding the tool into live operations. “I’m part of the team that have consumed it and how I’ve now implemented it within our teams.”

She is clear about the governance hurdles the builders faced. “I’m assuming the challenges there would be all sorts of security… governance… ethics… accuracy… hallucination… all those kind of normal AI perspectives that we have that have to be considered within the organisation for it to scale across our enterprise.”

Extending regulated frameworks to AI

In a sector defined by compliance, Khiroya emphasises continuity as much as change. “We adapt and we follow pretty much the same regulatory principles. So we have SOPs, our teams follow a certain process when it comes to regulatory testing.”

For her, mature testing principles already map well onto AI. “To be honest, it’s not that different to when, if you were a good tester, you would follow some of these processes anyway.”

She makes a distinction between AI used within the testing process and the testing of AI itself. “AI can be seen in two lights within testing. One, adopting AI within your testing process and two, the testing of actual AI.” AstraZeneca increasingly does both.

“It could be agentic. It could be digital twins. Whatever tech you might be talking about, it is the testing of those that we internally do as well.”

This has led to new structures and guidelines. “We’ve had new SOPs written from an AI governance perspective in collaboration with those teams. And that helps us make sure that we keep within the remit of what we need to from a regulatory perspective.”


WATCH OUR PODCAST WITH VISHALI KHIROYA HERE


On data governance, Khiroya stresses that AstraZeneca’s discipline has not changed with the arrival of AI. “Majority of all the data we use is test data.” Regulatory data is strictly off limits. “We don’t use any personal or regulatory related data from any of our systems.”

Instead, the company relies on long-established methods. “We follow normal testing practices and standards from an obfuscation data masking perspective… nothing revolutionary that I’m saying. You just continue to adapt them.”
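The obfuscation and masking practices she refers to can be sketched in a few lines. The field names, record shape, and masking rule below are illustrative assumptions for this article, not AstraZeneca's actual implementation:

```python
import hashlib

def mask_record(record: dict, sensitive_fields: set) -> dict:
    """Return a copy of the record with sensitive fields replaced by a
    deterministic, irreversible token (one common masking approach)."""
    masked = {}
    for key, value in record.items():
        if key in sensitive_fields:
            # Hash the value so the token is stable across test runs
            # but the original cannot be recovered from it.
            digest = hashlib.sha256(str(value).encode()).hexdigest()[:10]
            masked[key] = f"MASKED_{digest}"
        else:
            masked[key] = value
    return masked

# Hypothetical test record; no real or regulatory data involved
record = {"patient_id": "P-1001", "name": "Jane Doe", "trial_site": "LON-03"}
print(mask_record(record, sensitive_fields={"patient_id", "name"}))
```

Deterministic tokens keep masked test data internally consistent (the same input always masks to the same token), which matters when the same identifier appears across multiple test records.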

Measuring productivity, efficiency and value

As AstraZeneca expands both automation and AI-driven testing, tracking return on investment has become increasingly important. Khiroya says the organisation has introduced frameworks to measure this systematically. “We have an internal tracker, first of all.”

She is keen to make clear that this is not surveillance. “It might sound a little bit big brothery, but it’s not. It’s literally a tracker that our teams use to input productivity.”

For automation, the focus is clarity and comparability. “It’s the number of hours you save when you use automation compared to your manual testing.” KPIs include “number of hours saved,” “the number of test scripts you’ve created,” and “what’s the equivalent FTE saving that you would get.”

AstraZeneca maintains this visibility globally. “We’ve got a test automation dashboard, which gives us that level of metrics across our X number of projects.”
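The KPIs she lists reduce to straightforward arithmetic. The example figures and the working-hours-per-FTE assumption below are illustrative, not AstraZeneca's internal numbers:

```python
def automation_savings(manual_hours_per_run: float,
                       automated_hours_per_run: float,
                       runs: int,
                       fte_hours_per_year: float = 1600.0) -> dict:
    """Hours saved by running automated tests instead of manual ones,
    plus the equivalent full-time-engineer (FTE) saving.
    The 1,600-hour working year is an illustrative assumption."""
    hours_saved = (manual_hours_per_run - automated_hours_per_run) * runs
    return {
        "hours_saved": hours_saved,
        "fte_equivalent": round(hours_saved / fte_hours_per_year, 2),
    }

# Hypothetical example: 200 regression runs, 8h manual vs 0.5h automated
print(automation_savings(8.0, 0.5, 200))
# hours_saved = 1500.0, fte_equivalent = 0.94
```

A dashboard of the kind described would aggregate figures like these per project, alongside counts of test scripts created.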


“Twenty years ago, I was a hands-on test engineer… I think in 10 years, that role will be very, very different.”

– Vishali Khiroya

Measurement for AI is still emerging. “We are not as mature in measuring that as we are with our automation.” Even so, gains are becoming evident. “We are doing things like velocity checking… increase in velocity because we are using AI in test case generation.”

The company is also tracking workforce impact. “We have repurposed a number of engineers in a new project instead of onboarding those new members of the team.”

Across the industry, she notes, no one has definitive numbers. “Nobody can strictly say that I’m going to get 60% productivity from this. We’re all still trying and learning in this AI journey.”

2030 and the rise of agentic testing

Looking ahead to AstraZeneca’s 2030 goals, Khiroya believes testing roles will undergo profound change. “I think that testing as I think engineering in itself and testing being one of those key roles within engineering, I think will transform.”

She reflects on her own early career. “Twenty years ago, I was a hands-on test engineer… I think in 10 years, that role will be very, very different.”

Agentic testing is central to her forecast. “Our ambition is to start using agentic AI to orchestrate testing.” This will eventually produce a new generation of testing professionals. “We will become native agentic testers rather than what we call functional and automation.”

Yet one principle remains unchanged. “Always having that human in the loop… I think that will never change.”

For Khiroya, the future of quality engineering is defined by intelligent orchestration. She sees the biggest shift coming from “how you can embed AI across the whole testing process,” which she believes will reshape “those personas… in a quality engineering perspective.”

