Amdocs, the Nasdaq-quoted software business, has identified financial services as a key area for growth, especially among larger banks that face legacy IT and integration challenges similar to those of its telecoms customers, a core client base for Amdocs. QA Financial spoke to Dror Avrilingi [pictured], VP and head of Amdocs Quality Engineering, International and Strategy, about that strategic shift and about BrAIn, his team’s AI roadmap for banks.
Q: Dror, tell us about your background in quality engineering and at Amdocs.
A: I started my Amdocs journey 23 years ago as a tester, so I’m probably one of those few people who have gone all the way from that starting point to becoming a VP in a company such as Amdocs.
I’ve had multiple roles in the testing space at Amdocs, including leading a key North American tier-one engagement delivering both Agile and Waterfall projects.
I spent three years in Melbourne, Australia, leading our business in Asia Pacific before returning to my homeland of Israel, where I became the CTO of Amdocs Quality Engineering, responsible for driving technological change and the transition to DevOps. During this time I developed five patents and realised my passion for exploring and implementing the cutting-edge technologies and disruptions that are shaping the future of NextGen Quality Engineering.
For the last two and a half years I have been running all of our business internationally for quality engineering, while still helping to drive our innovation journey as our global Head of Strategy. It’s extremely rewarding to be able to combine these roles by integrating our technology into how we sell, and how we deliver projects, as well as by developing the practices to ensure that our organisation will be able to adopt new technologies in a matter of days and weeks, not months and years, as the pace of disruption accelerates.
Q: How does the quality engineering team sit within the broader software business at Amdocs?
A: Amdocs Quality Engineering is an independent business unit within Amdocs Global Services.
While we support the delivery of Amdocs solutions, over 60% of our work is providing testing services on top of non-Amdocs products. We can test products, or anything else, so we are not locked into Amdocs product offerings. I think that’s quite a unique structure within a company such as ours.
Q: Does working outside the Amdocs product range pose particular challenges?
A: I think we have an advantage because there is a huge development factory within Amdocs, so we know how to adjust ourselves to work in different modes of operation: waterfall, DevOps or a hybrid approach.
Amdocs Quality Engineering cannot stand still because we need to keep pace with the main company, which requires the introduction of advanced technologies and advanced modes of operation. We can therefore apply the same disciplines when we deliver a non-Amdocs testing service.
Q: Tell us about your work with banks and other financial firms.
A: We’ve been working with multiple banks worldwide, to which we bring a unique approach.
Typically, we begin with an assessment aimed at uncovering the unknowns. We have an approach called Q-MATE that enables us to go into an organisation where we aren’t the incumbent tech provider and assess the state of its quality engineering operations. Through that assessment we help surface blind spots and make the unknowns known, and then we begin gradually modernising the activities.
We focus on a “Centre for Enablement” approach which positions quality engineering teams as a centre for enabling the adoption of technologies and practices elsewhere in the business. At the end of the day, if I’m boosting automation capabilities, I want everybody to use them – not only testers but also developers.
Typically, large banks have in-house testing teams or vendors running huge numbers of test cases that are not necessarily highly automated, and their business models are not incentivised to deliver efficiencies.
We structure our approach in a way that takes the customer from ideation to production. We typically interview dozens of people at all levels of the quality organisation, analyse the results, and then say to them: “Here are your blind spots. Here is how you need to align yourself.”
Banks that want to go digital have to change the way they are working. An example of this is one of the banks that we are working with in Asia Pacific. It’s a very large bank that has moved to value stream management but the only team that didn’t migrate was their testing team. So that’s a threat to the entire organisation because if you don’t move to a “centre for enablement” then it’s not a sustainable model.
Q: What are the major challenges you’ve encountered at banks when moving to these new approaches to testing?
A: Banks want to move very fast to different architectures and at the same time they operate in a highly regulated environment. You have to meet the regulations, and you also need to standardise or consolidate tools.
The CXOs at banks want hyperscale automation. Everybody wants to do machine learning; everybody wants to do GenAI. But not many people understand that you have to go through a journey. That journey starts with robust automation; then you can apply AI-driven automation to reduce the cost of maintaining and creating automated test scripts; then you can move to predictive AI, which makes your automation more accurate; and finally you move to GenAI. You can’t jump from automation to GenAI, or from automation to predictive AI, without the correct infrastructure in place.
I feel that the experience that we have had working in telecommunications, which we are now applying to banks, allows us to know what the journey looks like.
We know what automation at scale looks like; we can show what AI-driven automation looks like; we can show how we use six different machine learning models to do defect prediction; and then we can start applying large language models, or foundation models, to generate some of those tests automatically.
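The interview does not detail the six models Amdocs uses, so purely as an illustration of the idea, a defect-prediction model scores incoming code changes so the riskiest ones are tested first. The sketch below uses a simple logistic model with hypothetical, hand-set features and weights; a real system would learn these from historical defect data.

```python
import math

# Hypothetical change-level features and weights, for illustration only.
# A production model would be trained on the bank's own defect history.
WEIGHTS = {
    "lines_changed": 0.004,  # larger diffs tend to carry more risk
    "files_touched": 0.15,
    "past_defects": 0.6,     # prior defects in the touched modules
}
BIAS = -2.0

def defect_risk(change: dict) -> float:
    """Return a 0..1 defect probability for a code change (illustrative)."""
    z = BIAS + sum(WEIGHTS[k] * change.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps score to probability

def prioritise(changes: list[dict]) -> list[dict]:
    """Order changes so the riskiest are tested first."""
    return sorted(changes, key=defect_risk, reverse=True)
```

The value of a model like this in a quality engineering pipeline is triage: rather than running every regression suite on every change, the highest-risk changes get the deepest testing.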
Q: Tell us a little bit about BrAIn?
A: Over the last two years we have created a roadmap called BrAIn, which consists of five stages illustrating the journey from automation to predictive AI to generative AI. The map shows where we start to collect all the information (traceability links, tagging labels and so on) so we can apply machine learning and predictive AI, then take it to the next level and continue to connect more things.
Q: What timeframe is required for this transformation? And how do you charge?
A: Some customers are more mature in terms of the data that they have readily available, so in these cases we can apply predictive AI faster. But for any customer today we have a very solid roadmap and we know how to show value and improvement quarter by quarter. When we say improvement we mean an increase in velocity or the introduction of new models and features. Over time that translates to a cost reduction, a return on investment or an increase in throughput and productivity – whatever benchmark is required.
Our work for a customer used to be priced on the basis of a certain number of test cases or days, but now we are working on capacity modelling which is an outcome-based model. This is focused on how many features we can deliver in a year, not how many test cases. It’s very important for us to make the connection between the features and business agility.
Q: What are the major challenges banks will face in using AI for software development?
A: I think the main issues that we currently see relate to security and data privacy.
Banks often prefer to work with open-source large language models (LLMs). But they need to install these in a controlled environment so that the data is locked down and there is no security risk. The disadvantage is that you need a tremendous amount of computing power and you might not be able to update the model quickly. Being able to secure data effectively when working with LLMs is something that really needs to be addressed.
I also think GenAI is not there to replace people; it’s there to augment tasks. When people say we can replace testers, I don’t think you are going to be able to do so. Prompt engineering is key to improving the accuracy of the answers you get from an LLM, and that requires skilled people. So there’s also a mindset challenge.
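As an illustration of the prompt engineering Avrilingi describes, the skill is in grounding the model with a role, concrete context, and output-format constraints rather than asking an open-ended question. The template below is our own sketch, not an Amdocs artefact, and the field names are hypothetical:

```python
def build_test_prompt(requirement: str, api_spec: str, max_cases: int = 5) -> str:
    """Assemble a structured prompt for generating test cases from a requirement.

    Illustrative template: the role, context and format constraints are what
    turn a vague ask into a reviewable, bounded test artefact.
    """
    return (
        "You are a senior quality engineer at a bank.\n"
        f"Requirement under test:\n{requirement}\n\n"
        f"Relevant API specification:\n{api_spec}\n\n"
        f"Produce at most {max_cases} test cases as a numbered list. "
        "For each: preconditions, steps, expected result. "
        "Cover at least one negative and one boundary case. "
        "Do not invent endpoints that are not present in the specification."
    )
```

The closing instruction is the kind of guardrail a skilled prompt engineer adds to limit hallucinated endpoints, which matters in a regulated environment where generated tests are audited.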