
Can testing cope with AI and the regulation tsunami?


Sixsentix is a software testing specialist based in Switzerland, with a focus on banking and financial services consulting work. QA Financial spoke to Sixsentix’s managing director for Germany, Matthias Rasking [pictured] about the quality engineering challenges those customers are facing as AI and regulatory-driven change accelerates, and increases the complexity of app delivery.

Q. What is your background? 

I’m based in Frankfurt and I have been in the testing industry for 24 years. I started off as a manual tester before joining Accenture, where I focused more on test process improvement. That’s when I first got involved with the TMMi, a voluntary organisation that develops an open framework for improving software testing practices.

Then, four years ago, I became managing director of Sixsentix Germany, which is a specialised testing consultancy focussed on test automation.

 

Q. And how would you describe the market position of Sixsentix?

Sixsentix was founded by former employees in the banking and testing sectors, and now roughly 70% of our revenue comes from financial services firms. We’re primarily focussed on the German-speaking markets, with capabilities in Eastern Europe, Switzerland and Austria.

We’re known for both our independent test consulting services and our test automation services.

 

Q. What is the ownership structure of Sixsentix?

Sixsentix is a privately owned company, which was founded 10 years ago. The founder, Filip Milikic, is still the majority shareholder. Sixsentix Germany is a wholly-owned subsidiary of this main holding group.

 

Q. What are the key challenges financial firms face in accelerating automation, especially in relation to systems that integrate AI?

We see two major challenges facing our clients. The first is as old as test automation itself: where do you get the test data needed to use test automation properly and independently? We expect this challenge to become only more difficult under regulatory pressure: it is no longer just GDPR, as other financial regulators are also looking more closely at what kind of test data you use, from both a data privacy and a data security perspective.

The second is especially relevant to AI-integrated systems. It’s very difficult to be deterministic and say “this is my expected result” when designing test automation for these systems. We therefore create a lot of regression tests which focus on integration points. However, a lot of validation effort is still necessary, because you can’t be 100% sure of the expected results of an automation script.

So again, at the end of the day there’s nothing really new in terms of the challenges AI in test automation presents.

 

Q. Can you describe how you are working with one of your major financial customers?

We work with one of the largest investment funds in Germany, and we’ve been looking at all of their trading data and how we generate our test data from that. We use various machine learning algorithms to select trades of particular interest, and to examine the data to determine the type of test data that is important for each type of software release. If you have a minor release, you might look at different types of test data compared to when you are considering a major release. 
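A minimal sketch of what selecting representative trades as test data could look like, here using simple bucketing by instrument and amount band rather than the machine learning algorithms described above; the field names, buckets and selection rule are all assumptions for illustration, not the actual Sixsentix pipeline.

```python
# Sketch: selecting a representative subset of trades as test data.
# Trades are bucketed by instrument type and amount band, and one trade
# per bucket is kept, so distinct behaviours are covered without
# replaying the full trading history. Field names are illustrative.

def amount_band(amount: float) -> str:
    if amount < 10_000:
        return "small"
    if amount < 1_000_000:
        return "medium"
    return "large"

def select_test_trades(trades: list[dict]) -> list[dict]:
    buckets: dict[tuple, dict] = {}
    for trade in trades:
        key = (trade["instrument"], amount_band(trade["amount"]))
        # Keep the first trade seen per (instrument, band) bucket.
        buckets.setdefault(key, trade)
    return list(buckets.values())

trades = [
    {"id": 1, "instrument": "equity", "amount": 5_000.0},
    {"id": 2, "instrument": "equity", "amount": 7_500.0},    # same bucket as id 1
    {"id": 3, "instrument": "equity", "amount": 2_000_000.0},
    {"id": 4, "instrument": "fx_swap", "amount": 250_000.0},
]

selected = select_test_trades(trades)
print([t["id"] for t in selected])   # -> [1, 3, 4], one per bucket
```

In practice the bucketing rule would be learned from the data rather than hard-coded, and could differ for minor versus major releases, but the shape of the problem is the same: map many historical trades onto a small, representative test set.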

We have been able to reduce their costs by 50% compared to their previous testing team. This was achieved by automating a lot of these data discovery tasks using AI, and then combining that with automation.

We were also able to cut the time for a major release from four months to two weeks simply by preparing a lot of data up front and removing the bottleneck of asking the business which test cases they consider to be important for this latest release, because we’ve already seen what is important. 

We’re now rolling this model out to other financial services clients. It’s all about understanding the features, the functionality and the test data, and then mapping that to test automation. We believe this is especially relevant for financial firms, which tend to run a lot of commercial off-the-shelf applications that are all somehow connected. We’re then able to use our collected data to look at the entire process from an end-to-end perspective, whereas previously we needed to go to multiple departments to get that sort of data.

 

Q. The EU’s Digital Operational Resilience Act (DORA) comes into force in 2025. Is that a growing source of business for you? 

Testing providers like Sixsentix can definitely help financial firms prepare for DORA. Examining regulatory requirements comes naturally to testers: we’re talking about performance, security, maintainability, time to recover and all of those non-functional tests that we’re used to but that never received much attention.

For example, at one large state-owned German bank we identified the most critical systems and examined the resiliency of each, in terms of the bank’s ability to hot-fix issues in a short period of time. They had only one acceptable test environment and simply couldn’t deploy quickly enough, so we identified the three connecting systems required to ensure the necessary hot-fix capability.

DORA will bring a lot of specific challenges because all of these non-functional areas have not really been thoroughly tested previously. Fortunately, financial firms still have a little bit of time from a regulatory perspective but it’s a topic that they need to start to address right now. 

 

Q. And what about the impact of upcoming regulations around AI?

I think this is a matter of trust between the financial firms and their customers, and the trust between Sixsentix and our clients. Trust requires you to be transparent, and so I think that an AI act will actually be helpful. 

I hope AI regulations will make the risks associated with AI a lot clearer to people. If you rely on AI it’s similar to relying on human intelligence: there will be bias and it will rely on past experience. Users just need to know about and be able to spot that.

I don’t think that the way we use AI in our services will be impacted very much. We already explain to our clients how we use AI for some of our services.

 

Q. In 2020 you gave a presentation at a QA Financial Forum on the challenge of testing for bias in chatbots. Is that still a key area of your business? What are your current thoughts on this challenge? 

I think that has gone away a little bit; there are now specialist firms who provide chatbot testing capabilities. It’s a small niche market in itself, so we basically refer that kind of work to a specialist company.

We have also seen that a lot of financial firms have matured in this respect. They now realise that their chatbot is face-to-face with the customer, so they need to test thoroughly. A chatbot is no longer simply a marketing gimmick with an avatar; it’s really something that you want to use as a communication channel, so we’ve seen that area mature relatively quickly.

 

Q. You are also the Technical Chair at the TMMi foundation, could you tell us about that? Does that relate to your role at Sixsentix at all?

I’m responsible for looking at the reference model itself. The model looks at process maturity and at what you should do to move away from defect detection and towards defect prevention: how you mitigate risks instead of simply testing for them. It’s a purely voluntary role, and the project is completely open source. Research suggests it is the leading test process improvement framework, and it is free for anyone to download at TMMi.org.

My work with the TMMi foundation helps me keep up to date with academic research and current trends in software testing, which we can use in our work at Sixsentix. The TMMi model itself is something that we use at Sixsentix quite often as it allows us to work with clients from a broad range of industries.