Banks warned not to embed too much GenAI in digital systems

Banks and other financial firms are rolling out AI solutions at unprecedented speed

Generative AI (GenAI) is rapidly transforming the banking and financial services industry, offering unprecedented opportunities for automation, efficiency, and innovation.

However, its integration into software testing processes in the finance sector presents significant risks that must be carefully managed, according to several industry insiders.

Thomson Reuters recently issued a warning when it wrote that “financial services firms face some of the biggest challenges with AI adoption, especially around assessing risks and minimising them.”

First, banks handle highly sensitive customer data, making privacy and security paramount. Generative AI systems often require extensive datasets for training, which can expose confidential information to unauthorised access or breaches.

Todd Phillips

“Generative AI agents deployed by financial institutions put customer money and business operations at risk,” warns Todd Phillips, a fellow at the Roosevelt Institute in the U.S.

He stressed that “AI agents can ‘hallucinate’ false or misleading information, provide poor financial advice, or otherwise break down.”

Phillips added that “the susceptibility of GenAI models to adversarial attacks further compounds these risks, as malicious actors can manipulate inputs to deceive systems.”

Generative AI models are vulnerable to bias due to the nature of their training data. This can result in discriminatory outcomes in areas like loan approvals or fraud detection.

James Craggs, a transformation partner and director at LinkedIn, emphasized that “AI systems trained on historical data may perpetuate existing biases or create new ones through proxies like zip codes or educational background. Such biases can lead to lawsuits, brand damage, and financial penalties.”

Regulation

Another major issue is regulatory compliance challenges. The financial sector operates under stringent regulatory frameworks. Implementing GenAI in software testing requires strict adherence to these regulations to avoid legal repercussions.

James Craggs

“The biggest challenge for implementing Generative AI in banking lies in navigating the financial regulations,” a recent KPMG report warned.

“Generative AI introduces new complexities related to data privacy, security, and adherence to regulatory standards. Non-compliance could lead to fines and reputational damage,” the authors wrote.

What makes compliance even more complicated is a 'lack of explainability', as the KPMG report put it, explaining that the "black-box" nature of GenAI systems poses challenges for transparency and accountability.


“In software testing for critical banking systems, ‘hallucinations’ can have catastrophic consequences.”

– Todd Phillips

Banking executives remain cautious about deploying GenAI in high-risk areas due to its lack of explainability. As KPMG noted: “True: you can ask genAI to justify its decision, but those justifications are also based on a generative model, creating an eternal loop of inexplicability. This opacity undermines trust among regulators and customers alike.”

Moreover, generative AI systems are prone to producing ‘hallucinations’, generating confident but incorrect outputs.

In software testing for critical banking systems, such errors could have catastrophic consequences, Phillips warned.

“AI agents may inhibit access to bank accounts or fail to execute transactions properly… If disrupted by an AI agent, there is almost no opportunity for human recourse that can undo all the damage,” he said.

Operational risks

Integrating GenAI into legacy banking systems is fraught with complexity. Financial institutions often struggle with outdated infrastructure and insufficient data readiness.

Rahul Kumar

Additionally, employees require training to work effectively alongside AI technologies.

In addition, the ethical implications of using GenAI extend beyond technical risks. Financial institutions must ensure that automated decisions align with societal values and do not erode customer trust.

“A bank’s ability to prevent and remove bias in its AI/ML models can go a long way toward determining how well it will succeed serving clients and as an organisation,” commented Rahul Kumar from Talkdesk.

He said that, while generative AI offers transformative potential for software testing in banks and financial services firms, its adoption must be approached cautiously due to significant risks around data privacy, compliance, bias, transparency, operational challenges, hallucinations and ethics.

By implementing strong safeguards and governance frameworks, financial institutions can harness GenAI responsibly while mitigating its inherent risks.
