Swiss AI startup raises millions to penetrate finserv space

Julian Riebartsch, CEO and founder of Calvin Risk

Calvin Risk, a young artificial intelligence startup that helps banks, financial services firms and other finance companies with governance and compliance monitoring by testing, implementing and embedding AI-powered software, has secured $4 million in fresh capital.

The investment round was led by Join Capital and seed + speed Ventures, bringing total funding to just over $5 million since the company’s inception in 2022.

The capital injection comes shortly after the company signed up Aviva, a major European insurer, as a new client; Aviva has started integrating Calvin Risk’s risk assessment tools and AI testing platform to test and monitor its internal software infrastructure.

Another major client, which hired Calvin Risk earlier this year, is UK-based Lloyds Banking Group.

‘Black boxes’

Calvin Risk, which is based in Zurich, Switzerland, said investors have given the startup a vote of confidence as artificial intelligence rapidly proliferates across enterprises, but “rushed mediocre implementations create a series of operational risks, such as bias, opaque decision-making processes, and unpredictable real-world behaviour.”

The Swiss firm is therefore on a mission to help companies deploy AI safely and manage these risks through automated testing and quantitative risk assessment, explained Julian Riebartsch, CEO and founder of Calvin Risk.

“AI models often function as ‘black boxes’,” Riebartsch continued, explaining that this “makes it difficult to understand how decisions are made or whether underlying biases are influencing outcomes.”

For financial services firms, “this opacity,” as Riebartsch puts it, can lead to unintended consequences that threaten both their operations and reputations, especially as generative AI systems like ChatGPT increasingly power customer interactions.

“To make the stakes higher, the upcoming EU AI Act introduces strict requirements for AI systems, mandating that companies assess and document the risks of their AI models with severe penalties for non-compliance,” he added.

Despite this, many organisations still rely on post-incident analysis or lack structured frameworks to address the safety of their AI.

“We try to bridge that gap with a platform that uses adaptive assessments and continuous monitoring to provide a real-time overview of a company’s entire AI portfolio, predicting potential risks, qualitatively and quantitatively, and their associated value-at-risk,” Riebartsch noted.
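To make that value-at-risk idea concrete, here is a minimal sketch of how per-model failure probabilities and loss exposures might be rolled up into a portfolio-level figure via Monte Carlo simulation. The models, probabilities and exposure values below are hypothetical illustrations, not a description of Calvin Risk’s actual methodology.

```python
# Illustrative sketch only: rolling per-model AI risk estimates up into a
# portfolio-level value-at-risk figure. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical AI portfolio: (name, annual failure probability, loss exposure in $)
portfolio = [
    ("credit-scoring model",  0.02, 5_000_000),
    ("customer chatbot",      0.10, 1_000_000),
    ("fraud-detection model", 0.05, 3_000_000),
]

n_sims = 100_000
losses = np.zeros(n_sims)
for _, p_fail, exposure in portfolio:
    # Each model independently fails with probability p_fail; a failure
    # costs a lognormally distributed fraction of its exposure (capped at 100%).
    fails = rng.random(n_sims) < p_fail
    severity = np.minimum(rng.lognormal(mean=-1.0, sigma=0.5, size=n_sims), 1.0)
    losses += fails * severity * exposure

# 95% value-at-risk: the annual loss level exceeded in only 5% of simulated years.
var_95 = np.quantile(losses, 0.95)
print(f"Simulated 95% annual VaR for the AI portfolio: ${var_95:,.0f}")
```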

He added that “with AI systems becoming central to operations, proper corporate governance must now include explicit AI risk management at the Board level.”

Spin-off

Founded in 2022 as a spin-off from ETH Zurich, generally considered one of Europe’s largest AI research institutions, Calvin Risk combines academic expertise with practical industry experience.

The company’s product provides a modular framework for proactive AI governance with two core capabilities: governance digitization for internal directives and risk evaluations, and automated AI testing between development and deployment.

“Unlike traditional platforms focused solely on documentation or post-deployment fixes, we offer real-time insights through preventive, pre-deployment assessments, setting a new standard for responsible AI adoption,” Riebartsch claimed.

The past 12 months have been transformational for Calvin Risk, he continued, with a range of milestones in product development, customer adoption, and revenue growth.

Looking ahead, Calvin Risk plans to expand its platform’s capabilities to support business and analytics teams, Riebartsch said.

Investors’ perspective

Alexander Kölpin

Explaining his firm’s decision to invest in Calvin Risk, Alexander Kölpin, managing partner at seed + speed Ventures, said: “As an investor, I see enormous potential in Calvin Risk to help shape the emerging market for AI risk management.”

Kölpin said the combination of technological excellence from academia and in-depth industry expertise makes Calvin Risk “a real help for all those companies that use and build relevant AI-based business processes.”

Another investor, Tobias Schirmer, the founding partner of Join Capital, said in agreement that Calvin Risk’s “approach, combined with its team and vision, positions them to become a vital resource for organisations navigating the complexities of AI compliance and risk management.”

In fact, Schirmer believes Calvin Risk will play “a pivotal role in shaping the future of AI governance”.

EU’s new AI Act

The capital injection comes as the EU is increasingly stepping up its regulatory oversight and supervision. What has been hailed as the world’s first comprehensive regulatory framework for artificial intelligence became law in August.

The new AI Act entered into force after more than three years of legislative debate, originating from a European Commission proposal aimed at fostering the development and uptake of AI while ensuring fundamental rights and a human-centric approach to AI across the EU. The final text runs to a remarkable 50,000 words, divided into 180 recitals, 113 articles and 13 annexes.

The framework was described by Elisabetta Righini, a Brussels-based partner at law firm Latham & Watkins, as somewhat of a “holistic set of risk-based rules applicable to all players in the AI ecosystem, from developers to exporters to deployers.”

With its broad reach and extensive remit, and like other EU legislative efforts before it, such as the GDPR, Righini said the AI Act will have “an impact beyond the EU’s digital borders” and shape the future of this fast-growing technology, which is being rapidly embraced by many banks, insurance firms and other financial services players.

London-based Daryl Elfield, a partner at KPMG specialising in IT, quality engineering and software testing services, largely agreed with Righini’s observations.

Daryl Elfield

When asked how the EU’s AI Act will impact testing standards, Elfield explained that it is a significant piece of legislation that aims to regulate the development and use of AI systems within the European Union.

“It follows a risk-based approach, classifying AI systems based on their risk level and imposing corresponding requirements,” he noted.

“For software testing standards, this means that AI systems deemed high-risk will need to comply with strict standards concerning risk management, data quality, transparency, human oversight, and robustness,” Elfield pointed out.

“Providers of AI systems, including those that develop AI for internal use, will be affected and will need to ensure compliance with these standards,” he continued.

“While this risk-based approach to regulatory oversight is welcome, further secondary legislation may be required to give providers sufficient information to ensure they can comply with these standards,” Elfield noted.

AI Apps

As many within the industry are trying to make sense of the new regulatory framework, Elfield stressed that the EU AI Act emphasises the importance of testing and monitoring the safety of AI applications, particularly those classified as high-risk, as mentioned earlier.

“If an application falls under this category, then regular testing must take place to ensure accuracy, reliability, and security,” he explained, adding that monitoring systems are mandated so that any issues or breaches can be identified and remediated as quickly as possible.

Paul Mowat

In addition, there is a requirement for human oversight not just during the design and testing process, but also over the use of the output of high-risk AI applications, which creates new opportunities for software testers and could finally start to address the AI “black box problem”, Elfield said.

The ‘black box problem’ is the difficulty in deciphering the reasoning behind an AI system’s predictions or decisions.

“How developers solve this problem will without doubt be challenging,” he added.
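One widely used technique for probing such black boxes is permutation feature importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The sketch below illustrates the idea on a synthetic model; the feature names, data and model are hypothetical placeholders.

```python
# Illustrative sketch: permutation feature importance, one common way to
# probe a black-box model. All data and the model below are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three synthetic input features
y = (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)  # feature 0 mostly drives the label

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

for i, name in enumerate(["income", "age", "noise"]):  # hypothetical names
    X_shuffled = X.copy()
    rng.shuffle(X_shuffled[:, i])              # break the feature-label link
    drop = baseline - model.score(X_shuffled, y)
    print(f"{name}: accuracy drop {drop:.3f}")
```

A large accuracy drop when a feature is shuffled signals that the model leans heavily on it, which is exactly the kind of evidence a tester can put in front of a regulator.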

When it comes to testing and monitoring AI-powered applications, the key expectation that stands out for Paul Mowat is accountability, especially given the recent spate of IT system issues, he said.

“Accountability is needed to ensure the AI applications have been thoroughly and correctly tested.”

Mowat stressed that this involves strict conformance to the AI requirements: comprehensively testing the AI applications across the usual functional, performance, security and stress testing phases, but also validating the AI algorithms themselves, which is more challenging. Methods will need to be applied to test for bias and fairness, teams re-trained to run audits and compliance checks, and ongoing monitoring put in place to capture feedback and check that the applications continue to meet the regulatory standards.
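As a minimal illustration of the kind of bias and fairness check Mowat describes, the sketch below computes a demographic parity gap: the difference in positive-outcome rates between two groups. The predictions, group labels and tolerance are hypothetical placeholders, not any vendor’s actual test.

```python
# Illustrative sketch: a demographic parity check of the kind regulators
# may expect for high-risk AI systems. All data here is hypothetical.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Hypothetical loan decisions (1 = approved) and a protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(y_pred, group)
TOLERANCE = 0.10  # example threshold; real limits are a policy decision
status = "within" if gap <= TOLERANCE else "exceeds"
print(f"Demographic parity gap: {gap:.2f} ({status} the {TOLERANCE:.0%} tolerance)")
```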

Role of financial services

With regard to banks and financial services firms, Elfield called trustworthy AI “paramount for both regulatory compliance and public acceptance.”

He therefore suggested that banks establish a comprehensive governance framework to ensure their AI systems are reliable, unbiased, and explainable.

“This includes implementing appropriate quality management systems, human oversight and accountability mechanisms. By prioritising ethical and responsible AI development, banks can foster trust and maintain a positive reputation,” Elfield noted.

In addition, Mowat thinks banks and other financial services firms will need to explain to regulators the risks associated with their AI applications, and how those risks were identified, assessed and managed.

“This sounds an easy task; however, this is a broad area: firms need to collaborate with their legal teams to ensure legal obligations are met, understand their suppliers and outsourced service providers, know how their data is managed, have robust product governance frameworks in place, and keep detailed documentation at each step to capture system controls and the decisions made, to share with regulators,” he said.

Engagement

Finally, the insiders agreed proactive engagement with regulators and supervisors is crucial for navigating the evolving AI landscape.

“Banks should openly discuss their AI implementation plans and seek guidance on regulatory expectations,” Elfield stated.

“This collaborative approach ensures that banks are compliant with the latest regulations while also informing policymakers about the practical implications of AI in the financial sector,” he noted.

“By working together, banks and regulators can foster a supportive environment for responsible AI innovation.”



