
Cloudflare launches firewall tool to protect LLMs in banks’ AI apps

Cloudflare founder and CEO Matthew Prince

California-based Cloudflare has launched a new platform intended to help banks, insurers and other financial services firms protect their artificial intelligence applications from attacks, abuse and tampering.

The company said there is clear market demand for the new tool as the banking sector increasingly relies on AI, with AI-powered applications and features being integrated into financial services firms’ core infrastructure.

The company describes Firewall for AI as “a new layer of protection that will identify abuse and attacks before they reach and tamper with Large Language Models (LLMs), a type of AI application that interprets human language and other types of complex data.”

‘The only one’

Backed by Cloudflare’s existing infrastructure, which the company claims is “one of the largest in the world,” Firewall for AI positions Cloudflare as “the only security provider prepared to combat the next wave of attacks in the AI revolution,” namely those targeting the functionality, critical data, and trade secrets held within LLMs, the San Francisco-based company told QA Financial.

Cloudflare pointed to a recent study by Deloitte which found that only one in four executives at banks, insurance firms and other financial services firms believe their organisations are well-prepared to address AI risks.

“When it comes to protecting LLMs, it can be extremely challenging to bake in adequate security systems from the start, as it is near impossible to limit user interactions and these models are not predetermined by design,” said Matthew Prince, co-founder and CEO of Cloudflare.

“For example, they may produce a variety of outputs even when given the same input,” he continued.


“LLMs are becoming a defenceless path for threat actors, leaving organizations vulnerable to model tampering, attacks and abuse.”

– Matthew Prince

“When new types of applications emerge, new types of threats follow quickly,” Prince said. “That’s no different for AI-powered applications.”

He went on to claim that “we will provide one of the first-ever shields for AI models that will allow businesses to take advantage of the opportunity that the technology unlocks, while ensuring they are protected.”

AI models scrutinised

Explaining the Firewall for AI platform, Prince said security teams will be able to protect their LLM applications from the potential vulnerabilities that can be weaponised against AI models.

The tool is designed to let customers detect new threats rapidly: Firewall for AI can be deployed in front of any LLM running on Cloudflare’s Workers AI.

“By scanning and evaluating prompts submitted by a user, it will better identify attempts to exploit a model and extract data,” Prince explained.

Prince also claims the tool can block threats automatically, with no human intervention needed.

Moreover, “any customer running an LLM on Cloudflare’s Workers AI can be safeguarded by Firewall for AI for free, helping to prevent growing concerns like prompt injection and data leakage,” he concluded.
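To make the idea of screening prompts before they reach a model more concrete, the TypeScript sketch below shows how a crude check could sit in front of a model call inside a Cloudflare Worker. It is not Cloudflare’s Firewall for AI API: the shape of the AI binding, the model identifier and the regular-expression heuristics are illustrative assumptions only, and a production firewall would apply far more sophisticated analysis than simple pattern matching.

```typescript
// Hypothetical sketch: a naive prompt-screening step placed in front of an
// LLM call in a Cloudflare Worker. The binding shape, model ID and patterns
// below are assumptions for illustration, not Cloudflare's actual product.

export interface Env {
  // Assumed Workers AI binding exposing a run(model, inputs) method.
  AI: { run(model: string, inputs: { prompt: string }): Promise<unknown> };
}

// Crude stand-ins for the prompt-injection and data-exfiltration signals a
// real firewall would score with far richer models and rules.
const SUSPICIOUS = [
  /ignore (all|any) previous instructions/i,
  /reveal (your )?(system prompt|hidden instructions)/i,
  /\b(api[_ ]?key|password|secret)\b/i,
];

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { prompt } = (await request.json()) as { prompt: string };

    // Evaluate the user-submitted prompt before it ever reaches the model.
    if (SUSPICIOUS.some((re) => re.test(prompt))) {
      // Block automatically, with no human intervention in the request path.
      return new Response("Request blocked by prompt screening", { status: 403 });
    }

    // Only screened prompts are forwarded to the LLM running on Workers AI.
    const answer = await env.AI.run("@cf/meta/llama-3-8b-instruct", { prompt });
    return Response.json(answer);
  },
};
```

Because the screening runs in the request path ahead of the model, blocked prompts never reach the LLM at all, which is the placement the company describes for Firewall for AI.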
