Securiti adds distributed LLM firewalls to secure genAI applications

To address the emerging threats around generative artificial intelligence (gen AI) systems and applications, cybersecurity provider Securiti has launched a firewall offering for large language models (LLMs), Securiti LLM Firewalls.

Future applications are going to be more conversational and hence need to undergo a layer of in-line checks to detect attempts at external attacks, according to the company.

“The conversational nature of genAI has opened the door for brand new types of threats and attack vectors and Securiti LLM Firewalls are designed to protect against it,” said Securiti CEO Rehan Jalil. “Internal or public facing prompts interfaces are a new pathway to enterprise data.”

Securiti isn’t the first to identify this nascent risk to enterprise genAI applications. In March, Cloudflare announced similar features through a new web application firewall (WAF) offering, Firewall for AI.

“Securiti LLM Firewalls inherently know the context of what they are protecting,” Jalil added. “To protect a genAI system, the context of the enterprise data and use case for which the genAI system is being designed for can help inspect the prompts for relevancy, topics and jailbreak attempts.”  

Distributed firewalls for varied genAI threats

Securiti’s distributed LLM firewall is designed to be deployed at various stages of a genAI application workflow such as user prompts, LLM responses, and retrievals from vector databases, and can detect and stop a variety of LLM based attacks in-line and in real time, the company said, including prompt injection, insecure output handling, sensitive data disclosure, and training data poisoning.

Prompt injections, the most common form of LLM attacks, involve bypassing filters or manipulating the LLM to make it ignore previous instructions and to perform unintended actions, while training data poisoning involves manipulation of LLM training data to introduce vulnerabilities, backdoors and biases.

“The firewall monitors user prompts to pre-emptively identify and mitigate potential malicious use,” Jalil said. “At times, users can try to maliciously override LLM behavior and the firewall blocks such attempts. It also redacts sensitive data, if any, from the prompts, making sure that LLM models do not access any protected information.”

Additionally, the offering deploys a firewall that monitors and controls the data retrieved during the retrieval augmented generation (RAG) process, which references an authoritative knowledge base outside of the model’s training data sources, to check the retrieved data for data poisoning or indirect prompt injection, Jalil added.    

Although it’s still early days for genAI applications, said John Grady, principal analyst for Enterprise Strategy Group (ESG), “These threats are significant. We’ve seen some early examples of how genAI apps can inadvertently provide sensitive information. It’s all about the data, and as long as there’s valuable information behind the app, attackers will look to exploit it. I think we’re at the point where, as the number of genAI-powered applications in use begins to rise and gaps exist on the security side, we’ll begin to see more of these types of successful attacks in the wild.”

This offering, and those like it, fills a significant gap and will become more important as genAI usage expands, Grady added.

Enabling AI compliance
Securiti LLM Firewalls are also aimed at helping enterprises meet compliance goals, whether legislative (such as the EU AI Act) or internally mandated policies (for example, following the NIST AI Risk Management framework, AI RMF).

Organizations working to Gartner’s AI Trust, Risk, and Security Management (TRiSM) framework will also be able to use the firewalls for key components, Securiti said.

Securiti expects the firewall offering, combined with existing capabilities in its Data Command Center, to cover all aspects of OWASP’s list of the 10 most critical large language model vulnerabilities, extending protection from additional LLM threats such as jailbreaks, authentication phishing, and use of offensive and abusive language.

The Securiti LLM Firewalls are available now as part of the company’s overall “AI security and governance” solution announced by the company earlier this year.

Generative AI