eSentire introduces LLM Gateway to help businesses secure generative AI

Managed detection and response (MDR) vendor eSentire has announced the availability of LLM Gateway, an open-source framework to help security teams improve their governance and monitoring of generative AI and large language models (LLMs). Originally developed for internal purposes, the gateway prototype is now freely accessible on GitHub. It is the first project from eSentire Labs and aims to enable businesses to scale their use of generative AI tools as securely as possible, the firm said.

The launch comes as security and IT teams are increasingly tasked with ensuring that their organization’s critical data does not get exposed while employees use generative AI tools such as ChatGPT. As such, there is a growing need for security leaders to implement cybersecurity policies that not only embrace and support business adoption of generative AI but also effectively address its risks without stifling innovation.

Companies rush to adopt generative AI without internal security controls

“Companies are rushing to inject LLMs into everything, typically without any or with minimal internal security controls,” Alexander Feick, VP of eSentire Labs, tells CSO. “Given the high value that LLMs can create, there is business demand to move forward even under high risk.” One of the best uses of LLMs is to summarize information for the user. However, the use of LLMs can also exacerbate the chances that sensitive data can be unintentionally exposed, he adds. “Additionally, the LLM arena and the threat surface is still poorly understood, so defenders struggle to know what to prepare for.”

Conceptually, an LLM gateway is a place to centralize all interactions with LLMs, Feick says. Gateways support security-by-design principles by making it possible to inject appropriate security controls at every trust boundary in every LLM interaction, regardless of where the model sits in the application flow, he adds.

LLM gateways help ensure that the data flowing into and out of an LLM tool is free of proprietary company information. “Once all your interactions are running via the gateway, you achieve monitoring, but you also create a central point to apply security controls. By deploying a gateway, every time data passes into or out of an LLM system, the gateway has an opportunity to inspect, modify, or re-route those interactions,” Feick says.
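The inspect-and-modify pattern Feick describes can be illustrated with a minimal sketch. This is not eSentire’s implementation; the redaction patterns and the `gateway`/`forward` names are hypothetical stand-ins, and a real deployment would plug in the organization’s own data-loss-prevention rules and an actual LLM API call.

```python
import json
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-gateway")

# Hypothetical examples of sensitive-data patterns; a real gateway would
# load the organization's own DLP rules here.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),
     "[REDACTED-EMAIL]"),
]

def inspect_and_redact(text: str) -> tuple[str, int]:
    """Replace sensitive substrings; return (clean_text, redaction_count)."""
    count = 0
    for pattern, placeholder in SENSITIVE_PATTERNS:
        text, n = pattern.subn(placeholder, text)
        count += n
    return text, count

def gateway(prompt: str, forward) -> str:
    """Inspect and log a prompt, forward it, then inspect the response.

    `forward` stands in for the call to the underlying LLM API.
    """
    clean_prompt, redacted = inspect_and_redact(prompt)
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "direction": "outbound",
        "redactions": redacted,
    }))
    response = forward(clean_prompt)
    clean_response, _ = inspect_and_redact(response)
    return clean_response
```

Because every interaction funnels through one function, inspection, logging, and redaction happen in one place rather than inside each LLM-enabled application.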

Gateway framework creates protective layer between corporate data and LLM tools

eSentire’s LLM Gateway framework creates a protective layer between corporate data and open AI applications such as ChatGPT, according to the firm. It allows users to log the different types of LLM interactions passing through the gateway for security purposes, and its initial plug-ins provide basic recommendations on how to visualize and track LLM usage. It also gives security practitioners and IT teams the option to apply their own controls, such as corporate policies, usage rules, security protocols, and prompts. The framework should be considered a simplified, practical example of how to use a gateway to secure, log, and report on interactions with ChatGPT and other LLMs or applications, a starting point on the journey toward building or purchasing a more mature solution, according to Feick.
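The option to layer corporate policies, usage rules, and prompts onto the gateway could, in principle, be modeled as a chain of policy hooks applied to each prompt before it is forwarded. The interface below is a hypothetical sketch, not eSentire’s actual plug-in API; the example policies are illustrative only.

```python
from typing import Callable

# A policy takes a prompt and returns a (possibly modified) prompt,
# or raises to block the interaction entirely. Hypothetical interface.
Policy = Callable[[str], str]

class PolicyChain:
    """Apply registered corporate usage rules to a prompt, in order."""

    def __init__(self) -> None:
        self._policies: list[Policy] = []

    def register(self, policy: Policy) -> None:
        self._policies.append(policy)

    def apply(self, prompt: str) -> str:
        for policy in self._policies:
            prompt = policy(prompt)
        return prompt

def block_source_code(prompt: str) -> str:
    """Example usage rule: refuse prompts that appear to paste in code."""
    if "```" in prompt or "def " in prompt:
        raise ValueError("policy violation: source code in prompt")
    return prompt

def prepend_standing_instruction(prompt: str) -> str:
    """Example prompt control: add a standing instruction for the model."""
    return "Do not retain or repeat confidential data.\n" + prompt
```

Each team could register its own hooks without touching the applications behind the gateway, which is the centralization benefit the article describes.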

“Without an LLM gateway, every single application that uses LLMs must be assessed individually, and the log data for each may be different,” Feick says. This leads to exponentially more work and makes teams slower to react to new cyber threats. “Implementing the gateway is primarily a forward-facing architecture concept that enables security defenders to position themselves so they can react quickly to emerging security threats and compliance requirements that are not yet fully understood.”
