How GenAI helps entry-level SOC analysts improve their skills

Security operations centers (SOCs) are using generative AI systems to automate repetitive triage and documentation tasks, allowing entry-level security analysts to spend more time on investigations, crafting responses, and developing core skills. It may not be a magic bullet, but the technology can be another useful weapon in the analyst’s arsenal, increasing accuracy, providing a knowledge base, and gathering information quickly and efficiently.

GenAI drafts reports and explainers so analysts can handle bigger jobs

Secureworks, which provides SOC services and software to customers in addition to running its own SOC, has been using various forms of AI for years. The company has used a range of technologies, from anomaly detection and other machine learning models all the way up to neural networks. These systems helped Secureworks collect and prioritize alerts so that analysts could focus on the most critical ones first. Over the past 18 months, the company saw an 80% reduction in alerts and a 50% reduction in analyst workload, allowing analysts to spend more time on more difficult cases and on serving new customers.

“The next area of focus was how to improve the analyst experience from a triage, investigation, and response perspective,” says Kyle Falkenhagen, the company’s chief product officer. It was the perfect time for generative AI to hit the scene.

Once an analyst has performed an investigation, documenting it so it can be handed off to a customer can be a very time-consuming process, according to Falkenhagen, which is why the company chose investigation summaries as the first task GenAI would perform.

Secureworks used data sets from existing investigations to develop pre-written prompts, so that the generative AI could produce first drafts of the reports for analysts to review and edit. The functionality was made available to the internal SOC first, using OpenAI directly, then updated to work with a private instance running on the Azure cloud. And it was deployed in phases, first for a very limited set of incidents. “We immediately saw a 90% reduction in the time it took for analysts to craft those investigations,” he says. “As we got comfortable and revised the prompts, we expanded it. Now we even have some investigations where it goes directly to the customers.”
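In outline, the workflow Falkenhagen describes looks something like the following sketch, which fills a pre-written prompt template with investigation data and asks a private Azure OpenAI deployment for a first draft. The deployment name, field names, and prompt wording here are illustrative assumptions, not Secureworks’ actual implementation.

```python
# Minimal sketch: template a prompt from investigation data, then ask a
# private Azure OpenAI deployment for a first-draft report. All names below
# are hypothetical placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="...",                                      # supplied via environment in practice
    api_version="2024-02-01",
    azure_endpoint="https://example.openai.azure.com",  # hypothetical endpoint
)

SUMMARY_PROMPT = """You are drafting a SOC investigation summary for a customer.
Investigation timeline:
{timeline}

Alerts involved:
{alerts}

Actions taken:
{actions}

Write a concise, customer-facing summary. Flag anything uncertain for analyst review."""

def draft_summary(investigation: dict) -> str:
    """Return a first-draft report for an analyst to review and edit."""
    prompt = SUMMARY_PROMPT.format(
        timeline=investigation["timeline"],
        alerts="\n".join(investigation["alerts"]),
        actions="\n".join(investigation["actions"]),
    )
    resp = client.chat.completions.create(
        model="summary-drafts",   # hypothetical Azure deployment name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,          # keep drafts conservative and repeatable
    )
    return resp.choices[0].message.content
```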

The second step was to have the AI write explainers for analysts. “For example, we see a lot of script blocks and command line syntax,” says Falkenhagen. “If a command line executes a script you can end up with code in the telemetry that comes back to us. If you see a bunch of PowerShell code that was executed, being able to understand what that PowerShell is trying to do can be hard because it’s often obfuscated, especially if it’s malicious.” A large language model can easily read code and explain it without the analyst having to spend time trying to decipher it. “It’s working great. We used it internally until we got comfortable with it, and we opened it up to customers in the fourth quarter of 2023.”
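A minimal sketch of that script-explainer idea might look like the following, assuming a generic OpenAI chat model rather than whatever Secureworks runs internally; the system prompt and sample input are invented for illustration.

```python
# Sketch: hand raw (possibly obfuscated) PowerShell from the telemetry to an
# LLM and ask for a plain-English account of what it does.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_script(script_block: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model; an assumption, not Secureworks' choice
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Explain what the following "
                        "PowerShell does, deobfuscating where possible, and note "
                        "any behavior commonly associated with malware."},
            {"role": "user", "content": script_block},
        ],
    )
    return resp.choices[0].message.content
```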

The next step was using GenAI to explain alerts. “We have tens of thousands of countermeasures in our systems that our research teams have built,” he says. “We’re taking the detection logic and the description of what the alert is supposed to be, combining that with the events related to the alert, and getting an explanation of what caused the alert to fire in the first place.”
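Stripped to its essentials, that alert explainer amounts to assembling those three inputs into a single prompt. The field names in this sketch are hypothetical.

```python
# Rough illustration of the alert-explainer inputs described above: detection
# logic and alert description combined with the triggering events.
def build_alert_prompt(countermeasure: dict, events: list[dict]) -> str:
    event_lines = "\n".join(
        f"- {e['timestamp']} {e['host']}: {e['raw']}" for e in events
    )
    return (
        "Explain why this alert fired, in terms a tier-one analyst can act on.\n\n"
        f"Detection logic:\n{countermeasure['logic']}\n\n"
        f"Alert description:\n{countermeasure['description']}\n\n"
        f"Related events:\n{event_lines}"
    )
```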

Using GenAI as an advisor to increase analyst accuracy

The latest use case, currently in limited rollout, is using GenAI as an advisor. “There’s a specific set of analysts who can open it at any point in the user experience, with the context of the selected customer and all the data on their alerts and with access to our proprietary data sets,” he says. “Then the analysts can interact with it and ask questions about the investigation, such as what the next action should be.”

As part of the staged rollout process for the GenAI features, Secureworks has built feedback loops that allow analysts to rate the results that the AI provides. Then the results go back to the data scientists and prompt engineers, who revise the prompts and the contextual information provided to the AI.
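One plausible shape for that feedback loop, sketched here with an invented schema, is simply to log each rated output against the prompt template that produced it, so data scientists can find and revise the prompts that underperform.

```python
# Sketch: append each analyst rating to a JSON-lines log keyed by the prompt
# template that produced the output. The schema is a guess, not Secureworks' design.
import json
import time

def record_feedback(path: str, prompt_id: str, output: str,
                    rating: int, comment: str = "") -> None:
    """Append one analyst rating (e.g., 1-5) for later review by prompt engineers."""
    entry = {
        "ts": time.time(),
        "prompt_id": prompt_id,  # identifies which prompt template produced the output
        "output": output,
        "rating": rating,
        "comment": comment,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```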

Integrating generative AI has revolutionized the way Secureworks’ junior analysts approach security operations, says Radu Leonte, the company’s VP of security operations. Instead of focusing exclusively on repetitive triage tasks, they can now handle comprehensive triage, investigation, and response. They can triage alerts faster because all the supplementary data is brought into the platform, together with summaries and explanations, Leonte says. The accuracy and quality of triage increase as well because of fewer human comprehension errors and fewer missed detections.

“The analysts now have more time to perform in-depth correlation research or escalate a critical issue even faster,” Leonte tells CSO. “This shift, enabled by AI, not only speeds up their professional development but also allows them to dive deeper into complex security challenges.” Plus, it fosters a richer learning environment.

Secureworks isn’t alone in using generative AI to help entry-level security analysts do higher-level work. “It’s not going to replace those tier-one analysts,” says Omdia analyst Curtis Franklin. “Or even reduce the need for tier-one analysts. But in a labor market that is seeing dramatic shortfalls in the number of SOC analysts of all levels, generative AI can help a tier-one analyst perform like a tier 1.5 analyst.”

Today, the burden of helping new analysts get up to speed often falls on their more experienced colleagues, says Ben Moseley, a professor at Carnegie Mellon University — and those colleagues are already strapped for time. “Generative AI assistants can more quickly answer those questions,” Moseley says.

Analysts use GenAI to gather information

Forescout Technologies is another company that offers SOC services for enterprise customers and runs a SOC for its own operations. But unlike Secureworks, Forescout decided not to give GenAI access to customer data, instead letting analysts ask ChatGPT for general information.

It’s faster and more convenient than Google, Forescout CTO Justin Foster tells CSO. “And you can ask pretty deep questions that you can’t ask with a general Google search, and it keeps the context so that you can keep going with that conversation.” A junior analyst can immediately see a description of a potential event and action recommendations. “This allows them to become more effective immediately,” Foster says. “The more you can automate the tier one SOC analyst tasks, the more they spend time on tier two and incident response activities.”

Forescout uses Google Vertex AI under the covers, which supports 130 different large language models. Two that Forescout uses are Bison and Unicorn: one tuned specifically for summarizing threat events, Foster says, and the other better suited to general threat intelligence and summarizing everything known about a given threat.

Running a custom-tuned model in a private instance allows for better security and control. Another way to have guardrails in place is to use APIs instead of letting analysts converse directly with the models. “We chose not to make them interactive, but to control what to ask the model and then provide the answer to the user,” Foster says. “That’s the safe way to do it.”
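Put in code terms, that guardrail might look like the sketch below: a service-side function holds the fixed prompts and routes each task to the appropriately tuned Vertex AI model, so the analyst only ever sees the answer. The project ID and prompt wording are placeholders, and the model IDs simply follow the Bison/Unicorn split Foster mentions.

```python
# Sketch: the service, not the analyst, chooses the model and writes the
# prompt. Analysts never converse with the model directly.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

# Route each task type to the model tuned for it.
MODELS = {
    "event_summary": TextGenerationModel.from_pretrained("text-bison"),
    "threat_intel": TextGenerationModel.from_pretrained("text-unicorn"),
}

# Fixed, server-side prompts; the user never supplies free-form input to the model.
PROMPTS = {
    "event_summary": "Summarize this threat event for a SOC handoff:\n{payload}",
    "threat_intel": "Summarize what is known about this threat:\n{payload}",
}

def answer(task: str, payload: str) -> str:
    """Build the controlled prompt for a task and return only the model's answer."""
    prompt = PROMPTS[task].format(payload=payload)
    return MODELS[task].predict(prompt, temperature=0.2).text
```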

It’s also more convenient: the system can queue up answers and have them ready before the analyst even knows they want them, saving the user the trouble of cutting and pasting all the required information and coming up with a prompt. Eventually, analysts will be able to ask follow-up questions via an interactive mode, but that isn’t there yet.

In the future, Foster says, security analysts will probably be able to talk to the GenAI, the way Tony Stark talks to Jarvis in the Iron Man movies. In addition, Foster expects that the GenAI will be able to take actions based on its recommendations by the end of this year. “Say, for example, ‘We have 10 routers with default passwords — would you like me to remediate that?’” This level of capability will make risk management even more important.

He doesn’t think security analysts will ever be phased out entirely. “There’s still a human element in remediation and forensics. But I do think GenAI, combined with data science, will phase out tier-one analysts and maybe even tier-two analysts at some point. That’s both a blessing and a curse. A blessing because we’re short on security analysts worldwide. The curse is that it’s taking over knowledge jobs.” People will just have to adapt, Foster adds. “You won’t be replaced by AI, but you’ll be replaced by someone using AI.”

Analysts use GenAI to write scripts and summaries

Netskope has a global SOC that operates around the clock to monitor its internal assets and respond to security alerts. Netskope first tried using ChatGPT to find information on new threats but soon learned that ChatGPT’s information was out of date.

A more immediate use case was to ask things like, “Write an access control entry for XYZ firewall.” “This kind of query requires general knowledge and was within ChatGPT’s capabilities in April or May of 2023,” says Netskope deputy CISO James Robinson. Analysts used the public version of ChatGPT for these queries. “But we put guidelines in place. We tell folks, ‘Don’t take any sensitive information and put it into ChatGPT.’”
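A guideline like that can also be enforced in code. The toy filter below, with deliberately narrow illustrative patterns, blocks queries that look like they contain internal identifiers before they ever reach a public model.

```python
# Toy pre-send filter: reject queries that appear to contain sensitive data.
# A real filter would cover far more patterns (hostnames, customer names, etc.).
import re

BLOCKLIST = [
    re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"),           # IPv4 addresses
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),         # email addresses
    re.compile(r"(?i)\b(password|api[_-]?key|secret)\b"),
]

def safe_to_send(query: str) -> bool:
    """Return False if the query looks like it contains sensitive material."""
    return not any(p.search(query) for p in BLOCKLIST)

assert safe_to_send("Write an access control entry for XYZ firewall")
assert not safe_to_send("Why is 10.0.0.12 beaconing out?")
```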

As the technology evolved over the course of the year, more secure options became available, including private instances and API access. “And we’ve done more engineering to take advantage of that,” says Robinson. “We felt better about the protections that existed with APIs.”

A later use case was using it to assemble background information. “People are rotating into working on cyber threat intelligence and rotating out and need to be able to pick things up quickly,” he says. “For example, I can ask things like, ‘Have things changed with this threat actor?’” Copilot turned out to be particularly good at providing up-to-date information about threats, Robinson says.

When newly hired analysts can create threat summaries faster, they can dedicate more time to better understanding the issues. “It’s like having an assistant when moving into a new city or home, helping you discover and understand your surroundings,” Robinson says. “Only, in this case, the ‘home’ is a SOC position at a new company.”

And for SOC analysts who are already in their roles, generative AI can serve as a force multiplier, he says. “These advantages will likely evolve into the industry seeing automated analysts, and even an engineering role that can build custom rules and conduct detection engineering, including integrating with other systems.”

GenAI helps review compliance policies

Insight is a 14,000-person solutions integrator based in Arizona that uses GenAI in its own SOC and advises enterprises on how to use it in theirs. One early use case is to review compliance policies and make recommendations, says Carm Taglienti, Insight’s chief data officer and data and AI portfolio director. For example, he says, someone could ask, “Read all my policies and tell me all the things I should be doing based on the regulatory frameworks out there and tell me how far my policies are from adhering to those recommendations. Is our policy in line with the NIST framework? What do we need to do to tighten it?”

Insight uses OpenAI models running in a private instance on Microsoft Azure, combined with a data store that it can access via RAG (retrieval-augmented generation). “The knowledge base is our own internal documents plus any documents we can retrieve from NIST or ISO or any other popular groups or consortiums,” he says. “If you provide the correct context and you ask the right type of questions, then it can be very effective.”
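For readers unfamiliar with the pattern, here is a deliberately stripped-down RAG sketch: relevant policy and framework passages are retrieved and placed into the prompt ahead of the question. Real deployments use an embedding model and a vector store; crude keyword overlap stands in for both here to keep the example self-contained.

```python
# Minimal RAG illustration: rank documents by keyword overlap with the
# question, then build a context-grounded prompt.
def retrieve(question: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank documents by crude keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(question: str, documents: list[str]) -> str:
    context = "\n---\n".join(retrieve(question, documents))
    return (f"Answer using only the context below.\n\nContext:\n{context}\n\n"
            f"Question: {question}")

docs = ["Internal access policy: ...", "NIST CSF excerpt: ...", "ISO 27001 excerpt: ..."]
print(build_rag_prompt("Is our access policy in line with the NIST framework?", docs))
```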

Another possible use case is to use GenAI to create standard operating procedures for particular vulnerabilities that are in line with specific policies, based on resources such as the MITRE database. “But we’re in the early days right now,” Taglienti says.

GenAI is also not good at workflow yet, but it’s coming, he says. “Agent-based resolution is just around the corner.” Insight is already doing some experimentation with agents, he adds. “If you detect a particular type of incident, you can use agent-based AI to remediate it, shut down the server, close the port, quarantine the application — but I don’t think we’re that mature yet.”

Future use cases for GenAI in security operations centers

The next step is to allow GenAI to go beyond summarizing information and providing advice to actually taking action. Secureworks already has plugins that allow useful data to be fed to the AI system. But at a recent hackathon, the company also tested plugging the GenAI into its orchestration engine. “It reasons what steps it should take,” says Falkenhagen. “One of those could be, say, blocking a user and forcing a login. It could figure out which playbook to use, then call the API to execute that action without any human intervention.”
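That orchestration experiment maps naturally onto LLM tool calling. The sketch below exposes a few invented playbooks to the model as a function it can call, with a stub standing in for the orchestration engine’s API; none of this reflects Secureworks’ actual plugin code.

```python
# Sketch: let the model choose a response playbook via tool calling, with the
# orchestration engine executing the chosen action.
import json
from openai import OpenAI

client = OpenAI()

TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_playbook",
        "description": "Execute a response playbook in the orchestration engine.",
        "parameters": {
            "type": "object",
            "properties": {
                "playbook": {"type": "string",
                             "enum": ["block_user", "force_reauth", "isolate_host"]},
                "target": {"type": "string"},
            },
            "required": ["playbook", "target"],
        },
    },
}]

def execute_playbook(playbook: str, target: str) -> None:
    """Stub for a hypothetical orchestration engine API."""
    print(f"Executing {playbook} against {target}")

def respond_to_incident(incident_summary: str) -> None:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption; any tool-calling-capable model works
        messages=[{"role": "user",
                   "content": f"Choose a response playbook:\n{incident_summary}"}],
        tools=TOOLS,
    )
    for call in resp.choices[0].message.tool_calls or []:
        args = json.loads(call.function.arguments)
        execute_playbook(args["playbook"], args["target"])
```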

So, is the day coming when human security analysts are obsolete? Falkenhagen doesn’t think so. “What I see happening is that they’ll work on higher-value activities,” he says. “Level one triage is the worst punishment for anybody. It’s just grunt work. You’re dealing with so many alerts and so many false positives. By reducing that workload, analysts can shift to doing investigations, doing root cause analysis, doing threat hunting, and having a bigger impact.”

Falkenhagen doesn’t expect to see layoffs due to increased use of GenAI. “There is such a cybersecurity skill shortage out there today that companies struggle to hire and retain talent,” he says. “I see this as a way to put a dent in that problem. Otherwise, I don’t see how we climb out of the gap that exists. There just aren’t enough people.”

GenAI is not a magic bullet for SOCs

Recent academic studies are showing a positive impact on the productivity of entry-level analysts, says Forrester analyst JP Gownder. But there’s a caveat. “The studies also show that if you ask the AI about something beyond the frontier of its capabilities, you can start to degrade performance,” he says. “In a security environment, you have a high bar for accuracy. Generative AI can generate magical results but also mayhem. It’s built into the nature of large language models.”

Security operations centers will need strict vetting requirements and will have to put these solutions through their paces before deploying them widely. “And people need to have the judgment to use these tools judiciously and not simply accept the answers that they’re getting,” he says.

In 2024, Gownder expects many companies will underinvest in this training aspect of generative AI. “They think that one hour in a classroom is going to get people up to speed. But there are skills that can only be cultivated over a period of time.”
