Accelerate threat response and democratize SOC skill sets with generative AI

How much more could your organization accomplish if you could automate common, repeatable tasks across security, compliance, identity, and management?

Managing an organization’s defenses is challenging and time-consuming for many reasons. Adopting and integrating new security technology takes time, and each new tool requires resources to monitor and maintain alongside the company’s existing technology portfolio. Security teams also have to keep pace with the rapidly accelerating speed of attackers: Microsoft research shows it takes attackers just one hour and 12 minutes on average to access private data once an unsuspecting user has clicked on a phishing email. Underpinning all of these challenges, however, is the ongoing cybersecurity talent shortage.

As alerts come in, security teams must properly vet and investigate each one according to the procedures outlined in their company’s cybersecurity playbook. This is especially difficult when organizations lack an adequate number of experienced SOC analysts. Investigating and responding to alerts is also a highly resource-intensive task that often involves correlating data across multiple telemetry sources and documenting findings along the way.

However, generative AI can greatly streamline and democratize these tasks so your organization can maximize its existing security resources and respond to emerging threats more quickly. Read on to learn how.

Streamline SOC workflows with generative AI

Generative AI represents a step change in how practitioners investigate and respond to incidents, threats, and vulnerabilities. When enriched with sufficient security data and threat intelligence, generative AI can use natural language processing (NLP) to interface easily with users, allowing them to ask questions and receive answers in a more natural format. NLP also gives generative AI the flexibility to “understand” what a user is asking and adapt to their style or preferences.

Consider the example of a device that was locked out due to conditional access policy violations. Normally, the analyst would need to go into the support ticket, investigate the device’s status, and determine why the device was locked out before finding a resolution for the problem. Generative AI can greatly accelerate this process.

At Microsoft, our generative AI models use plugins and a framework to connect to solutions and answer these types of questions. We also build sessions that retain context to inform responses and reporting requests. Rather than having to manually seek information on a device’s status or the reason for lockout, analysts can simply ask the generative AI model to provide the user’s most recent login attempts and risk status. Assuming the model has access to the proper data sources and is able to reason over past context, analysts can then ask the AI to run a hunting query to understand what’s happening in the environment. If the analyst determines that a true security incident is taking place, the AI model can also correlate that activity against recent security incidents to provide more context and recommend next steps.
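To make the plugin-and-session idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the registry, the `identity.recent_sign_ins` plugin name, and the hard-coded sign-in events are illustrative stand-ins, not Microsoft's actual framework or APIs.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical plugin registry: each plugin exposes a named capability
# the model can invoke to fetch security data during a session.
@dataclass
class SignInEvent:
    user: str
    timestamp: str
    success: bool
    risk_level: str

class PluginRegistry:
    def __init__(self) -> None:
        self._plugins: Dict[str, Callable[..., object]] = {}

    def register(self, name: str, fn: Callable[..., object]) -> None:
        self._plugins[name] = fn

    def invoke(self, name: str, **kwargs) -> object:
        return self._plugins[name](**kwargs)

# Stand-in data source; a real plugin would query an identity provider.
def recent_sign_ins(user: str, limit: int = 3) -> List[SignInEvent]:
    events = [
        SignInEvent(user, "2024-05-01T09:14Z", False, "high"),
        SignInEvent(user, "2024-05-01T09:12Z", False, "high"),
        SignInEvent(user, "2024-04-30T17:02Z", True, "low"),
    ]
    return events[:limit]

registry = PluginRegistry()
registry.register("identity.recent_sign_ins", recent_sign_ins)

# The session keeps prior results so follow-up questions (e.g. a hunting
# query) can reason over context gathered earlier in the investigation.
session_context: list = []
result = registry.invoke("identity.recent_sign_ins", user="jdoe")
session_context.append(("identity.recent_sign_ins", result))
```

The design point is separation of concerns: the model decides *which* capability to call from the analyst's question, while each plugin owns *how* the underlying data source is queried.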

Furthermore, generative AI can be used to document the analyst’s actions and findings along the way. This real-time reporting is critical in helping other members of the security or executive team understand what happened and how it was resolved. This report can include everything from when the incident occurred and what devices were involved to suspected threat actors, protocols used, processes, login attempts, and more. Documenting all of this information could historically take an analyst hours; generative AI can assemble it in a matter of minutes.
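A simple way to picture this is a report assembled from findings recorded as the investigation proceeds. The sketch below is illustrative only; the incident ID and findings are invented, and a real system would have the model summarize richer telemetry rather than format tuples.

```python
from datetime import datetime, timezone

def build_incident_report(incident_id: str, findings: list) -> str:
    """findings: (timestamp, note) tuples collected during the investigation."""
    lines = [
        f"Incident report: {incident_id}",
        f"Generated: {datetime.now(timezone.utc).isoformat(timespec='seconds')}",
        "",
    ]
    # Sort by timestamp so the report reads as a timeline of events.
    for ts, note in sorted(findings):
        lines.append(f"- {ts}  {note}")
    return "\n".join(lines)

report = build_incident_report("INC-1042", [
    ("2024-05-01T09:20Z", "Device locked by conditional access policy."),
    ("2024-05-01T09:14Z", "Repeated failed sign-ins from unfamiliar IP."),
])
```

Because findings are captured at the moment they are made, the final report is a by-product of the investigation rather than an after-the-fact chore.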

Equip analysts with automated recommendations and pre-defined workflows

In addition to helping analysts move faster, generative AI also helps to democratize your security team’s skill sets. Not every member of your security team has the same level of experience or expertise. Generative AI helps close this gap by providing analysts with automated recommendations and guidance based on their organization’s security data and processes, as well as cybersecurity best practices.

At Microsoft, we use promptbooks—a curated list of individual prompts that facilitate common workflows across security, compliance, identity, and management. These promptbooks are essentially pre-defined workflows that guide security teams through common actions like running incident investigations, creating threat actor profiles, analyzing suspicious scripts, and conducting vulnerability impact assessments. By leveraging the NLP embedded within promptbooks, security teams can create consistent, measurable processes that require minimal input from users to run.
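At its simplest, a promptbook can be thought of as an ordered list of prompt templates that need only a few analyst-supplied parameters to run. The sketch below is an assumption about the shape of such a workflow, not Microsoft's actual promptbook format; the workflow name and prompts are invented.

```python
from string import Template

# Hypothetical promptbook: an ordered sequence of prompt templates that
# walks an analyst through a common workflow with minimal input.
incident_investigation = [
    Template("Summarize incident $incident_id, including affected devices."),
    Template("List the sign-in attempts associated with incident $incident_id."),
    Template("Recommend next steps to contain incident $incident_id."),
]

def run_promptbook(templates: list, **params) -> list:
    # Render each template with the analyst's inputs; in a real system
    # each rendered prompt would be sent to the model in sequence.
    return [t.substitute(**params) for t in templates]

prompts = run_promptbook(incident_investigation, incident_id="INC-1042")
```

Encoding the workflow as data is what makes the process consistent and measurable: every analyst, regardless of experience, runs the same vetted sequence of steps.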

Generative AI has the capacity to transform security, compliance, identity, and management within the enterprise. It will save practitioners time, equip them with new skills, and ensure their time is spent on what matters most for the organization. We just need to extend our thinking and how generative AI is applied in operational roles.

To learn more about deploying generative AI in your environment, visit Microsoft Security Insider and explore our AI-powered cybersecurity product, Microsoft Copilot for Security.
