Surviving the cyber arms race in the age of generative AI

The swift emergence of generative AI has already raised the stakes in cybersecurity, prompting governments to act: in October 2023, US President Joe Biden issued a sweeping executive order (EO).

The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence offers guidance on how to ensure the safety of this emerging technology, something previous orders lacked. It also outlines the challenges associated with AI’s rapid acceleration. While the EO seeks to make domestic use of AI safe, secure, and trustworthy, perhaps the tallest order is the race to harness AI’s potential for defenders while keeping it out of attackers’ hands. This raises the question: over the next five years, who will benefit more, defenders or attackers? The answer remains unclear.

The one certainty is that both defenders and attackers want to reap the advantages of generative AI. What we cannot predict at this point is whether one side will gain the upper hand. It’s a race that will require an investment of time, effort, and expense from both groups, and each side will see bursts of success.

It doesn’t have to be entirely chaotic. Organizations, security practitioners, and government agencies can take steps now to keep pace with attackers, and perhaps even take the lead, through greater collaboration, evolving legislative frameworks, and a secure space for innovation to thrive.

AI supercharges both threat actors and security teams

For attackers, AI adds unprecedented speed and power to social engineering and impersonation attacks, particularly at scale. Without AI, a phishing attack impersonating a CFO is time-consuming: attackers must first sift through old emails to learn the target’s communication style before mimicking it in their messages. Generative AI models, which have demonstrated proficient writing abilities, do this very quickly, enabling a greater number of threat campaigns. Where attackers can currently launch, say, ten phishing, pig butchering, or business email compromise attacks at a time, AI will allow them to execute a thousand in seconds at the click of a button.

These attacks succeed on volume: the more potential victims an attacker can target at once, the better the odds, and AI multiplies that reach. In malicious hands, generative AI has proven to intensify attacks and worsen their outcomes.

Of course, these salvos from attackers will not go unnoticed. The industry will fight back with AI-powered technology that detects and responds to these attacks. But those countermeasures might take six months to develop, leaving many organizations vulnerable in the meantime.

This is how an arms race works. There are reports, for instance, that the developer of a malicious AI chatbot is building even more sophisticated tools, including one that uses the entire dark web as the knowledge base for its large language model.

Meanwhile, the cybersecurity industry is answering the call and using AI to make proactive improvements as well. One obvious application is fixing existing security flaws, historically a mostly manual, time-consuming process. That, combined with strained resources, has left legacy products vulnerable. As automated attacks increase, manual remediation is no longer tenable. We can now use AI to shift left in the software development process, prioritizing automated fixing to eliminate vulnerabilities before code ships.
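
To make that concrete, here is a minimal sketch of what an automated, shift-left fixing step might look like in a CI pipeline. It is illustrative only, and it makes several assumptions not taken from the article: the codebase is Python, Bandit is the scanner, and an OpenAI-hosted model drafts candidate patches that a human still reviews before anything is merged.

```python
# A minimal sketch of a "shift-left" automated-fixing step in CI.
# Assumptions (not from the article): Python codebase, Bandit as the
# scanner, an OpenAI-hosted model drafting patches for human review.
import json
import subprocess

from openai import OpenAI  # assumed model provider; any capable LLM works

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def scan(path: str) -> list[dict]:
    """Run Bandit recursively over `path` and return its findings."""
    proc = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True,
        text=True,
    )
    return json.loads(proc.stdout).get("results", [])


def draft_fix(finding: dict) -> str:
    """Ask the model for a candidate patch; humans review it in the PR."""
    prompt = (
        f"Bandit flagged {finding['test_id']} ({finding['issue_text']}) "
        f"at {finding['filename']}:{finding['line_number']}:\n"
        f"{finding['code']}\n"
        "Propose a minimal, safe fix as a unified diff."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    for finding in scan("src/"):
        print(draft_fix(finding))
```

The point is the workflow rather than the specific tooling: findings are generated, triaged, and patched automatically, and humans review diffs instead of hunting for flaws.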

AI will enable this kind of innovation in every category of cybersecurity in the next couple of years, with AI models operating as junior assistants to security operations center (SOC) teams. The hope is that the fewer vulnerabilities that make it out of the development process, the easier it will be to both mitigate and defend against threats.

On the same team: Industry and government should partner in the AI arms race

Generative AI is evolving so quickly that it’s hard to predict the outcome of the cyber arms race. What we do know is that collaboration is key. While the regulation provided in the EO is an important step, it won’t solve all problems.

As tech companies continue to launch AI products, they play a key role as partners in the race. As they round the corners of the track, the feedback they receive from customers is crucial for shaping future AI regulations that foster innovation, safeguard data, and address societal concerns. Public-private partnerships have served industries for decades, and AI is no different. Collaboration between governments and the tech industry is key to creating a secure space where AI innovation and safety can thrive.

It’s critical that industry and government continually evaluate the guardrails in place to protect the public from unrestrained use of AI, whether by cybercriminals or established organizations. The EO promises standards that will ensure AI systems are safe and tested against a rigorous set of criteria. Those criteria will require refinement over time before they are truly standardized.

The US Department of Commerce will also develop guidance for watermarking and content authentication to clearly label AI-generated content, and companies including Alphabet, Meta, and OpenAI have already committed to implementing such measures. The approach echoes how the US Secret Service got manufacturers of color copiers and printers to embed digital watermarks on printed pages once the machines became advanced enough to counterfeit money. Watermarking brings its own challenges, however, and bad actors will look for ways to evade or abuse it.
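
To illustrate the underlying idea of content authentication, here is a toy sketch. It is not C2PA or any real standard the EO references: it simply binds an origin label to the content bytes with an HMAC, so tampering with either the content or the label is detectable. The key, field names, and sample values are all invented for the example.

```python
# Toy illustration only, not C2PA or any real watermarking standard.
# A provenance tag binds an origin label ("ai-generated") to the content
# bytes with an HMAC; altering either the content or the label breaks
# verification.
import hashlib
import hmac
import json

SECRET = b"issuer-signing-key"  # invented; a real scheme would use public-key signatures


def tag(content: bytes, origin: str) -> dict:
    """Attach a signed origin label to a piece of content."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"origin": origin, "sha256": digest}, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}


def verify(content: bytes, manifest: dict) -> bool:
    """Check the signature, then check the content hash still matches."""
    expected = hmac.new(
        SECRET, manifest["payload"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, manifest["sig"]):
        return False
    claims = json.loads(manifest["payload"])
    return claims["sha256"] == hashlib.sha256(content).hexdigest()


image = b"...model output bytes..."
manifest = tag(image, "ai-generated")
print(verify(image, manifest))         # True
print(verify(image + b"x", manifest))  # False: content was altered
```

A production scheme would rely on public-key signatures from a trusted issuer so that anyone can verify a label without holding a secret, which is precisely where the standards work described above matters.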

To ensure the responsible development and deployment of AI technologies, the evolution of our legislative framework must continue. With transparency, visibility, and understanding as cornerstones, the tech industry and government can work together to mitigate risk and counteract threats.

Embracing a proactive approach to generative AI

We are entering a new phase of the cybersecurity arms race with AI in the driver’s seat. Any new technology is a double-edged sword, but the power and potential of generative AI make its edges particularly sharp.

The cat-and-mouse game is only just beginning. How the arms race will play out is uncertain, but it is critical that defenders in both industry and government are proactive. The good news: AI has prompted landmark regulation at unprecedented speed. As we navigate this uncharted territory, improving our defensive AI strategy must be an all-hands-on-deck effort.
