Emerging cyber threats in 2023: from AI to quantum to data poisoning

Companies using Microsoft Teams got news earlier in the summer of 2023 that a Russian hacker group was using the platform to launch phishing attacks, putting a new spin on a long-known attack strategy. According to Microsoft Threat Intelligence, the hackers, identified as Midnight Blizzard, used Microsoft 365 tenants owned by small businesses compromised in previous attacks to host and launch new social engineering attacks.

Threats evolve constantly as hackers and grifters gain access to new technologies or come up with new ways to exploit old vulnerabilities. “It’s a cat and mouse game,” says Mark Ruchie, CISO of security firm Entrust.

Phishing remains the most common attack, with the 2023 Comcast Business Cybersecurity Threat Report finding that nine out of 10 attempts to breach its customers’ networks started with a phish.

The volume and velocity of attacks have increased, as have the costs incurred by victims, with the 2022 Official Cybercrime Report from Cybersecurity Ventures estimating that the cost of cybercrime will jump from $3 trillion in 2015 to a projected $10.5 trillion in 2025.

At the same time, security leaders say they see new takes on standard attack methods — such as the attacks launched by Midnight Blizzard (which has also been identified by the names APT29, Cozy Bear and NOBELIUM) — as well as novel attack strategies. Data poisoning, SEO poisoning and AI-enabled threat actors are among the emerging threats facing CISOs today.

“The moment you agree to be a CISO, you agree to get into a race you never win completely, and there are constantly evolving things that you have to have on your screen,” says Andreas Wuchner, field CISO for security company Panaseer and a member of the company’s advisory board.

AI- and generative AI-enabled attacks

Some of the most notable emerging threats stem from the rapid maturing and proliferation of artificial intelligence, experts say. Security officials have witnessed hackers adopt AI at a pace that rivals — and sometimes surpasses — that of enterprise technology teams.

The potential of AI-enabled attacks wasn’t unexpected. According to a 2019 Forrester Research report, 80% of cybersecurity decision-makers expected AI to increase the scale and speed of attacks and 66% expected AI “to conduct attacks that no human could conceive of.”

The report further stated that “these attacks will be stealthy and unpredictable in a way that enables them to evade traditional security approaches that rely on rules and signatures and only reference historical attacks.”

That’s happening now, some experts say.

“AI-enabled cyberattacks are already a threat that organizations are unable to cope with. This security threat will only grow as we witness new advances in AI methodology, and as AI expertise becomes more widely available,” assert the authors of a December 2022 report from the Finnish Transport and Communications Agency in conjunction with the Helsinki-headquartered cybersecurity company WithSecure.

According to that report, hackers are using AI to analyze attack strategies, thereby enhancing their likelihood of success. Hackers are also using AI to heighten the speed, scale and scope of their activities.

Cybersecurity leaders point to additional emerging threats posed by AI — and more specifically generative AI. First is hackers’ use of gen AI to develop malware. They’re also using it to create more convincing phishing and smishing messages, with content that accurately mimics the language, tone, and design of legitimate emails.

That eliminates the awkward diction and sloppy graphics that often give such messages away as malicious. As Ruchie says, “The phishing emails today are getting more savvy, but generative AI is sure to ramp that up to a level not seen before.”

Kayne McGladrey, field CISO at Hyperproof, has seen the evidence. He worked with one organization whose executives received a contract for review and signature. “Nearly everything looked right,” McGladrey says. The only noticeable mistake was a minor error in the company’s name, which the chief counsel caught.

But gen AI isn’t just boosting hackers’ speed and sophistication; it’s also expanding their reach, McGladrey says. Hackers can now use gen AI to create phishing campaigns with believable text in nearly any language, including those that have seen fewer attack attempts to date because the language is hard to learn or rarely spoken by non-native speakers.

“If nothing else, generative AI does a great job at translating content, so countries that haven’t experienced many phishing attempts so far may soon see more,” McGladrey adds.

Others warn that more AI-enabled threats are on the horizon, saying they expect hackers to use deepfakes to mimic individuals such as high-profile executives and civic leaders, whose voices and images are widely and publicly available to train AI models on.

“It’s definitely something we’re keeping an eye on, but already the possibilities are pretty clear. The technology is getting better and better, making it harder to discern what’s real,” says Ryan Bell, threat intelligence manager at cyber insurance provider Corvus, citing the use of deepfake images of Ukrainian President Volodymyr Zelensky to pass along disinformation as evidence of the technology’s use for nefarious purposes.

Moreover, the Finnish report offered a dire assessment of what’s ahead: “In the near future, fast-paced AI advances will enhance and create a larger range of attack techniques through automation, stealth, social engineering, or information gathering. Therefore, we predict that AI-enabled attacks will become more widespread among less skilled attackers in the next five years. As conventional cyberattacks will become obsolete, AI technologies, skills and tools will become more available and affordable, incentivizing attackers to make use of AI-enabled cyberattacks.”

Hijacking enterprise AI

On a related note, some security experts say hackers could use an organization’s own chatbots against it.

As is the case with more conventional attack scenarios, attackers could try to hack into the chatbot systems to steal any data within those systems or to use them to access other systems that hold greater value to the bad actors.

That, of course, is not particularly novel. What is, though, is the potential for hackers to repurpose compromised chatbots and then use them as conduits to spread malware or perhaps interact with others — customers, employees, or other systems — in nefarious ways, says Matt Landers, a security engineer with security firm OccamSec.

Similar warnings recently came from Voyager18, the cyber risk research team at security software company Vulcan Cyber. Its researchers published a June 2023 advisory detailing how hackers could use generative AI, including ChatGPT, to spread malicious packages into developers’ environments.
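A simple defensive check can blunt that particular tactic. The sketch below is a minimal example, assuming dependencies come from PyPI and that its public JSON metadata endpoint (which includes a releases listing) is reachable: before installing a package an AI assistant suggests, it confirms the name actually exists and has some release history, since nonexistent or brand-new names are hallmarks of hallucinated or squatted packages.

```python
import json
import sys
import urllib.request
from urllib.error import HTTPError

PYPI_JSON = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint


def vet_package(name: str, min_releases: int = 3) -> bool:
    """Rough sanity check for a package name suggested by an AI assistant.

    Returns False if the package does not exist on PyPI or has almost no
    release history -- both warning signs for hallucinated or squatted names.
    """
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=10) as resp:
            data = json.load(resp)
    except HTTPError as err:
        if err.code == 404:
            print(f"{name}: not on PyPI -- possibly a hallucinated name")
            return False
        raise

    releases = data.get("releases", {})
    if len(releases) < min_releases:
        print(f"{name}: only {len(releases)} release(s) -- review before installing")
        return False

    print(f"{name}: exists with {len(releases)} releases")
    return True


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        vet_package(pkg)
```

A check like this is no substitute for dependency review or an internal package mirror, but it catches the most obvious hallucinated names before they reach a developer’s machine.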

Wuchner says the new threats posed by AI don’t end there. Organizations could find errors, vulnerabilities, and malicious code entering the enterprise as more workers — particularly those outside IT — use gen AI to write code they can quickly deploy, he says.

“All the studies show how easy it is to create scripts with AI, but trusting these technologies is bringing things into the organization that no one ever thought about,” Wuchner adds.

Quantum computing

The United States passed the Quantum Computing Cybersecurity Preparedness Act in December 2022, codifying into law a measure aimed at securing federal government systems and data against the quantum-enabled cyberattacks that many expect will happen as quantum computing matures.

Several months later, in June 2023, the European Policy Centre urged similar action, calling on European officials to prepare for the advent of quantum cyberattacks — an anticipated event dubbed Q-Day.

According to experts, quantum computing could advance enough in the next five to 10 years to break today’s cryptographic algorithms, a capability that would leave any digital information protected by current encryption protocols vulnerable to cyberattacks.

“We know quantum computing will hit us in three to 10 years, but no one really knows what the full impact will be yet,” Ruchie says. Worse still, he says bad actors could use quantum computing or quantum computing paired with AI to “spin out new threats.”
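A commonly recommended first step toward that kind of preparedness is a cryptographic inventory. The sketch below is a minimal illustration, assuming the third-party Python cryptography package and a hypothetical directory of PEM-encoded certificates; it flags certificates whose public keys rely on RSA or elliptic-curve algorithms, the ones a sufficiently large quantum computer is expected to break, so they can be scheduled for migration to post-quantum alternatives.

```python
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

# Algorithms whose security rests on factoring or discrete logarithms,
# both of which a large fault-tolerant quantum computer could break.
QUANTUM_VULNERABLE = (rsa.RSAPublicKey, ec.EllipticCurvePublicKey)


def inventory_certs(cert_dir: str) -> None:
    """Print each certificate's public-key type so those relying on
    quantum-vulnerable algorithms can be scheduled for migration."""
    for pem_path in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem_path.read_bytes())
        key = cert.public_key()
        flag = "MIGRATE" if isinstance(key, QUANTUM_VULNERABLE) else "review"
        print(f"{pem_path.name}: {type(key).__name__} [{flag}]")


if __name__ == "__main__":
    inventory_certs("./certs")  # hypothetical directory of PEM certificates
```

In practice the same idea extends to TLS configurations, code-signing keys, and VPN settings; the point is knowing where quantum-vulnerable cryptography lives before Q-Day forces the question.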

Data and SEO poisoning

Another threat that has emerged is data poisoning, says Rony Thakur, collegiate associate professor at the University of Maryland Global Campus’ School of Cybersecurity and IT.

With data poisoning, attackers tamper with or corrupt the data used to train machine learning and deep learning models. They can do so using a variety of techniques. Sometimes also called model poisoning, this attack aims to degrade the accuracy of the AI’s decision-making and outputs.

As Thakur summarizes: “You can manipulate algorithms by poisoning the data.”

He notes that both insiders and external bad actors are capable of data poisoning. Moreover, he says many organizations lack the skills to detect such a sophisticated attack. Although organizations have yet to see or report such attacks at any scale, researchers have demonstrated that they are feasible.
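As a simple illustration of why poisoning is hard to notice, the sketch below uses scikit-learn on synthetic data (a research-style demonstration, not an attack recipe) to flip the labels on 15% of a training set. Both models train without errors; the poisoned one is just quietly less accurate.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Synthetic binary classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Label-flipping poisoning: silently flip 15% of the training labels.
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.15 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean model accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned model accuracy:", poisoned_model.score(X_test, y_test))
```

Real-world poisoning can be far subtler than wholesale label flipping, which is part of why it is so hard to detect.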

Others cite an additional “poisoning” threat: search engine optimization (SEO) poisoning, which most commonly involves manipulating search engine rankings to redirect users to malicious websites that install malware on their devices. Info-Tech Research Group called out SEO poisoning as a growing threat in its June 2023 Threat Landscape Briefing.

Preparing for what’s next

A majority of CISOs are anticipating a changing threat landscape: 58% of security leaders expect a different set of cyber risks in the upcoming five years, according to a poll taken by search firm Heidrick & Struggles for its 2023 Global Chief Information Security Officer (CISO) Survey.

CISOs name AI and machine learning as the top theme among the most significant cyber risks, with 46% citing it. They also list geopolitical risk, attacks, threats, cloud, quantum, and supply chain as other top cyber risk themes.

Authors of the Heidrick & Struggles survey noted that respondents offered some thoughts on the topic. For example, one wrote that there will be “a continued arms race for automation.” Another wrote, “As attackers increase [the] attack cycle, respondents must move faster.” A third shared that “Cyber threats [will be] at machine speed, whereas defenses will be at human speed.”

The authors added, “Others expressed similar concerns, that skills will not scale from old to new. Still others had more existential fears, citing the ‘dramatic erosion in our ability to discern truth from fiction.'”

Security leaders say the best way to prepare for evolving threats and any new ones that might emerge is to follow established best practices while also layering in new technologies and strategies that strengthen defenses and build proactive elements into enterprise security.

“It’s taking the fundamentals and applying new techniques where you can to advance [your security posture] and create a defense in depth so you can get to that next level, so you can get to a point where you could detect anything novel,” says Norman Kromberg, CISO of security software company NetSPI. “That approach could give you enough capability to identify that unknown thing.”
