Generative AI making big impact on security pros, to no one’s surprise

The wildfire spread of generative AI has already had noticeable effects, both good and bad, on the day-to-day lives of cybersecurity professionals, according to a study released this week by the non-profit ISC2. The study – which surveyed more than 1,120 cybersecurity pros, most of them CISSP-certified and working in managerial roles – found a considerable degree of optimism about generative AI's role in security. More than four in five (82%) at least "somewhat agreed" that AI is likely to improve the efficiency with which they can do their jobs.

The respondents also saw wide-ranging potential applications for generative AI in cybersecurity work, the study found. Everything from actively detecting and blocking threats to identifying potential weak points in security and analyzing user behavior was cited as a possible use case, and automating repetitive tasks was also seen as a potentially valuable application of the technology.

Will generative AI help hackers more than security pros?

There was less consensus, however, on whether the overall impact of generative AI will be positive from a cybersecurity point of view. Serious concerns about social engineering, deepfakes, and disinformation – along with a slight majority who said AI could make some parts of their work obsolete – mean that more respondents believe AI will benefit bad actors than believe it will benefit security professionals.

“The fact that cybersecurity professionals are pointing to these types of information and deception attacks as the biggest concern is understandably a great worry for organizations, governments and citizens alike in this highly political year,” the study’s authors wrote.

Some of the biggest issues respondents cited are, in fact, less concrete cybersecurity problems than general regulatory and ethical concerns. Fifty-nine percent said the current lack of regulation around generative AI is a serious issue, 55% cited privacy concerns, and 52% pointed to data poisoning (accidental or otherwise).

Because of those worries, a substantial minority said they were blocking employee access to generative AI tools – 12% reported a total ban and 32% a partial one. Just 29% said they were allowing access to generative AI tools, while a further 27% said they either hadn't discussed the issue or weren't sure of their organization's policy on the matter.
