The wildfire spread of generative AI has already had noticeable effects, both good and bad, on the day-to-day lives of cybersecurity professionals, a study released this week by the non-profit ISC2 group has found. The study, which surveyed more than 1,120 cybersecurity pros, mostly CISSP-certified and working in managerial roles, found a considerable degree of optimism about the role of generative AI in the security realm. More than four in five (82%) at least "somewhat agreed" that AI is likely to improve the efficiency with which they can do their jobs.
The respondents also saw wide-ranging potential applications for generative AI in cybersecurity work, the study found. Cited use cases ranged from actively detecting and blocking threats and identifying potential weak points in security to analyzing user behavior. Automating repetitive tasks was also seen as a potentially valuable use for the technology.
Will generative AI help hackers more than security pros?
There was less consensus, however, as to whether the overall impact of generative AI will be positive from a cybersecurity point of view. Serious concerns around social engineering, deepfakes, and disinformation, along with a slight majority saying that AI could make some parts of their work obsolete, mean that more respondents believe AI could benefit bad actors more than security professionals.
"The fact that cybersecurity professionals are pointing to these types of information and deception attacks as the biggest concern is understandably a great worry for organizations, governments and citizens alike in this highly political year," the study's authors wrote.
Some of the biggest issues cited by respondents, in fact, are less concrete cybersecurity problems than they are general regulatory and ethical concerns. Fifty-nine percent said that the current lack of regulation around generative AI is a real issue, along with 55% who cited privacy issues and 52% who said data poisoning (accidental or otherwise) was a concern.
Because of those worries, substantial minorities said that they were blocking employee access to generative AI tools: 12% said their ban was total and 32% said it was partial. Just 29% said that they were allowing generative AI tool access, while a further 27% said they either hadn't discussed the issue or weren't sure of their organization's policy on the matter.