Do the security benefits of generative AI outweigh the harms? Just 39% of security professionals say the benefits outweigh the risks, according to a new report from CrowdStrike.
In 2024, CrowdStrike surveyed 1,022 security researchers and professionals in the US, APAC, EMEA, and other regions. The results reveal that cyber professionals are deeply concerned about the challenges associated with AI. While 64% of respondents have purchased or are researching generative AI tools for work, the majority remain cautious: 32% are still evaluating the tools, and only 6% are actively using them.
What do security researchers want from generative AI?
According to the report:
The top motivation for deploying generative AI is not to address a skills shortage or satisfy leadership mandates, but to improve the ability to respond to and defend against cyberattacks.
General-purpose AI tools don’t necessarily appeal to cybersecurity professionals; instead, they want generative AI paired with security expertise.
40% of respondents said the benefits and risks of generative AI are “equal,” 39% said the benefits outweigh the risks, and 26% said the risks outweigh the benefits.
“Security teams are using GenAI as part of their platform to extract more value from existing tools, improve the analyst experience, accelerate onboarding, and eliminate the complexity of integrating new point solutions,” the report states.
Measuring ROI is an ongoing challenge when implementing generative AI products. CrowdStrike found that quantifying ROI was the top financial concern among respondents, followed by the cost of licensing AI tools and unpredictable or confusing pricing models.
CrowdStrike grouped the ways to evaluate AI ROI into four categories, ranked by importance:
Optimized costs through platform consolidation and more efficient security tool use (31%).
Fewer security incidents (30%).
Less time spent managing security tools (26%).
Shorter training cycles and associated costs (13%).
Adding AI to an existing platform, rather than purchasing a standalone AI product, can potentially “realize incremental savings in relation to broader platform integration efforts,” CrowdStrike says.
See also: A ransomware group claimed responsibility for a late-November cyberattack that disrupted operations at Starbucks and other organizations.
Can generative AI create more security problems than it solves?
Conversely, generative AI itself must be secured. According to CrowdStrike’s research, security professionals are most concerned about sensitive data being exposed to the LLMs behind AI products and about attacks launched against generative AI tools.
Other concerns included:
A lack of guardrails and controls in generative AI tools.
AI hallucinations.
Insufficient public policy regulating the use of generative AI.
Almost all respondents (about 9 in 10) said their organization has implemented new security policies, or is developing policies, for managing generative AI within the next year.
How organizations can leverage AI to protect against cyber threats
Generative AI can be used for brainstorming, research, or analysis, with the understanding that its output often needs to be double-checked. Generative AI can also pull data from disparate sources into a single window in a variety of formats, reducing the time it takes to investigate incidents. Many automated security platforms offer generative AI assistants, such as Microsoft’s Security Copilot.
GenAI can protect against cyber threats in the following ways:
Threat detection and analysis.
Automated incident response.
Phishing detection (a minimal sketch follows below).
Enhanced security analysis.
Synthetic data for training.
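To make one of these concrete, here is a minimal Python sketch of how a generative AI model could assist with phishing triage. Everything in it is an illustrative assumption rather than any vendor’s actual API: call_llm is a stand-in for whatever approved LLM endpoint an organization uses, the prompt and JSON schema are invented, and the model’s verdict is treated as a signal for analyst review, not an automated decision.

```python
import json

# Hypothetical prompt; the JSON schema and field names are illustrative assumptions.
PHISHING_PROMPT = """You are a security analyst assistant. Review the email below and
respond only with JSON of the form
{{"verdict": "phishing" | "benign" | "unclear", "indicators": [...], "confidence": 0.0-1.0}}.

Email headers and body:
{email}
"""


def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM endpoint the organization has approved
    (a vendor SDK, an internal gateway, or a security platform's built-in assistant)."""
    raise NotImplementedError("Wire this to your approved generative AI endpoint.")


def triage_email(raw_email: str) -> dict:
    """Ask the model for a structured phishing verdict, then sanity-check the response."""
    response = call_llm(PHISHING_PROMPT.format(email=raw_email))
    try:
        result = json.loads(response)
    except json.JSONDecodeError:
        # Malformed or hallucinated output: treat as unclear and route to a human analyst,
        # in line with the point above that AI output needs double-checking.
        return {"verdict": "unclear", "indicators": [], "confidence": 0.0}

    # Never auto-block on AI output alone; low-confidence "phishing" verdicts go to review.
    if result.get("verdict") == "phishing" and result.get("confidence", 0.0) < 0.8:
        result["verdict"] = "unclear"
    return result
```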
However, organizations must consider safety and privacy controls as part of their generative AI purchases. Doing so helps protect sensitive data, comply with regulations, and reduce risks such as data breaches and misuse. Without appropriate safeguards, AI tools can expose vulnerabilities, produce harmful output, or violate privacy laws, leading to financial, legal, and reputational damage.
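As one example of the kind of safeguard described above, the sketch below strips obviously sensitive values (email addresses, IP addresses, credential-like strings) from a prompt before it is sent to an external model. The patterns and policy are illustrative assumptions, not a complete data loss prevention control.

```python
import re

# Illustrative redaction rules; a real deployment would use an approved DLP policy.
REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "[REDACTED_IP]"),
    (re.compile(r"(?i)(api[_-]?key|password|secret)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]


def redact(prompt: str) -> str:
    """Apply each redaction rule before the prompt leaves the organization."""
    for pattern, replacement in REDACTION_RULES:
        prompt = pattern.sub(replacement, prompt)
    return prompt


if __name__ == "__main__":
    sample = "Investigate login from 203.0.113.7 by jane.doe@example.com, api_key=abc123"
    print(redact(sample))
    # -> Investigate login from [REDACTED_IP] by [REDACTED_EMAIL], api_key=[REDACTED]
```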