Versa AI hub
Cybersecurity

CrowdStrike study highlights security challenges in AI deployments

December 17, 2024 · Updated February 13, 2025 · 3 Mins Read

Do the security benefits of generative AI outweigh the harms? Just 39% of security professionals say the benefits outweigh the risks, according to a new report from CrowdStrike.

In 2024, CrowdStrike surveyed 1,022 security researchers and professionals in the US, APAC, EMEA, and other regions. The results reveal that cyber professionals are deeply concerned about the challenges associated with AI. 64% of respondents have purchased or are researching a generative AI tool for work, but the majority remain cautious: 32% are still evaluating a tool, and only 6% are actively using one.

What do security researchers want from generative AI?

According to the report:

  • The top motivation for deploying generative AI is not to address skills shortages or satisfy leadership mandates, but to improve the ability to respond to and defend against cyberattacks.
  • General-purpose AI isn’t necessarily appealing to cybersecurity professionals. Instead, they want generative AI coupled with security expertise.
  • 40% of respondents said the benefits and risks of generative AI are “equal,” 39% said the benefits outweigh the risks, and 26% said the risks outweigh the benefits.

“Security teams are using GenAI as part of their platform to extract more value from existing tools, improve the analyst experience, accelerate onboarding, and eliminate the complexity of integrating new point solutions,” the report states.

Measuring ROI is an ongoing challenge when implementing generative AI products. CrowdStrike found that quantifying ROI was the top financial concern among respondents. The next top two concerns were the cost of licensing AI tools and unpredictable or confusing pricing models.

CrowdStrike categorized ways to evaluate AI ROI into four groups, ranked by importance:

  • Cost optimization through platform consolidation and more efficient security tools (31%)
  • Fewer security incidents (30%)
  • Less time spent managing security tools (26%)
  • Shorter training cycles and associated costs (13%)
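The ranking above can be reproduced from the survey percentages. The sketch below is purely illustrative: the dictionary and function names are assumptions, not part of any CrowdStrike tooling, and the figures are the ones reported above.

```python
# Hypothetical helper: rank the reported ROI categories by share of respondents.
# Percentages come from the CrowdStrike survey; names are illustrative only.
roi_categories = {
    "Cost optimization via platform consolidation": 31,
    "Fewer security incidents": 30,
    "Less time managing security tools": 26,
    "Shorter training cycles": 13,
}

def rank_by_share(categories: dict[str, int]) -> list[tuple[str, int]]:
    """Return (category, percent) pairs sorted from most to least cited."""
    return sorted(categories.items(), key=lambda kv: kv[1], reverse=True)

for name, pct in rank_by_share(roi_categories):
    print(f"{pct:>3}%  {name}")
```

Note that the four shares sum to 100%, consistent with respondents picking a single top evaluation method.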

Adding AI to an existing platform, rather than purchasing a standalone AI product, can potentially “realize incremental savings in relation to broader platform integration efforts,” CrowdStrike says.

See also: Ransomware group claimed responsibility for late November cyberattack that disrupted operations at Starbucks and other organizations.


Can generative AI create more security problems than it solves?

At the same time, generative AI itself must be secured. According to CrowdStrike’s research, security professionals are most concerned about data leaking into the LLMs behind AI products and about attacks launched against generative AI tools themselves.

Other concerns included:

  • Generative AI tools lacking guardrails and controls
  • AI hallucinations
  • Insufficient public policy regulating the use of generative AI

Almost all respondents (about 9 in 10) said their organization has implemented new security policies for governing generative AI, or plans to develop them within the next year.

How organizations can leverage AI to protect against cyber threats

Generative AI can be used for brainstorming, research, or analysis, with the understanding that its output often needs to be double-checked. Generative AI can also pull data from disparate sources into a single window in a variety of formats, reducing the time it takes to investigate incidents. Many automated security platforms offer generative AI assistants, such as Microsoft’s Security Copilot.

GenAI can protect against cyber threats in the following ways:

  • Threat detection and analysis
  • Automated incident response
  • Phishing detection
  • Enhanced security analysis
  • Synthetic data for training
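To make the phishing-detection use case concrete, here is a minimal triage sketch. It uses a simple keyword heuristic as a stand-in where a real deployment would call a security-tuned GenAI model; every name, phrase list, and threshold is an illustrative assumption, not a description of any vendor’s product.

```python
# Toy stand-in for GenAI-assisted phishing triage. A real deployment would
# send the message to a security-tuned LLM for classification; a keyword
# heuristic here only illustrates the quarantine/deliver routing flow.
SUSPICIOUS_PHRASES = (
    "verify your account",
    "urgent action required",
    "password expires",
    "click here immediately",
)

def phishing_score(message: str) -> float:
    """Fraction of suspicious phrases present in the message (0.0 to 1.0)."""
    text = message.lower()
    hits = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    return hits / len(SUSPICIOUS_PHRASES)

def triage(message: str, threshold: float = 0.25) -> str:
    """Route a message to 'quarantine' or 'deliver' based on its score."""
    return "quarantine" if phishing_score(message) >= threshold else "deliver"
```

As the article notes, any such automated verdict should be treated as a first pass and double-checked by an analyst.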

However, organizations must consider safety and privacy controls as part of their generative AI purchases. Doing so helps protect sensitive data, comply with regulations, and reduce risks such as data breaches and misuse. Without appropriate safeguards, AI tools can expose vulnerabilities, produce harmful output, or violate privacy laws, leading to financial, legal, and reputational damage.
