Cybersecurity

CrowdStrike study highlights security challenges in AI deployments

Published December 17, 2024 · Updated February 13, 2025

Do the security benefits of generative AI outweigh the harms? Just 39% of security professionals say the benefits outweigh the risks, according to a new report from CrowdStrike.

In 2024, CrowdStrike surveyed 1,022 security researchers and practitioners across the US, APAC, EMEA, and other regions. The results reveal that cyber professionals are deeply concerned about the challenges associated with AI. While 64% of respondents have either purchased or are researching a generative AI tool for work, the majority remain cautious: 32% are still evaluating the tools, and only 6% are actively using them.

What do security researchers want from generative AI?

According to the report:

  • The top motivation for deploying generative AI is not to address skills shortages or to satisfy leadership mandates, but to improve the ability to respond to and defend against cyberattacks.
  • General-purpose AI isn’t necessarily appealing to cybersecurity professionals; instead, they want generative AI coupled with security expertise.
  • 40% of respondents said the benefits and risks of generative AI are “equal,” 39% said the benefits outweigh the risks, and 26% said the risks outweigh the benefits.

“Security teams are using GenAI as part of their platform to extract more value from existing tools, improve the analyst experience, accelerate onboarding, and eliminate the complexity of integrating new point solutions,” the report states.

Measuring ROI is an ongoing challenge when implementing generative AI products. CrowdStrike found that quantifying ROI was the top financial concern among respondents. The next top two concerns were the cost of licensing AI tools and unpredictable or confusing pricing models.

CrowdStrike categorized ways to evaluate AI ROI into four groups, ranked by importance:

  • Optimizing costs by consolidating platforms and using more efficient security tools (31%)
  • Reduced security incidents (30%)
  • Less time spent managing security tools (26%)
  • Shorter training cycles and associated costs (13%)

Adding AI to an existing platform, rather than purchasing a standalone AI product, can potentially “realize incremental savings in relation to broader platform integration efforts,” CrowdStrike says.

See also: Ransomware group claimed responsibility for late November cyberattack that disrupted operations at Starbucks and other organizations.

Can generative AI create more security problems than it solves?

At the same time, generative AI itself must be secured. According to CrowdStrike’s research, security professionals are most concerned about data leaking to the LLMs behind AI products and about attacks carried out against generative AI tools.

Other concerns included:

  • Generative AI tools lacking guardrails and controls
  • AI hallucinations
  • Insufficient public policy regulating the use of generative AI

Almost all respondents (9 in 10) said their organization has either implemented new security policies or will develop policies for managing generative AI within the next year.

How organizations can leverage AI to protect against cyber threats

Generative AI can be used for brainstorming, research, or analysis, with the understanding that its output often needs to be double-checked. It also pulls data from disparate sources into a single window in a variety of formats, reducing the time it takes to investigate incidents. Many automated security platforms now offer generative AI assistants, such as Microsoft’s Security Copilot.
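As a rough illustration of that “single window” idea, the sketch below consolidates alerts from several hypothetical tools into one chronological view that an analyst, or a GenAI assistant, could then summarize. The source names, fields, and sample alerts are invented for illustration and are not taken from the CrowdStrike report or from any vendor’s API.

```python
# Hypothetical sketch: consolidating alerts from disparate tools into one view
# that a GenAI assistant could summarize. All sources, fields, and sample data
# are illustrative assumptions, not a vendor API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Alert:
    source: str          # e.g. "EDR", "email-gateway", "firewall"
    timestamp: datetime
    severity: str
    description: str

def build_incident_context(alerts: list[Alert]) -> str:
    """Merge alerts from different tools into one chronological narrative."""
    ordered = sorted(alerts, key=lambda a: a.timestamp)
    return "\n".join(
        f"{a.timestamp:%Y-%m-%d %H:%M} [{a.source}/{a.severity}] {a.description}"
        for a in ordered
    )

alerts = [
    Alert("email-gateway", datetime(2024, 12, 16, 9, 2), "medium",
          "Credential-phishing email delivered to a finance mailbox"),
    Alert("EDR", datetime(2024, 12, 16, 9, 17), "high",
          "Suspicious PowerShell process spawned by Outlook on FIN-LAPTOP-07"),
    Alert("firewall", datetime(2024, 12, 16, 9, 21), "high",
          "Outbound connection to known command-and-control infrastructure"),
]

# One consolidated timeline replaces three separate consoles; any summary a
# GenAI assistant produces from this context should still be double-checked.
print(build_incident_context(alerts))
```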

GenAI can protect against cyber threats in the following ways:

  • Threat detection and analysis
  • Automated incident response
  • Phishing detection
  • Enhanced security analysis
  • Synthetic data for training
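As a hedged example of the phishing-detection use case, the sketch below passes a suspicious email to a language model for classification. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and sample email are illustrative choices, not something prescribed by the CrowdStrike report.

```python
# Hypothetical sketch: asking an LLM to triage a suspicious email for phishing.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var;
# the model name and prompt are illustrative, not taken from the CrowdStrike report.
from openai import OpenAI

client = OpenAI()

def classify_email(subject: str, body: str) -> str:
    """Ask the model whether an email looks like phishing and why."""
    prompt = (
        "You are a security analyst. Classify the email below as PHISHING or BENIGN "
        "and give a one-sentence justification.\n\n"
        f"Subject: {subject}\n\nBody:\n{body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model; substitute whatever your organization licenses
        messages=[{"role": "user", "content": prompt}],
        temperature=0,         # deterministic output for consistent triage
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    verdict = classify_email(
        "Urgent: verify your payroll details",
        "Click the link within 24 hours or your account will be suspended...",
    )
    print(verdict)  # an analyst still double-checks the model's verdict
```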

However, organizations must consider safety and privacy controls as part of their generative AI purchases. Doing so helps protect sensitive data, comply with regulations, and reduce risks such as data breaches and misuse. Without appropriate safeguards, AI tools can expose vulnerabilities, produce harmful output, or violate privacy laws, leading to financial, legal, and reputational damage.
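One minimal way to act on the “appropriate safeguards” point is a pre-send guardrail that redacts obviously sensitive strings before a prompt ever leaves the organization. The sketch below is only a toy illustration assuming regex-based redaction; a production deployment would rely on a proper DLP or data-classification service rather than these invented patterns.

```python
# Hypothetical sketch: a minimal pre-send guardrail that redacts obvious
# sensitive tokens before a prompt is sent to an external AI service.
# The patterns are illustrative assumptions, not a complete DLP policy.
import re

REDACTION_PATTERNS = {
    "EMAIL":  re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "APIKEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

prompt = "User jane.doe@example.com reported SSN 123-45-6789 leaked via key sk-abcdef1234567890abcd"
print(redact(prompt))
# -> "User [REDACTED-EMAIL] reported SSN [REDACTED-SSN] leaked via key [REDACTED-APIKEY]"
```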
