Close Menu
Versa AI hub
  • AI Ethics
  • AI Legislation
  • Business
  • Cybersecurity
  • Media and Entertainment
  • Content Creation
  • Art Generation
  • Research
  • Tools

Subscribe to Updates

Subscribe to our newsletter and stay updated with the latest news and exclusive offers.

What's Hot

Republicans are trying to boost AI while tightening grips on social media and online speeches

May 17, 2025

Face x Langchain embrace: a new partner package

May 17, 2025

What is the security attitude of campus in the age of AI? – Campus Technology

May 17, 2025
Facebook X (Twitter) Instagram
Versa AI hubVersa AI hub
Saturday, May 17
Facebook X (Twitter) Instagram
Login
  • AI Ethics
  • AI Legislation
  • Business
  • Cybersecurity
  • Media and Entertainment
  • Content Creation
  • Art Generation
  • Research
  • Tools
Versa AI hub
Home»Cybersecurity»AI Security Status in 2025: Key Insights from the Cisco Report
Cybersecurity

AI Security Status in 2025: Key Insights from the Cisco Report

versatileaiBy versatileaiMay 16, 2025No Comments6 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
#image_title
Share
Facebook Twitter LinkedIn Pinterest Email

As more companies adopt AI, understanding their security risks has become more important than ever. AI is restructuring industries and workflows, but it also introduces new security challenges that organizations have to deal with. Protecting AI systems is essential to maintaining trust, protecting privacy and ensuring smooth business operations. This article summarizes key insights from Cisco’s recent AI Security Status 2025 report. This provides an overview of where AI security stands today and what businesses should consider the future.

Increased security threats to AI

If you taught me something in 2024, then adopting AI means that many organizations are moving faster than they can secure it. According to a Cisco report, around 72% of organizations today use AI for business functions, but only 13% feel ready to safely maximize their potential. This gap between recruitment and preparation is driven primarily by security concerns, a major barrier to wider use of corporate AI. What is even more concerning about this situation is that AI will introduce new types of threats that are not equipped to fully handle traditional cybersecurity methods. Unlike traditional cybersecurity, which often protects fixed systems, AI poses dynamic and adaptive threats that are difficult to predict. The report highlights some new threats that organizations should recognize:

Infrastructure Attacks: AI infrastructure has become a major target for attackers. A notable example is the compromise of Nvidia’s container toolkit, which allows attackers to access file systems, execute malicious code, and escalate privileges. Similarly, Ray, an open source AI framework for GPU management, was breached in one of the first real-world AI framework attacks. These cases demonstrate how weaknesses in AI infrastructure affect many users and systems. Supply Chain Risk: The vulnerability of the AI ​​supply chain cites another important concern. Approximately 60% of organizations rely on open source AI components or ecosystems. This creates a risk because attackers can undermine these widely used tools. The report mentions a technique known as “Sleepy Pickle.” This allows enemies to tamper with AI models even after distribution. This makes detection extremely difficult. AI-specific attacks: New attack technologies are evolving rapidly. Methods such as rapid injection, jailbreaking, and training data extraction allow attackers to bypass safety controls and access sensitive information contained in training datasets.

Attack vectors targeting AI systems

The report highlights the emergence of attack vectors used by malicious actors to exploit the weaknesses of AI systems. These attacks can occur at various stages of the AI ​​lifecycle, from data collection and model training to deployment and inference. The goal is often to make AI work in an unintended way, leak private data, and carry out harmful actions.

In recent years, these attack methods have become more sophisticated and difficult to detect. The report highlights several types of attack vectors.

Jailbreak: This technique involves creating adversarial prompts that bypass the safety measures in the model. Despite improvements in AI defense, Cisco research shows that simple jailbreaks continue to be effective against advanced models such as the Deepseek R1. Indicator Quick Injection: Unlike direct attacks, this attack vector manipulates the input data or the context used indirectly by the AI ​​model. Attackers can provide compromised source material, such as malicious PDFs and web pages, which can cause AI to produce unintended or harmful output. These attacks are particularly dangerous as they do not require direct access to AI systems and cause attackers to bypass many traditional defenses. Bypasses many traditional defenses. Data extraction and addiction training: Cisco researchers have demonstrated that chatbots can be tricked into revealing some of their training data. This raises serious concerns about data privacy, intellectual property and compliance. Attackers can also poison training data by injecting malicious input. Surprisingly, only 0.01% of large datasets such as the LAION-400M and the Coyo-700M can affect the behavior of the model, which can be done in small quantities (about $60), allowing many bad actors to access these attacks.

The report highlights serious concerns about the current state of these attacks, with researchers achieving 100% success rates for advanced models such as the Deepseek R1 and Llama 2. Additionally, the report identifies the emergence of new threats, such as voice-based jailbreaks, specifically designed to target multimodal AI models.

Findings from Cisco AI Security Survey

The Cisco research team evaluated various aspects of AI security and uncovered several important findings.

Jailbreaking of algorithms: Researchers have shown that even top AI models can be automatically fooled. Using a method called Tree of Attacks using Pruning (TAP), researchers bypassed protection for GPT-4 and Llama 2. However, researchers have found that fine tuning can weaken internal safety guardrails. The tweaked version was more than three times more vulnerable than the original model, taking 22 times more likely to produce more harmful content than the original model. Data Extraction: Cisco researchers used a simple decomposition method to recreate fragments of news articles that allow chatbots to be tricked into reconstructing the source of material. This poses the risk of publishing sensitive or unique data. DataPionsing: Data Pioning: The Cisco team shows how easy and inexpensive it is to poison large web datasets. For around $60, the researchers were able to poison 0.01% of datasets such as the LAION-400M and Coyo-700M. Furthermore, they emphasize that this level of addiction is sufficient to cause significant changes in the behavior of the model.

The role of AI in cybercrime

AI is becoming more than just a target, it is becoming a tool for cybercriminals. The report notes that automation and AI-driven social engineering have made attacks more effective and difficult to find. From phishing scams to voice cloning, AI can help criminals create persuasive, personalized attacks. The report also identifies the rise of malicious AI tools such as “Darkpt,” specifically designed to support cybercrime by generating phishing emails and exploiting vulnerabilities. What is particularly concerning about these tools is their accessibility. Even low-skilled criminals can now create highly personalized attacks that avoid traditional defenses.

Best Practices for Securing AI

Given the unstable nature of AI security, Cisco recommends some practical steps for organizations.

Manage risk throughout the AI ​​lifecycle: It is important to identify risks at every stage of the AI ​​lifecycle and identify and reduce risks from data sourcing and model training to deployment and monitoring. This includes security for third-party components, enforcing strong guardrails, and tightly controlled access points. Techniques such as access control, permission management, and data loss prevention can focus on vulnerable areas. Organizations should focus on areas that may be targeted, such as supply chains and third-party AI applications. Understanding where the vulnerabilities lie can help businesses implement more targeted defenses. Employee education and training: As AI tools become more widespread, it is important to train users on responsible AI use and risk awareness. An informed workforce helps reduce accidental data exposure and misuse.

Looking ahead

AI adoption continues to grow, which will lead to evolution in security risks. Governments and organizations around the world are aware of these challenges and are beginning to build policies and regulations to guide AI security. As Cisco reports highlight, the balance between AI safety and progress defines the next era of AI development and deployment. Organizations that prioritize security along with innovation are great for dealing with challenges and seizing new opportunities.

author avatar
versatileai
See Full Bio
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleAI tools speed up government feedback and experts caution
Next Article What is the security attitude of campus in the age of AI? – Campus Technology
versatileai

Related Posts

Cybersecurity

What is the security attitude of campus in the age of AI? – Campus Technology

May 17, 2025
Cybersecurity

Protect your AI to be acquired by AI security pioneer Palo Alto Networks

May 14, 2025
Cybersecurity

Delinea strengthens cloud identity platform to protect AI at scale

May 13, 2025
Add A Comment
Leave A Reply Cancel Reply

Top Posts

The UAE announces bold AI-led plans to revolutionize the law

April 22, 20253 Views

Soulgen revolutionizes the creation of NSFW content

May 11, 20252 Views

UWI Five Islands Campus will host the AI ​​Research Conference

May 10, 20252 Views
Stay In Touch
  • YouTube
  • TikTok
  • Twitter
  • Instagram
  • Threads
Latest Reviews

Subscribe to Updates

Subscribe to our newsletter and stay updated with the latest news and exclusive offers.

Most Popular

The UAE announces bold AI-led plans to revolutionize the law

April 22, 20253 Views

Soulgen revolutionizes the creation of NSFW content

May 11, 20252 Views

UWI Five Islands Campus will host the AI ​​Research Conference

May 10, 20252 Views
Don't Miss

Republicans are trying to boost AI while tightening grips on social media and online speeches

May 17, 2025

Face x Langchain embrace: a new partner package

May 17, 2025

What is the security attitude of campus in the age of AI? – Campus Technology

May 17, 2025
Service Area
X (Twitter) Instagram YouTube TikTok Threads RSS
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms and Conditions
  • Disclaimer
© 2025 Versa AI Hub. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.

Sign In or Register

Welcome Back!

Login to your account below.

Lost password?