Close Menu
Versa AI hub
  • AI Ethics
  • AI Legislation
  • Business
  • Cybersecurity
  • Media and Entertainment
  • Content Creation
  • Art Generation
  • Research
  • Tools

Subscribe to Updates

Subscribe to our newsletter and stay updated with the latest news and exclusive offers.

What's Hot

Doudna Supercomputer to Strengthen AI and Genomics Research

May 30, 2025

Promote your creativity with new generation media models and tools

May 30, 2025

From California to Kentucky: Tracking the rise of state AI laws in 2025 | White & Case LLP

May 29, 2025
Facebook X (Twitter) Instagram
Versa AI hubVersa AI hub
Friday, May 30
Facebook X (Twitter) Instagram
Login
  • AI Ethics
  • AI Legislation
  • Business
  • Cybersecurity
  • Media and Entertainment
  • Content Creation
  • Art Generation
  • Research
  • Tools
Versa AI hub
Home»Cybersecurity»Why AI breaks traditional security stacks and how to fix it
Cybersecurity

Why AI breaks traditional security stacks and how to fix it

versatileaiBy versatileaiMay 27, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
#image_title
Share
Facebook Twitter LinkedIn Pinterest Email
Commentary: AI is deployed faster than the industry can protect. Whether it’s an LLM-based assistant, workflow with genai, or an automated decision for agent AI, traditional security tooling is not designed for this. Firewalls, EDR, SIEM, DLP-NONE are not built for hallucinations, evolving systems. Here. ) Most of the time, they can’t even see the model, and the enemy can do so, not to mention ensuring it. From data addiction and rapid injection to modeling theft and agent overthrow, attackers take advantage of blind spots that traditional tools cannot protect. The AI ​​attack surface is not just broad. It’s fundamentally different.

Why traditional tools are missing?

Most legacy security tools were built to protect deterministic systems, environments where software follows predictable logic and outcomes. Inputs are defined and the team can reasonably expect output. AI systems, particularly generative and agents, learn from data that is often derived from dynamic, unique or external sources, allowing attackers to tamper with the learning process. Techniques like data addiction allow malicious actors to subtly manipulate training data to produce harmful results later. This allows the AI ​​model to be misused through rapid injection even after training, not after making the dish, but rather tampering with ingredients in the recipe. These attacks embed malicious instructions in seemingly innocent inputs and redirected the behavior of the model without system-level compromise. Agent AI that can act autonomously poses even greater risk. Imagine an AI assistant reading a website with secret commands embedded in it. You may take unauthorized actions without purchasing, missing information or detecting it. These are just a few examples. Traditional web app scanners, antivirus tools, and SIEM platforms have not been built for this reality. For the AI ​​world, it’s more than just a best practice. it’s necessary. For AI, design-safe means integrating protection in the lifecycle of Machine Learning Security Operations (MLSECOPS) from initial scoping, model selection, data preparation to training, testing, deployment and monitoring. It also means adapting classic security principles of confidentiality, integrity, and availability (CIA) to fit the AI-specific context.

Confidentiality: Protects training datasets and model parameters from leaks and reverse engineering. Integrity: Beware of hostile input manipulation that distorts training data, model files, and output.

New toolset for AI security

A robust security attitude requires hierarchical defenses that take into account each phase of the AI ​​pipeline and predict how AI systems will be manipulated directly and indirectly. Here are some categories to prioritize:

1. Model scanner and red team.

Static scanners look for backdoors, embedded biases, and insecure output in model code or architecture. Dynamic tools simulate adversarial attacks to test runtime behavior. These will be complemented by AI’s red teaming. Testing for injection vulnerabilities, model extraction risk, or harmful emergency behavior.

2. AI-specific vulnerability feed.

Traditional CVEs do not capture the rapidly evolving threats with AI. Organizations need a model architecture, new rapid injection patterns, and real-time feeds that track data supply chain risk vulnerabilities. This information helps you prioritize AI-specific patching and mitigation strategies.

3. AI access control.

AI models often interact with vector databases, embeddings (numerical representations of meaning used to compare concepts of high-dimensional space), and unstructured data, making it difficult to enforce traditional column or field-level access control. AI AWARE access helps regulate the content used during inference and ensures proper separation between models, datasets, and users.

4. Monitoring and drift detection.

AI is dynamic. Learn, adapt, and sometimes drift. Organizations need monitoring capabilities to track changes in inference patterns, detect behavioral abnormalities, and record complete input and output exchanges for forensics and compliance. For Agent AI, it includes decision path tracking and mapping activities across multiple systems.

5. Automating policy enforcement and response.

Real-time protection, which acts like an “AI Firewall,” can intercept prompts or output that violates content policies, such as malware generation or leaking sensitive information. An automated response mechanism can quarantine a model, revoke it, or roll back deployments within milliseconds.

A framework that guides implementation

Luckily, security teams don’t have to start from scratch. Some frameworks provide a solid blueprint for building security into AI workflows.

LLMS’ OWASP Top 10 (2025) highlights certain risks such as rapid injection, data addiction and handling of unstable outputs. MapsItlasAtlas maps AI attack kill chains and provides tactics and mitigation from reconnaissance, while exfiltration.nist AI-RMF offers an approach that is attributable to governance.

Integrating these frameworks with MLSecops practices helps organizations ensure the right layers at the right time, using the right controls. Start by enabling security teams to visualize the AI ​​development pipeline. Building a bridge between data science and engineering peers. Invest in training staff on new threats and specialized tools. AI sequels are not just a tool challenge, they are strategic change. As AI systems evolve, an approach to risk, accountability and visibility is also needed. A true priority not only protects infrastructure, but also enables secure innovation at scale. Chief Information Security Officer Diana Kelley writes a column protecting the AISC media perspective by a trusted community of SC Media Cybersecurity subject matter experts. Each contribution has the goal of bringing a unique voice to key cybersecurity topics. We strive to ensure that our content is of the highest quality, objective and non-commercial.

author avatar
versatileai
See Full Bio
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleThe future of AI video generation and content creation
Next Article Five ChatGpts Prompt Prompts and Tiktok creators are now viral today
versatileai

Related Posts

Cybersecurity

foxlink and luminys build strategies for smart security and robotics

May 27, 2025
Cybersecurity

Singapore’s HTX is partnering with Google Cloud to boost AI

May 27, 2025
Cybersecurity

Trend Micro offers an enterprise AI security platform across data in the cloud or in-house

May 27, 2025
Add A Comment
Leave A Reply Cancel Reply

Top Posts

The UAE announces bold AI-led plans to revolutionize the law

April 22, 20253 Views

The UAE will use artificial intelligence to develop new laws

April 22, 20253 Views

New report on national security risks from weakened AI safety frameworks

April 22, 20253 Views
Stay In Touch
  • YouTube
  • TikTok
  • Twitter
  • Instagram
  • Threads
Latest Reviews

Subscribe to Updates

Subscribe to our newsletter and stay updated with the latest news and exclusive offers.

Most Popular

The UAE announces bold AI-led plans to revolutionize the law

April 22, 20253 Views

The UAE will use artificial intelligence to develop new laws

April 22, 20253 Views

New report on national security risks from weakened AI safety frameworks

April 22, 20253 Views
Don't Miss

Doudna Supercomputer to Strengthen AI and Genomics Research

May 30, 2025

Promote your creativity with new generation media models and tools

May 30, 2025

From California to Kentucky: Tracking the rise of state AI laws in 2025 | White & Case LLP

May 29, 2025
Service Area
X (Twitter) Instagram YouTube TikTok Threads RSS
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms and Conditions
  • Disclaimer
© 2025 Versa AI Hub. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.

Sign In or Register

Welcome Back!

Login to your account below.

Lost password?