Commentary: AI isn’t waiting for the security team to catch up. The recent incident surrounding DeepSeek, in which Wiz researchers uncovered a wide range of vulnerabilities, including exposed databases, weak encryption, and susceptibility to AI-model jailbreaking, offers a stark lesson for organizations racing to adopt AI. When Wiz discovered a publicly accessible ClickHouse database containing sensitive chat history and API secrets, it revealed more than a technical oversight at DeepSeek: it exposed fundamental gaps in how AI systems are protected. Beyond the exposed database, the SecurityScorecard STRIKE team identified outdated encryption algorithms and weak data-protection mechanisms. Researchers also discovered a SQL injection vulnerability that could give attackers unauthorized access to user records. Most striking, the DeepSeek-R1 model showed astonishing failure rates in security testing: 91% for jailbreaking and 86% for prompt injection attacks. DeepSeek is in the news, but the threat it illustrates is not unique to it. It is a canary in the coal mine, warning of the security challenges that accompany rapid AI adoption. The company’s practice of collecting user inputs, keystroke patterns, and device data highlights the complex data-privacy implications of AI deployments.
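To make the cited figures concrete, a failure rate like “91% for jailbreaking” is typically computed by running a suite of adversarial prompts against a model and counting how many bypass its safety policy. The sketch below illustrates the idea only; the prompt list, the stubbed model, and the refusal check are illustrative assumptions, not DeepSeek’s or any vendor’s actual test harness.

```python
# Hypothetical sketch: measuring a jailbreak/prompt-injection failure rate.
# Everything here (prompts, refusal markers, the stubbed model) is a
# placeholder for illustration, not a real benchmark.

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable", "against my guidelines")

def is_refusal(response: str) -> bool:
    """Crude check: did the model decline the adversarial request?"""
    return any(marker in response.lower() for marker in REFUSAL_MARKERS)

def failure_rate(adversarial_prompts, query_model) -> float:
    """Fraction of adversarial prompts the model did NOT refuse."""
    failures = sum(1 for p in adversarial_prompts if not is_refusal(query_model(p)))
    return failures / len(adversarial_prompts)

# Toy run with a stubbed "model" that refuses 1 of 4 prompts:
prompts = ["inject-1", "inject-2", "inject-3", "inject-4"]
stub = {"inject-1": "Sure, here is how...",
        "inject-2": "I can't help with that.",
        "inject-3": "Of course! Step one...",
        "inject-4": "Here you go..."}.get

print(f"failure rate: {failure_rate(prompts, stub):.0%}")  # prints 75%
```

Real evaluations replace the keyword check with human or model-based grading, but the arithmetic behind the headline percentage is this simple.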
Data exposure and privacy: Organizations face significant risk from unauthorized access to sensitive user data such as chat histories and personal information. Collecting keystroke patterns and device data creates additional privacy concerns, especially when that information is stored in jurisdictions with weak privacy protections.

AI model vulnerabilities: Testing reveals critical weaknesses in AI model security. These vulnerabilities allow attackers to manipulate model outputs and extract sensitive information.

Infrastructure security: Weak encryption practices and outdated encryption algorithms can undermine the security of an entire system. The SQL injection vulnerability offers potential access to database content, while insufficient network segmentation allows lateral movement across connected systems. This also creates serious competitive risk, as attackers could potentially steal or reverse-engineer core AI technology. The severity of these risks has prompted major institutions such as the U.S. Navy, the Pentagon, and New York State to ban DeepSeek over “shadow AI” concerns, highlighting how intellectual-property vulnerabilities can shape broader security policy.

Regulatory compliance: Organizations must navigate complex data-protection regulations such as GDPR and CCPA. Security breaches can result in substantial fines and legal liability, and cross-border data transfers create additional compliance challenges.

Supply chain threats: Third-party AI components and development tools introduce potential backdoors and vulnerabilities. Organizations face major challenges in vetting the security of the external AI models and services they depend on.
While the AI security landscape may seem daunting, organizations are not helpless. Develop a comprehensive exposure management strategy before deploying AI technologies. From our experience working with companies across industries, here are the key elements of an effective program:
Focus on external exposure: With over 80% of breaches involving external actors, organizations need to prioritize the external attack surface. That means continuously monitoring internet-facing assets, particularly infrastructure associated with AI endpoints, including cloud services, on-premises systems, and third-party integrations. AI systems often have complex dependencies that create unexpected exposure points.

Test everything: Implement continuous security testing across all exposed assets, not just those considered important. This includes regular application security assessments, penetration testing, and AI-specific security evaluations. The traditional “crown jewels” approach misses critical vulnerabilities in seemingly low-priority systems.

Prioritize based on risk: Assess threats by their potential business impact, not just technical severity. Consider factors such as data sensitivity, operational dependencies, and regulatory implications when prioritizing remediation efforts.

Operationalize the findings: Integrate exposure management into existing security processes through automation and clear communication channels. Ensure findings are shared with relevant stakeholders and feed into broader security operations and incident-response processes.
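The risk-based prioritization step above can be sketched in a few lines: rank findings by business context rather than raw technical severity. This is a minimal illustration under assumed field names and weights; the scoring formula, the sample findings, and the 1–5 scales are illustrative assumptions, not a standard or any vendor’s actual methodology.

```python
# Hypothetical sketch: risk-based prioritization of security findings.
# Field names, weights, and sample data are illustrative only.

from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float                  # technical severity, 0-10
    data_sensitivity: int        # 1 (public) .. 5 (regulated/PII)
    internet_facing: bool        # external exposure matters most
    operational_dependency: int  # 1 (isolated) .. 5 (core business system)

def business_risk(f: Finding) -> float:
    """Blend technical severity with business context."""
    score = f.cvss * (0.5 * f.data_sensitivity + 0.5 * f.operational_dependency)
    return score * (1.5 if f.internet_facing else 1.0)  # boost external exposure

findings = [
    Finding("SQLi on marketing microsite", cvss=9.8, data_sensitivity=1,
            internet_facing=True, operational_dependency=1),
    Finding("Prompt injection on AI chat endpoint", cvss=6.5, data_sensitivity=5,
            internet_facing=True, operational_dependency=4),
]

# Highest business risk first -- note the lower-CVSS AI finding outranks
# the higher-CVSS finding on a low-value system.
for f in sorted(findings, key=business_risk, reverse=True):
    print(f"{business_risk(f):6.1f}  {f.name}")
```

The point of the example is the inversion: a 9.8-CVSS bug on a throwaway microsite can matter less than a 6.5-CVSS prompt-injection flaw on an internet-facing AI endpoint handling regulated data.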
The DeepSeek case serves as a critical wake-up call for organizations racing to implement AI technologies. As AI systems become increasingly integrated into core business operations, the security implications extend far beyond traditional cybersecurity concerns. Organizations need to recognize that AI security requires a fundamentally different approach, one that combines robust technical controls with a comprehensive exposure management strategy. The rapid pace of AI advancement means security teams can’t afford to play catch-up. Instead, teams need to build security into AI initiatives from the start, with continuous monitoring and testing as standard practice. Treating AI security as an afterthought is simply too expensive. Organizations must act now to implement comprehensive exposure management programs that address the unique challenges of AI security. Those that fail to do so risk data breaches, catastrophic damage to their operations and reputation, and regulatory penalties. In the evolving landscape of AI technology, security cannot be treated as optional.

Graham Rance, Global Pre-Sales, CyCognito

The SC Media Perspectives column is written by a trusted community of SC Media cybersecurity subject-matter experts. Each contribution aims to bring a unique voice to important cybersecurity topics. We strive to ensure our content is of the highest quality, objective and non-commercial.