The explosive growth of generative AI has created an unprecedented security challenge for businesses. New research has recorded an astounding 3,000% increase in corporate AI usage in just one year, with employees now sending around 7.7GB of sensitive data to AI tools each month. More concerning, about 8.5% of employee prompts to large language models (LLMs) include sensitive information that could put an organization at risk.
This dramatic change in how data flows through the corporate environment is set against a backdrop of increasingly devastating data breaches. The recently published list of the top 11 data breaches of 2024 reveals a worrying evolution in the breach landscape, with financial services overtaking healthcare as the most targeted sector and compromise reaching unprecedented levels.
The AI adoption explosion
Recent research has recorded extraordinary year-on-year growth of over 3,000% in corporate use of AI/ML tools across industries. This is more than experimental adoption. Organizations have deeply integrated these technologies into core operations, and employees have incorporated AI into their daily workflows to drive productivity, efficiency, and innovation.
Companies are walking an increasingly narrow tightrope between AI innovation and security. The metaphor aptly captures the challenge: maintaining robust security controls without suppressing the competitive benefits of AI. Organizations that fail to strike this balance risk falling behind their competitors or suffering a catastrophic breach.
New frontiers of data risk
The 2024 breach landscape showed a concerning acceleration in both frequency and impact compared to past years. Organizations reported 4,876 breaches to regulators, representing a 22% increase over the 2023 figure. Even more concerning was the dramatic increase in the volume of compromised records, up 178% year-on-year to a record 4.2 billion.
This massive scale of exposure has coincided with companies' rapid adoption of AI tools, creating a perfect storm of security challenges. The National Public Data breach exposed 2.9 billion records, demonstrating how data aggregation creates centralized risk points where a single security failure can have global consequences.
What makes the AI security crisis particularly striking is that these tools are designed to ingest, process, and generate content based on vast amounts of information. When employees provide sensitive data to these systems, whether intentionally or accidentally, the potential impact is exponentially greater than that of traditional data breach vectors.
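One common mitigation for this kind of accidental leakage is screening prompts for sensitive patterns before they ever reach an external AI service. The sketch below is a hypothetical illustration only, not a tool named in the report: the pattern names, regexes, and `flag_sensitive` function are assumptions, and real data-loss-prevention products use far richer detection (context analysis, checksums, ML classifiers).

```python
import re

# Minimal pre-prompt filter (illustrative): flag common sensitive-data
# patterns before text is sent to an external LLM. The patterns below are
# deliberately simple examples, not production-grade detectors.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # 13-16 digit card number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email address
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this: customer SSN 123-45-6789, email jane@example.com"
print(flag_sensitive(prompt))  # ['ssn', 'email']
```

A gateway built on this idea would block or redact flagged prompts rather than merely log them, which is the design choice that actually shrinks the exposure described above.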
Key insights from major breaches
The Kiteworks report offers several findings that sharpen our understanding of the AI security crisis. First, data sensitivity emerged as the most influential factor (24%) in determining breach severity, outweighing even the number of exposed records. This suggests that what was stolen matters more than how much was taken, an important consideration for organizations that routinely share highly sensitive data with AI systems.
The breaches with the highest supply chain impact scores include National Public Data (8.5) and Hot Topic (8.2). National Public Data’s aggregation business model created a single point of failure affecting thousands of downstream data consumers. In contrast, Hot Topic’s MageCart attack, which exploited third-party JavaScript libraries, affected many connected retail partners and payment processors.
This pattern reveals an uncomfortable parallel with AI security concerns: third-party AI providers can become a single point of failure in an organization’s security architecture. When sensitive data is shared with external AI systems, organizations effectively extend their security perimeter to include these third-party providers, creating new vectors for potential breaches.
The correlation between attack sophistication and breach severity is also noteworthy. The most sophisticated attacks exhibited multiple advanced traits, including advanced persistence techniques, zero-day exploitation, and advanced social engineering. These social attacks have evolved beyond the generic phishing email and are characterized by convincing spoofing, psychological manipulation, and technical bypasses of advanced authentication systems.

