According to Zluri’s The State of AI Workplace 2025 report, 80% of the AI tools employees use are not managed by IT or security teams. AI is spreading throughout the workplace, but in many cases no one is watching it. If you are a CISO and want to avoid blind spots and data risks, you need to know where AI shows up and what it is doing across your organization.
What’s going on and why is it important?
Organizations use dozens, sometimes hundreds, of AI tools across different teams. These tools appear in marketing, sales, engineering, HR, and operations, yet most security teams know about fewer than 20% of them. Employees often adopt their own AI apps without approval or supervision. This gives rise to shadow AI: tools that operate outside the knowledge of IT and security.
When AI systems interact with sensitive internal data or produce output that affects business decisions, this lack of monitoring becomes dangerous. Unvetted tools may connect to unknown vendors, store data on public servers, or transmit inputs and outputs without encryption or an audit trail. As AI becomes embedded in daily tasks, the risk compounds.
Risks CISOs must treat as critical
Data leakage is one of the most pressing threats. AI tools that are not managed by IT can process sensitive internal information and expose it externally. In regulated industries, this can easily lead to non-compliance, especially when protected health data, financial information, or customer records are involved. Another concern is access sprawl.
AI platforms often create service accounts and connect via API keys. Without a central inventory, these credentials are easy to lose track of, which expands the attack surface. A third concern is the missing audit trail: if data is misused or leaked and there is no log of how it happened, incident response becomes nearly impossible.
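The access-sprawl problem becomes concrete as soon as you have even a basic credential inventory: ownerless or long-idle API keys are easy to surface mechanically. A minimal sketch, where the inventory records and field names (`owner`, `last_used`) are illustrative assumptions rather than any specific product's schema:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical credential inventory; all records and fields are illustrative.
now = datetime.now(timezone.utc)
api_keys = [
    {"id": "key-1", "owner": "ml-team", "last_used": now - timedelta(days=3)},
    {"id": "key-2", "owner": None, "last_used": now - timedelta(days=120)},
    {"id": "key-3", "owner": "sales-bot", "last_used": now - timedelta(days=95)},
]

def revocation_candidates(keys, max_idle_days=90):
    """Flag keys that are ownerless or unused beyond the idle window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
    return [k["id"] for k in keys if k["owner"] is None or k["last_used"] < cutoff]

print(revocation_candidates(api_keys))  # key-2 (no owner) and key-3 (idle 95 days)
```

Even this toy version shows why a central inventory matters: without the `last_used` and `owner` fields in one place, neither check is possible.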
What CISOs can do
Get visibility into all AI tools
Invest in discovery platforms that scan your network, identity systems, and SaaS usage to find the AI tools already in use.

Group by risk level
Classify tools by the data they can access. Tools with access to highly sensitive data warrant greater scrutiny; build policy accordingly.

Enforce least privilege
Limit the permissions granted to AI applications. Audit API keys, manage service accounts centrally, and revoke unused tokens.

Integrate into a governance framework
Add AI tools to your asset inventory and, as with SaaS applications, require a security review before approval.

Adopt real-time alerts
Use risk scoring tied to unusual AI usage patterns. Flag sensitive documents uploaded to unknown models.

Educate employees
Shadow AI grows fastest when staff don’t realize it’s a security risk. Run awareness campaigns and set clear usage policies.
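The discovery, classification, and alerting steps above can be sketched as one loop: compare each usage event against a sanctioned-tool inventory, weight it by data sensitivity, and alert above a threshold. Everything here is an assumption for illustration, including the inventory, the event fields, the weights, and the threshold; real deployments would pull these from a CASB, identity logs, or a DLP classifier:

```python
from dataclasses import dataclass

# Hypothetical sanctioned inventory and sensitivity weights (assumptions).
APPROVED_AI_TOOLS = {"chat.internal-llm.example"}
SENSITIVITY_WEIGHT = {"public": 0, "internal": 2, "confidential": 4}

@dataclass
class UsageEvent:
    user: str
    destination: str   # domain the data was sent to
    sensitivity: str   # classification of the uploaded content

def risk_score(event: UsageEvent) -> int:
    """Sensitive data raises risk; an unapproved AI destination raises it more."""
    score = SENSITIVITY_WEIGHT.get(event.sensitivity, 0)
    if event.destination not in APPROVED_AI_TOOLS:
        score += 3     # shadow AI: tool outside the managed inventory
    return score

def flag_events(events, threshold=5):
    """Return events that should raise a real-time alert."""
    return [e for e in events if risk_score(e) >= threshold]

events = [
    UsageEvent("alice", "chat.internal-llm.example", "confidential"),  # approved tool
    UsageEvent("bob", "unknown-ai.example", "confidential"),           # shadow AI
    UsageEvent("carol", "unknown-ai.example", "public"),               # low stakes
]
for e in flag_events(events):
    print(f"ALERT: {e.user} sent {e.sensitivity} data to {e.destination}")
```

Only the confidential upload to the unapproved tool crosses the threshold, which mirrors the guidance above: the combination of sensitive data and an unmanaged destination is what should page someone, not either signal alone.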