AI tools are everywhere, most of them off the radar

By versatileai | July 3, 2025

According to Zluri’s The State of AI Workplace 2025 report, 80% of the AI tools used by employees are not managed by IT or security teams. AI is spreading throughout the workplace, but in many cases no one notices it. If you are a CISO and want to avoid blind spots and data risks, you need to know where AI shows up and what it is doing across your organization.

What’s going on and why is it important?

Organizations use dozens, sometimes hundreds, of AI tools across different teams. These tools appear in marketing, sales, engineering, HR, and operations, yet most security teams know about fewer than 20% of them. Employees often adopt their own AI apps without approval or supervision, which leads to shadow AI: tools that operate outside the knowledge of IT and security.

When AI systems interact with sensitive internal data or produce output that affects business decisions, the lack of monitoring becomes dangerous. Unvetted tools may connect to unknown vendors, store data on public servers, or transmit inputs and outputs without encryption or an audit trail. As AI becomes integrated into daily tasks, the risk compounds.
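To make the missing audit trail concrete, here is a minimal sketch of the kind of logging an unmanaged tool typically skips: recording who sent which prompt to which endpoint before the data leaves the organization. The forward_to_model function, the AUDIT_LOG path, and the endpoint URL are hypothetical placeholders, not any specific product’s API.

```python
# Minimal audit-trail sketch for outbound AI requests.
# forward_to_model() is a hypothetical stand-in for whatever vendor API
# an employee's tool actually calls.
import hashlib
import json
import time

AUDIT_LOG = "ai_audit_log.jsonl"


def forward_to_model(prompt: str, endpoint: str) -> str:
    """Hypothetical stand-in for a real vendor API call."""
    return f"(response from {endpoint})"


def audited_ai_call(user: str, prompt: str, endpoint: str) -> str:
    """Record an audit entry before the prompt leaves the organization."""
    entry = {
        "timestamp": time.time(),
        "user": user,
        "endpoint": endpoint,
        # Hash rather than store the prompt, so the log itself does not
        # become a second copy of sensitive data.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return forward_to_model(prompt, endpoint)


if __name__ == "__main__":
    print(audited_ai_call("jane.doe", "Summarize Q3 revenue figures",
                          "https://api.example-llm.com/v1/chat"))
```

Without something like this in the path, there is simply no record of what left the building or who sent it.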

Risks CISOs must treat as critical

Data leaks are one of the most pressing threats. AI tools that are not managed by IT can process sensitive internal information and expose it to the outside world. In regulated industries, this can easily lead to non-compliance, especially when protected health data, financial information, or customer records are involved. Another concern is access sprawl.

AI platforms often create service accounts and connect via API keys. Without a central inventory, these credentials are easy to lose track of. That expands the attack surface and leaves no audit trail: if data is misused or leaked and there is no log of how it happened, an effective response becomes nearly impossible.
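As a rough illustration of what a central credential inventory could look like, the sketch below assumes you can export AI service accounts and API keys with last-used timestamps from your identity provider or secrets manager. The tool names, key IDs, and 90-day staleness threshold are illustrative assumptions.

```python
# Sketch: flag AI integration credentials that have gone unused and are
# candidates for revocation. The sample records are hypothetical.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)

ai_credentials = [
    {"tool": "marketing-copy-bot", "key_id": "key-01", "owner": "marketing",
     "last_used": datetime(2025, 6, 28, tzinfo=timezone.utc)},
    {"tool": "hr-resume-screener", "key_id": "key-02", "owner": "hr",
     "last_used": datetime(2025, 2, 14, tzinfo=timezone.utc)},
]


def stale_credentials(credentials, now=None):
    """Return credentials not used within STALE_AFTER."""
    now = now or datetime.now(timezone.utc)
    return [c for c in credentials if now - c["last_used"] > STALE_AFTER]


for cred in stale_credentials(ai_credentials):
    print(f"Revoke candidate: {cred['tool']} / {cred['key_id']} "
          f"(owner: {cred['owner']})")
```

The exact threshold matters less than having a single place where every AI-related key and service account is listed, owned, and periodically reviewed.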

What CISOs can do

1. Get visibility into all AI tools. Invest in discovery platforms that scan your network, identity systems, and SaaS usage to find the AI tools in use.
2. Group tools by risk level. Classify tools based on the data they can access; tools with access to highly sensitive data deserve greater scrutiny. Build policy accordingly.
3. Enforce least privilege. Limit the rights granted to AI applications. Audit API keys, centrally manage service accounts, and revoke unused tokens.
4. Integrate AI into your governance framework. Add AI tools to your asset inventory and, as with SaaS applications, require security reviews before approval.
5. Adopt real-time alerts. Use risk scoring tied to unusual AI usage patterns, and flag sensitive documents uploaded to unknown models (see the sketch after this list).
6. Educate employees. Shadow AI grows fastest when staff don’t realize it is a security risk. Run awareness campaigns and set clear usage policies.
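The sketch below illustrates the risk-scoring idea from step 5 under stated assumptions: events describing which documents go to which AI endpoints already arrive from an existing feed (for example, a CASB or DLP tool), and the endpoint allowlist, sensitivity labels, weights, and alert threshold are made up for the example.

```python
# Sketch: score AI usage events by data sensitivity and destination,
# and alert when a sensitive document heads to an unapproved model.
APPROVED_ENDPOINTS = {"api.internal-llm.example.com"}
SENSITIVITY_WEIGHT = {"public": 0, "internal": 2, "confidential": 5, "restricted": 8}
ALERT_THRESHOLD = 6


def risk_score(event: dict) -> int:
    """Combine the document's sensitivity label with the destination risk."""
    score = SENSITIVITY_WEIGHT.get(event["label"], 0)
    if event["endpoint"] not in APPROVED_ENDPOINTS:
        score += 4  # unknown or unapproved model endpoint
    return score


events = [
    {"user": "j.smith", "label": "restricted",
     "endpoint": "api.unknown-ai.example.net"},
    {"user": "a.lee", "label": "internal",
     "endpoint": "api.internal-llm.example.com"},
]

for event in events:
    if risk_score(event) >= ALERT_THRESHOLD:
        print(f"ALERT: {event['user']} sent a {event['label']} document "
              f"to {event['endpoint']}")
```

In practice, the labels, weights, and threshold would come from your own data classification policy rather than hard-coded constants.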
