Employees quietly bring AI to work and leave security behind

By versatileai | July 11, 2025

According to ManageEngine, IT departments are racing to implement AI governance frameworks, but many employees have already opened AI backdoors.

Unauthorized AI usage is increasing

Shadow AI is quietly penetrating organizations across North America, creating blind spots that even the most cautious IT leaders struggle to detect.

Despite formal guidelines and approved tools, shadow AI has become the norm rather than the exception: 70% of IT decision makers (ITDMs) have identified unauthorized AI use within their organizations.

60% of employees are using unapproved AI tools more than they did a year ago, and 93% of employees admit to entering information into AI tools without approval. 63% of ITDMs consider data leakage or exposure to be the biggest risk of shadow AI. Conversely, 91% of employees believe shadow AI poses no risk, some risk, or risk that is outweighed by its rewards.

Summarizing notes or calls (55%), brainstorming (55%), and analyzing data or reports (47%) are the top tasks employees tackle with shadow AI. Generative AI text tools (73%), AI writing tools (60%), and code assistants (59%) are the top AI tools ITDMs have approved for employee use.

“Shadow AI represents both the biggest governance risk and the greatest strategic opportunity for the enterprise,” said Ramprakash Ramamoorthy, Director of AI Research at ManageEngine. “The organizations that thrive will be those that address the security threats and reframe shadow AI as a strategic indicator of genuine business needs. IT leaders need to move from playing defense to proactively creating transparent, collaborative, and secure AI ecosystems that their employees feel empowered to use.”

Identifying shadow AI gaps

To turn shadow AI from a liability into a strategic advantage, IT leaders need to bridge the education, visibility, and governance gaps the report reveals. In particular, a lack of education on AI models, safe user behavior, and organizational impact has encouraged systemic misuse.

These blind spots continue to grow even as IT teams move to approve and integrate AI tools as quickly as possible. Shadow AI, meanwhile, multiplies because established governance policies are inadequately enforced.

85% of ITDMs report that employees adopt AI tools faster than their IT teams can assess them. 32% of employees have entered confidential client data into AI tools without confirming company approval, and 37% have entered private company data.

53% of ITDMs say their employees use personal devices for work-related AI tasks. And while 91% of organizations have implemented AI governance policies overall, only 54% report that they also actively monitor for misuse.

The future of AI at work

Proactively managing AI means harnessing employee initiative while maintaining security: capturing the business value employees have discovered through shadow AI, but delivering it through approved AI tools.

Among ITDMs, 63% advise integrating approved AI tools into standard workflows and business applications, 60% suggest implementing policies on acceptable AI use, and 55% suggest establishing a reviewed and approved tool list.

Among employees, 66% recommend setting fair and practical policies, 63% recommend providing official tools suited to their tasks, and 60% recommend better education on the risks.

“Shadow AI is a fatal flaw for most organizations,” said Sathish Sagayaraj Joseph, Regional Technology Director at ManageEngine. “Teams cannot manage risks they cannot see, nor enable business value that users do not disclose. Proactive AI governance unites IT and business experts in pursuit of common organizational goals. That means equipping employees to understand and avoid AI-related risks, and helping them use AI in ways that drive real business outcomes.”

