According to ManageEngine, IT departments are racing to implement AI governance frameworks, but many employees have already opened AI backdoors.
Unauthorized AI usage is on the rise
Shadow AI is quietly penetrating organizations across North America, creating blind spots that even the most cautious IT leaders struggle to detect.
Despite formal guidelines and approved tools, Shadow AI has become the norm rather than the exception: 70% of IT decision makers (ITDMs) have identified unauthorized AI use within their organizations.
60% of employees are using unapproved AI tools more than they did a year ago, and 93% of employees admit to entering information into AI tools without approval. 63% of ITDMs consider data leakage or exposure the primary risk of Shadow AI. Conversely, 91% of employees believe Shadow AI poses either no risk, some risk, or risk that is outweighed by its rewards.
Summarizing notes or calls (55%), brainstorming (55%), and analyzing data or reports (47%) are the top tasks employees perform with Shadow AI. GenAI text tools (73%), AI writing tools (60%), and code assistants (59%) are the top AI tools ITDMs have approved for employee use.
“Shadow AI represents both the biggest governance risk and the greatest strategic opportunity for the enterprise,” said Ramprakash Ramamoorthy, Director of AI Research at ManageEngine. “Thriving organizations will be those that address the security threats and reframe Shadow AI as a strategic indicator of genuine business needs. IT leaders need to shift from playing defense to proactively creating a transparent, collaborative, and secure AI ecosystem that their employees feel empowered to use.”
Identifying Shadow AI gaps
To turn Shadow AI from a liability into a strategic advantage, IT leaders need to bridge the education, visibility, and governance gaps revealed by the report. Specifically, a lack of training on AI models, safe usage behavior, and organizational impact has enabled systemic misuse.
Blind spots continue to grow within organizations, even as IT teams move to approve and integrate AI tools as quickly as possible. Meanwhile, Shadow AI multiplies because established governance policies are inadequately enforced.
85% of ITDMs report that employees adopt AI tools faster than their IT teams can evaluate them. 32% of employees have entered confidential client data into AI tools without confirming company approval, while 37% have entered private, internal company data.
53% of ITDMs say their employees use personal devices for work-related AI tasks. Although 91% of organizations have implemented AI governance policies, only 54% report that they enforce those policies and actively monitor for misuse.
The future of AI at work
Proactively managing AI means harnessing employee initiative while maintaining security: capturing the business value that employees discover through Shadow AI, but delivering it through approved AI tools.
Among ITDMs, 63% recommend integrating approved AI tools into standard workflows and business applications, 60% suggest implementing acceptable-use policies for AI, and 55% suggest establishing a list of reviewed and approved tools.
Among employees, 66% recommend setting fair and practical policies, 63% recommend providing official tools suited to their tasks, and 60% recommend better education on understanding the risks.
“Shadow AI is a critical blind spot for most organizations,” said Sathish Sagayaraj Joseph, Regional Technology Director at ManageEngine. “Teams cannot manage risks they cannot see, nor can they enable business value that users pursue out of sight. Proactive AI governance unites IT and business stakeholders in pursuit of common organizational goals. That means employees are equipped to understand and avoid AI-related risks, and are supported in using AI in ways that drive real business outcomes.”