Despite the rising use of artificial intelligence in the workplace, most organizations remain unprepared to manage its risks, according to ISACA's annual AI Pulse Poll, which surveyed 3,029 digital trust professionals around the world.
The 2025 poll found that 81% of respondents believe employees within their organizations are using AI, whether or not they are permitted to do so, yet only 28% of organizations have a formal AI policy.
Notably, while only 22% of organizations provide AI training to all staff, 89% of digital trust professionals say they will need AI training within the next two years to advance their careers or maintain their current roles.
The disconnect between widespread AI adoption and lagging oversight creates growing risk, particularly in the face of escalating threats like deepfakes. In fact, 66% of respondents expect deepfake cyberattacks to become more sophisticated within the next 12 months, yet only 21% of organizations are currently investing in tools to detect or mitigate them.
ISACA Board Director Jamie Norton said that as more employees embrace AI tools to become more efficient, the lack of formal policies and AI-specific cybersecurity measures leaves organizations more vulnerable to manipulation, reputational harm and data breaches.
“Although AI is already built into daily workflows, ISACA’s poll confirms that there is a significant gap in governance, policy and risk oversight,” Norton said. “AI-skilled security professionals are absolutely crucial to tackling the wide range of risks posed by AI, from misinformation and deepfakes to data misuse.
“AI is more than just a technical tool. It changes the way decisions are made, how data is used, and how people interact with information,” he added. “Leaders must now take action to establish the frameworks, safeguards and training needed to support responsible AI use.”
Some 68% of respondents said that using AI has saved them and their organizations time, with more than half (56%) expecting AI to have a positive impact on their careers over the next year.
This technology is used in a variety of ways, including:
Creating written content (52%)
Improving productivity (51%)
Automating repetitive tasks (40%)
Analyzing large amounts of data (38%)
Customer service (33%)
Progress is being made on AI policies and training, but there is still a way to go. Only 28% of organizations have implemented formal, comprehensive AI policies (up from 15% last year). 59% of organizations say they permit the use of generative AI (up from 42% last year), while 32% of respondents say their organizations have provided no AI training to employees.
Also, while many people use AI, their understanding of it is often shallow: 56% say they are only somewhat familiar with AI, while just 6% describe themselves as extremely familiar and 28% as very familiar.
61% are very or extremely worried that generative AI will be exploited by bad actors, while 59% believe AI-powered phishing and social engineering attacks have become more difficult to detect.
Furthermore, only 41% believe their organizations are adequately addressing ethical concerns in AI deployment, such as data privacy, bias and accountability, and only 30% are highly confident in their ability to detect AI-related misinformation.
Only 42% of respondents say AI risk is an immediate priority for their organization. The top AI-related risks they cited were:
Misinformation/disinformation (80%);
Privacy violations (69%);
Social engineering (63%);
IP loss (53%); and
Job displacement (40%).
Still, respondents recognize the importance of AI skills. Nearly a third say their organizations are increasing hiring for AI-related roles over the next 12 months, and 85% of respondents agree or strongly agree that many jobs will change because of AI.
84% of digital trust professionals believe they have beginner- or intermediate-level expertise in AI, while 72% believe AI skills are very or extremely important to professionals in their field. And 89% say they will need AI training within the next two years to advance their careers.