A new report from Deloitte warns that companies are deploying AI agents faster than safety protocols and safeguards can keep up, raising serious concerns about security, data privacy, and accountability.
The research shows that agent systems are moving from pilot to production so quickly that traditional risk management, designed for human-centric operations, struggles to keep pace with their security demands.
Despite rising adoption, only 21% of organizations have strict governance or oversight in place for AI agents. 23% of companies say they currently use AI agents, a figure expected to reach 74% over the next two years, while the proportion of companies that have not yet adopted the technology is expected to fall from 25% to just 5% over the same period.
Poor governance poses a threat
Deloitte does not argue that AI agents are inherently dangerous; rather, it says the real risks stem from unsuitable operating conditions and weak governance. When an agent operates as its own entity, its decisions and actions can easily become opaque. Without robust governance, mistakes become difficult to manage and almost impossible to prevent.
According to Ali Sarrafi, CEO and founder of Kovant, the answer is governed autonomy. “A well-designed agent with clear boundaries, policies, and definitions is managed the same way a business is managed, allowing employees to move quickly on low-risk work within clear guardrails, but escalating to a human if an action crosses a defined risk threshold.”
“With detailed action logging, observability, and human gatekeeping for high-impact decisions, agents stop being mysterious bots and become systems that can be inspected, audited, and trusted.”
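To make "governed autonomy" concrete, here is a minimal sketch in Python of an action gate that runs low-risk work automatically, escalates anything above a defined risk threshold to a human, and logs every decision. It is purely illustrative, not Kovant's or Deloitte's implementation; the risk scores, threshold, and names are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical risk threshold above which an action must go to a human.
RISK_THRESHOLD = 0.5

@dataclass
class ActionRequest:
    name: str          # e.g. "refund_customer"
    risk_score: float  # assumed to come from a separate risk-scoring step

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, action: ActionRequest, outcome: str) -> None:
        # Every decision is logged so it can later be inspected and audited.
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action.name,
            "risk_score": action.risk_score,
            "outcome": outcome,
        })

def execute_with_guardrails(action: ActionRequest, log: AuditLog) -> str:
    """Run low-risk actions autonomously; escalate high-risk ones to a human."""
    if action.risk_score > RISK_THRESHOLD:
        log.record(action, "escalated_to_human")
        return "pending_human_approval"
    log.record(action, "executed_autonomously")
    return "done"

log = AuditLog()
print(execute_with_guardrails(ActionRequest("update_crm_note", 0.1), log))  # done
print(execute_with_guardrails(ActionRequest("refund_customer", 0.8), log))  # pending_human_approval
```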
As Deloitte’s report suggests, the adoption of AI agents is set to accelerate in the coming years, and the advantage will go not to the earliest adopters but to those that deploy the technology with visibility and control.
Why AI agents need robust guardrails
AI agents may work well in controlled demos, but they struggle in real-world business environments where systems are fragmented and data can be inconsistent.
Sarrafi commented on the unpredictable nature of AI agents in these scenarios. “When agents are given too much context or scope at once, they are prone to hallucinations and unpredictable behavior.”
“In contrast, production-grade systems limit the range of decisions and contexts in which the model operates. The system breaks down operations into narrower focused tasks for individual agents, making behavior more predictable and easier to control. This structure also allows for traceability and intervention, so failures can be detected early and escalated appropriately rather than causing cascading errors.”
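One way to picture this decomposition is to give each agent only the tools and context its narrow task requires. The sketch below is an illustrative assumption, not a description of any particular product; the agent and tool names are invented.

```python
# Each agent only sees the tools for its own narrow task, so its behavior
# stays predictable and out-of-scope calls fail loudly instead of cascading.
ALLOWED_TOOLS = {
    "invoice_agent": {"read_invoice", "flag_discrepancy"},
    "email_agent": {"draft_reply"},
}

def invoke_tool(agent_name: str, tool: str, payload: dict) -> dict:
    """Reject any tool call outside the agent's declared scope."""
    if tool not in ALLOWED_TOOLS.get(agent_name, set()):
        raise PermissionError(f"{agent_name} is not permitted to call {tool}")
    return {"status": "ok", "tool": tool, "payload": payload}

print(invoke_tool("invoice_agent", "read_invoice", {"id": "INV-42"}))
# invoke_tool("invoice_agent", "draft_reply", {})  # raises PermissionError
```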
Accountability for insurable AI
When agents perform real actions within business systems while maintaining detailed action logs, the way you look at risk and compliance changes. Once every action is recorded, agent activity becomes transparent and evaluable, allowing organizations to closely examine agent behavior.
This kind of transparency is critical for insurers, who are reluctant to cover opaque AI systems. Detailed logs help underwriters see which actions an agent took and which controls applied, making risk easier to assess. Human oversight of risk-critical actions and auditable, repeatable workflows make agent systems far more tractable for risk assessment.
AAIF standards are a good first step
Shared standards, such as those being developed by the Agentic AI Foundation (AAIF), can help enterprises integrate disparate agent systems. However, current standards efforts focus on what is easiest to build rather than on what large organizations need to operate agent systems securely.
Sarrafi said businesses need standards to support operational management, including “permissions, approval workflows for high-impact actions, and auditable logging and observability to enable teams to monitor behavior, investigate incidents, and demonstrate compliance.”
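A hypothetical illustration of what such an operational standard might specify is a declarative policy like the one below. The schema and field names are invented for this example and do not represent an actual AAIF format.

```python
# Illustrative agent policy covering the three needs Sarrafi lists:
# permissions, approval workflows for high-impact actions, and logging.
AGENT_POLICY = {
    "agent": "procurement-assistant",
    "permissions": ["read:supplier_catalog", "create:purchase_request"],
    "approval_workflows": [
        # High-impact actions above a spend threshold need a named approver role.
        {"action": "create:purchase_request", "threshold_usd": 1000,
         "approver_role": "procurement-manager"},
    ],
    "observability": {
        "log_every_action": True,   # enables incident investigation
        "retention_days": 365,      # supports compliance evidence
    },
}
```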
Identity and authority are the first line of defense
To ensure safety in real-world business environments, it’s important to limit what an AI agent can access and what actions it can take. “When agents are given broad privileges or given too much context, they become unpredictable and create security and compliance risks,” Sarrafi said.
Visibility and monitoring are essential to keep agents operating within their limits; only then can stakeholders implement the technology with confidence. When every action is logged and reviewable, teams can see what happened, identify issues, and understand why events occurred.
Sarrafi added: “This visibility, combined with human oversight where it matters, transforms AI agents from arcane components to systems that can be inspected, replayed and audited. It also enables rapid investigation and remediation when issues arise, increasing trust among operators, risk teams and insurers alike.”
Deloitte’s blueprint
Deloitte’s strategy for secure AI agent governance defines boundaries around the decisions that agent systems can make. For example, an agent may operate with graded autonomy: at first it can only display or suggest information; next, it can perform limited actions, but only with human approval; and once it proves reliable in low-risk areas, it is allowed to operate automatically.
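Expressed in code, graded autonomy might look like the following minimal sketch. The tier names and the promotion rule are illustrative assumptions rather than anything prescribed by Deloitte's report.

```python
from enum import Enum

class AutonomyTier(Enum):
    SUGGEST_ONLY = 1     # agent may only display or suggest information
    HUMAN_APPROVAL = 2   # limited actions, each requiring human sign-off
    AUTONOMOUS = 3       # proven reliable in low-risk areas; acts alone

def promote(tier: AutonomyTier, successful_runs: int, min_runs: int = 100) -> AutonomyTier:
    """Advance one tier only after a track record of reliable low-risk runs."""
    if successful_runs >= min_runs and tier is not AutonomyTier.AUTONOMOUS:
        return AutonomyTier(tier.value + 1)
    return tier

tier = promote(AutonomyTier.SUGGEST_ONLY, successful_runs=120)
print(tier)  # AutonomyTier.HUMAN_APPROVAL
```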
Deloitte’s Cyber AI Blueprint suggests building a governance layer, along with a roadmap of policy and compliance capabilities, into your organization’s management. Ultimately, the safe use of agentic AI relies on governance structures that track AI usage and risk and embed oversight into daily operations.
Providing training to employees is another aspect of secure governance. Deloitte recommends teaching employees what not to share with AI systems, what to do if an agent goes off track, and how to identify abnormal and potentially dangerous behavior. If employees don’t understand how AI systems work and the risks they carry, security controls can be weakened, even unintentionally.
Robust governance, controls, and shared literacy are fundamental to the secure deployment and operation of AI agents, enabling safe, compliant, and accountable performance in real-world environments.
(Image source: “Global Hawk, NASA’s New Remote-Controlled Plane” by NASA Goddard Photo and Video is licensed under CC BY 2.0.)

