For the past three years, generative AI has dominated the conversation about enterprise technology. But today, the spotlight is shifting to agentic AI: innovations that promise even greater efficiency and automation.
The Possibilities of Agentic AI
Unlike traditional software that executes predefined instructions, agentic systems make adaptive, autonomous decisions based on inference. As a result, agentic AI can automate complex enterprise workflows, interact directly with customers, and learn from data to adapt as information changes.
“Agentic AI creates more autonomy. You can build systems and assign tasks, and these agentic systems can take action. They can plan, reason, and execute complex workflows.”
The Challenges of Scaling Agentic AI
However, the promised automation and efficiency gains of agentic AI can only be realized responsibly, with careful attention to regulatory and ethical obligations and the need for robust cybersecurity.
In a new white paper, “Secure, Govern, and Respond: Scaling Agentic AI with Trust and Confidence,” PwC argues that agentic AI must operate within a well-defined ethical and governance framework. Key considerations include:
Data governance. Autonomy without strong data governance creates new risks. High-quality data combined with embedded guardrails and safeguards is essential to the safe and resilient adoption of AI agents.

Skills. Companies need skilled employees who understand where agentic systems are deployed, when to hand workloads back to humans, and the ethical requirements of responsible deployment.

Compliance. AI tools can make errors, and there is also a risk of leaking sensitive data, especially when companies connect their agentic systems to data sources without proper controls. Increasingly, these controls will be mandated through regulations such as the EU AI Act and DORA.

Cybersecurity. Cyberthreat actors are already using agentic AI to scale their operations. In 2025, for example, ransomware and BEC groups made widespread use of AI-generated email, audio, and video in social engineering to gain initial access. (1)

Resilience. If agents are not strictly constrained, they may access tools beyond their mandate or initiate unintended actions. Even minor errors can cascade into serious downstream consequences that threaten operational resilience.
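The resilience point above — constraining which tools an agent may reach — can be made concrete with a small sketch. All names below are illustrative, not from the PwC paper: each agent carries an explicit allow-list of tools, and any out-of-scope request is blocked and logged rather than executed.

```python
# Minimal sketch of a tool allow-list guardrail for an agent.
# Names are hypothetical; real agent frameworks offer richer policy engines.

class ToolNotPermitted(Exception):
    """Raised when an agent requests a tool outside its mandate."""

class GuardedAgent:
    def __init__(self, name, allowed_tools):
        self.name = name
        self.allowed_tools = dict(allowed_tools)  # tool name -> callable
        self.audit_log = []                       # record of every request

    def invoke(self, tool_name, *args, **kwargs):
        if tool_name not in self.allowed_tools:
            self.audit_log.append(("blocked", tool_name))
            raise ToolNotPermitted(f"{self.name} may not call {tool_name!r}")
        self.audit_log.append(("allowed", tool_name))
        return self.allowed_tools[tool_name](*args, **kwargs)

# Usage: an invoice agent may read invoices but not issue payments.
agent = GuardedAgent("invoice-reader", {"read_invoice": lambda i: f"invoice {i}"})
print(agent.invoke("read_invoice", 42))   # permitted
try:
    agent.invoke("issue_payment", 42)     # outside its mandate
except ToolNotPermitted as e:
    print("blocked:", e)
```

The audit log matters as much as the block itself: it is what lets a security team detect an agent repeatedly probing beyond its permissions before a misstep cascades.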
Building Trust at Scale
To overcome these challenges, CISOs must first review their governance frameworks, including AI policies, to determine whether they are ready for agentic AI.
“Security and governance for AI agents are not optional. They should be built in from the start,” recommends Narayan Kumar Gupta, senior manager at PwC Ireland and lead of the Global Microsoft Security Alliance.
This includes introducing agentic technology through low-risk proofs of concept, using guardrails to control the behavior of AI agents, especially when processing sensitive data, and keeping humans in the loop for continuous monitoring.
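The guardrails-plus-human-in-the-loop pattern described above can be sketched as follows. The sensitivity check and function names are hypothetical: an agent's proposed action on data that looks sensitive is escalated to a human reviewer instead of being executed automatically.

```python
# Sketch: route agent actions touching sensitive data to a human reviewer.
SENSITIVE_MARKERS = ("ssn", "iban", "password")  # illustrative markers only

def is_sensitive(payload: str) -> bool:
    """Crude keyword check standing in for a real data-classification step."""
    text = payload.lower()
    return any(marker in text for marker in SENSITIVE_MARKERS)

def execute_with_oversight(action, payload, human_approve):
    """Run `action` directly, unless the payload looks sensitive,
    in which case a human reviewer must approve it first."""
    if is_sensitive(payload) and not human_approve(payload):
        return {"status": "escalated", "executed": False}
    return {"status": "done", "executed": True, "result": action(payload)}

# Usage: the reviewer callback auto-rejects here, simulating a pending review.
result = execute_with_oversight(lambda p: p.upper(),
                                "customer IBAN: DE89...",
                                lambda p: False)
print(result["status"])  # -> escalated
```

In a production system the keyword check would be replaced by a proper data-classification service, but the control flow — classify, escalate, only then execute — is the essence of the pattern.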
CISOs can explore Microsoft’s AI and security ecosystem to support secure agent deployments. This includes, among other tools, AutoGen and the Azure AI Agent Service in Azure AI Foundry for tailored experimentation.
Using Agentic AI for Growth
Agentic AI is accelerating digital transformation, empowering pioneers to reshape their industries and broaden their customer impact. But success doesn’t come from innovation alone; it also depends on preparation. While some foundations are in place through toolsets and frameworks from partners such as PwC and Microsoft, companies need to ensure their teams are fully equipped to unlock the potential of agentic AI while consistently adhering to security and governance best practices.
Download PwC’s new white paper to learn more about scaling agentic AI with confidence.
PwC: This content is for general information purposes only and should not be used as a substitute for consultation with professional advisors.
©2025 PwC. Unauthorized reproduction is prohibited.
(1) PwC, “Cyber Threats 2024: A Year in Retrospect,” 2024.

