Many organizations in Asia Pacific have spent the past 18 months experimenting with artificial intelligence (AI). Looking ahead, it is clear that we are moving from pilot to implementation as technology leaders incorporate AI-enabled capabilities to accelerate growth. According to IDC, global spending on technology supporting AI is expected to exceed $749 billion by 2028. Notably, IDC reports that 67% of the $227 billion in AI spending projected for 2025 will come from companies embedding AI capabilities into their core operations, outpacing other corporate investments.

As investment in AI solutions accelerates, particularly in generative AI (GenAI) tools such as Microsoft Copilot, so does the need for more resilient solutions and for security frameworks to match. Rather than simply extending current cybersecurity practices to accommodate new AI technologies, security teams must first assess the risks of using these tools: identifying new network vulnerabilities and the additional controls required, without putting the brakes on innovation.
Zero Trust is a foundational approach to cybersecurity that has emerged as a key strategy for protecting sensitive data amid shifting permissions and escalating threats. But is a Zero Trust approach still enough in a threat landscape where the latest wave of GenAI tools, assistants, and copilots is increasingly embedded across the enterprise?
Understanding Zero Trust Security
As security leaders know, Zero Trust prioritizes the security of data, systems, and assets by granting access only when necessary. This approach is not just theoretical. It is being actively implemented in a variety of industry sectors, reflecting the growing awareness of the need for adaptive security measures in an increasingly complex digital environment.
If done correctly, the benefits of Zero Trust can be significant. According to Forrester, adopting a Zero Trust approach fosters growth by strengthening brand trust, accelerating alignment with emerging technologies, and improving customer and employee experiences. It also helps bridge the gap between CISOs and those advocating for greater investment and innovation in AI. As Forrester puts it, with Zero Trust security becomes a business amplifier, repositioning the CISO as a sought-after partner to the business.
However, Zero Trust is not without potential pitfalls. First, implementing this security framework is not an easy task. Organizations must comprehensively re-evaluate access management, user validation, and system architecture to establish new policies and protocols. The introduction of GenAI tools further complicates this transition by requiring a more complex approach to security.
AI Challenges: The Risks of Generative AI
While Zero Trust provides a solid framework, the rise of GenAI tools introduces new security issues. AI assistants designed to improve productivity can inadvertently expose organizations to heightened risk, including data breaches, unauthorized access, and misuse of sensitive information. Many organizations do not fully appreciate that overly permissive data access greatly increases the likelihood of cybercriminals infiltrating sensitive systems, an exposure that CISOs must actively work to mitigate.
As Thomson Reuters has pointed out, the proliferation of GenAI and large language models (LLMs) creates an even more complex web of liability: attacks such as jailbreaking and prompt injection can compromise privacy protections and open the door for malicious actors to wreak havoc, exploit weaknesses, and expose personal data.
According to Vectra AI’s 2024 threat detection and response research report, The Defender’s Dilemma, approximately 54% of SOC practitioners in APAC believe that security vendors send pointless alerts to avoid liability for breaches, and 45% distrust that their security vendors’ tools work as needed. This highlights the urgent need for effective security measures that address both overwhelming alert volumes and the erosion of trust they breed, challenges that GenAI systems only compound.
Organizations need to scrutinize not only who has access to their data, but also what AI systems like Microsoft Copilot can access and how they manage this information flow.
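One way to reason about this is that an assistant should only ever operate within the intersection of the requesting user's permissions and an explicit allow-list scoped to the assistant itself. The sketch below illustrates that idea in Python; all names and data are hypothetical, and a real deployment would read permissions from an identity provider rather than an in-memory table.

```python
# Hypothetical sketch: constrain what an AI assistant can reach to the
# intersection of the user's own permissions and an assistant-specific
# allow-list. Names and data sources are illustrative only.

USER_PERMISSIONS = {
    "alice": {"hr-policies", "sales-pipeline", "payroll"},
}

# Deliberately narrower than what any single user can reach.
ASSISTANT_ALLOWED_SOURCES = {"hr-policies", "sales-pipeline"}

def effective_ai_scope(user: str) -> set[str]:
    """Data sources the assistant may touch on this user's behalf."""
    return USER_PERMISSIONS.get(user, set()) & ASSISTANT_ALLOWED_SOURCES

# "payroll" is excluded even though alice herself can access it.
print(sorted(effective_ai_scope("alice")))
```

The design point is that the assistant's scope is subtractive, never additive: it can only narrow a user's access, so overly permissive user grants are not silently amplified by the AI layer.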
Integrating Zero Trust Principles into Your AI Framework
To effectively address the complexities of AI security, organizations can tailor their Zero Trust principles specifically for GenAI. This requires a multifaceted approach that incorporates architectural design, data management, and strict access controls. The main considerations for this framework are:
Authentication and Authorization: Implement robust user verification processes and limit access to the minimum necessary. This principle applies equally to AI systems, which must undergo rigorous identity verification before accessing sensitive data.
Validate data sources: Organizations must validate the sources from which their AI systems collect information. This protects data integrity and reduces risks associated with data manipulation and misuse.
Process monitoring: Continuous monitoring of AI processes is essential to identify anomalies and potential security breaches. Maintaining surveillance allows organizations to detect abnormal behavior and respond quickly.
Output screening: Implement mechanisms to scrutinize the output produced by your AI system. This prevents the spread of sensitive information or malicious content.
Activity Audit: Regularly audit the activity of your AI systems to maintain accountability and transparency. These audits are essential to understanding how data is accessed and utilized in a GenAI environment.
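The principles above can be pictured as stages in a single request pipeline. The following Python sketch is purely illustrative: the token format, trusted-source list, and redaction rule are assumptions, and a production system would integrate an identity provider, a DLP engine, and a SIEM, with continuous process monitoring running alongside the request path rather than inside it.

```python
# Illustrative pipeline applying the controls above to one GenAI request:
# authenticate the caller, validate the data source, screen the output,
# and write an audit record. All names and rules are hypothetical.

import re
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit")

TRUSTED_SOURCES = {"internal-wiki", "crm"}                 # vetted repositories
SENSITIVE_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # SSN-like strings

def handle_request(user_token: str, source: str, draft_output: str) -> str:
    # Authentication and authorization: verify caller identity first.
    if not user_token.startswith("verified:"):
        raise PermissionError("caller failed identity verification")

    # Validate data sources: only pull from vetted repositories.
    if source not in TRUSTED_SOURCES:
        raise ValueError(f"untrusted data source: {source}")

    # Output screening: redact sensitive patterns before release.
    screened = SENSITIVE_PATTERN.sub("[REDACTED]", draft_output)

    # Activity audit: record who accessed what and whether redaction fired.
    log.info("user=%s source=%s redacted=%s",
             user_token, source, screened != draft_output)
    return screened

print(handle_request("verified:alice", "crm", "Customer SSN is 123-45-6789"))
```

Ordering matters here: identity and source checks fail closed before any data is touched, while screening and auditing run on every successful request, so the audit trail reflects exactly what left the system.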
By focusing on these principles, organizations can build a security posture that addresses the unique challenges posed by GenAI. Content-layer security has emerged as a critical factor, going beyond traditional access controls to assess what data AI systems can access, process, and share.
The Way Forward in the World of AI
As digital innovation continues to evolve with the integration of AI technologies, the need for a robust security framework cannot be overstated. While Zero Trust security provides a strong foundation, its principles must be adapted to the complexity introduced by GenAI.
By taking a proactive, data-centric approach, organizations can strengthen their security posture and protect sensitive information from an ever-evolving range of threats. In this era of digital transformation, vigilance and innovation in security practices are not only beneficial, but essential to protect the integrity and trust of your organization.
Sharat Nautiyal is Director of Security Engineering at Vectra AI, a cybersecurity company specializing in AI-powered threat detection and response solutions. The views in this article are personal and do not represent the views of the organization.