Despite record adoption rates last year, interest in artificial intelligence continues to accelerate. AI is increasingly integrated into core business operations, and many leaders are looking to use it to fundamentally rethink their business processes. Today, 72% of companies have integrated AI into at least one business function, and 65% have specifically embraced generative AI. Organizations are also beginning to build their own foundation models and AI applications: McKinsey reports that 47% of companies significantly customized existing models or developed their own models last year, with many choosing to migrate their AI stacks to the cloud for greater scalability and stronger security controls. Generative AI is estimated to account for half of the growth in cloud services revenue in 2024. Yet in the course of cloud migration, organizations often overlook critical security components. In this article, we explore the key security considerations businesses should prioritize when migrating their AI stack to the cloud.
AI demands security at every layer of the stack
An AI stack refers to the layers of technology that enable an AI system to work, covering everything from the chips that run AI computations to the application itself. The main layers include:
Application Layer: This is the layer that users see and interact with, whether the model is delivered as a mobile app or a web-based chatbot. Application programming interfaces (APIs) may also connect to larger AI systems to fulfill user requests.
Orchestration Layer: This layer manages and automates the deployment, scaling, and execution of AI workloads across cloud environments.
Data Layer: To improve the accuracy and relevance of AI-generated responses, organizations can use retrieval-augmented generation (RAG) to ground commercial models in their own data, which typically resides in storage accounts (a minimal sketch of this pattern follows the list). Because AI workloads are highly data- and compute-intensive, data storage must be secure and scalable enough for workloads to access, ingest, process, and store data efficiently and effectively.
Infrastructure Layer: This layer includes infrastructure components such as virtual machines (VMs), web applications, and containers.
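To make the RAG pattern mentioned above concrete, here is a minimal, self-contained sketch in Python. The in-memory document list, the keyword-overlap retrieval, and the prompt format are illustrative assumptions: a production deployment would retrieve documents from a cloud storage account or vector index and send the grounded prompt to a hosted model API.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve relevant
# organizational documents and use them to ground the prompt sent to a model.
# The store, scoring, and prompt format are simplified stand-ins for this example.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


def retrieve(query: str, store: list[Document], top_k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap (placeholder for vector search)."""
    query_terms = set(query.lower().split())
    scored = [(len(query_terms & set(d.text.lower().split())), d) for d in store]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]


def build_grounded_prompt(query: str, store: list[Document]) -> str:
    """Ground the model's answer in retrieved organizational data."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in retrieve(query, store))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    docs = [
        Document("policy-01", "All storage accounts must enable encryption at rest."),
        Document("policy-02", "Inference VMs are patched on a weekly schedule."),
    ]
    print(build_grounded_prompt("How are storage accounts encrypted?", docs))
```

Because the retrieved documents come from the organization's own storage, the data layer that holds them becomes part of the AI attack surface, which is why the sections below focus on securing it.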
Integrating robust security protections within and across the layers of the AI stack creates the foundation for trustworthy AI. However, the data and infrastructure layers tend to be inadequately protected during cloud migration. Organizations need vendors that provide both security and cloud capabilities to fill this gap.
Cloud-native approach enables multi-layered AI security
When building an end-to-end security approach tailored to the demands of modern AI workloads during the transition, organizations must incorporate a range of security practices, including vulnerability management, data security, identity and access control, and real-time monitoring and threat detection. These capabilities need to scale to the performance demands of AI systems while meeting the unique security needs of different AI workloads. There are two types of AI workloads that organizations need to protect:
Data Processing Workloads: AI environments are data-rich, which creates critical database and storage security challenges. Traditional storage protections struggle to handle the vast volumes of data that AI requires, and attackers frequently target these data stores with injection and brute-force attacks. Threat actors also use malware against cloud storage accounts to compromise valuable data.
Security for AI storage, databases, and data processing workloads must:
Identify and mitigate file-level threats, and detect incidents that specifically target sensitive data. Use threat intelligence to analyze risks to sensitive data and accelerate remediation.
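As a rough illustration of those requirements, the following Python sketch gates objects before an AI pipeline ingests them, flagging content that matches sensitive-data patterns. The patterns, the quarantine behavior, and the sample blobs are assumptions for the example; a real deployment would rely on managed data classification, malware scanning, and threat intelligence rather than hand-written rules.

```python
# Illustrative file-level check: scan stored objects before an AI workload ingests
# them and quarantine anything that looks like sensitive data. Patterns and the
# quarantine step are placeholders, not a production detection ruleset.

import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}


def classify_blob(content: str) -> list[str]:
    """Return the categories of sensitive data found in a stored object."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(content)]


def gate_ingestion(blobs: dict[str, str]) -> list[str]:
    """Allow only objects with no sensitive findings into the training or RAG pipeline."""
    allowed = []
    for name, content in blobs.items():
        findings = classify_blob(content)
        if findings:
            print(f"quarantine {name}: {', '.join(findings)}")  # hand off to remediation
        else:
            allowed.append(name)
    return allowed


if __name__ == "__main__":
    sample = {
        "notes.txt": "Quarterly planning notes, nothing sensitive.",
        "export.csv": "user@example.com, 4111 1111 1111 1111",
    }
    print("ingest:", gate_ingestion(sample))
```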
Training and Inference Workloads: Security for the servers and VMs that run AI tasks is just as important. Legacy server security solutions struggle to handle the complexity and demands of AI workloads without degrading performance.
To enhance AI security and protect your VMs, server security must:
Manage risk and adhere to industry standards with comprehensive oversight and centralized policy enforcement. Respond to attacks in real time while minimizing posture risks through efficient vulnerability scanning, access control, and privilege management, addressing emerging cloud VM threats to defend at the speed of AI training and inference.
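The sketch below illustrates the idea of centralized policy enforcement for AI compute, evaluating a few VM records against a simple baseline in Python. The baseline values, VM fields, and fleet data are hypothetical; in practice this assessment would come from a cloud security posture management service with continuous vulnerability scanning rather than a local script.

```python
# Hedged sketch of centralized policy checks applied to VM metadata. Baseline rules
# and VM records are illustrative assumptions, not a real compliance standard.

from dataclasses import dataclass


@dataclass
class VMRecord:
    name: str
    os_patch_age_days: int
    open_ports: set[int]
    disk_encrypted: bool


BASELINE = {
    "max_patch_age_days": 30,        # assumed patch SLA for AI training/inference hosts
    "forbidden_ports": {22, 3389},   # no direct SSH/RDP exposure on GPU fleets
    "require_disk_encryption": True,
}


def evaluate(vm: VMRecord) -> list[str]:
    """Return policy violations for one VM so they can be prioritized and remediated."""
    violations = []
    if vm.os_patch_age_days > BASELINE["max_patch_age_days"]:
        violations.append(f"patches {vm.os_patch_age_days} days old")
    exposed = vm.open_ports & BASELINE["forbidden_ports"]
    if exposed:
        violations.append(f"management ports exposed: {sorted(exposed)}")
    if BASELINE["require_disk_encryption"] and not vm.disk_encrypted:
        violations.append("disk encryption disabled")
    return violations


if __name__ == "__main__":
    fleet = [
        VMRecord("train-gpu-01", os_patch_age_days=12, open_ports={443}, disk_encrypted=True),
        VMRecord("infer-vm-07", os_patch_age_days=45, open_ports={443, 3389}, disk_encrypted=False),
    ]
    for vm in fleet:
        issues = evaluate(vm) or ["compliant"]
        print(vm.name, "->", "; ".join(issues))
```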
A secure, AI-driven migration requires a security strategy for every type of AI workload. Taking an end-to-end security platform approach and integrating security with cloud providers increases efficiency. Cloud-native security doesn't just provide system-wide visibility and centralized tooling that reduce the complexity of AI security without affecting workload performance; it also streamlines the migration process and reduces siloed management. In addition, integration at each infrastructure layer enables more intelligent and accurate threat detection and response, including attack path analysis and automated responses. As IT teams work to adapt to evolving AI security needs during the cloud transition, visualizing security requirements across the layers of the AI stack can bring clarity. To learn how a cloud-native approach reduces complexity while enabling more sophisticated threat detection and response, visit Microsoft Defender for Cloud and explore Microsoft Security solutions.