killer whale security
Jason Patterson, Sr. WW Security PSA – AWS
by Deborah Galea, Product Marketing Director – Orca Security
The integration of artificial intelligence (AI) technology is gaining momentum across a variety of industries, bringing a variety of benefits to businesses. The AI software market is expected to grow 19.1% annually, reaching $298 billion by 2027, according to Gartner Software Forecasts. Amazon Bedrock and Amazon SageMaker, two major AI services offered by AWS, are popular due to their wide popularity.
However, it is important for organizations to implement robust AI security measures to mitigate potential risks such as model poisoning and sensitive data breaches. While these risks are common to cloud-based assets, others are specific to AI models and their deployment.
This blog provides insight into the AI risks that organizations should be aware of and provides guidance on how Orca Security provides effective strategies to mitigate and prevent these potential threats. .
What risks are there with AI services?
Risks to AI models and services such as Amazon Bedrock and Amazon SageMaker are related to the data used to train the models. If an attacker can access and tamper with the training data, they can affect the output. If the data used to train the model contains sensitive data, a malicious party can also manipulate the model and feed it this data.
As shown in Figure 1, Orca uses patented SideScanning™ technology to read AWS workloads without the need for an agent. Once integrated into your Amazon Web Services (AWS) environment with read-only permissions, Orca begins scanning all AWS asset configurations, data storage, IAM resource configurations, network layouts, and security settings. This provides visibility into the specific configurations used by Amazon SageMaker and Amazon Bedrock, as well as data storage, data pipelines, users, and roles used within your AI workloads.
Figure 1: Orca uses AWS APIs and snapshot scans to generate a comprehensive view
The Open Web Application Security Project (OWASP) Foundation has published the OWASP Machine Learning Security Top Ten list, which provides an overview of the top 10 security issues in machine learning systems. Additionally, Orca Security’s recent 2024 State of AI Security Report provides insight into these risks and their prevalence in today’s cloud operating environments.
Below is an explanation of the AI-specific risk terminology covered in this blog.
Prompt injection: When an attacker enters malicious prompts into a large-scale language model (LLM) and manipulates the LLM to perform unintended actions, such as leaking sensitive data or spreading misinformation. Data poisoning: Malicious actors intentionally corrupt AI training datasets and machine learning models to reduce the accuracy of LLM output. Model poisoning: When an attacker manipulates an AI model to introduce vulnerabilities, biases, or backdoors that can compromise the model’s security, effectiveness, or ethical behavior. Model inversion: This occurs when an attacker reconstructs sensitive information or original training data from the output of the model.
For more information on the different types of AI risks, the OWASP Foundation publishes the OWASP Machine Learning Security Top 10 List, which provides an overview of the top 10 security issues for machine learning systems.
Orca Security’s recent 2024 State of AI Security report also provides the insights you need to better understand these risks and how prevalent they are already in today’s cloud operating environments. .
AI security challenges
So what are the top five challenges in keeping AI models and data secure?
Pace of innovation: The speed of AI development continues to accelerate, and AI innovations are introducing features that promote ease of use over security. Shadow AI: Security teams don’t always know which AI models are being used and can’t detect shadow AI. Datasets from many sources: Training datasets can be aggregated from many sources, some of which will be public and others private, so all datasets should be combined with the most sensitive data source set. The same level of security precautions (which may be prohibitive) are required. Early technology: AI security is in its infancy and lacks comprehensive resources and experienced experts. Organizations often need to develop their own solutions to secure AI services without external guidance or examples. Resource control: Deploying new services often involves misconfiguring resources. Users often overlook securely configuring settings related to roles, buckets, users, and other assets, creating risks to their environments.
AI security posture management (AI-SPM)
AI-SPM is a new category of solutions that helps organizations protect their machine learning (ML) and AI systems, models, packages, and data infrastructure.
AI-SPM solutions detect risks applicable to other AWS cloud assets, such as misconfigurations, excessive privileges, insecure secrets, and internet exposure. However, AI-SPM solutions also cover use cases specific to AI security that can lead to unintended exposure through legitimate use of AI services, such as detecting sensitive data in training sets.
AI-SPM also helps ensure compliance with regulatory obligations and industry standards. This includes ongoing compliance monitoring and reporting.
How does AI-SPM work?
The first step of AI-SPM is to discover all AI deployments in your AWS environment. This includes a detailed inventory of all AI projects, AI models, and AI packages used by Amazon Bedrock and Amazon Sagemaker.
Then, detect risks that compromise your AI models and prioritize them according to likelihood of compromise and potential business impact.
Finally, AI-SPM solutions provide remediation options to reduce risk.
AI-SPM solutions perform risk detection throughout the software development lifecycle, allowing you to address issues early in development before they reach production.
About Orca AI-SPM on AWS
Orca Security’s AI-SPM feature uses patented agentless SideScanning™ technology to provide AI models with the same visibility, risk insights, and deep data as AWS resources, while supporting unique AI use cases. It also corresponds to
Orca’s AI-SPM solution covers over 50 AI models and packages used within Amazon Bedrock and Amazon SageMaker, allowing you to confidently build AI-enabled solutions while maintaining visibility and security of your technology stack. can be built.
Figure 2: AWS assets running vulnerable AI software applications
Orca platform features
The Orca platform allows you to obtain a complete AI and ML inventory and bill of materials (BOM) for your AWS environment. Figure 3 below shows how the Orca platform provides a complete view of all AI models (both managed and unmanaged) (including shadow AI) deployed in your AWS environment. It shows.
Figure 3: Inventory of all AI models deployed on the AWS cloud
Orca ensures that your AI models are securely configured, covering network security, data protection, access control, and IAM. For example, Orca uses AWS Key Management Service (KMS) to enable data-at-rest encryption with customer-managed keys (CMKs) to protect sensitive information used and generated by your models. Check. Figure 4 below shows how the Orca platform handles AI misconfigurations.
Figure 4: Orca checks whether the Amazon Bedrock custom model is encrypted with the CMK
Orca also has the ability to detect sensitive data within AI models. If your AI model or training data contains sensitive information, Orca will alert you so you can take appropriate steps to prevent unintended exposure. For example, if your Amazon Bedrock deployment uses training data from an Amazon Simple Storage Service (S3) bucket that contains personally identifiable information (PII), Orca sends alerts to notify you of potential risks.
In addition to identifying sensitive data, the Orca platform detects when keys and tokens for AI services and software packages are not securely exposed to code repositories.
For each detected risk, Orca provides automated guided remediation options, including AI-generated code that can be copied and pasted into a command line interface or Infrastructure as Code (IaC) provisioning tool (see below). (see Figure 5).
Figure 5: Amazon Bedrock built-in AI engine automatically generates remediation steps and code
conclusion
The rapid pace of AI innovation, combined with an overemphasis on technology’s infancy and speed of development, presents organizations with significant security challenges, but addressing these risks requires a combination of traditional and new solutions. Both are available. When you deploy Orca Security’s AI-SPM capabilities on AWS when using Amazon Bedrock or Amazon SageMaker, you get the benefits of AI services without sacrificing security.
To learn more, request a demo or visit Orca Security on AWS Marketplace.
.
.
Orca Security – AWS Partner Spotlight
Orca Security is an AWS Specialization Partner that brings context-aware security and compliance across clouds and across workload depth to AWS without the gaps in coverage, alert fatigue, and operational costs of agent-based solutions. Provide.
Contact Orca Security | Partner Overview | AWS Marketplace