This article is part of VentureBeat's special issue, "Cyber Resilience Playbook: Navigating Threats." Read more from this special issue here.
Generative AI raises thorny security questions, and as businesses move into the world of agents, those safety issues multiply.
When AI agents enter a workflow, they need access to sensitive data and documents to do their jobs, which poses a significant risk for many security-minded enterprises.
"The increased use of multi-agent systems introduces new attack vectors and vulnerabilities that could be exploited if they aren't properly secured from the start," one security expert warned. "But the impact and harm of those vulnerabilities could be even bigger because of the growing number of connection points and interfaces that multi-agent systems have."
Why AI agents pose such high security risks
AI agents (autonomous AI that performs actions on behalf of users) have exploded in popularity over the past few months. Ideally, they can be plugged into tedious workflows and handle any task, from finding information in internal documents to making recommendations that human employees can then act on.
However, they present an interesting problem for enterprise security professionals: agents need access to the data that makes them effective, without accidentally opening or sending private information to others. And with agents performing more of the tasks human employees once did, questions of accuracy and accountability come into play, potentially becoming a headache for security and compliance teams.
AWS CISO Chris Betz told VentureBeat that retrieval-augmented generation (RAG) and agentic use cases are "a fascinating and interesting angle" in security.
"Organizations need to think about what default sharing looks like in their organization, because an agent will find anything through search that supports its mission," said Betz. "If documents are overshared, you need to think about the default sharing policies within your organization."
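As a rough sketch of the default-sharing idea, an agent's search results can be filtered by a sharing policy before the agent ever sees them. The document tags, group names and filtering logic below are hypothetical, not any vendor's actual implementation:

```python
# Hypothetical sketch: agent search results are filtered by each document's
# sharing scope, so documents overshared by default don't leak into answers.
DOCS = [
    {"id": "roadmap.doc", "shared_with": "org"},      # shared org-wide by default
    {"id": "salaries.xls", "shared_with": "hr-only"}, # restricted to one group
]

def agent_search(requester_group: str) -> list:
    """Return only the documents the requesting group is allowed to see."""
    return [d["id"] for d in DOCS
            if d["shared_with"] in ("org", requester_group)]

print(agent_search("engineering"))  # ['roadmap.doc']
print(agent_search("hr-only"))      # ['roadmap.doc', 'salaries.xls']
```

The key design choice is that the filter runs before retrieval, so a misconfigured default (every document tagged "org") is the failure mode Betz is warning about.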
Security professionals must ask whether an agent should be considered a digital employee or a piece of software. How much access should an agent have? How should agents be identified?
AI agent vulnerabilities
Gen AI has made many enterprises more aware of potential vulnerabilities, but agents could open them up to even more issues.
Attacks that affect single-agent systems, such as data poisoning, prompt injection and social engineering, can all influence an agent's behavior.
Companies need to pay attention to what their agents have access to in order to ensure data security holds.
Betz noted that many of the security issues surrounding human employee access also apply to agents; it comes down to ensuring that the right people, and only the right people, have access to the right things. He added that when it comes to agentic workflows with multiple steps, "each of those stages is an opportunity" for hackers.
Give your agent an identity
One answer is to issue a specific access identity to the agent.
A world where models reason about problems over the course of several days is "a world in which we need to be thinking about recording the identity of the agent, as well as the identity of the human responsible for that agent, everywhere in the organization," said Jason Clinton, CISO of Anthropic.
Identifying human employees is something enterprises have done for a very long time. Employees have specific jobs; they have email addresses they use to sign into accounts and be tracked by IT administrators; they have physical laptops with accounts that can be locked. They get individual permissions to access some data.
This kind of employee-level access and identification can be deployed for agents as well.
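A minimal sketch of what an agent-specific identity might look like, assuming a simple allow-list model tied to a responsible human owner. The class, field names and resource labels are illustrative, not any vendor's actual scheme:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each agent gets its own identity, tied to a
# responsible human owner, with an explicit allow-list of resources,
# mirroring how employee accounts are provisioned.
@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                       # the human responsible for this agent
    allowed_resources: set = field(default_factory=set)

    def grant(self, resource: str) -> None:
        """Explicitly add one resource to the agent's allow-list."""
        self.allowed_resources.add(resource)

    def can_access(self, resource: str) -> bool:
        """Default-deny: anything not granted is refused."""
        return resource in self.allowed_resources

agent = AgentIdentity(agent_id="invoice-bot-01", owner="alice@example.com")
agent.grant("finance/invoices")

print(agent.can_access("finance/invoices"))  # True
print(agent.can_access("hr/salaries"))       # False
```

Recording the owner alongside the agent ID is what makes Clinton's accountability point workable: every action can be traced back to a person.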
Both Betz and Clinton believe this process can prompt enterprise leaders to rethink how they grant users access to information. It could even lead organizations to overhaul their workflows.
"Using agentic workflows actually provides an opportunity to bound the use case for each step, along with the data it needs as part of the RAG, to only the data it needs," Betz said.
He added that agentic workflows can help "address some of those concerns about oversharing," because companies must consider what data the agent needs to access to complete its actions. Clinton added that in a workflow designed around a specific set of operations, "there's no reason why every step needs access to the same data that step one requires."
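Clinton's point about per-step access can be sketched as a scope check that each workflow step must pass before retrieving anything. The step names, data sources and enforcement logic here are hypothetical:

```python
# Hypothetical sketch of per-step least privilege: each step of an agentic
# workflow declares only the data sources it needs, so later steps never
# inherit the access that step one required.
WORKFLOW_SCOPES = {
    "retrieve_invoice": {"finance/invoices"},
    "draft_summary":    {"finance/invoices"},
    "notify_manager":   {"directory/managers"},  # no invoice access needed here
}

def retrieve(step: str, source: str) -> str:
    """Fetch data for a step, enforcing that step's declared scope."""
    if source not in WORKFLOW_SCOPES.get(step, set()):
        raise PermissionError(f"{step} may not read {source}")
    return f"data from {source}"  # stand-in for a real RAG lookup

print(retrieve("retrieve_invoice", "finance/invoices"))
# retrieve("notify_manager", "finance/invoices") would raise PermissionError
```

Because scopes are declared per step rather than per agent, a compromised later step (via prompt injection, say) cannot reach back to the data the earlier steps used.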
Old-fashioned audits aren’t enough
Companies can also look for agent platforms that let them peek into how agents work. For example, Don Schuerman, CTO of workflow automation provider Pega, said his company helps keep agents secure by telling users what the agent is doing.
“Our platforms are already used to audit work that humans do, so we can also audit every step that an agent is doing,” Schuerman told VentureBeat.
Pega's latest product, AgentX, lets human users switch to a screen that shows an overview of the steps an agent takes. Users can see where the agent is along its workflow timeline and get a readout of its specific actions.
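The kind of step-by-step audit trail Schuerman describes can be sketched in a few lines. This is an illustrative toy, not Pega's actual implementation; the class and method names are invented:

```python
import json
import time

# Hypothetical sketch of an agent audit trail: every step an agent takes
# is recorded with a timestamp, so humans can later review exactly what
# happened and when, just as they would audit a human employee's work.
class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, agent_id: str, action: str, detail: str) -> None:
        """Append one immutable entry per agent action."""
        self.entries.append({
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "detail": detail,
        })

    def timeline(self, agent_id: str) -> list:
        """Return one agent's actions in the order they occurred."""
        return [e for e in self.entries if e["agent"] == agent_id]

log = AuditLog()
log.record("invoice-bot-01", "search", "queried internal invoice index")
log.record("invoice-bot-01", "draft", "wrote summary for manager review")
print(json.dumps(log.timeline("invoice-bot-01"), indent=2))
```

The same log that powers a user-facing timeline can also feed compliance reviews, which is why auditing agents and auditing humans can share one mechanism.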
Audits, timelines and identification are not perfect solutions to the security problems AI agents present. But as enterprises explore agents' potential and begin to deploy them, more targeted answers may emerge as AI experimentation continues.