Microsoft Announces New Agentic AI Tools for Security Teams
By John K. Waters | 03/31/25
Microsoft has expanded its AI-powered cybersecurity platform with a set of autonomous agents designed to help organizations fight the rising tide of threats and manage the security complexities of cloud and AI adoption.
The update marks the next phase of Microsoft Security Copilot, released a year ago, adding 11 AI-powered agents that automate tasks such as phishing triage, data protection, vulnerability management, and threat analysis. The move underscores Microsoft's strategy of using AI not only as something to be protected, but also as a frontline defense against increasingly sophisticated cyberattacks.
"Agent-based AI security has become essential, as more than 30 billion phishing emails were detected in 2024 alone and cyberattacks now exceed human response capacity," said Vasu Jakkal, corporate vice president of Microsoft Security.
Six of the new AI agents were developed in-house, with the other five built by security partners including OneTrust, Aviatrix, and Tanium. The agents will begin rolling out in preview in April 2025.
"An agentic approach to privacy is a game-changer for the industry," said Blake Brannon, chief product and strategy officer at OneTrust, in a statement. "Autonomous AI agents help our customers scale, augment, and increase the effectiveness of their privacy operations. The OneTrust privacy breach response agent, built with Microsoft Security Copilot, demonstrates how privacy teams can analyze and meet increasingly complex regulatory requirements in a fraction of the time historically required."
Among the new additions is a phishing triage agent in Microsoft Defender, designed to filter and prioritize phishing alerts, explain its verdicts, and improve based on user feedback. Another, a conditional access optimization agent, monitors identity systems to find policy gaps and recommend fixes. Microsoft is also debuting an AI-powered threat intelligence briefing agent that curates threat insights tailored to each organization's risk profile.
The release comes amid a surge in global interest in generative AI and a parallel rise in what Microsoft calls "shadow AI": the use of unauthorized AI tools within organizations, often outside of IT oversight. Microsoft estimates that 57% of companies have seen an increase in AI-related security incidents, even as 60% admit they have not implemented proper controls.
To address this, Microsoft is extending AI security posture management across multiple clouds and models. Starting in May 2025, Microsoft Defender will support AI security visibility across Azure, AWS, and Google Cloud, including models such as OpenAI's GPT, Meta's Llama, and Google's Gemini.
Other new safeguards include browser-based data loss prevention (DLP) tools that block sensitive information from being entered into generative AI apps such as ChatGPT and Google Gemini, as well as enhanced phishing protection for Microsoft Teams.
"The rise of AI has introduced a new cyber risk vector, but it is also our biggest ally," said Alexander Stojanovic, vice president of Microsoft Security AI Applied Research, in a statement. "This is just the beginning of what security agents can do."
For more information, see the Microsoft blog post.
About the Author
John K. Waters is the editor of a number of Converge360.com sites, focusing on high-end development, AI, and future tech. He has been writing about Silicon Valley's cutting-edge technology and culture for more than 20 years and has authored more than a dozen books. He also appeared in the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at (email protected).