Endor Labs is expanding its application security (AppSec) platform with AI agents to address the development risks posed by AI-assisted and vibe coding.
Powered by what the company claims is the industry’s most comprehensive security dataset, the agentic AI platform goes beyond risk identification: it prioritizes threats, proposes fixes, and implements remediation automatically.
This move comes amid a dramatic change in software development practices. The rise of AI coding assistants means vast amounts of code are being generated faster, and in many cases with less direct human oversight than ever before. This acceleration introduces new security challenges that legacy tools struggle to manage.
Varun Badhwar, co-founder and CEO of Endor Labs, said: “We are in the middle of a software development revolution. Until recently, 80% of our code came from open source. Going forward, 80% will be generated by AI.
“Everyone is building AI agents, but most are wrappers around LLMs. What makes an agent powerful is the data underneath it. We’ve spent years building that security dataset.”
Endor Labs positions the platform as essential to navigating this new landscape, citing the risks associated with AI-assisted development and vibe coding.
Statistics show that a significant percentage of AI-generated code may contain bugs or security vulnerabilities, with nearly 30% including potentially significant weaknesses. Traditional static analysis and vulnerability scanning tools often lack the context and speed to counter these emerging threats effectively.
To build the necessary intelligence, Endor Labs points to the foundations its team of well-known program analysis experts has laid over the past three years:

- Analysis of 4.5 million open source projects and AI models.
- Mapping over 150 distinct risk factors to each component.
- Building detailed call graphs that index billions of functions and libraries.
- Annotating the exact lines of code where known vulnerabilities exist.
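Call graphs matter because they let a tool decide whether a known-vulnerable function is actually reachable from an application’s own code, rather than alerting on every dependency that merely contains a flaw. The sketch below illustrates the general idea with a toy breadth-first reachability check; the graph, function names, and API are invented for illustration and are not Endor Labs’ implementation.

```python
from collections import deque

def reachable_vulnerabilities(call_graph, entry_points, vulnerable_functions):
    """Return the vulnerable functions reachable from the app's entry points.

    call_graph maps each function name to the list of functions it calls.
    """
    seen = set()
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        if fn in seen:
            continue
        seen.add(fn)
        queue.extend(call_graph.get(fn, ()))
    return seen & set(vulnerable_functions)

# Hypothetical call graph: the app calls lib.parse, which calls lib.unsafe_eval.
call_graph = {
    "app.main": ["lib.parse", "lib.render"],
    "lib.parse": ["lib.unsafe_eval"],
    "lib.render": [],
}

# lib.other_cve exists in a dependency but is never called, so it is not flagged.
print(reachable_vulnerabilities(
    call_graph, ["app.main"], ["lib.unsafe_eval", "lib.other_cve"]
))
# -> {'lib.unsafe_eval'}
```

Only the reachable flaw is reported, which is one way a vendor can prioritize findings instead of listing every CVE in the dependency tree.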
This deep contextual understanding underpins the platform’s new agentic AI capabilities, which are designed to integrate into the software development lifecycle and act decisively rather than passively warn the team.
Agentic AI designed to manage risk in the vibe coding era
At the heart of the expanded platform are specialized AI agents trained for application security tasks. These agents are designed to reason about code changes the way human developers, architects, and security engineers do.
Working together, the AI agents review code, identify potential risks, and propose targeted fixes, effectively extending the capabilities of security teams without hindering developer workflows.
The first features built on this new agentic AI foundation were also announced today.
AI Security Code Review
This feature uses multiple AI agents to scrutinize every pull request (PR), focusing on high-risk design and architecture changes that often fall outside the scope of traditional Static Application Security Testing (SAST) tools. Examples include:

- Introduction of AI systems that may be vulnerable to prompt injection attacks.
- Changes to critical authentication or authorization mechanisms.
- Creation of new public API endpoints.
- Changes involving cryptographic implementations.
- Changes to how sensitive data is handled.
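To make the risk categories above concrete, here is a deliberately simple keyword-based triage of a PR’s added lines. This is a toy illustration only: the pattern names and heuristics are invented, and the article describes agents that reason over full context, not keyword matching.

```python
import re

# Toy heuristics for the high-risk categories above (illustrative only;
# an agentic review reasons over full context, not keywords).
RISK_PATTERNS = {
    "auth change": re.compile(r"\b(authenticate|authorize|session|token)\b", re.I),
    "new public endpoint": re.compile(r"@app\.route|@(Get|Post)Mapping", re.I),
    "crypto change": re.compile(r"\b(AES|RSA|hashlib|cipher|hmac)\b", re.I),
    "prompt handling": re.compile(r"\b(prompt|llm|system_message)\b", re.I),
}

def triage_diff(added_lines):
    """Return the sorted risk categories triggered by a PR's added lines."""
    flagged = set()
    for line in added_lines:
        for label, pattern in RISK_PATTERNS.items():
            if pattern.search(line):
                flagged.add(label)
    return sorted(flagged)

print(triage_diff([
    '@app.route("/export")',
    "token = session.get('token')",
]))
# -> ['auth change', 'new public endpoint']
```

Even this crude filter shows why PR-time triage helps: most changes trigger nothing, so reviewer attention concentrates on the few that touch sensitive surfaces.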
Endor Labs argues this surfaces critical risks hidden across numerous PRs, reduces alert fatigue through context-aware prioritization, and lets security engineers focus on genuinely critical issues without slowing down vibe coding.
Mark Breitenbach, security engineer at Dropbox, commented:
“Traditional static analysis tools don’t really provide the lift you need. It’s very valuable to catch risks that would otherwise have been missed or found only through manual review.”
MCP plugin for Cursor
Addressing the trend of “vibe coding”, where developers prioritize speed and intuition, the Model Context Protocol (MCP) plugin brings Endor Labs’ security intelligence directly into AI-native coding environments such as Cursor, complementing tools like GitHub Copilot.
By scanning code in real time as it is written, the plugin flags potential risks and helps both human developers and AI coding agents implement fixes quickly.
The integration aims to compress a security review process that can take weeks (spanning ticketing systems, back-and-forth communication, and manual fixes) into an automated workflow that resolves issues within minutes, before the PR is even submitted.
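For context on how such a plugin plugs in: Cursor loads MCP servers from a JSON configuration file (`.cursor/mcp.json`). A registration might look like the sketch below; the server name, command, and environment variable are hypothetical placeholders, not Endor Labs’ actual distribution.

```json
{
  "mcpServers": {
    "security-scanner": {
      "command": "npx",
      "args": ["-y", "example-security-mcp-server"],
      "env": { "SCANNER_API_KEY": "<your-key>" }
    }
  }
}
```

Once registered, the editor’s AI agent can call the server’s scanning tools as it writes code, which is what enables the in-editor, pre-PR workflow described above.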
“Security teams are dealing with a wide range of concerns,” said Chris Steffen, Vice President of Research at Enterprise Management Associates.
“They need greater visibility and context into AI-generated code, and solutions that help them discover security risks faster. Endor Labs is ahead of the game, applying AI innovations built specifically for application security engineers on top of rich data and expertise.”
Endor Labs’ platform aims to manage risk effectively in an era increasingly dominated by AI-driven software development and vibe coding, promising to neutralize entire classes of threats before they reach production systems.
(Photo: Daniel Heron)
See: Mozilla open source tools help developers build ethical AI datasets
Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including the Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Check out other upcoming Enterprise Technology events and webinars with TechForge here.