Cybersecurity

Endor Labs deploys AI agents to combat vibe coding risks

By versatileai | April 23, 2025

Endor Labs is using AI agents to expand its application security (AppSec) platform to address the development risks posed by AI and vibe coding.

Powered by agentic AI and what the company claims is the industry’s most comprehensive security dataset, the platform goes beyond risk identification to prioritize threats, propose solutions, and implement remediation automatically.

The move comes amid a dramatic shift in software development practices. The rise of AI coding assistants means vast amounts of code are being generated faster, and often with less direct human oversight, than ever before. This acceleration introduces new security complexities that legacy tools struggle to manage.

Varun Badhwar, co-founder and CEO of Endor Labs, said: “We are in the middle of a software development revolution. Until recently, 80% of our code came from open source. From now on, 80% will be generated by AI.

“Everyone is building AI agents, but most are wrappers around LLMs. What makes an agent powerful is the data underneath it. We’ve spent years building that security dataset.”

Endor Labs positions the platform as essential to navigating this new landscape, citing the potential risks associated with AI-assisted development and vibe coding.

Statistics show that a significant percentage of AI-generated code may contain bugs or security vulnerabilities, with nearly 30% containing potentially significant weaknesses. Traditional static analysis and vulnerability scanning tools often lack the context and speed needed to counter these emerging threats effectively.

To build the intelligence required, Endor Labs points to the extensive groundwork its team of well-known program analysis experts has laid over the past three years:

  • Analyzing 4.5 million open source projects and AI models.
  • Mapping over 150 distinct risk factors to each component.
  • Building detailed call graphs that index billions of functions and libraries.
  • Annotating the exact lines of code where known vulnerabilities exist.
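To make the value of such a dataset concrete, here is a minimal Python sketch of the kind of structure the article describes: components annotated with risk factors, a call graph over functions, and exact vulnerable-line annotations. The class and field names are assumptions for illustration only, not Endor Labs’ actual schema.

# Hypothetical sketch of the kind of security dataset described above: components
# annotated with risk factors, a call graph over functions, and the exact lines
# where known vulnerabilities live. Names and fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    version: str
    risk_factors: set[str] = field(default_factory=set)   # e.g. {"unmaintained", "known-CVE"}

@dataclass
class VulnerableLine:
    component: str
    file: str
    line: int
    advisory_id: str                                       # placeholder advisory identifier

@dataclass
class SecurityDataset:
    components: dict[str, Component] = field(default_factory=dict)
    call_graph: dict[str, set[str]] = field(default_factory=dict)   # caller -> callees
    vulnerable_lines: list[VulnerableLine] = field(default_factory=list)

    def is_reachable(self, entry_point: str, target: str) -> bool:
        """Walk the call graph to check whether a vulnerable function is actually reachable."""
        seen, stack = set(), [entry_point]
        while stack:
            fn = stack.pop()
            if fn == target:
                return True
            if fn not in seen:
                seen.add(fn)
                stack.extend(self.call_graph.get(fn, set()))
        return False

A reachability check like the one above is what lets a tool report whether a known-vulnerable function can actually be hit from an application’s entry points, rather than flagging every dependency that merely contains it.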

This deep contextual understanding underpins the platform’s new agentic AI capabilities, which are designed to integrate into the software development lifecycle and act decisively rather than passively warning the team.

Agentic AI designed to manage risk in the vibe coding era

At the heart of the expanded platform are specialized AI agents trained for application security tasks. These agents are designed to reason about code changes the way human developers, architects, and security engineers do.

Working together, the AI agents review code, identify potential risks, and propose targeted fixes, effectively extending the capabilities of security teams without hindering developer workflows.

The first features built on this new agentic AI foundation were also announced today.

AI Security Code Review

This feature uses multiple AI agents to scrutinize every pull request (PR), focusing on high-risk architectural changes that often fall outside the scope of traditional Static Application Security Testing (SAST) tools. Examples include:

  • AI system deployments that may be vulnerable to prompt injection attacks.
  • Changes to critical authentication or authorization mechanisms.
  • Creation of new public API endpoints.
  • Changes involving cryptographic implementations.
  • Changes to how sensitive data is processed.

Endor Labs argues this surfaces critical risks hidden among numerous PRs, reduces alert fatigue through context-aware prioritization, and lets security engineers focus on genuinely critical issues without slowing down vibe coding.
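As a rough illustration of the kinds of signals such a review might key on, the following Python sketch flags pull-request additions that touch the high-risk categories listed above. The regular expressions and category names are hypothetical; a real agentic reviewer would combine far richer context (call graphs, reachability, project history) rather than simple pattern matching.

# Hypothetical illustration of signals an AI code review might key on.
# The patterns and categories below are illustrative only, not Endor Labs' rules.
import re

HIGH_RISK_PATTERNS = {
    "auth_change":     re.compile(r"\b(authenticate|authorize|session|jwt|oauth)\b", re.I),
    "new_public_api":  re.compile(r"@(app|router)\.(get|post|put|delete)\(", re.I),
    "crypto_change":   re.compile(r"\b(aes|rsa|hmac|md5|sha1|encrypt|decrypt)\b", re.I),
    "prompt_handling": re.compile(r"\b(prompt|system_message|llm|completion)\b", re.I),
    "sensitive_data":  re.compile(r"\b(ssn|password|credit_card|pii)\b", re.I),
}

def flag_high_risk_changes(added_lines: list[str]) -> set[str]:
    """Return the risk categories matched by the added lines of a pull request diff."""
    hits = set()
    for line in added_lines:
        for category, pattern in HIGH_RISK_PATTERNS.items():
            if pattern.search(line):
                hits.add(category)
    return hits

# Example: a diff that adds a new endpoint touching authentication
diff_added = [
    '@app.post("/admin/login")',
    "def login(user, password): return authenticate(user, password)",
]
print(flag_high_risk_changes(diff_added))
# e.g. {'auth_change', 'new_public_api', 'sensitive_data'} (set ordering varies)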

Mark Breitenbach, security engineer at Dropbox, commented:

“Traditional static analysis tools don’t really provide the lift we need. It’s very valuable to catch risks that would otherwise be missed by manual review or traditional automation.”

MCP plugin for Cursor

Addressing the trend of “vibe coding”, where developers prioritize speed and intuition, the Model Context Protocol (MCP) plugin brings Endor Labs’ security intelligence directly into AI-native coding environments such as Cursor and complements tools such as GitHub Copilot.

By scanning code in real time as it is written, the plugin flags potential risks and helps both human developers and AI coding agents implement fixes quickly.

The aim of the integration is to compress a security review process that can take weeks, spanning ticketing systems, back-and-forth communication, and manual fixes, into an automated workflow that resolves issues within minutes, before the PR is even submitted.
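The following Python sketch illustrates that “fix it before the PR exists” workflow in its simplest possible form: a pre-commit-style scan of staged files that blocks the commit when risky constructs are found. It is not the Endor Labs MCP plugin; the scanner, its checks, and the hook mechanism are assumptions made for illustration.

# Hypothetical shift-left workflow: scan staged code and fail before a PR is opened.
# Not the Endor Labs MCP plugin; the checks below are deliberately simplistic.
import subprocess
import sys

def staged_python_files() -> list[str]:
    """List staged files so the scan runs on exactly what would land in the PR."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]

def scan_file(path: str) -> list[str]:
    """Toy scanner: flag obviously risky constructs; a real tool would use richer analysis."""
    findings = []
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            if "eval(" in line or "exec(" in line:
                findings.append(f"{path}:{lineno}: dynamic code execution")
            if "verify=False" in line:
                findings.append(f"{path}:{lineno}: TLS verification disabled")
    return findings

if __name__ == "__main__":
    all_findings = [f for path in staged_python_files() for f in scan_file(path)]
    for finding in all_findings:
        print(finding)
    sys.exit(1 if all_findings else 0)   # non-zero exit blocks the commit in a pre-commit hook

In practice, an MCP-based integration would surface the same kinds of findings inside the editor or the AI coding agent’s context rather than at commit time, but the shift-left principle is the same.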

“We are committed to providing a wide range of research opportunities,” said Chris Steffen, Vice President of Research at Enterprise Management Associates.

“They need greater visibility and context into AI-generated code, and solutions that help them discover security risks faster. Endor Labs is ahead of the game, using AI innovations built specifically for application security engineers and backed by rich data and knowledge.”

Endor Labs’ platform aims to manage risk effectively in an age increasingly dominated by AI-driven software development and vibe coding, promising to neutralize entire classes of threats before they affect production systems.

(Photo: Daniel Heron)

See: Mozilla open source tools help developers build ethical AI datasets

Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including the Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Check out other upcoming Enterprise Technology events and webinars with TechForge here.

Tags: Agents, AI, Artificial Intelligence, Assistants, Coding, Cybersecurity, Development, InfoSec, Programming, Security, Vibe Coding

