Google’s new AI agent rewrites code to automate vulnerability fixes

By versatileai | October 6, 2025

Google DeepMind has unveiled a new AI agent designed to autonomously identify and fix critical security vulnerabilities in software code. Aptly named CodeMender, the system has already contributed 72 security fixes to established open source projects over the past six months.

Identifying and patching vulnerabilities is a notoriously difficult and time-consuming process, even with the help of traditional automated methods like fuzzing. Google DeepMind's own research, including AI-based projects such as Big Sleep and OSS-Fuzz, has proven effective at discovering new zero-day vulnerabilities in audited code. But that success creates a new bottleneck: as AI accelerates the discovery of flaws, the burden on the human developers who must fix them grows.

CodeMender is designed to address this imbalance. It is an autonomous AI agent that takes a comprehensive approach to code security: it is reactive, instantly patching newly discovered vulnerabilities, and proactive, rewriting existing code to eliminate entire classes of security flaws before they can be exploited. This frees human developers and project maintainers to spend their time building features and improving their software.

The system builds on the advanced reasoning capabilities of Google's recent Gemini Deep Think model, which allows the agent to debug and resolve complex security issues with a high degree of autonomy. To achieve this, the agent is equipped with a set of tools that let it analyze and reason about code before implementing changes. CodeMender also includes a validation process to ensure that its changes are correct and do not introduce new problems, known as regressions.

Large language models are advancing rapidly, but when it comes to code security, mistakes can have costly consequences. That is why CodeMender's automatic verification framework is essential. It systematically confirms that proposed changes fix the root cause of the problem, are functionally correct, do not break existing tests, and comply with the project's coding style guidelines. Only high-quality patches that meet these strict standards are surfaced for human review.
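
The announcement does not describe how this gate is implemented, but the general idea can be sketched as a simple pipeline that only lets a candidate patch through when the build, the existing test suite, and a style checker all pass. The commands below are hypothetical placeholders, not DeepMind's framework:

```c
/* Minimal sketch of a patch-validation gate (an assumption about the general
 * idea, not DeepMind's actual framework). The build, test, and style commands
 * are hypothetical placeholders for a real project's tooling. */
#include <stdio.h>
#include <stdlib.h>

static int step(const char *cmd) {
    printf("running: %s\n", cmd);
    return system(cmd) == 0;                 /* zero exit status = step passed */
}

int main(void) {
    int ok = step("make clean all")                           /* patch still builds   */
          && step("make test")                                /* no test regressions  */
          && step("clang-format --dry-run --Werror src/*.c"); /* matches style rules  */

    puts(ok ? "patch surfaced for human review"
            : "patch rejected; agent must revise");
    return ok ? EXIT_SUCCESS : EXIT_FAILURE;
}
```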

To make its code modifications more effective, the DeepMind team developed new techniques for the AI agent. CodeMender employs advanced program analysis using a suite of tools that includes static and dynamic analysis, differential testing, fuzzing, and SMT solvers. These tools allow the agent to systematically examine code patterns, control flow, and data flow to identify the root causes of security flaws and architectural weaknesses.
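
As a toy illustration of the last item on that list (not DeepMind's tooling), an SMT solver such as Z3 can be asked whether any attacker-controlled input can push an array index out of bounds; if the answer is satisfiable, the solver also produces a concrete witness value:

```c
/* Toy example: use the Z3 SMT solver to check whether an attacker-controlled
 * byte can index past a 16-element buffer. Build with: gcc smt_bounds.c -lz3 */
#include <stdio.h>
#include <z3.h>

int main(void) {
    Z3_config cfg = Z3_mk_config();
    Z3_context ctx = Z3_mk_context(cfg);
    Z3_del_config(cfg);

    Z3_solver solver = Z3_mk_solver(ctx);
    Z3_solver_inc_ref(ctx, solver);

    Z3_sort int_sort = Z3_mk_int_sort(ctx);
    Z3_ast idx = Z3_mk_const(ctx, Z3_mk_string_symbol(ctx, "idx"), int_sort);

    /* idx comes from an untrusted byte: 0 <= idx <= 255 */
    Z3_solver_assert(ctx, solver, Z3_mk_ge(ctx, idx, Z3_mk_int(ctx, 0, int_sort)));
    Z3_solver_assert(ctx, solver, Z3_mk_le(ctx, idx, Z3_mk_int(ctx, 255, int_sort)));

    /* out-of-bounds condition for a 16-element buffer: idx >= 16 */
    Z3_solver_assert(ctx, solver, Z3_mk_ge(ctx, idx, Z3_mk_int(ctx, 16, int_sort)));

    if (Z3_solver_check(ctx, solver) == Z3_L_TRUE) {
        /* A satisfying model is a concrete input that triggers the overflow. */
        printf("out-of-bounds access reachable:\n%s\n",
               Z3_model_to_string(ctx, Z3_solver_get_model(ctx, solver)));
    } else {
        printf("index is always within bounds\n");
    }

    Z3_solver_dec_ref(ctx, solver);
    Z3_del_context(ctx);
    return 0;
}
```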

The system also uses a multi-agent architecture in which specialized agents handle specific aspects of the problem. For example, a dedicated critique tool based on a large language model highlights the differences between the original and modified code. This allows the primary agent to verify that proposed changes introduce no unintended side effects and to self-correct its approach when necessary.

In one practical example, CodeMender addressed a vulnerability whose crash report pointed to a heap buffer overflow. The final patch required only a few lines of code to be changed, but the root cause was not immediately obvious. Using debuggers and code search tools, the agent determined that the real problem was incorrect stack management of Extensible Markup Language (XML) elements during parsing elsewhere in the codebase. In another case, the agent devised a non-trivial patch for a complex object lifetime issue, modifying the target project's custom system for generating C code.
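
A hypothetical sketch of that class of bug (not the code CodeMender actually patched) shows why the crash site and the root cause can sit far apart: a missing depth check while pushing XML elements silently overflows a heap-allocated element stack, and the corruption only surfaces later, wherever the neighbouring memory happens to be used.

```c
/* Hypothetical illustration only: a parser keeps a fixed-size stack of open
 * XML elements inside a heap-allocated struct. Without a depth check, deeply
 * nested input writes past the array and corrupts adjacent heap memory. */
#include <stdlib.h>

#define MAX_DEPTH 32

typedef struct {
    size_t depth;
    const char *open_tags[MAX_DEPTH];  /* currently open elements */
} xml_parser;

xml_parser *new_parser(void) {
    return calloc(1, sizeof(xml_parser));  /* parser lives on the heap */
}

/* Buggy: no bound on depth, so the 33rd nested element writes past open_tags. */
void push_element_buggy(xml_parser *p, const char *tag) {
    p->open_tags[p->depth++] = tag;        /* heap buffer overflow */
}

/* Root-cause fix: reject input that exceeds the supported nesting depth. */
int push_element_fixed(xml_parser *p, const char *tag) {
    if (p->depth >= MAX_DEPTH)
        return -1;                         /* refuse overly deep nesting */
    p->open_tags[p->depth++] = tag;
    return 0;
}
```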

CodeMender is designed not only to react to existing bugs but also to proactively harden software against future threats. The team deployed the agent to apply -fbounds-safety annotations to parts of libwebp, a widely used image compression library. These annotations tell the compiler to add bounds checks to the code, preventing an attacker from exploiting a buffer overflow to execute arbitrary code.
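
For a sense of what such an annotation looks like, the sketch below uses the __counted_by attribute from Clang's experimental -fbounds-safety extension on a made-up struct (not libwebp code); the attribute ties a pointer to the field that holds its element count so the compiler can insert runtime bounds checks.

```c
/* Made-up example (not libwebp code) of a -fbounds-safety annotation.
 * Requires building with Clang's experimental -fbounds-safety mode.
 * __counted_by(len) tells the compiler that 'data' points to exactly
 * 'len' elements, so accesses through it can be bounds-checked. */
#include <stddef.h>
#include <stdint.h>

struct pixel_row {
    size_t len;
    uint8_t *__counted_by(len) data;  /* pointer carries its own length */
};

uint8_t get_pixel(const struct pixel_row *row, size_t i) {
    /* With -fbounds-safety, the compiler inserts a check that i < row->len
     * and traps instead of reading out of bounds. */
    return row->data[i];
}
```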

This work is particularly relevant because a heap buffer overflow in libwebp, tracked as CVE-2023-4863, was used by threat actors in a zero-click iOS exploit a few years ago. DeepMind notes that with these annotations in place, that particular vulnerability, along with most other buffer overflows in the annotated sections, is no longer exploitable.

The agent's proactive code modifications involve a sophisticated decision-making process. While applying the annotations, it automatically fixes the new compilation errors and test failures caused by its own changes. When the validation tools detect that a change has altered the program's functionality, the agent self-corrects based on that feedback and tries another solution.

Despite these promising early results, Google DeepMind is taking a careful and deliberate approach to deployment, with a focus on reliability. Currently, every patch generated by CodeMender is reviewed by human researchers before being submitted to an open source project. The team is gradually increasing the rate of submissions to ensure high quality and to systematically incorporate feedback from the open source community.

Looking ahead, the researchers plan to reach out to the maintainers of critical open source projects with CodeMender-generated patches. By iterating on community feedback, they ultimately hope to release CodeMender as a tool that is publicly available to all software developers.

The DeepMind team also plans to publish technical papers and reports in the coming months to share its techniques and results. The work represents a first step toward AI agents that proactively rewrite code and fundamentally improve software security for everyone.

See: CAMIA Privacy Attacks reveal what AI Models remember

