Google says the large language model it developed to find vulnerabilities recently discovered a bug that hackers were preparing to exploit.
Late last year, Google announced an AI agent called Big Sleep, a project that evolved from LLM-assisted vulnerability research conducted by Google Project Zero and Google DeepMind. The tool actively searches for and finds unknown security vulnerabilities in software.
On Tuesday, Google said Big Sleep was able to discover CVE-2025-6965, a critical security flaw that Google said was “known only to threat actors and was at risk of being exploited.”
The vulnerability affects SQLite, an open-source database engine popular among developers. Google claims it was able to “actually predict the imminent use of a vulnerability” and cut off exploitation beforehand.
“We believe this is the first time an AI agent has been used directly to block efforts to exploit a vulnerability in the wild,” the company said.
A Google spokesperson told Recorded Future News that the company’s threat intelligence group “was able to identify artifacts indicating the threat actors were staging a zero day but could not immediately identify the vulnerability.”
“The limited indicators were passed along to other Google team members at the zero day initiative, who used Big Sleep to isolate the vulnerability the adversary was preparing to exploit in their operations,” they said.
The company declined to elaborate on who the threat actors were or what indicators were found.
In a blog post promoting various AI developments, Google said Big Sleep has discovered multiple real-world vulnerabilities since it debuted in November, “exceeding” the company’s expectations.
Google said it is now using Big Sleep to help secure open-source projects and called AI agents a “game changer” because they can “free up security teams to focus on high-complexity threats, dramatically scaling their impact and reach.”
The tech giant published a white paper on how it built its AI agents in a way that safeguards privacy, limits potential “rogue actions,” and operates transparently.
Dozens of companies, as well as US government agencies, are racing to develop AI tools built to quickly find and flag vulnerabilities in code.
Next month, the US Department of Defense will announce the winners of a years-long competition to use AI to create systems that can automatically protect the critical code underpinning prominent systems used around the world.