At Google’s Singapore office, Mark Johnston stood before a room of technology journalists at 1:30pm with a startling admission. “In 69% of cases in Japan and Asia Pacific, organizations were notified of their own breaches by external entities,” revealed the director of Google Cloud’s Office of the CISO for Asia Pacific.
What unfolded during the hour-long “Cybersecurity in the Age of AI” roundtable was an honest assessment of how Google Cloud’s AI technologies are trying to reverse decades of defensive disadvantage, even as the same artificial intelligence tools give attackers unprecedented capabilities.
Historical context: 50 years of defensive failure
The crisis is nothing new. Johnston traced the problem back to 1972, when cybersecurity pioneer James P. Anderson observed that “the systems we use really don’t protect themselves.” “What James P. Anderson said in 1972 still applies today,” Johnston said.
The persistence of basic vulnerabilities compounds the challenge. Threat intelligence data from Google Cloud shows that “more than 76% of breaches start with the basics” (configuration errors and credential compromises that have plagued organizations for decades). Johnston cited a recent example: “Microsoft SharePoint, a very common product that most organizations have used at some point, had what they call a zero-day vulnerability exploited just last month.”
The AI arms race: defenders versus attackers

Kevin Curran, IEEE senior member and professor of cybersecurity at Ulster University, describes the current landscape as a “high-stakes arms race” in which cybersecurity teams and threat actors try to outmaneuver each other with AI tools. “AI is a valuable asset for defenders,” Curran explains in notes shared with the media. “Companies are implementing generative AI and other automation tools to analyze huge amounts of data in real time and identify anomalies.”
However, the same technology benefits attackers. “For threat actors, AI can help streamline phishing attacks, automate the creation of malware, and scan networks for vulnerabilities,” Curran warns. This dual-use nature of AI creates what Johnston calls the “defender’s dilemma.”
Google Cloud’s AI initiatives aim to tip these scales in favour of defenders. Johnston argued that “AI will overturn the defender’s dilemma and tilt the scales of cyberspace to provide a critical advantage over attackers.” The company’s approach focuses on what it calls the “myriad use cases of generative AI in defense,” spanning vulnerability discovery, threat intelligence, secure code generation, and incident response.
Project Zero’s Big Sleep: finding what humans overlook
One of Google’s most compelling examples of AI-powered defense is Project Zero’s “Big Sleep” initiative, which uses a large language model to identify vulnerabilities in real-world code. Johnston shared a striking milestone: “Big Sleep used a generative AI tool to find a vulnerability in an open source library. We believe that was the first time a vulnerability had been discovered by an AI agent.”
The programme’s evolution shows AI’s growing capabilities. “Last month we announced that we had discovered over 20 vulnerabilities in various packages,” Johnston noted. “But when I looked at the Big Sleep dashboard today, it showed 47 vulnerabilities discovered by the solution in August alone.”
The progression from manual human analysis to AI-assisted discovery represents what Johnston describes as a shift from “manual to semi-autonomous” security operations, in which “Gemini drives most tasks in the security lifecycle consistently, delegating tasks that cannot be automated with sufficient reliability or accuracy.”
The automation paradox: promise and peril
Google Cloud’s roadmap envisages four stages of progression: manual, assisted, semi-autonomous, and autonomous security operations. In the semi-autonomous phase, AI systems handle routine tasks while escalating complex decisions to human operators. At the final autonomous stage, AI would “drive the security lifecycle to positive outcomes on behalf of the user.”

However, this automation introduces new vulnerabilities. When asked about the risk of over-reliance on AI systems, Johnston acknowledged the challenge: “There’s a possibility that these services can be attacked and manipulated. When we look at the tools these agents are piped into, there’s no really good framework yet for authenticating that it’s a legitimate tool.”
Curran echoes this concern: “The risk for businesses is that security teams become overly dependent on AI, sidelining human judgment and leaving their systems vulnerable to attack. A human ‘co-pilot’ role still needs to be clearly defined.”
Real-world implementation: taming AI’s unpredictability
Google Cloud’s approach includes practical safeguards against one of AI’s most problematic traits: its tendency to generate irrelevant or inappropriate responses. Johnston illustrated the challenge with a concrete example of contextual mismatch creating business risk.
“If you run a retail store, your AI shouldn’t be giving out medical advice,” Johnston explained, describing how AI systems can drift into unrelated domains. “There are times when these tools can do that.” Such unpredictability represents a serious liability for businesses deploying customer-facing AI: off-topic answers can confuse customers, damage brand reputation, and even create legal exposure.
Google’s Model Armor technology addresses this by acting as an intelligent filter layer. “Applying filters and health checks to these responses helps organizations gain confidence,” Johnston noted. The system screens AI output for personally identifiable information, filters out content inappropriate for the business context, and blocks responses that would be “off-brand” for the organization’s intended use case.
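To make the pattern concrete, here is a minimal sketch of what such an output-filter layer might look like. This is not Model Armor’s actual API; the patterns, blocked topics, and function name are illustrative assumptions showing the general idea of screening model responses before they reach a customer.

```python
import re

# Hypothetical response-filter layer, sketched for illustration only.
# A production system like Model Armor uses far richer classifiers.

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-style numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

# Topics considered "off-brand" for, say, a retail chatbot
BLOCKED_TOPICS = {"diagnosis", "prescription", "dosage"}

def screen_response(text: str) -> tuple[bool, str]:
    """Return (allowed, reason): block PII and off-topic content."""
    for pattern in PII_PATTERNS:
        if pattern.search(text):
            return False, "contains personally identifiable information"
    lowered = text.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"off-brand topic: {topic}"
    return True, "ok"

print(screen_response("Your order ships tomorrow."))
print(screen_response("The recommended dosage is 10mg."))
```

The design point is that the filter sits between the model and the user, so an off-topic or PII-leaking response is rejected with a reason rather than shown to the customer.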
The company is also seeing growing concern about shadow AI. Organizations are discovering hundreds of unsanctioned AI tools on their networks, creating significant security gaps. Google’s sensitive data protection technology aims to address this by scanning across multiple cloud providers and on-premises systems.
Scaling challenges: budget constraints and increasing threats
Johnston identified budget constraints as a key challenge facing Asia-Pacific CISOs, arriving precisely as organizations confront escalating cyber threats. The paradox is stark: as attack volumes increase, organizations lack the resources to respond adequately.
“Looking at the statistics objectively, we’re seeing more noise. It may be very unsophisticated, but more noise means more overhead, and more overhead costs more to deal with,” Johnston said. The increased frequency of attacks creates a resource drain that many organizations cannot sustain, even when individual attacks are no more sophisticated.
Financial pressures compound an already complex security environment. “They’re looking for partners that can help them accelerate without hiring ten more staff or getting a bigger budget,” Johnston explained, describing how security leaders are pressed to do more with existing resources even as the threat grows.
Important questions remain
Despite Google Cloud’s promising AI capabilities, key questions persist. When challenged on whether defenders are actually winning this arms race, Johnston conceded that he had “not yet seen a novel attack using AI,” but noted that attackers are using AI to scale existing attack methods and to “create a wide range of opportunities in some aspects of the attack.”
Efficacy claims also warrant scrutiny. Johnston cited a 50% improvement in the speed of writing incident reports, but admitted that accuracy remains a challenge: “It’s certainly inaccurate at times, but humans make mistakes too.” Such acknowledgements highlight the ongoing limitations of current AI security implementations.
Looking ahead: post-quantum preparation
Beyond current AI implementations, Google Cloud is already preparing for the next paradigm shift. Johnston revealed that the company is “already deploying post-quantum cryptography between data centers by default,” positioning itself for future quantum computing threats that could break current encryption.
Verdict: cautious optimism required
The integration of AI into cybersecurity presents both unprecedented opportunity and significant risk. Google Cloud’s AI technology demonstrates genuine capability in vulnerability detection, threat analysis, and automated response, but the same technology amplifies attackers’ abilities in reconnaissance, social engineering, and evasion.
Curran’s assessment offers a balanced perspective: “Given how quickly the technology is evolving, organizations need to adopt a more comprehensive and proactive cybersecurity policy if they want to stay ahead of the attackers.”
The success of AI-powered cybersecurity ultimately depends not on the technology itself but on how thoughtfully these tools are implemented: maintaining human oversight while addressing basic security hygiene. As Johnston concluded, “We need to adopt these with a low-risk approach,” underscoring the case for measured implementation rather than wholesale automation.
The AI revolution in cybersecurity is ongoing, but victory belongs to those who can balance innovation with wise risk management.
See also: Google Cloud announces AI Ally for security teams