State-sponsored hackers exploit AI in cyber attacks: Google

By versatileai | February 12, 2026 | 5 min read

State-sponsored hackers in Iran, North Korea, China, and Russia are exploiting AI models such as Google’s Gemini to advance their cyberattacks, using them to craft sophisticated phishing campaigns and develop malware, according to a new report from Google’s Threat Intelligence Group (GTIG).

The quarterly AI Threat Tracker report, released today, details how government-backed threat actors have begun applying artificial intelligence across the attack lifecycle, from reconnaissance and social engineering through to malware development, based on activity GTIG observed in the last quarter of 2025.

“For government-sponsored attackers, large language models have become essential tools for technical research, targeting, and the rapid generation of subtle phishing lures,” GTIG researchers said in the report.

State-sponsored hacker reconnaissance targets defense sector

Iranian threat actor APT42 reportedly used Gemini to enhance reconnaissance and targeted social engineering operations. The group used AI to create official-looking email addresses for specific organizations and conducted research to establish a credible pretext for approaching targets.

APT42 crafted personas and scenarios designed to better engage targets, translating between languages and deploying natural, native-sounding phrasing to avoid traditional phishing red flags such as poor grammar or awkward syntax.
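One cheap, well-established defense against the official-looking sender addresses described above is to flag domains that sit within a small edit distance of a protected domain. The sketch below is a minimal illustration of that idea, not anything from the GTIG report; the protected domain `example-corp.com` is a placeholder.

```python
# Hypothetical sketch: flag sender domains within a small edit distance
# of a protected domain, a simple check against lookalike addresses.
def edit_distance(a: str, b: str) -> int:
    """Classic Wagner-Fischer Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def is_lookalike(sender_domain: str,
                 protected=("example-corp.com",),
                 max_dist: int = 2) -> bool:
    """True if the domain is close to, but not identical to, a protected one."""
    return any(0 < edit_distance(sender_domain, d) <= max_dist
               for d in protected)
```

In practice this would run alongside other mail-filtering signals; edit distance alone cannot catch AI-polished lures sent from legitimately registered domains, which is exactly the gap the report says attackers are exploiting.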

UNC2970, a North Korean government-backed group that targets the defense sector while impersonating corporate recruiters, used Gemini to profile high-value targets. The group’s reconnaissance included searching for information on major cybersecurity and defense companies, mapping specific technical roles, and gathering salary information.

“This activity blurs the distinction between routine professional investigation and malicious reconnaissance, as attackers gather the necessary components to create customized, high-fidelity phishing personas,” GTIG noted.

Model extraction attacks are on the rise

Beyond operational exploitation, Google DeepMind and GTIG have identified an increase in model extraction attempts, also known as “distillation attacks,” aimed at stealing intellectual property from AI models.

One campaign targeting Gemini’s reasoning capabilities involved more than 100,000 prompts designed to force the model to output its reasoning process. The breadth of the questions, posed in target languages other than English, suggested an attempt to replicate Gemini’s reasoning abilities across a variety of tasks.
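A provider-side defense against this kind of campaign can start with simple account-level heuristics: distillation attempts tend to combine very high prompt volume with repeated requests for the model’s reasoning trace. The sketch below is a hypothetical illustration of that heuristic, not Google’s actual detection logic; the marker phrases and thresholds are assumptions.

```python
# Hypothetical sketch: flag accounts whose query pattern resembles a
# distillation attack -- high volume plus a large share of prompts
# explicitly asking for the model's reasoning process.
from collections import Counter

REASONING_MARKERS = ("step by step", "show your reasoning",
                     "explain your thought process")

def flag_distillation_suspects(logs, volume_threshold=10_000,
                               marker_ratio=0.5):
    """logs: iterable of (account_id, prompt) pairs.
    Returns account ids exceeding both thresholds."""
    totals, marked = Counter(), Counter()
    for account, prompt in logs:
        totals[account] += 1
        if any(m in prompt.lower() for m in REASONING_MARKERS):
            marked[account] += 1
    return [a for a, n in totals.items()
            if n >= volume_threshold and marked[a] / n >= marker_ratio]
```

A real system would look at far richer signals (prompt diversity, timing, API key provenance), but the volume-plus-intent shape of the 100,000-prompt campaign described above is what such heuristics are built to catch.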

How model extraction attacks steal AI intellectual property. (Image: Google GTIG)

Although GTIG did not observe direct attacks on frontier models by advanced persistent threat actors, the team identified and thwarted frequent model extraction attempts by private companies and researchers around the world seeking to clone proprietary model logic.

Google’s systems detect these attacks in real time, and the company has implemented defenses to protect its models’ internal reasoning traces.

Malware incorporating AI appears

GTIG observed a malware sample, tracked as HONESTCUE, that uses Gemini’s API to outsource the generation of its own functionality. The malware is designed to evade traditional network-based detection and static analysis through a multi-layered obfuscation approach.

HONESTCUE acts as a downloader and launcher framework that sends prompts via Gemini’s API and receives C# source code in response. The fileless second stage compiles and executes the payload directly in memory, leaving no artifacts on disk.
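The two-stage design described above still leaves static fingerprints in the first-stage binary: it must embed both the LLM API endpoint it calls and references to .NET in-memory compilation. The sketch below is a hypothetical triage rule inspired by that observation; the marker strings are assumptions for illustration, not published HONESTCUE indicators.

```python
# Hypothetical static triage sketch: a binary that both calls out to an
# LLM API and references .NET in-memory compilation deserves a closer look.
GEMINI_ENDPOINT = b"generativelanguage.googleapis.com"  # Gemini API host

# .NET identifiers associated with compiling and loading code in memory
INMEM_COMPILE_MARKERS = (b"CSharpCodeProvider",
                         b"CompileAssemblyFromSource",
                         b"Assembly.Load")

def triage_sample(data: bytes) -> bool:
    """Return True if the sample shows both behaviors at once."""
    calls_llm = GEMINI_ENDPOINT in data
    compiles_in_memory = any(m in data for m in INMEM_COMPILE_MARKERS)
    return calls_llm and compiles_in_memory
```

Either behavior alone is common in legitimate software; it is the combination, mirroring HONESTCUE’s prompt-then-compile flow, that makes a sample worth escalating. Real detections would pair this with behavioral monitoring, since the second stage never touches disk.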

The HONESTCUE malware performs a two-step attack process using Gemini’s API. (Image: Google GTIG)

Separately, GTIG identified COINBAIT, a phishing kit whose construction may have been accelerated by AI code generation tools. The kit was built using the AI-powered platform Lovable AI, masquerading as a major cryptocurrency exchange to collect credentials.

ClickFix campaign exploits AI chat platform

In a new social engineering campaign first observed in December 2025, GTIG saw threat actors abusing the public sharing features of generative AI services such as Gemini, ChatGPT, Copilot, DeepSeek, and Grok to host deceptive content distributing ATOMIC malware targeting macOS systems.

Attackers manipulated AI models to create realistic-looking instructions for common computer tasks and embedded malicious command-line scripts as “solutions.” By creating shareable links to these AI chat recordings, the attacker used a trusted domain to host the first stage of the attack.
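Because the first stage lives on trusted AI-service domains, URL reputation alone will not block it; mail and proxy filters instead need to treat shared-chat links themselves as a signal. The sketch below is a hypothetical check of that kind; the host names and share-path patterns are illustrative assumptions, not a verified or exhaustive list.

```python
# Hypothetical sketch: flag links pointing at the public share endpoints
# of AI chat services, which the ClickFix campaign abused for stage one.
from urllib.parse import urlparse

# Assumed share-URL path prefixes per service (illustrative only)
SHARE_PATTERNS = {
    "chatgpt.com": "/share/",
    "gemini.google.com": "/share/",
    "copilot.microsoft.com": "/shares/",
}

def is_ai_share_link(url: str) -> bool:
    """True if the URL matches a known AI-chat public-share pattern."""
    parsed = urlparse(url)
    prefix = SHARE_PATTERNS.get(parsed.netloc.lower())
    return prefix is not None and parsed.path.startswith(prefix)
```

Flagged links would not be blocked outright (shared chats have legitimate uses) but could be rewritten, sandboxed, or surfaced with a warning, breaking the trusted-domain advantage the attackers rely on.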

The three-step ClickFix attack chain exploiting AI chat platforms. (Image: Google GTIG)

Underground marketplaces thrive on stolen API keys

GTIG’s observations of underground forums in English and Russian indicate a strong demand for AI-enabled tools and services. However, state-sponsored hackers and cybercriminals have struggled to develop custom AI models and instead rely on mature commercial products that they access via stolen credentials.

One such toolkit, Xanthorox, advertised itself as a custom AI for autonomous malware generation and phishing campaign development. GTIG’s investigation revealed that Xanthorox is not a custom-built model but is in fact powered by several commercial AI products, including Gemini, accessed via stolen API keys.
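Since toolkits like Xanthorox run on stolen credentials, finding leaked keys before they reach these marketplaces is a practical countermeasure. Google API keys have a well-known public format (the `AIza` prefix followed by 35 URL-safe characters), which makes them easy to scan for; the sketch below is a minimal example of such a scan, not a complete secret scanner.

```python
# Sketch: scan text (commit diffs, paste dumps, logs) for strings
# matching the well-known Google API key format, a first step toward
# catching keys before they are resold on underground markets.
import re

# Google API keys: "AIza" followed by 35 URL-safe characters
GOOGLE_API_KEY_RE = re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b")

def find_google_api_keys(text: str) -> list[str]:
    """Return all candidate Google API keys found in the text."""
    return GOOGLE_API_KEY_RE.findall(text)
```

Any hit should trigger immediate key rotation; pattern matching finds exposures but cannot tell whether a key has already been harvested.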

Google response and mitigation measures

Google took action against the identified threat actors by disabling accounts and assets associated with the malicious activity. The company has also applied this intelligence to strengthen both its classifiers and its models, enabling them to refuse assistance to similar attacks in the future.

“We are developing AI boldly and responsibly. This means taking proactive steps to stop malicious activity by disabling projects and accounts associated with bad actors, while continually improving our models to make them harder to exploit,” the report states.

GTIG emphasized that despite these developments, APTs and information operations actors have not achieved breakthrough capabilities that fundamentally change the threat landscape.

The findings highlight the evolving role of AI in cybersecurity as both defenders and attackers compete to harness the technology’s capabilities.

For corporate security teams in the Asia-Pacific region, especially where state-sponsored hackers from China and North Korea remain active, this report is an important reminder to strengthen defenses against AI-enhanced social engineering and reconnaissance operations.

(Photo provided by: SCARECROW works collection)

SEE ALSO: Anthropic reveals how AI-orchestrated cyberattacks really work – here’s what businesses need to know

Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expos in Amsterdam, California, and London. This comprehensive event is part of TechEx and co-located with other major technology events. Click here for more information.

AI News is brought to you by TechForge Media. Learn about other upcoming enterprise technology events and webinars.
