State-backed hackers from Iran, China, and North Korea are actively using Google’s Gemini AI chatbot to enhance their cyberattack operations, according to a new report released Wednesday by Google’s Threat Intelligence Group. While the technology is providing productivity gains for malicious actors, it hasn’t yet enabled breakthrough capabilities that fundamentally change the threat landscape.
The report reveals that threat actors are leveraging Gemini for various operational tasks, including generating code, researching potential targets, and identifying network vulnerabilities. Rather than creating entirely new attack methods, the AI technology is allowing hackers to work faster and at higher volumes. Operators of disinformation campaigns are also benefiting from the tool, using it to develop fake personas, translate content, and craft messaging.
Iranian hackers emerged as the most prolific users of Gemini, employing the AI chatbot to craft sophisticated phishing campaigns and conduct reconnaissance on defense experts and organizations. Chinese state-backed actors have focused primarily on using the technology for troubleshooting code and gaining deeper access to target networks. Meanwhile, North Korean hackers have taken a unique approach, using Gemini to craft fake cover letters and research job opportunities as part of an elaborate scheme to secretly place agents into remote IT positions at Western companies.
This North Korean operation aligns with warnings from US officials last year about a mass extortion scheme in which North Korean operatives used false or stolen identities to secure remote positions at American firms. Google emphasized that Gemini’s built-in safeguards have held so far, blocking attempts at more sophisticated misuse, such as extracting information that could be used to manipulate Google’s own products.
The Threat Intelligence Group noted that the AI landscape remains in constant flux, with new models and agentic systems emerging daily. While current large language models (LLMs) are unlikely to enable breakthrough capabilities for threat actors on their own, hackers continue to experiment with new ways to exploit these tools. The findings align with a recent report from the UK’s National Cyber Security Centre, which concluded that while AI will increase the volume and impact of cyberattacks, the overall effect will be uneven across different threat categories.
Key Quotes
“Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities.”
Google’s Threat Intelligence Group stated this in its report, emphasizing that while hackers are benefiting from AI tools, they haven’t achieved breakthrough capabilities that fundamentally change the threat landscape.
“Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume.”
This key finding from Google’s report suggests that AI is acting as a force multiplier for existing attack methods rather than creating entirely new categories of threats, which has important implications for cybersecurity strategy.
“Current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors. We note that the AI landscape is in constant flux, with new AI models and agentic systems emerging daily.”
Google’s cybersecurity unit provided this assessment while acknowledging the rapidly evolving nature of AI technology, suggesting that while current risks are manageable, continuous monitoring is essential.
Our Take
This report marks a critical inflection point in the AI security debate, moving from theoretical concerns to documented real-world exploitation. What’s particularly noteworthy is Google’s transparency in acknowledging that its own AI tools are being used by adversaries—a refreshing departure from the typical corporate reluctance to discuss security vulnerabilities. The fact that safeguards are currently holding suggests that responsible AI development can stay ahead of malicious actors, but the ‘constant flux’ caveat is crucial. As AI models become more capable and autonomous, the window between deployment and exploitation will narrow. The North Korean remote work infiltration scheme is especially concerning, as it demonstrates how AI can enable sophisticated social engineering at scale. This isn’t just about technical vulnerabilities—it’s about AI enabling new operational models for state-sponsored espionage and cybercrime.
Why This Matters
This report is among the most detailed confirmations yet from a leading tech company that state-sponsored hackers are actively weaponizing consumer AI tools for cyberattacks and disinformation campaigns. The findings have significant implications for the AI industry, as they validate long-standing concerns from security experts about the dual-use nature of generative AI technology.
For businesses, this development underscores the escalating cybersecurity arms race where both defenders and attackers leverage the same AI tools. Companies must now contend with adversaries who can operate at higher speeds and volumes, even if the fundamental nature of attacks hasn’t changed. The revelation that North Korean operatives are using AI to infiltrate Western companies through remote work positions highlights new vulnerabilities in the distributed workforce model.
The report also raises important questions about AI safety and responsible deployment. While Google’s safeguards appear to be working, the constant evolution of AI models means security measures must continuously adapt. This could influence future AI regulation and the development of industry standards for preventing malicious use of AI systems.