Microsoft’s threat intelligence team has discovered that state-sponsored hackers from China, Iran, North Korea, and Russia are actively exploiting Google’s Gemini and other AI chatbots for malicious purposes. These threat actors are using AI to enhance their cyberattack capabilities: refining social engineering tactics, crafting more convincing phishing messages in multiple languages, and developing more sophisticated malware and malicious code.

According to the research, North Korean hackers are using AI to build cryptocurrency scams, Iranian groups are employing it to produce more persuasive disinformation campaigns, and Chinese hackers are leveraging it to improve their code development and automate parts of their operations.

The report highlights how AI tools are becoming increasingly integrated into state-sponsored cyber operations, making attacks more efficient and harder to detect. Security experts warn that this trend is likely to accelerate in 2024 as AI becomes more accessible and sophisticated.

The findings underscore the dual-use nature of AI technology and the growing need for stronger safeguards against its misuse. Google has acknowledged these concerns and stated that it is working to implement additional protections against the malicious use of its AI tools.