AI-Powered Scams Are Getting Worse, Cybersecurity Expert Warns

Laura Kankaala, head of threat intelligence at F-Secure, a Finnish cybersecurity company, warns that online scams are becoming increasingly sophisticated—and artificial intelligence is making them even more dangerous. With nearly a decade of experience in cybersecurity, Kankaala has witnessed the evolution of cybercrime from isolated ransomware attacks to widespread, AI-enhanced schemes that target individuals and companies alike.

Cybercrime is almost always financially motivated, whether through ransomware attacks that lock systems until payment is received or through stealing personal data that’s sold on the dark web. As technology has become integral to daily life—with constant smartphone use, remote work, and social media presence—people have become easier targets. “Our data is becoming more valuable, and there are more ways that cybercriminals can benefit from stealing it,” Kankaala explains.

The sophistication of modern scams is alarming. Kankaala’s team has uncovered numerous innovative attack methods, including a Telegram bot that generates malware in different languages based on users’ country codes, Android malware disguised as wedding invitations, and fake profiles created using information from recently deceased people. The barrier to entry for cybercriminals has dramatically lowered thanks to phishing toolkits freely available online, complete with step-by-step instructions and ready-made fake websites that mimic legitimate platforms.

AI is accelerating this threat landscape. Scammers are now using deepfakes, voice clones, and video filters to create highly convincing fraudulent scenarios. Kankaala has documented romance scams where criminals use deepfake video filters to impersonate celebrities during video chats on dating apps. In one case, a CEO’s voice was cloned using AI tools and used to send a WhatsApp voice note requesting money transfers. These AI-powered impersonation scams are becoming increasingly difficult to detect.

Despite the challenges, Kankaala remains optimistic that cybersecurity awareness is improving and that her work helps protect people every day. However, the arms race between security professionals and AI-enhanced cybercriminals continues to intensify.

Key Quotes

“Our data is becoming more valuable, and there are more ways that cybercriminals can benefit from stealing it.”

Laura Kankaala, head of threat intelligence at F-Secure, explains how increased digital exposure has made individuals and companies more vulnerable to cyberattacks as technology has become integral to daily life.

“AI is increasingly being used as a tool for these attacks. It’s creating better-looking scams, while deepfakes, voice clones, and video filters make it easier to fool people into believing things on the internet.”

Kankaala warns about the growing role of artificial intelligence in cybercrime, highlighting how AI tools are making scams more sophisticated and harder to detect than traditional methods.

“Cybercrime is easier to do than ever before, and these toolkits will become more advanced and widely available. It’s a big problem.”

The cybersecurity expert emphasizes how freely available phishing toolkits and malware-as-a-service platforms are lowering the barrier to entry for cybercriminals, democratizing sophisticated attack methods.

“The volunteer still fell for our scam, even knowing she would be hacked.”

Kankaala describes a demonstration for Finnish television in which a participant who knew in advance that she was being targeted still fell victim to the team’s phishing attack, illustrating how effective modern hacking techniques have become.

Our Take

This article reveals a troubling reality: AI has shifted cybersecurity from a technical challenge to a threat to digital trust itself. The most alarming aspect isn’t just that AI enables more sophisticated attacks; it’s that AI puts advanced cybercrime capabilities in the hands of anyone with internet access. When voice cloning and deepfake technology can convincingly impersonate CEOs and celebrities, traditional verification methods collapse. Romance scams using real-time deepfake filters are a particularly insidious evolution, exploiting human emotion and trust at scale. What’s needed now is a rethinking of digital authentication, moving beyond passwords and even biometrics toward multi-layered verification systems that can detect AI-generated content. The cybersecurity industry must leverage AI defensively as aggressively as criminals use it offensively, an arms race that will define digital security for the next decade.

Why This Matters

This story highlights a critical intersection between AI advancement and cybersecurity threats that affects everyone with an online presence. As AI tools become more accessible and sophisticated, they are democratizing cybercrime, enabling even non-technical criminals to launch convincing attacks using deepfakes, voice cloning, and automated malware generation. The implications are profound: traditional security measures based on human verification are becoming obsolete when AI can convincingly mimic voices, faces, and writing styles.

For businesses, this represents an escalating threat to both financial assets and data security, requiring significant investment in AI-powered defense systems. For individuals, it means heightened vulnerability to romance scams, investment fraud, and identity theft. The fact that even a cybersecurity-aware volunteer fell for a staged hack demonstrates how effective these techniques have become. This trend will likely accelerate as AI capabilities improve, forcing a fundamental rethinking of digital trust and authentication methods across industries. The cybersecurity sector must evolve rapidly to counter AI-enhanced threats, creating both challenges and opportunities for innovation in protective technologies.


Source: https://www.businessinsider.com/hacker-online-scams-getting-worse-ai-cybersecurity-expert-2024-9