– By Caroline Wong.
In 2022, a fake video of Ukrainian President Volodymyr Zelensky appeared on Ukrainian television, apparently showing him telling his troops to surrender. It spread quickly across social media. It was a deepfake, created with AI to mimic his face, voice, and mannerisms with eerie fidelity. The video was quickly debunked, but it underscored an important point: AI has fundamentally changed how deception works.
This was not a one-off. AI is accelerating the evolution of cyber threats: transforming phishing emails into highly personalized messages, making bots behave more like humans, and turning social engineering campaigns into sophisticated psychological operations. Meanwhile, defenders are racing to weave AI into their detection, response, and resilience strategies.
In my upcoming book with Wiley, I argue that AI's role in cybersecurity is no longer something on the horizon. This is the battlefield.
From Scripts to Self-Learning Systems
Automation has been part of cyberattacks for years, from brute-force password attempts to bot-driven denial-of-service attacks. But AI has handed attackers something far more powerful: the ability to adapt.
Today's AI-driven attacks adjust on the fly. Bots no longer click and crawl like machines; they mimic human behavior to slip past security controls. They scroll through web pages at a human pace, reproduce the natural rhythm of typing, and add the slight jitter that real hands give to mouse movements. They use tools such as Puppeteer Stealth and Ghost-cursor to hide their automation signatures, and they spread out across residential proxies to blend in with regular traffic patterns.
The outcome: automated actions that look and feel just like a real person.
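To see why signature-based bot detection struggles here, it helps to look at how little code this takes. The sketch below wires together the libraries named above (puppeteer-extra with its stealth plugin, plus ghost-cursor); the URL and selectors are placeholders, and this is an illustration of the pattern, not anyone's actual tooling.

```typescript
// A minimal sketch of how evasive automation is assembled from off-the-shelf
// parts. The libraries are real (puppeteer-extra, puppeteer-extra-plugin-stealth,
// ghost-cursor); the URL and selectors are placeholders.
import puppeteer from "puppeteer-extra";
import StealthPlugin from "puppeteer-extra-plugin-stealth";
import { createCursor } from "ghost-cursor";

// The stealth plugin patches dozens of headless-browser tells,
// e.g. navigator.webdriver and missing browser plugins.
puppeteer.use(StealthPlugin());

async function main(): Promise<void> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto("https://example.com/login");

  // ghost-cursor moves the mouse along curved, jittery, human-like paths
  // instead of teleporting straight to the target element.
  const cursor = createCursor(page);
  await cursor.click("#username");

  // A randomized typing delay; real evasion tooling varies this per keystroke.
  await page.keyboard.type("user@example.com", { delay: 90 + Math.random() * 60 });

  await browser.close();
}

main().catch(console.error);
```

Each piece is commodity open source, which is the point: evasive automation is assembled, not invented.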
Deepfakes: The Intersection of Impersonation and Infrastructure
Generative AI, particularly deepfakes, has taken digital impersonation to a new level of realism. With just a few minutes of publicly available audio and video, attackers can mimic a CEO's voice, fabricate an interview, or even simulate a live video call.
This capability has already been weaponized. Deepfake voicemails and videos are now combined with phishing emails to create multi-channel impersonation attacks. The psychological effect is powerful: when what we see and what we hear match, our brains naturally trust the experience.
Techniques like GANs, autoencoders, and diffusion models have made deepfake creation faster, cheaper, and more scalable. What used to require expert skills is now packaged into easy-to-use tools with cloud-based APIs.
The question is no longer just “Is this real?” It is how fast a fake can spread, and whether we can catch it in time.
A New Era of Phishing and Social Engineering
Phishing was once pretty straightforward to identify: you’d see misspellings, odd formatting, and weird sender names. AI has gotten rid of those red flags.
Armed with open-source intelligence and large language models, attackers can craft emails that sound just like an executive, reference recent company events, and even include realistic calendar links or document attachments. These attacks are no longer generic; they are contextual.
AI also makes phishing multilingual. Translation models do more than convert text from one language to another; they capture idioms, tone, and regional nuance. Voice cloning tools extend this capability to audio, enabling real-time phone scams in multiple languages.
Standard security awareness training is no longer enough. The skill that matters isn't spotting bad grammar; it's recognizing when someone is trying to manipulate your trust.
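Defensive tooling is shifting the same way. As a rough illustration of what scoring for manipulation rather than grammar looks like, here is a hypothetical keyword heuristic; the patterns and weights are invented for this example, and production systems rely on trained language models rather than regex lists.

```typescript
// Hypothetical intent-scoring sketch: instead of looking for bad grammar,
// score messages for the persuasion levers social engineers rely on.
// Patterns and weights are illustrative assumptions only.

const SIGNALS: Array<{ name: string; pattern: RegExp; weight: number }> = [
  { name: "urgency",   pattern: /\b(urgent|immediately|right away|before (end of day|EOD))\b/i, weight: 2 },
  { name: "authority", pattern: /\b(CEO|CFO|legal|compliance) (asked|needs|requires)\b/i, weight: 3 },
  { name: "secrecy",   pattern: /\b(keep this (quiet|confidential)|don'?t tell|between us)\b/i, weight: 3 },
  { name: "payment",   pattern: /\b(wire transfer|gift cards?|update .* bank|invoice attached)\b/i, weight: 2 },
];

export function manipulationScore(message: string): { score: number; hits: string[] } {
  const hits = SIGNALS.filter((s) => s.pattern.test(message));
  return {
    score: hits.reduce((sum, s) => sum + s.weight, 0),
    hits: hits.map((s) => s.name),
  };
}

// Example: scores high on authority + urgency + payment, despite flawless prose:
// manipulationScore("The CFO needs this wire transfer completed immediately.")
```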
Plug-and-Play Cybercrime
Cybercrime has become plug-and-play. Capabilities that once required real technical skill are now packaged, sold, and supported like legitimate software.
One of the most concerning developments is the rise of Bots-as-a-Service (BaaS) and AI-driven credential stuffing platforms. Tools such as OpenBullet2 make it simple for less experienced attackers to run large-scale campaigns. Paired with CAPTCHA-solving services, which often use machine learning or human CAPTCHA farms, these platforms can scale up very quickly.
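The defensive counter-signal is worth sketching. Credential stuffing has a telltale shape: one source cycling through many distinct usernames in a short window. Below is a minimal, hypothetical sliding-window check; the window size and threshold are assumptions chosen for illustration.

```typescript
// Minimal credential-stuffing heuristic: flag a source IP that attempts many
// distinct usernames within a short window. Window and threshold values are
// illustrative assumptions; real deployments tune these and add more signals
// (proxy/ASN reputation, device fingerprints, impossible travel, etc.).

const WINDOW_MS = 5 * 60 * 1000; // 5-minute sliding window
const MAX_DISTINCT_USERS = 20;   // distinct usernames per IP before flagging

type Attempt = { username: string; at: number };
const attemptsByIp = new Map<string, Attempt[]>();

export function recordLoginAttempt(ip: string, username: string, now = Date.now()): boolean {
  // Keep only attempts that are still inside the window, then add this one.
  const recent = (attemptsByIp.get(ip) ?? []).filter((a) => now - a.at < WINDOW_MS);
  recent.push({ username, at: now });
  attemptsByIp.set(ip, recent);

  // Many different usernames from one IP is the classic stuffing signature.
  const distinctUsers = new Set(recent.map((a) => a.username)).size;
  return distinctUsers > MAX_DISTINCT_USERS; // true = route to CAPTCHA/step-up/block
}
```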
How Defenders Can Win—If They Move Fast Enough
Defenders aren’t powerless. In fact, they have one major advantage: data.
Security teams can access telemetry from internal systems—endpoint logs, authentication events, network flows—that attackers can’t see. With the right AI tooling, this data can be used to model “normal” behavior and flag deviations in real time.
But defenders need to evolve quickly. Static rule-based detection systems are already being outpaced. We need adaptive, learning-based systems that update themselves based on behavioral patterns and threat intelligence feeds. In practice, that means:
- Behavioral modeling: Training AI systems on how legitimate users behave, so deviations stand out clearly (see the sketch after this list).
- Intent detection: Leveraging natural language models to spot social engineering attempts based on linguistic patterns and context.
- Automated response: Deploying AI not just to detect threats but to contain them automatically—quarantining accounts, flagging anomalies, initiating secondary verifications.
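To make the first and third bullets concrete, here is a minimal sketch of per-user behavioral modeling with an automated step-up response. Everything in it is illustrative: the features, thresholds, and the requireStepUpVerification hook are assumptions, and real deployments use far richer models than a per-feature z-score.

```typescript
// Per-user baseline with incremental z-score anomaly detection (Welford's
// algorithm) and an automated step-up response. All names and thresholds
// here are illustrative assumptions, not a production design.

interface LoginEvent {
  userId: string;
  hourOfDay: number;      // 0-23; note: circular, a real model handles wraparound
  failedAttempts: number; // failures immediately preceding this success
}

interface Baseline { mean: number; m2: number; count: number } // Welford accumulators

const baselines = new Map<string, Baseline>();

// Incrementally update a user's baseline for one feature.
function update(b: Baseline, x: number): void {
  b.count += 1;
  const delta = x - b.mean;
  b.mean += delta / b.count;
  b.m2 += delta * (x - b.mean);
}

function zScore(b: Baseline, x: number): number {
  if (b.count < 10) return 0; // not enough history to judge
  const std = Math.sqrt(b.m2 / (b.count - 1)) || 1; // avoid divide-by-zero on flat history
  return Math.abs(x - b.mean) / std;
}

// Hypothetical response hook: e.g. force MFA re-verification or quarantine.
function requireStepUpVerification(userId: string, reason: string): void {
  console.log(`step-up required for ${userId}: ${reason}`);
}

export function scoreLogin(e: LoginEvent): void {
  const key = `${e.userId}:hour`;
  const b = baselines.get(key) ?? { mean: 0, m2: 0, count: 0 };

  // Score before updating, so an anomalous event doesn't poison its own baseline.
  const z = zScore(b, e.hourOfDay);
  if (z > 3 || e.failedAttempts > 5) {
    requireStepUpVerification(e.userId, `z=${z.toFixed(1)}, failures=${e.failedAttempts}`);
  }

  update(b, e.hourOfDay);
  baselines.set(key, b);
}
```

The design choice that matters is the feedback loop: the baseline keeps learning from legitimate behavior, so detection adapts as users do, which is exactly what static rules cannot offer.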
The Real Stakes: Trust and Resilience
AI is doing more than changing how attacks are carried out. It is undermining the most basic element of cybersecurity: trust.
When anyone can create a realistic video, audio clip, or email that appears to come from someone we trust, how do we determine what is real? How do we protect communication, identity, and intent?
The answer isn't fear; it's resilience. That means transparency about how AI detection tools work and how their decisions are made. It means collaboration across security, legal, product, and communications teams. And it means ongoing education for employees and users alike, covering not only phishing but also synthetic media and algorithmic manipulation.
AI is changing the game for offense, but it can do the same for defense. Cybersecurity teams that treat AI as a paradigm shift, rather than just another tool, will be the ones positioned to succeed in the coming decade.
We are entering an arms race fueled by automation and intelligence. The attackers are already building. The question is: are we?
