We all know that most—if not all—generative AI (GenAI) and large language models (LLMs) hallucinate from time to time. That’s why the golden rule is to double-check any important fact they give you. But let’s be honest: how many of us actually do this every time?
Now add to this a behavior we've seen for years: some users are too lazy to type full URLs into the browser. Instead, they just punch "MyBank" or "Netflix" into a search engine and click the first link they see. That habit has long been exploited by phishing sites using search engine optimization (SEO) tricks to rise to the top of the results.
Fast-forward to 2025: some users are now switching from traditional search engines to AI-powered assistants for answers. Naturally, the same risky habit is carrying over into this new era, and the stakes may be even higher.
To test how risky this shift is, researchers at Netcraft ran a simple but revealing experiment. They asked several LLMs a basic question of the kind a regular user might type: "Where do I log into [Bank X]?" Importantly, they used no prompt injection, no adversarial inputs, not even suggestive phrasing; just plain, straightforward questions.
The results? 34% of the responses pointed to unofficial websites, including actual phishing pages. That's not just a rounding error; it's a major security concern.
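For illustration, here is a minimal sketch of the kind of guard a cautious user, or a client application wrapping an assistant, could apply before following a suggested login link: compare the link's hostname against an allowlist of known-official domains. The domains and URLs below are placeholders, not data from the study.

```python
from urllib.parse import urlsplit

# Hypothetical allowlist: in practice this should come from the brand
# itself or a curated source, never from the model's own answer.
OFFICIAL_DOMAINS = {"netflix.com", "mybank.example"}

def is_official(url: str) -> bool:
    """Accept an allowlisted domain itself or any of its subdomains."""
    host = (urlsplit(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

# Check an assistant-suggested link before following it.
for url in ("https://www.netflix.com/login",
            "https://netflix-login-secure.example/signin"):
    print(url, "->", "on allowlist" if is_official(url) else "NOT on allowlist")
```

Note that the check matches whole domain labels, not substrings: a lookalike such as netflix-login-secure.example is rejected even though it contains the brand name.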
Attackers are clearly adapting. Just as they've spent years gaming search engine rankings with SEO tactics, they're now targeting the web scrapers and training data that feed AI models. If they can get their malicious pages into a model's training set, or even referenced indirectly, they increase the odds that someone, somewhere, will be sent to a fake site.
And it’s not just URLs. There have already been documented cases where LLMs suggested non-existent or malicious package names to developers looking for a quick import in Python, JavaScript, or other languages. A developer in a rush might paste the package name directly into a terminal, unknowingly installing a backdoor.
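Before running that install command, a quick lookup can catch many such cases. Below is a minimal sketch, assuming Python and the third-party requests library, that asks PyPI's public JSON API whether a package name actually exists and when its first release appeared. Hallucinated names typically don't exist at all, and opportunistically registered squats tend to be very new. The thresholds for suspicion are left to the reader; this is an illustration, not exhaustive vetting.

```python
import sys
import requests  # third-party; assumed available

def check_package(name: str) -> None:
    # PyPI's JSON API returns 404 for names that have never been published.
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"'{name}' is not on PyPI -- possibly a hallucinated name.")
        return
    resp.raise_for_status()
    data = resp.json()
    # A very recent first upload can be a sign of an opportunistic squat.
    uploads = [f["upload_time_iso_8601"]
               for files in data.get("releases", {}).values()
               for f in files]
    first_seen = min(uploads) if uploads else "unknown"
    print(f"'{name}' exists; first release uploaded: {first_seen}")
    print(f"Review it before installing: {data['info']['package_url']}")

if __name__ == "__main__":
    check_package(sys.argv[1])
```

A brand-new package with a familiar-sounding name deserves extra scrutiny before it goes anywhere near a terminal.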
AI is a powerful tool—but like any tool, it can be dangerous when used carelessly. The responsibility now lies not just with model developers and AI companies, but also with end users to stay cautious and critical. As with any form of automation, trust should be earned, not assumed.