AI phishing attacks have changed the rules. For years, awareness training taught people to look for bad grammar, clumsy phrasing, and generic greetings.
AI has changed the economics and the quality of phishing at the same time, and those old tells are gone. AI-generated phishing is personalised, contextually accurate, professionally written, and arrives in your inbox looking completely legitimate. It references your real job title, your real projects, your real colleagues. It matches the tone and register of genuine internal communication. And it bypasses spam filters that were built to catch the old kind of phishing, not this kind.
Traditional spear phishing — targeted, personalised attacks — took hours of manual research per target. AI can produce a bespoke, contextually accurate phishing email for every person in your organisation in minutes. The economics of targeted attacks have collapsed. Every employee is now a viable spear phishing target.
How attackers build the email
The research phase is automated and fast. Attackers use tools to scrape LinkedIn profiles, company websites, press releases, news mentions, and social media. From a single LinkedIn profile they can determine: your job title, your seniority, which projects you've publicly referenced, who your manager is, which clients you work with, and what your professional tone and interests look like.
That data is fed to an AI model. The output is an email that references a real project, addresses you by name, names a real colleague as the apparent sender, uses the tone and vocabulary of your industry, and makes a request that is entirely plausible in your professional context. There are no red flags for a spam filter to catch, because the text is grammatically perfect and contextually appropriate. There are no red flags for a human to spot, because everything in the email appears legitimate.
The AI phishing attack chain
Why spam filters don't catch it
Traditional email security tools operate on pattern recognition. They look for known bad domains, suspicious link structures, language patterns associated with phishing templates, mismatched sender information, and attachment signatures. AI-generated phishing defeats most of these heuristics simultaneously.
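To make the mismatch concrete, here is a minimal sketch of the template-matching approach legacy filters rely on. The rules and both sample emails are entirely hypothetical, invented for illustration; the point is that fluent, novel text simply matches nothing.

```python
import re

# Hypothetical rules of the kind template-era filters rely on:
# stock greetings, crude urgency markers, lottery-scam phrasing.
LEGACY_RULES = [
    re.compile(r"dear (customer|sir/madam)", re.I),
    re.compile(r"verify your account immediately", re.I),
    re.compile(r"you have won", re.I),
]

def legacy_filter_flags(email_text: str) -> bool:
    """Return True if any template-era rule matches the email body."""
    return any(rule.search(email_text) for rule in LEGACY_RULES)

old_style = "Dear Customer, verify your account immediately or lose access!"
ai_style = ("Hi Sarah, following up on the rollout we discussed on Tuesday, "
            "could you approve the revised supplier invoice before the 3pm "
            "call? Thanks, James")

print(legacy_filter_flags(old_style))  # True: matches a known template
print(legacy_filter_flags(ai_style))   # False: novel, fluent text matches nothing
```

No rule fires on the second email because there is nothing template-shaped to fire on, which is exactly the gap AI-generated phishing walks through.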
The text is novel — it isn't a template that's been seen before. The language is grammatically correct and contextually appropriate. If the attacker uses a newly registered domain with a slight variation, standard filters may not flag it. The entire premise of AI phishing is that it looks exactly like a legitimate email — and it succeeds because it does.
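The lookalike-domain part of the problem is at least partially detectable. A minimal sketch, using plain Levenshtein edit distance against a hypothetical allow-list of domains the organisation actually uses, shows how a "one character off" sender domain can be flagged even when it has never been seen before:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical allow-list of domains the organisation legitimately uses.
KNOWN_DOMAINS = {"example-corp.com"}

def lookalike_domain(sender_domain: str, max_distance: int = 2) -> bool:
    """Flag domains close to, but not exactly matching, a known domain."""
    return any(
        0 < edit_distance(sender_domain, known) <= max_distance
        for known in KNOWN_DOMAINS
    )

print(lookalike_domain("example-corp.com"))   # False: exact match, legitimate
print(lookalike_domain("examp1e-corp.com"))   # True: one character swapped
print(lookalike_domain("totally-other.net"))  # False: not a near miss
```

Real email security tooling combines checks like this with richer signals such as domain registration age and DMARC alignment; the sketch only illustrates that near-miss domains are catchable when exact-match filtering is not enough.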
Spear phishing at scale
What has changed fundamentally is the economics. Spear phishing, the highly targeted and personalised kind of attack, used to be reserved for high-value targets because each email required significant manual effort: a skilled attacker might craft five to ten bespoke emails per day, while AI can produce thousands. Businesses that previously weren't attractive enough to justify manual spear phishing are now squarely in scope. The SME that would have been ignored in favour of a bank is now a viable target, because targeting it costs the attacker almost nothing.
- Traditional mass phishing: Generic, low-quality, high volume — easy to train against
- Manual spear phishing: Targeted, high-quality, low volume — historically reserved for high-value targets
- AI spear phishing: Targeted, high-quality, unlimited volume — every employee is now a spear phishing target
AI-generated phishing emails reference real projects, real colleagues, and real company news — making scepticism feel rude rather than prudent.
Detect, Assess, Defend
How BBS helps with this
- Staff Awareness Training — We deliver scenario-based training on recognising AI-generated personalised phishing — updated for the current threat, not the threats of five years ago. Your staff learn what the new attacks actually look like.
- AI Security Gap Assessment — We assess your email and communication channel exposure, audit your public data footprint, and identify which individuals or roles present the most attractive targets to an AI-powered attacker.
- Simulated Phishing Exercises — We test your team with AI-quality phishing simulations that reflect current attacker methods, identifying who needs additional training and how your reporting culture holds up under pressure.
- AI Acceptable Use Policy — We establish clear incident reporting procedures so suspected AI-generated phishing is always logged, investigated, and used to build organisational resilience rather than quietly forgotten.