AI phishing attacks have changed the rules. For years, security training taught people to spot phishing by its bad grammar, generic greetings, and clumsy wording.

AI has changed the economics and the quality of phishing at the same time. The old tells are gone. AI-generated phishing is personalised, contextually accurate, professionally written, and arrives in your inbox looking completely legitimate. It references your real job title, your real projects, your real colleagues. It matches the tone and register of genuine internal communication. And it bypasses spam filters that were built to catch the old kind of phishing — not this kind.

The scale shift

Traditional spear phishing — targeted, personalised attacks — took hours of manual research per target. AI can produce a bespoke, contextually accurate phishing email for every person in your organisation in minutes. The economics of targeted attacks have collapsed. Every employee is now a viable spear phishing target.

How attackers build the email

The research phase is automated and fast. Attackers use tools to scrape LinkedIn profiles, company websites, press releases, news mentions, and social media. From a single LinkedIn profile they can determine: your job title, your seniority, which projects you've publicly referenced, who your manager is, which clients you work with, and what your professional tone and interests look like.

That data is fed to an AI model. The output is an email that references a real project, is addressed to your actual name, mentions a real colleague as the apparent sender, uses the tone and vocabulary of your industry, and makes a request that is entirely plausible in your professional context. There are no red flags for a spam filter to catch, because the text is grammatically perfect and contextually appropriate. There are no red flags for a human to spot, because everything in the email appears legitimate.

The AI phishing attack chain

How AI-powered phishing is constructed and delivered:

  • AI scrapes public data about the target: LinkedIn, news, company website, social media.
  • It generates a personalised, contextual email that references real projects and real colleagues, in the right tone and register.
  • The email bypasses spam filters, a staff member clicks, and credentials are stolen or malware is installed.

Why spam filters don't catch it

Traditional email security tools operate on pattern recognition. They look for known bad domains, suspicious link structures, language patterns associated with phishing templates, mismatched sender information, and attachment signatures. AI-generated phishing defeats most of these heuristics simultaneously.

The text is novel — it isn't a template that's been seen before. The language is grammatically correct and contextually appropriate. If the attacker uses a newly registered domain with a slight variation, standard filters may not flag it. The entire premise of AI phishing is that it looks exactly like a legitimate email — and it succeeds because it does.
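The gap can be seen in a toy version of the heuristics described above. The rules, phrase list, domains, and example messages below are all invented for illustration; real filters are far more sophisticated, but they fail for the same structural reason: every signal they check for is absent from a fluent, novel, personalised email.

```python
# Toy illustration: template-era heuristics flag an old-style phishing
# email several times over, but find nothing in a fluent, personalised one.
# All rules, domains, and messages here are invented for illustration.

PHISHING_PHRASES = {"dear customer", "verify your account", "urgent action required"}
KNOWN_BAD_DOMAINS = {"secure-login-update.example"}

def heuristic_flags(sender_domain: str, body: str) -> list[str]:
    """Return the list of template-era heuristics the message trips."""
    flags = []
    text = body.lower()
    if sender_domain in KNOWN_BAD_DOMAINS:
        flags.append("known bad domain")
    for phrase in PHISHING_PHRASES:
        if phrase in text:
            flags.append(f"template phrase: {phrase!r}")
    if text.count("!") >= 3:
        flags.append("excessive urgency markers")
    return flags

old_style = ("Dear customer! Urgent action required! "
             "Verify your account now!")
ai_style = ("Hi Sam, following up on the Q3 migration we discussed with "
            "Priya. Could you review the updated access list before the "
            "Thursday call? Link below.")

print(heuristic_flags("secure-login-update.example", old_style))  # several flags
print(heuristic_flags("company-portal.example", ai_style))        # []
```

The second message trips nothing, not because the filter is badly written, but because every feature it keys on is a property of the old attacks.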

"The old tells are gone. Security-aware staff who would have spotted the old phishing are clicking the new kind — because it looks exactly like a real email."

Spear phishing at scale

What has changed fundamentally is the economics. Spear phishing — highly targeted, personalised attacks — used to be reserved for high-value targets because they required significant manual effort. A skilled attacker might craft five to ten bespoke emails per day. AI can produce thousands. This means that businesses which previously weren't attractive enough targets to justify manual spear phishing are now absolutely within scope. The SME that would have been ignored in favour of a bank is now a viable target, because targeting it costs nothing.


AI-generated phishing emails reference real projects, real colleagues, and real company news — making scepticism feel rude rather than prudent.

Detect, Assess, Defend

Defending against AI-enhanced phishing: Detect, Assess, Defend.

Detect
  • Simulated phishing exercises: test staff with AI-quality simulations
  • Email reporting button: one-click reporting for suspicious messages
  • Anomalous login alerts: flag unusual account access after a click

Assess
  • Public data footprint: what can an attacker learn about your staff from LinkedIn?
  • Multi-factor authentication: is MFA deployed, so stolen credentials can't be used?
  • Staff phishing awareness: when was the last training, and how good was it?

Defend
  • MFA on all accounts: credentials alone are not enough
  • Staff phishing training: scenario-based, updated for AI methods
  • Email authentication (DMARC/DKIM/SPF): stop spoofed sender domains
  • Incident reporting culture: no blame for clicking, only for not reporting
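The email-authentication layer above is published as DNS TXT records. A minimal sketch for a hypothetical domain (the domain, mail host, DKIM selector, and reporting address are placeholders to adapt, and the DKIM public key is elided):

```text
; SPF: only the listed servers may send mail for example.com
example.com.                IN TXT "v=spf1 include:_spf.example-mailhost.com -all"

; DKIM: public key used to verify message signatures (selector "s1" is arbitrary)
s1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key-data>"

; DMARC: reject mail that fails SPF/DKIM alignment, and send aggregate reports
_dmarc.example.com.         IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

A common rollout path is to start at p=none (monitor only), move to p=quarantine, and tighten to p=reject once the aggregate reports confirm that legitimate mail is passing.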

How BBS helps with this

  • Staff Awareness Training — We deliver scenario-based training on recognising AI-generated personalised phishing — updated for the current threat, not the threats of five years ago. Your staff learn what the new attacks actually look like.
  • AI Security Gap Assessment — We assess your email and communication channel exposure, audit your public data footprint, and identify which individuals or roles present the most attractive targets to an AI-powered attacker.
  • Simulated Phishing Exercises — We test your team with AI-quality phishing simulations that reflect current attacker methods, identifying who needs additional training and how your reporting culture holds up under pressure.
  • AI Acceptable Use Policy — We establish clear incident reporting procedures so suspected AI-generated phishing is always logged, investigated, and used to build organisational resilience rather than quietly forgotten.