AI GDPR compliance is something most businesses haven't considered. A staff member pastes a client's name and email into ChatGPT to draft a follow-up, gets a polished reply in seconds, and moves on. Routine. Harmless.

Except that was almost certainly a GDPR breach. Not a minor technicality. A reportable data protection incident that, depending on the data involved, may carry ICO enforcement risk.

This is happening in businesses across the UK every day — not through negligence or bad intent, but because the compliance frameworks that govern how personal data can be shared with third parties simply haven't kept pace with how quickly AI tools have been adopted.

The core problem

Most AI tools are operated by US-based companies. Sending personal data to them constitutes a data transfer to a third-party processor. Without a valid Data Processing Agreement and a lawful transfer mechanism in place, that transfer is unlawful under UK GDPR — regardless of how routine it felt.

Why pasting data into an AI tool is a GDPR issue

UK GDPR sets out clear rules for what happens when a business shares personal data with a third party that processes it on the business's behalf. Under Article 28, any such arrangement must be governed by a Data Processing Agreement: a formal contract specifying what data is processed, for what purpose, under whose instruction, and with what safeguards.

When a staff member pastes a client's name and email into ChatGPT, they are sending that personal data to OpenAI's servers. OpenAI is acting as a data processor. Without a signed DPA between your business and OpenAI, that processing arrangement breaches Article 28. Full stop.

Then there is the question of international transfers. Most major AI providers, including OpenAI, Google, and Microsoft's consumer AI services, operate infrastructure in the United States. Under Article 44 of UK GDPR, personal data may only be transferred outside the UK if an appropriate transfer mechanism is in place. The UK has its own adequacy framework and transfer tools such as the International Data Transfer Agreement (IDTA). If none of those mechanisms apply to your use of the AI tool, the transfer is independently unlawful, on top of the Article 28 violation.

These are not obscure compliance edge cases. They are foundational requirements of UK data protection law. The fact that millions of businesses are currently in breach does not make the breach lawful — it makes the enforcement risk systemic.

The ICO's position on AI and personal data

In 2023, the Information Commissioner's Office updated their detailed guidance on AI and data protection. Their position is unambiguous: organisations that use AI tools to process personal data must comply with the full suite of UK GDPR obligations, including transparency, lawful basis, data minimisation, and the processor agreement requirements under Article 28.

The ICO has since made clear that AI data practices are an active area of supervisory attention. They have investigated AI providers directly and indicated that businesses using AI tools without appropriate safeguards should consider themselves within scope of enforcement activity.

Crucially, the 72-hour breach notification obligation applies here. If personal data is sent to an AI tool without a valid DPA — and that constitutes a breach of data protection law — organisations are required to assess whether the incident meets the threshold for mandatory reporting to the ICO, and potentially to the affected data subjects as well. Most businesses that have been pasting client data into AI tools have made no such assessment.

"The ICO has been unambiguous: AI tool usage that involves personal data is subject to full UK GDPR compliance — and the clock on breach notification doesn't stop because you didn't know it was a breach."
The compliance gap: how a routine action becomes a reportable breach

  1. Staff pastes personal data into an AI tool (customer names, employee records, contracts).
  2. The data is sent to a US-based AI server (no DPA, no UK adequacy decision reviewed).
  3. GDPR Article 28 / 44 exposure (potential reportable breach; ICO enforcement).

What counts as personal data in the AI context?

One of the most common misunderstandings is the scope of what qualifies as personal data under UK GDPR. It is not limited to sensitive categories like health records or financial information. The definition is deliberately broad: any information that relates to an identified or identifiable individual.

In an AI usage context, that includes:

  • client and customer names and email addresses
  • employee records
  • contract references that name a counterparty
  • summaries of sales calls or meetings that mention who attended

Staff often assume that if they are not sharing "sensitive" data such as medical records, they are fine. They are not. Every item on that list relates to an identifiable individual, and that is all the definition requires.
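To make that concrete, here is a minimal first-pass check, sketched in Python using only the standard library, that flags the most obvious personal-data markers before text is pasted into an AI tool. The patterns and the `check_before_paste` helper are illustrative assumptions rather than a real detector: bare names, indirect identifiers, and contract references will sail straight past a regex, which is one reason filtering cannot replace policy and training.

```python
import re

# Illustrative patterns only: real personal-data detection needs far more
# than regexes (bare names, indirect identifiers, context).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
UK_PHONE = re.compile(r"(?:\+44\s?\d{4}|\b0\d{4})\s?\d{3}\s?\d{3}\b")

def check_before_paste(text: str) -> list[str]:
    """Return reasons this text may contain personal data."""
    findings = []
    if EMAIL.search(text):
        findings.append("contains an email address")
    if UK_PHONE.search(text):
        findings.append("contains what looks like a UK phone number")
    return findings

# A perfectly routine prompt, and still personal data under UK GDPR.
prompt = "Draft a follow-up to Jane Smith (jane.smith@example.co.uk) about the renewal."
print(check_before_paste(prompt))
# ['contains an email address']
```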


Most GDPR breaches involving AI tools aren't caused by hackers. They happen when well-meaning staff try to do their jobs faster.

Detect, Assess, Defend — closing the gap

Closing this compliance gap is a three-phase exercise. Detection establishes where personal data is currently flowing into AI tools. Assessment determines the severity of exposure. Defence builds the controls that prevent unlawful processing from recurring.
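Detection in particular lends itself to partial automation. Below is a minimal sketch, not a turnkey audit, that assumes your firewall or proxy can export DNS or web traffic as a CSV with timestamp, user, and domain columns; the file name `dns_export.csv` and the domain watchlist are illustrative assumptions. The point it demonstrates is that shadow AI usage tends to leave a visible trail in egress logs.

```python
import csv
from collections import Counter

# Hand-maintained watchlist of AI tool domains (extend as needed).
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "api.openai.com",
    "gemini.google.com", "claude.ai", "copilot.microsoft.com",
}

def audit_ai_usage(log_path: str) -> Counter:
    """Count AI-tool lookups per user from a proxy/DNS log export.

    Assumes a CSV with columns: timestamp, user, domain.
    """
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in audit_ai_usage("dns_export.csv").most_common():
        print(f"{user:20} {domain:30} {count:5}")
```

Anything this surfaces feeds straight into the assessment phase: which of those tools have DPAs, and what data categories are flowing to them.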

GDPR AI compliance: Detect, Assess, Defend

Detect
  • AI tool usage audit: which tools are staff using?
  • DPA coverage check: which tools have valid DPAs?
  • Data category review: what data types are being shared?

Assess
  • Which tools handle personal data? Map tool usage to data types.
  • Consumer or enterprise accounts? Account type affects DPA availability.
  • Are data subjects informed? Privacy notices may need updating.

Defend
  • DPAs with approved vendors, signed before any data flows.
  • A data classification policy defining what can go into AI tools.
  • A ban on personal data in AI prompts, enforced through the acceptable use policy (a minimal enforcement sketch follows this list).
  • Privacy notices updated to inform data subjects of AI processing.
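For in-house tooling that calls AI APIs directly, the prompt ban can be enforced in code as well as on paper. A minimal sketch follows, reusing the same kind of illustrative email check as earlier; `send_to_ai` is a stand-in for whatever client call your application actually makes, not a real library function.

```python
import re

# Same illustrative check as the earlier sketch; a real gate would use
# a proper personal-data classifier, not a single regex.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

class PersonalDataError(Exception):
    """Raised when a prompt appears to contain personal data."""

def guarded_prompt(text: str, send_to_ai) -> str:
    """Refuse to forward a prompt that trips the personal-data check.

    `send_to_ai` is a placeholder for your real API client call.
    """
    if EMAIL.search(text):
        raise PersonalDataError(
            "Blocked by acceptable use policy: prompt contains an email address"
        )
    return send_to_ai(text)

# Example: any callable works as the stand-in client.
try:
    guarded_prompt("Summarise the contract for jane.smith@example.co.uk",
                   send_to_ai=lambda t: t)
except PersonalDataError as e:
    print(e)
```

A gate like this does not make the processing lawful on its own; it simply stops the most obvious personal data leaving the building while the DPAs and policies catch up.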

It is worth noting that some AI providers do offer GDPR-compliant enterprise configurations — including DPAs, data residency options, and commitments not to use inputs for model training. The problem for most SMEs is that they are on consumer accounts, which offer none of these protections. Moving to enterprise configurations, or identifying AI tools that offer compliant terms on all tiers, is often a practical first step.

How BBS helps with this

  • GDPR & AI Privacy Integration — Full AI tool stack review for data processing compliance and DPA coverage gaps. We identify every tool your staff are using that handles personal data and assess whether the legal basis for that processing is in place.
  • AI Acceptable Use Policy — GDPR-aligned data classification rules defining what can and cannot go into AI tools. Staff get clear, practical guidance — not a policy document nobody reads.
  • Shadow AI Discovery — Identify which tools are handling personal data without proper controls. Most businesses are surprised by how many tools are in use outside of IT's awareness.
  • Vendor Configuration Review — Verify DPAs and processing agreements across your approved AI stack. We check not just whether a DPA exists, but whether it is fit for purpose and covers the data categories you are actually processing.