The EU AI Act affects UK businesses directly. There's a law already on the books: the EU AI Act. And the clock is already running.
The quick version — what it is
The EU AI Act is the world's first comprehensive law governing how AI systems can be used. It doesn't ban AI. What it does is create a tiered system: the riskier the use of AI, the stricter the obligations — and the bigger the consequences for getting it wrong.
It entered into force in August 2024, with obligations phasing in from there. The main obligations for high-risk AI apply from August 2026. That sounds far away. It isn't — because the documentation, governance, and risk assessment work that compliance requires takes months to build properly. Starting in June 2026 isn't starting early. It's starting late.
Why does this apply to UK businesses?
Short answer: because the law applies to any AI system that's used in the EU or affects EU citizens — regardless of where the company building or deploying it is based. Post-Brexit UK businesses that sell to, employ, or serve EU customers are not exempt. If your AI touches EU data or EU people, the Act has reach.
The four risk tiers — where does your AI sit?
The Act divides AI into four categories based on risk. The obligations — and the potential penalties — scale accordingly. Here's how they stack up, from most to least risky:
- Unacceptable risk: banned outright. Practices such as social scoring and manipulative AI that exploits vulnerabilities are prohibited.
- High risk: heavily regulated. AI used in areas such as employment, credit, education, critical infrastructure, and law enforcement carries the Act's strictest obligations, including risk management, documentation, human oversight, and registration.
- Limited risk: transparency obligations. Systems like chatbots must make clear to users that they are interacting with AI.
- Minimal risk: no new obligations. Tools such as spam filters and AI in video games fall here.
Most businesses that have adopted AI tools in the last two years sit in the limited risk category at minimum — and many touch high risk without realising it, particularly if they use AI in any HR, credit, or customer decisioning context.
The fines — bigger than GDPR
These aren't theoretical numbers. The fine structure is deliberately calibrated to make non-compliance more expensive than compliance — even for large organisations.
| Violation type | Maximum fine (whichever is higher) |
|---|---|
| Unacceptable risk violations | €35m or 7% of global turnover |
| High-risk violations | €15m or 3% of global turnover |
| Other violations (including false information to regulators) | €7.5m or 1% of global turnover |
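The "whichever is higher" mechanic is worth making concrete, because for large firms the percentage cap dominates. A minimal sketch (the turnover figures are hypothetical, chosen only to illustrate the calculation):

```python
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum possible fine: the higher of the fixed cap or the
    percentage of global annual turnover, as the Act's fine tiers work."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Hypothetical firm with €2bn global turnover, unacceptable-risk violation:
# 7% of €2bn is €140m, so the percentage cap applies, not the €35m floor.
large_firm = max_fine_eur(2_000_000_000, 35_000_000, 0.07)

# Hypothetical firm with €100m turnover: 7% is €7m, so the €35m floor applies.
smaller_firm = max_fine_eur(100_000_000, 35_000_000, 0.07)
```

The point of the design is visible in the second case: even a modest-turnover business faces the full fixed cap, so scale is no shield.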
Enforcement ramps up progressively through 2026 — but "not yet enforced" does not mean "not yet in force." The obligations exist now. The documentation you fail to create today is the evidence gap that hurts you later.
The businesses that handle EU AI Act compliance well start now — not when a regulator asks questions.
Seven things you need to do now
You don't need to have perfect compliance overnight. But you do need to have started. Here's the practical sequence:
1. Map your AI. List every AI tool your business uses or has built, including third-party tools embedded in your stack. Most businesses don't have this list. That's the first problem.
2. Classify your risk tier. Based on what those tools do and who they affect. HR screening software? High risk. Customer chatbot? Limited risk. Probably. The "probably" is exactly why classification needs to be done properly.
3. Check your vendors. If a tool you use is high-risk, your vendor needs to be compliant. And you need evidence of that — not just a verbal assurance. Ask for documentation.
4. Draft or update your AI policy. The Act requires documented governance. An AI Acceptable Use Policy is the minimum starting point — it defines what's permitted, what oversight applies, and who's accountable.
5. Build your transparency documentation. High-risk AI needs a technical file: what it does, how it makes decisions, what data it was trained on, what the failure modes are. This takes time to produce correctly.
6. Establish human oversight. High-risk systems must have human review built in. "The AI decided" is not a compliant answer. There must be a person accountable for the decision, with the ability to override it.
7. Register if required. High-risk AI systems used in certain categories must be registered in the EU AI database. This is a formal regulatory obligation, not an internal exercise.
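The first steps of that sequence — map, classify, check vendors, establish oversight — amount to building a structured inventory. A minimal sketch of what that record might look like (the tier names come from the Act; the tools, vendors, and field names are illustrative assumptions, not a prescribed format):

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    tier: RiskTier
    human_oversight: bool   # is an accountable human reviewer in place?
    vendor_evidence: bool   # do we hold written compliance evidence?

# Hypothetical inventory entries
inventory = [
    AISystem("CV screening tool", "ExampleHR Ltd", "shortlisting applicants",
             RiskTier.HIGH, human_oversight=True, vendor_evidence=False),
    AISystem("Support chatbot", "ExampleBot Inc", "customer FAQs",
             RiskTier.LIMITED, human_oversight=False, vendor_evidence=True),
]

# Flag high-risk systems missing oversight or vendor evidence
gaps = [s.name for s in inventory
        if s.tier is RiskTier.HIGH
        and not (s.human_oversight and s.vendor_evidence)]
```

Even a spreadsheet with these columns beats no inventory at all: the point is that a gap you can query is a gap you can close before a regulator finds it.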
The practical question: are UK businesses actually at risk?
Yes. The ICO — the UK's data regulator — is watching the EU AI Act closely and has stated it will consider similar frameworks for domestic legislation. That's a future risk. The immediate risk is more straightforward: if you have EU customers, the EU AI Act's reach extends to you. If you're building AI products for the EU market, the obligation is clear and present.
The businesses most exposed are those that have adopted AI tools quickly — often without formal governance — and assume compliance can wait. It can't, because the documentation obligations require evidence you've been doing things right, not just that you started doing them right after a regulator called. Retroactive compliance is much harder to demonstrate than proactive compliance.
The good news is that the compliance work isn't wasted effort. A well-run AI governance framework also reduces operational risk, improves vendor accountability, and gives you a defensible position if something goes wrong — regardless of what regulators do next.
How BBS helps with this
- EU AI Act Compliance Assessment — We map your current AI usage against the Act's four risk tiers, identify your obligations, and produce a prioritised compliance roadmap.
- AI Governance & Policy Drafting — We write your AI Acceptable Use Policy, risk management documentation, and transparency disclosures — the core documentation stack for limited and high-risk compliance. [Full compliance service page coming soon]
- Ongoing AI Compliance Monitoring — The Act requires continuous monitoring for high-risk systems. Our retainer service keeps your documentation current and flags regulatory updates as they come into effect.
- GDPR & AI Privacy Integration — AI systems often overlap with GDPR obligations, especially when processing personal data. We integrate both frameworks so you're not running two separate compliance exercises. [Full GDPR/AI integration service page coming soon]