Shadow AI is already in your business. In April 2023, Samsung discovered that three of its engineers had pasted confidential data into ChatGPT: proprietary semiconductor source code in two cases, and notes from an internal meeting in the third.
Samsung responded by banning ChatGPT on company devices. But by then, the data was already sitting on OpenAI's servers — potentially used to train future models. There was no way to get it back.
This is called Shadow AI. And it's happening in your business too — you just might not know it yet.
What is Shadow AI?
Shadow AI is when staff use AI tools without going through your IT or compliance processes. It's the same concept as Shadow IT — using Dropbox when the company policy says OneDrive — but with an extra twist. With Shadow IT, data might end up in the wrong cloud storage. With Shadow AI, data is actively being fed into large language models: models that may be trained on user inputs, are potentially accessible to human reviewers, and offer no way to get data back once it's submitted.
The thing is: it doesn't feel like a security incident. It feels like being efficient.
ChatGPT's default settings allow submitted conversations to be used to train future models, and on consumer accounts they still can unless you opt out. Many people didn't know that. Many businesses still don't. The controls are better now — but only if you know to configure them.
The five categories of data that end up in AI tools
It's rarely dramatic. Staff aren't maliciously exfiltrating data — they're just trying to get things done. Here's what actually ends up in AI tools, and how:
The gap in your data loss prevention
Here's what makes Shadow AI particularly difficult: most Data Loss Prevention (DLP) tools were built for a world where data was exfiltrated by bad actors. They watch for bulk downloads, unusual USB activity, and emails with attachments going to personal addresses.
They were not designed for an employee who copies and pastes a paragraph of sensitive text into a browser tab and asks an AI to make it sound better.
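To make the gap concrete, here's a minimal sketch of one compensating control: scanning web proxy logs for traffic to known AI endpoints. Everything specific is an assumption for illustration. The domain list is a small sample, and the log filename and CSV columns (`user`, `url`) are hypothetical, so adapt the parsing to whatever your proxy actually exports.

```python
import csv
from urllib.parse import urlsplit

# Illustrative sample only; a real deployment would maintain a longer,
# regularly updated list of AI tool domains.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_ai_traffic(log_path):
    """Yield (user, url) pairs for requests to known AI endpoints.

    Assumes a CSV proxy log with 'user' and 'url' columns; adjust the
    parsing to match your own proxy's export format.
    """
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = urlsplit(row["url"]).hostname or ""
            # Match the domain itself or any subdomain of it.
            if host in AI_DOMAINS or any(host.endswith("." + d) for d in AI_DOMAINS):
                yield row["user"], row["url"]

if __name__ == "__main__":
    for user, url in flag_ai_traffic("proxy_log.csv"):
        print(f"Possible AI tool usage: {user} -> {url}")
```

Even this crude visibility is a step up from most default DLP configurations, which won't register a copy-paste at all.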
Shadow AI doesn't happen in the shadows. It happens at desks, in meetings, during a normal working day — by staff just trying to do their jobs faster.
Why banning it doesn't work
Samsung tried the ban. Then employees started using personal mobile phones to access ChatGPT during work hours. Staff who've discovered that AI makes them 30% faster are not going to stop using it because of a policy memo. They'll find a way around the restriction.
The businesses that handle this well don't pretend Shadow AI isn't happening. They bring it into the open — by providing approved tools, establishing clear usage policies, and training staff on what's okay to paste and what isn't.
The goal isn't to stop your staff using AI. It's to make sure they're using it in ways that protect the business — and to give them the tools and policies to do that confidently.
Seven things a Shadow AI policy needs to cover
- Approved tools list — Which AI tools are officially sanctioned? Which are explicitly banned? A clear list, not just a vague "check with IT."
- Data classification rules — Define what data can and cannot be pasted into an AI tool. Client names and project titles? Probably fine. Contract terms and financials? No. (See the sketch after this list.)
- Account and settings controls — For approved tools, require business accounts with data training turned off. Not personal accounts.
- Agentic AI boundaries — If you've deployed AI with access to systems (email, CRM, files), define what it's allowed to access and what it isn't.
- Incident reporting — Staff need a clear, no-blame way to report if they think they may have submitted something they shouldn't have.
- Training and awareness — Policy documents no one reads don't work. Train staff on real examples. The Samsung story is a good one.
- Vendor review process — Before any new AI tool gets used (even informally), a lightweight security check of its data handling practices.
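As an illustration of the data classification point above, here's a minimal sketch of a pre-paste check: a script that scans text for obviously sensitive patterns before it goes anywhere near an AI tool. The patterns are hypothetical examples, not a complete rule set; a real one would be built around your own client identifiers, document markings, and data formats.

```python
import re

# Hypothetical patterns for illustration; a real rule set would be built
# around your own client identifiers, document markings, and data formats.
BLOCKED_PATTERNS = {
    "UK National Insurance number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "confidentiality marking": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def check_before_paste(text):
    """Return the names of any blocked patterns found in the text.

    An empty result means nothing obviously sensitive was detected,
    which is not the same as the text being safe to paste.
    """
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    snippet = "Per the CONFIDENTIAL term sheet, pay the balance to IBAN GB29NWBK60161331926819."
    for hit in check_before_paste(snippet):
        print(f"Blocked: found {hit}")
```

The same allowlist-and-blocklist idea extends naturally to agentic AI boundaries: enumerate what the tool may touch, and treat everything else as denied by default.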
The regulatory dimension
Shadow AI isn't just a data security risk. It's a GDPR and EU AI Act problem too. If personal data about clients, employees, or prospects ends up in an external AI model, you may have a reportable breach. Under UK GDPR, a reportable breach must be notified to the ICO within 72 hours of your becoming aware of it. Most businesses wouldn't even know it had happened.
The EU AI Act adds another layer: businesses deploying or using certain categories of AI system face compliance obligations that most haven't begun to consider. Informal Shadow AI usage — where the tool, the data, and the purpose are all undocumented — makes compliance almost impossible to demonstrate.
How BBS helps with this
- Shadow AI Discovery & Risk Assessment — We identify which AI tools are currently in use across your business — including unofficial ones — and assess the data exposure risk of each.
- AI Acceptable Use Policy — We draft a practical, enforceable policy that covers tool approval, data classification rules, and staff obligations — in plain English your team will actually read.
- Staff AI Security Training — Practical, scenario-based training that teaches your team what Shadow AI is, why it matters, and how to use AI tools safely in their day-to-day work. [Full training service page coming soon]
- DLP Integration for AI — We update your existing Data Loss Prevention configuration to flag AI tool usage and add AI API endpoints to your monitoring coverage. [Full DLP service page coming soon]