AI copyright ownership is a question most businesses haven't answered.
The answer isn't "you." It isn't "the AI." And it might not be as simple as "whoever paid for the tool." The legal position on AI-generated IP is genuinely unsettled — contested in courts across multiple jurisdictions simultaneously, under active review by the UK Intellectual Property Office, and being litigated by some of the largest media organisations in the world. For UK businesses, the practical risk is real and growing.
Unlike almost every other asset your business creates, AI-generated content sits in legal grey territory across three separate dimensions: whether it can be protected, whether it infringes someone else's rights, and who would own it if it could be protected at all.
Three legal questions nobody has fully answered yet
The UK legal position — more nuanced than you might expect
The UK is actually ahead of most jurisdictions on this question, but not in a way that makes things simpler. The Copyright, Designs and Patents Act 1988 (CDPA) already has a provision — section 9(3) — that addresses "computer-generated works." Under this provision, the author of a computer-generated work is taken to be the person who made the arrangements necessary for its creation.
That sounds useful. The problem is that the provision was drafted in 1988 to address software-generated reports and procedurally produced outputs. It wasn't designed for a world where an AI model trained on billions of human-created works generates content that's substantively indistinguishable from human creative output.
The UK Intellectual Property Office ran a consultation on AI and IP in 2023, acknowledging that the existing framework doesn't map cleanly onto current AI capabilities. The consultation examined three possible approaches: maintain the existing framework, extend copyright protection to AI-generated works, or exclude AI-generated works from protection entirely. As of 2026, no legislative change has followed. The uncertainty continues.
Four IP risk areas for UK businesses using AI
The litigation landscape that will set your precedents
The legal uncertainty isn't abstract. It's being resolved through active litigation that will produce binding precedents affecting every business that uses AI-generated content commercially. The cases already underway include Getty Images' suit against Stability AI over the reproduction of watermarked training images, the New York Times' suit against OpenAI and Microsoft alleging copyright infringement through training data use, and multiple class-action suits brought by writers and visual artists.
These cases are working their way through courts in the US and UK simultaneously. The outcomes — which are not expected quickly — will determine whether AI providers are liable for training data infringement, whether their outputs infringe the rights of the original creators, and by extension, what liability flows downstream to the businesses that use those outputs commercially.
The uncomfortable position for UK businesses is this: you may be building commercial assets on a foundation that court decisions in the next 12 to 24 months could destabilise. That doesn't mean you should stop using AI tools. It means you should understand your exposure and put appropriate protections in place.
Legal teams at most SMEs haven't reviewed their AI provider contracts, let alone assessed the IP implications of using AI-generated content commercially.
There's also a practical dimension that most businesses overlook entirely. If you're building a brand or product around AI-generated content, and that content turns out not to be protectable — because it lacks human authorship in the legally required sense — a competitor can copy it without consequence. The investment you made in creating it doesn't give you the protection you assumed it did.
Detect, Assess, Defend
The practical steps businesses can take now
The legal uncertainty doesn't require paralysis. There are concrete steps that materially reduce your exposure while the courts and legislators work through the larger questions.
The most important is ensuring meaningful human creative contribution to any AI-assisted output you intend to protect commercially. A prompt-to-output workflow, where a person types a brief and publishes the AI's response unchanged, is at maximum risk of being found unprotectable. A workflow where a human significantly edits, refines, curates, and develops the AI output is on much stronger ground — both legally and creatively.
The second is understanding what your AI provider's terms actually say. Some providers grant you broad rights to AI outputs. Others retain rights, restrict commercial use, or include indemnification limitations that matter enormously if a copyright claim arises. Most businesses have agreed to these terms without reading them.
The third is updating your employment contracts and IP assignment clauses to explicitly address AI-generated content. The standard "IP created in the course of employment belongs to the employer" clause was written before employees were generating commercially significant content with AI tools. Whether it covers AI-assisted output is a live question that a properly drafted clause can resolve.
How BBS helps with this
- AI Governance & Policy Drafting — We draft an IP usage policy covering AI-generated content, code and creative assets — defining ownership, approved use cases and the human contribution requirements that strengthen your copyright position.
- EU AI Act Compliance Assessment — IP risk assessment built into the compliance review, so your AI governance and your regulatory obligations are addressed together rather than separately.
- Vendor Contract Review — We identify IP and liability clauses in your AI provider agreements, surfacing the terms that matter for commercial use and flagging the gaps that need addressing.
- AI Acceptable Use Policy — We define approved use cases for AI-generated content across your organisation, reducing commercial IP exposure while giving your teams clear, workable guidance on what they can and can't do.