
The UK government is eyeing up a big shift in copyright law – one that could directly affect how your business uses (or licenses) AI. The plan? To let developers train AI models on copyrighted content without needing permission first – unless the rights holder opts out.
The goal is to boost AI innovation in the UK by making more data legally accessible. But it’s already causing tension in the creative industries – and raising plenty of red flags for in-house legal teams navigating IP risk, data provenance, and contract exposure.
Here’s what you need to know – and how to stay ahead.
The big picture
Under the current proposal, copyright holders would need to actively opt out if they don’t want their content used to train AI models. That flips the script on existing IP protections – and shifts more risk onto the rights holders themselves.
For context, you can see the UK government’s broader stance on AI innovation in its AI Regulation White Paper (2023). A consultation on copyright and AI has been ongoing, with strong opposition from creative sectors reported in outlets like The Guardian.
For businesses developing or deploying AI tools, or licensing content in any form, this could have serious implications. Think uncertainty around training data, blurred lines between creator and user, and tricky conversations around who’s liable if something goes wrong.
What does this mean for legal?
If the exemption is introduced, it could:
- Weaken traditional copyright protections by making it the creator’s job to say no.
- Create uncertainty around how AI models were trained – especially when you’re relying on third-party vendors.
- Add complexity where your organisation is both creating and consuming content – or AI tools.
This isn’t just a niche issue for media and publishing. If you’re drafting contracts, managing vendor risk, or advising the business on digital strategy, it’s well worth a closer look.
Questions to ask now
This is a smart moment to take stock of where your business sits in the AI/content/IP mix. Useful starting points:
- Content and data use: Is your content publicly accessible – and could it be scraped for training data? If so, do you want to opt out, and how?
- Third-party AI models: Are you confident your vendors’ models are trained on lawful data? Is this covered in your contracts or due diligence process?
- Licensing terms: Do your current licences still reflect how your content is used – or how you’re using others’?
- Risk allocation: Are you covered if a tool you use turns out to have been trained unlawfully?
Steps you can take today
The law’s not final yet – but there’s plenty in-house teams can do to get ready:
- Map where your business intersects with generative AI – from tools and platforms to marketing content.
- Audit your key contracts for clauses around IP, usage rights, indemnities and training data.
- Engage with policy bodies or industry groups – particularly if you’ve got a stake in how this plays out. (You could start with the IPO’s updates or keep tabs on UK Parliament AI-related activity.)
- Get the right people talking internally – legal, product, marketing, procurement – so everyone’s clear on where the risks sit.
Looking ahead
The UK wants to be seen as a pro-innovation, pro-AI jurisdiction – but reforms like this highlight the messy middle ground between tech progress and IP protection. For in-house lawyers, the challenge is managing legal risk while supporting the business to innovate confidently.
the legal pool
THE MONTHLY NEWSLETTER FOR IN-THE-KNOW IN-HOUSE LAWYERS
Get the lowdown on the latest legal news and regulatory changes, as well as top tips on the trickiest of topics. Our newsletter especially for in-house lawyers keeps you one step ahead.