Using Consumer AI Tools Like ChatGPT Could Waive Legal Professional Privilege


Pasting confidential legal documents into consumer-grade LLMs? You may have just gifted them to your opponent.

Whether, and to what extent, it is safe to load sensitive information into a consumer-grade LLM has occupied an unreasonable amount of my thinking over the last year or so.

So I wasn’t surprised when, last week, the Chancellor of the High Court, Sir Colin Birss, used his address to the City of London Law Society to speak about legal professional privilege in the age of AI.

In his speech he highlighted the decision in UK v Secretary of State for the Home Department [2026] UKUT 81 (IAC). 

In that case, the Upper Tribunal held that uploading confidential documents into a publicly accessible AI tool, such as the consumer version of ChatGPT, places that information on the internet and in the public domain. The consequence is direct: the act breaches client confidentiality and waives legal professional privilege.

The Tribunal added that such conduct should be referred to the Information Commissioner’s Office (ICO), and may warrant referral to the Solicitors Regulation Authority (SRA).

The decision did not distinguish between different consumer-grade models. Arguably, for instance, using a paid consumer tier configured so that the provider does not train its public model on uploaded content and interactions may fall outside the ruling.

What This Means for Businesses Using AI

If you paste a contract, board paper, dispute letter, HR investigation note, or any other sensitive correspondence into consumer-grade Claude, Copilot, ChatGPT, Gemini or any other consumer AI tool to ask a quick question, without carefully configuring the settings, you may have just stripped that material of its protection.

If a dispute later arises, the privilege that would normally shield those communications from disclosure may already be lost.

That risk is not theoretical. Most businesses are already experimenting with AI tools across legal, HR, operations, finance and compliance functions, often without a clear internal AI governance policy in place.

In practice, that means confidential and commercially sensitive information may already be flowing into systems that were never designed to hold privileged material.

The Difference Between Consumer AI and Secure Legal AI Tools

The position is different where your lawyer uses a secure AI tool to help advise you.

Sir Colin indicated that this should not undermine privilege; he drew an analogy with consulting a textbook. The decisive factor is the security of the system, not the use of AI itself.

That distinction matters.

There is an important difference between:

  • a public consumer AI platform;

  • an enterprise AI environment with appropriate controls; and

  • a secure legal AI system operating within a professional engagement.

Unfortunately, most businesses do not yet fully understand where those lines sit.

Three Practical Steps to Reduce AI Legal Risk

1. Do Not Upload Confidential Information Into Consumer AI Tools

Treat consumer AI platforms as you would the public internet unless your organisation has properly assessed and configured the system.

That includes:

  • contracts;

  • legal advice;

  • HR investigations;

  • board materials;

  • dispute correspondence; and

  • commercially sensitive documents.

2. Speak to Your Lawyers About Secure AI Use

If you want AI assistance with a legal matter, ask your lawyer whether they have a secure tool that can be used within your engagement.

Used correctly, AI can materially improve speed, efficiency and analysis. Used carelessly, it can create entirely avoidable legal exposure.

3. Review Existing AI Usage Internally

If confidential material has already been uploaded into consumer AI systems, legal teams should assess the implications early.

The privilege position in any future dispute may depend heavily on:

  • what was uploaded;

  • where it was uploaded;

  • how the platform was configured; and

  • who had access to the information.

AI Governance Is Now a Legal Risk Issue

AI tools are power tools. Failing to use or configure them carefully can have serious legal consequences.

Most organisations are now beyond the experimentation phase. The challenge is no longer whether employees are using AI; it is whether businesses are governing that use properly.

That means having:

  • clear internal AI policies;

  • defined approval processes;

  • staff training;

  • data handling controls; and

  • legal oversight where privileged or regulated information is involved.

The businesses that get this right will benefit enormously from AI adoption.

The businesses that do not may find themselves dealing with unintended disclosure, regulatory scrutiny and avoidable disputes down the line.



If your business is using AI tools across legal, HR or operational functions, it is worth ensuring the appropriate safeguards are in place.

For advice on AI governance, confidentiality, legal privilege or regulatory risk, get in touch.


Written by:

Winston Green

Director of Legal Services and Group General Counsel

