Niek Lintermans

Co-Founder & CMO

Privacy and LLMs: What the Eindhoven Data Leak Teaches Us

#privacy #llm

What the data leak at the Municipality of Eindhoven teaches us about using AI tools like ChatGPT safely.

Reading time: 4 minutes

Privacy and LLMs: What the Eindhoven Data Leak Teaches Us (and How to Use AI Safely)

Quickly rewriting a text. Polishing an email. Or summarizing a long document. For many professionals, the free version of ChatGPT has become just as natural to use as Google.

And that is exactly where the risk lies. An LLM feels like a convenient writing assistant, while in reality you are often sharing information with an external service. If privacy-sensitive data is included — even unintentionally — this can quickly turn into a data leak.


The Example That Woke Everyone Up: “Public AI” at the Municipality of Eindhoven

The Municipality of Eindhoven announced that an analysis had shown that a large number of files containing personal data of residents and employees had been uploaded to public AI websites. The data leak was reported to the Dutch Data Protection Authority on 23 October 2025.

What makes this particularly concerning is that, in a 30-day sample period (23 September to 23 October 2025), “various types” of files containing personal data were identified.

A council letter even mentioned documents such as Youth Act records, internal reports, and CVs, primarily within the social services domain.

The municipality took immediate action: public AI websites such as ChatGPT were blocked, and employees were only allowed to use Copilot within the secured municipal environment. OpenAI was also requested to delete the uploaded files, and monitoring was tightened.

This is exactly why this topic is so relevant. It is not about a Hollywood-style hack, but about a very human pattern:

“This is useful, I’ll just paste it in.”

City Hall of the Municipality of Eindhoven

So What Was the Real Problem?

1. Public LLMs Are Not Your “Internal Word Document”

When you paste information into a public AI tool, that information leaves your organization. That alone is a risk: you lose control over where the data is processed, who can access it, how long it is stored, and under which conditions it is handled.

2. After the Fact, It Is Often Impossible to Reconstruct What Was Shared

Eindhoven indicated that the volume of data uploaded before the sampled period could not be determined. As a result, affected individuals could not be personally informed.

This is an important insight for any organization: if you only start thinking about risks after something goes wrong, you are usually already too late.

3. Banning AI Does Not Solve Human Behavior

The municipality also acknowledged that AI offered clear benefits and that employees used it to improve their work and public services.

You likely recognize this yourself: fully banning AI rarely works. People find workarounds. A sustainable solution lies in clear guidelines, safe tooling, and trained reflexes.


Why Should You Be Careful With Privacy-Sensitive Data in (Free) ChatGPT?

The simple answer: because you cannot always be certain what happens to your input — and because you remain responsible for what you share.

Consider:

  • Personal data (names, addresses, dates of birth, phone numbers)
  • Sensitive data (health information, youth care, criminal records, ethnicity, religion)
  • HR information (CVs, performance reviews, absence records)
  • Customer data, contracts, internal incidents, and confidential reports

The tricky part is that a single prompt may seem harmless, while in practice people often paste in entire documents, exports, reports, or case descriptions.


The 10-Second Check Before You Paste Anything

If you want to build one habit, make it this:

  • Is this information public, or internal/confidential?
  • Can I ask my question without identifiable details? (anonymize or abstract)
  • Am I using the right environment? (public ChatGPT or a business solution such as Copilot/Enterprise)

If any of these questions raise doubt: 👉 Don’t paste it. Adjust first, or choose a safer alternative.


Practical Tips: How This Could Have Been Done More Safely

Tip 1: Use Placeholders Instead of Real Data

Still want help with a text? Turn it into a template:

  • “Jan Jansen” → [NAME]
  • “street + house number” → [ADDRESS]
  • “customer/case number” → [CASE-ID]
  • specific case details → [SITUATION IN BROAD TERMS]

You will still get useful output, without sharing privacy-sensitive information.
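
For teams that do this often, the same placeholder idea can be scripted, so sensitive details are stripped before anything reaches a prompt. The short Python sketch below is purely illustrative: the PLACEHOLDERS patterns, the scrub function, and the example text are assumptions you would replace with the identifiers that actually appear in your own documents.

import re

# Illustrative mapping from patterns that identify a person or case
# to neutral placeholders. Extend this with the identifiers that
# actually occur in your own documents.
PLACEHOLDERS = {
    r"\bJan Jansen\b": "[NAME]",
    r"\bHoofdstraat \d+\b": "[ADDRESS]",
    r"\bcase\s*#?\d+\b": "[CASE-ID]",
}

def scrub(text: str) -> str:
    # Replace every known pattern with its placeholder before the text
    # is pasted into (or sent to) an external AI tool.
    for pattern, placeholder in PLACEHOLDERS.items():
        text = re.sub(pattern, placeholder, text, flags=re.IGNORECASE)
    return text

prompt = "Summarize the intake interview with Jan Jansen (Hoofdstraat 12, case #48151)."
print(scrub(prompt))
# Prints: Summarize the intake interview with [NAME] ([ADDRESS], [CASE-ID]).

A script like this is not a substitute for thinking: it only catches the patterns you have thought of. But it turns “I’ll be careful” into a repeatable step, which is exactly the kind of trained reflex this article argues for.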

Tip 2: Choose the Right Tool for the Right Data

  • Free ChatGPT (public): only for information you would comfortably share externally, or fully anonymized content
  • Copilot in a business context: often more suitable for work, as it can remain within a secured environment (when properly configured with clear agreements)
  • Paid AI subscriptions: often provide additional safeguards, but they do not remove your responsibility

And perhaps most importantly: make this a team agreement, not an individual “I’ll be careful” promise.


The Core Message: AI Is Allowed — Just Pause and Think

Using AI is logical and productive. But it requires a new kind of skill. Not technical, but ethical and practical:

  • What am I putting in?
  • Where does it go?
  • Does this task fit this tool?

At Lumans, we help teams develop these reflexes through practical workshops and clear guidelines. That way, AI becomes an accelerator — without privacy turning into the weak spot.

Want to use AI safely and effectively within your organization (without locking everything down)?

Explore what’s possible on our website or get in touch via our contact page.