AI consulting · ChatGPT
ChatGPT in business — productive, GDPR-compliant, audit-ready.
A practical guide for German businesses that want to use ChatGPT or comparable LLMs in production without crossing data-protection or labor-law lines.
Short and honest
- Standard ChatGPT plus client/customer data is a data-protection incident in regulated industries.
- Azure OpenAI in an EU region with a Microsoft Germany DPA removes most of the legal exposure.
- Pseudonymization closes the personal-data gap before the model sees anything.
- A 1–2 page staff guideline plus a workshop unblocks productive use without playing whack-a-mole.
Why ChatGPT belongs on the agenda
ChatGPT and comparable language models can save 5 to 15 hours per knowledge worker per week — drafting, summarization, research, code review, translation. Banning them is not a strategy: shadow IT will appear wherever the official option lags behind the private one.
The real question for management is not whether AI is allowed but how it can run cleanly: data protection, labor law, AI Act, professional codes (§203 StGB, §57 StBerG), traceability and tooling all need to fit together.
What's risky about standard ChatGPT
Personal data in ChatGPT inputs is processed in the US, may be used for training (depending on plan) and is hard to delete. None of this is intentionally shady on OpenAI's side — it's just not designed for regulated industries.
| Risk | Cause | Fix |
|---|---|---|
| US data transfer | Inputs travel to OpenAI servers in the US. | Use Azure OpenAI in an EU region. |
| Training reuse | Standard plans train on inputs by default. | Use Business/Enterprise or Azure OpenAI; deactivate training. |
| Missing DPA | Standard plan ships no Article 28 DPA. | Switch to a plan with a DPA, or use Azure OpenAI with a Microsoft Germany DPA. |
| Personal data in prompts | Staff pastes names, addresses, case files. | Pseudonymize before the model call. |
Why Azure OpenAI in EU
Azure OpenAI gives you the same models that drive ChatGPT, but with a DPA from Microsoft Germany, EU data residency (Frankfurt or Sweden) and a tooling stack that fits production loads — Entra ID, Managed Identities, Azure Monitor, content filters and quota management.
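The data-residency point is visible in the call itself: requests go to your own Azure resource endpoint in the region you chose, not to api.openai.com. A minimal sketch of the request shape, where the resource name, deployment name and API version are placeholders you would replace with your own values:

```python
import json
import urllib.request

# Placeholders — set these for your own Azure OpenAI resource.
RESOURCE = "my-eu-resource"    # created in an EU region, e.g. Sweden Central
DEPLOYMENT = "gpt-4o"          # the name you gave your model deployment
API_VERSION = "2024-06-01"     # check Microsoft's docs for current versions


def chat_url(resource: str, deployment: str, api_version: str) -> str:
    # All traffic targets *your* resource endpoint, never api.openai.com.
    return (f"https://{resource}.openai.azure.com/openai/"
            f"deployments/{deployment}/chat/completions"
            f"?api-version={api_version}")


def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    # Azure OpenAI authenticates via the "api-key" header (or Entra ID tokens).
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]})
    return urllib.request.Request(
        chat_url(RESOURCE, DEPLOYMENT, API_VERSION),
        data=body.encode("utf-8"),
        headers={"api-key": api_key, "Content-Type": "application/json"},
    )
```

In production you would use the official `openai` Python package's `AzureOpenAI` client with Entra ID authentication instead of raw HTTP; the sketch only illustrates where the data flows.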
Pseudonymization in production
Direct personal references (name, email, address, file number) are replaced by tokens before the model call. The model only sees structured pseudo-data. Re-identification happens locally after the response. Personal references never leave your area of responsibility.
- Detect: regex / NER models / explicit field marking in the form.
- Replace: stable pseudonyms (Name → Person_42) with a per-request mapping.
- Re-identify: replace tokens after the model response, on your side.
- Audit: log mapping (with appropriate access protection) to make every prompt reproducible.
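The four steps above can be sketched in a few lines. This is a simplified illustration, not our production pipeline: the regex patterns and the `[KIND_n]` token format are assumptions, and real deployments add NER for names, which regex alone cannot catch.

```python
import re

# Assumed detection patterns for this sketch; production adds NER for names.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "FILE_NO": re.compile(r"\b\d{2}-\d{4}/\d{2}\b"),  # example file-number shape
}


def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace direct identifiers with stable tokens; return text + mapping."""
    mapping: dict[str, str] = {}
    counters: dict[str, int] = {}

    def repl(kind: str):
        def _sub(match: re.Match) -> str:
            value = match.group(0)
            # Stable per-request pseudonym: same value -> same token.
            for token, original in mapping.items():
                if original == value:
                    return token
            counters[kind] = counters.get(kind, 0) + 1
            token = f"[{kind}_{counters[kind]}]"
            mapping[token] = value
            return token
        return _sub

    for kind, pattern in PATTERNS.items():
        text = pattern.sub(repl(kind), text)
    return text, mapping


def reidentify(text: str, mapping: dict[str, str]) -> str:
    """Swap tokens back after the model response — locally, on your side."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text
```

The model only ever sees the tokenized text; the mapping stays in your process and is what you log (access-protected) for the audit trail.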
The staff guideline
A 1–2 page guideline beats a 30-page policy nobody reads. We deliver a template and adjust it to your industry: what's allowed, what isn't, which tool is the official one, who to call when something looks off.
Rollout in practice (90 days)
- Week 1–2: workshop with management and DPO. Use cases prioritized.
- Week 3–6: architecture set up (Azure OpenAI, pseudonymization, staff guideline).
- Week 7–10: pilot on the highest-leverage use case. Measurable KPIs.
- Week 11–12: rollout to the wider workforce + training cycle.
Ready for a call?
30 minutes, free, no strings attached. We listen to your case and tell you honestly whether and how we can help.