IQONEX

AI consulting · ChatGPT

ChatGPT in business — productive, GDPR-compliant, audit-ready.

A practical guide for German businesses that want to use ChatGPT or comparable LLMs in production without crossing data-protection or labor-law lines.

Short and honest

  • Standard ChatGPT plus client/customer data is a data-protection incident in regulated industries.
  • Azure OpenAI in an EU region with a Microsoft Germany DPA removes most of the legal exposure.
  • Pseudonymization closes the personal-data gap before the model sees anything.
  • A 1–2 page staff guideline plus a workshop unblocks productive use without playing whack-a-mole.

Why ChatGPT belongs on the agenda

ChatGPT and comparable language models can save 5 to 15 hours per knowledge worker per week — drafting, summarization, research, code review, translation. Banning them is not a strategy: shadow IT will appear wherever the official option lags behind the private one.

The real question for management is not whether AI is allowed but how it can run cleanly: data protection, labor law, AI Act, professional codes (§203 StGB, §57 StBerG), traceability and tooling all need to fit together.

What's risky about standard ChatGPT

Personal data in ChatGPT inputs is processed in the US, may be used for training (depending on plan) and is hard to delete. None of this is intentionally shady on OpenAI's side — it's just not designed for regulated industries.

  • US data transfer — inputs travel to OpenAI servers in the US. Fix: use Azure OpenAI in an EU region.
  • Training reuse — standard plans train on inputs by default. Fix: use Business/Enterprise or Azure OpenAI, and deactivate training.
  • Missing DPA — the standard plan ships no Article 28 DPA. Fix: switch to a plan with a DPA, or use Azure OpenAI with a Microsoft Germany DPA.
  • Personal data in prompts — staff paste names, addresses, case files. Fix: pseudonymize before the model call.

Why Azure OpenAI in EU

Azure OpenAI gives you the same models that drive ChatGPT, but with a DPA from Microsoft Germany, EU data residency (Frankfurt or Sweden) and a tooling stack that fits production loads — Entra ID, Managed Identities, Azure Monitor, content filters and quota management.

Pseudonymization in production

Direct personal references (name, email, address, file number) are replaced by tokens before the model call. The model only sees structured pseudo-data. Re-identification happens locally after the response. Personal references never leave your area of responsibility.

  • Detect: regex / NER models / explicit field marking in the form.
  • Replace: stable pseudonyms (Name → Person_42) with a per-request mapping.
  • Re-identify: replace tokens after the model response, on your side.
  • Audit: log mapping (with appropriate access protection) to make every prompt reproducible.
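The detect → replace → re-identify loop above can be sketched in a few lines. This is a minimal regex-only illustration under assumed identifier formats (a real deployment would add NER models and per-field rules, as the list notes); the pattern names and the file-number format are hypothetical.

```python
import re

# Assumed identifier patterns -- illustrative, not production-grade.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
FILE_NO = re.compile(r"\b\d{3}/\d{2}\b")  # hypothetical file-number format


def pseudonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace direct identifiers with stable per-request tokens.

    Returns the masked text plus the token -> original-value mapping,
    which stays on your side for re-identification and audit logging.
    """
    mapping: dict[str, str] = {}
    counters = {"Email": 0, "File": 0}

    def token(kind: str, value: str) -> str:
        # Stable pseudonym: the same value always maps to the same token
        # within this request.
        for tok, val in mapping.items():
            if val == value:
                return tok
        counters[kind] += 1
        tok = f"{kind}_{counters[kind]}"
        mapping[tok] = value
        return tok

    text = EMAIL.sub(lambda m: token("Email", m.group()), text)
    text = FILE_NO.sub(lambda m: token("File", m.group()), text)
    return text, mapping


def reidentify(text: str, mapping: dict[str, str]) -> str:
    """Swap tokens back after the model response -- locally, on your side."""
    for tok, val in mapping.items():
        text = text.replace(tok, val)
    return text
```

Only the masked text crosses the model boundary; the mapping is what you log (access-protected) to keep every prompt reproducible.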

The staff guideline

A 1–2 page guideline beats a 30-page policy nobody reads. We deliver a template and adjust it to your industry: what's allowed, what isn't, which tool is the official one, who to call when something looks off.

Rollout in practice (90 days)

  1. Week 1–2: workshop with management and DPO. Use cases prioritized.
  2. Week 3–6: architecture set up (Azure OpenAI, pseudonymization, staff guideline).
  3. Week 7–10: pilot on the highest-leverage use case. Measurable KPIs.
  4. Week 11–12: roll-out to wider workforce + training cycle.

Ready for a call?

30 minutes, free, no strings attached. We listen to your case and tell you honestly whether and how we can help.

Frequently asked

Is the ChatGPT Business license enough, or do we need Azure OpenAI?

ChatGPT Business comes with its own, stricter privacy terms, but the contract partner is still OpenAI (US) and there is no DPA from Microsoft Germany. For regulated industries (law firms, medical practices, tax advisors, critical infrastructure) we recommend Azure OpenAI in EU regions — same model access, but with a DPA from Microsoft Germany and data residency in Frankfurt or Sweden.

How do we stop staff from using personal ChatGPT accounts anyway?

Three levers: first, provide an official, at-least-equivalent alternative (shadow IT appears where the official option is worse than the private one). Second, DNS or endpoint blocking on company devices. Third, clear training and policy. Pure bans without an alternative don't work in practice.

How long does a productive rollout typically take?

Consulting and architecture: 4–6 weeks. Pilot on one use case: 4–8 weeks. Roll-out to more use cases and staff training: 4–12 weeks depending on size. So typically 3–6 months from workshop to production.

What does ChatGPT in business cost per employee?

On Azure OpenAI you pay per token consumed (model costs scale with usage, not seats), plus an optional license fee for a front-end like ChatGPT Studio or a custom UI. Consolidated solutions like Microsoft 365 Copilot are charged per seat. We model the most plausible monthly bill for your team in the intro call.
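The "costs scale with usage, not seats" point can be made concrete with a back-of-envelope model. All prices and usage figures below are illustrative assumptions, not current list prices — check the Azure price sheet for your region and model.

```python
# Assumed per-1k-token prices in EUR -- placeholders, not real list prices.
PRICE_PER_1K_INPUT = 0.005
PRICE_PER_1K_OUTPUT = 0.015


def monthly_cost(users: int, requests_per_user_day: int,
                 in_tokens: int, out_tokens: int,
                 workdays: int = 21) -> float:
    """Estimated monthly model cost in EUR for a team, pay-per-token."""
    requests = users * requests_per_user_day * workdays
    per_request = (in_tokens / 1000 * PRICE_PER_1K_INPUT
                   + out_tokens / 1000 * PRICE_PER_1K_OUTPUT)
    return round(requests * per_request, 2)


# e.g. 50 users, 20 requests/day, ~1,000 tokens in / 500 out per request:
# 21,000 requests/month at 0.0125 EUR each -> 262.50 EUR for the whole team.
```

Under these assumed numbers that is roughly 5 EUR per user per month in model costs — a different shape of bill than a fixed per-seat license, which is exactly why the usage profile matters in the intro call.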

Is a staff guideline mandatory?

Not legally mandatory, but necessary in practice. We recommend a 1–2 page guideline with concrete examples (what's allowed, what isn't) combined with training. It also feeds into the data-protection impact assessment — you can't seriously close a DPIA without a documented usage framework.