AI consulting · Compliance
AI compliance: pragmatic, audit-ready, and free of buzzwords.
A practical view of what's actually required: which rules apply, which risk class your use case falls into, and how to document it so audits are routine rather than emergencies.
Short and honest
- AI compliance is the intersection of GDPR, EU AI Act, sectoral rules and your existing ISMS.
- Most generative use cases land in 'limited risk' — transparency duties, no conformity assessment.
- High-risk systems (HR, critical infrastructure, medical) need conformity, logbook and risk management.
- An audit-ready trail saves more time than any one-off audit.
What 'AI compliance' actually means
AI compliance is not a separate discipline — it's the intersection of GDPR, the EU AI Act, sector-specific rules (BaFin, MDR/IVDR, BSI for critical infrastructure) and your existing ISMS or compliance system.
EU AI Act in practice
| Risk class | Examples | Consequence |
|---|---|---|
| Prohibited | Social scoring, real-time biometric surveillance. | Not allowed. |
| High risk | AI in HR, critical infrastructure, medical devices. | Conformity assessment, logbook, risk management. |
| Limited risk | Chatbots, generative content, summaries. | Transparency duty (label as AI). |
| Minimal risk | Spam filter, recommendation in entertainment. | No specific obligations. |
GDPR for AI
The familiar set of duties applies: lawful basis, transparency, purpose limitation, data minimization, integrity, accountability. Plus AI-specific aspects: training data origin, explainability, re-identification risk, automated decisions (Article 22).
- Lawful basis for input data and (separately) for any model fine-tuning.
- DPIA per Article 35 where processing is likely high risk, which is almost always the case when AI handles personal data.
- Subject rights also for AI outputs (access, deletion, objection).
- Subprocessor chain documented end-to-end.
Sectoral rules (BaFin, MDR, KRITIS)
Banks and insurers face BaFin expectations on model risk management. Medical AI lands in MDR/IVDR. Critical infrastructure operators have BSI requirements on availability and integrity. Public administration has IT security law obligations. We know the requirements and design architectures that meet horizontal and sectoral rules at the same time.
Our compliance playbook
- Inventory: which AI systems already run, what risk class are they in?
- DPIA + AI Act assessment per system (combined where possible).
- Technical and organizational measures (TOMs, Article 32) and data processing agreements (Article 28) tightened for AI specifics.
- Audit trail with automated evidence (logs, model versions, prompts).
- Quarterly review and event-driven re-checks.
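To make the audit-trail step concrete: one lightweight pattern is an append-only log where each AI call is recorded with its model version and a hash of the prompt and output, so the trail itself holds no personal data. This is a minimal sketch, not a standard; all field names and function names are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_id: str, model_version: str, prompt: str, output: str) -> dict:
    """Build one audit-trail entry. Prompt and output are stored as
    SHA-256 hashes (data minimization): the entry proves what was sent
    and received without retaining the content itself."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
    }

def append_entry(path: str, record: dict) -> None:
    """Append the entry as one JSON line. An append-only JSONL file is
    easy to ship, diff, and hand to an auditor as-is."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")
```

Because the hashes are deterministic, an auditor who is shown a specific prompt can verify it against the trail without the operator ever storing raw inputs.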
Ready for a call?
30 minutes, free, no strings attached. We listen to your case and tell you honestly whether and how we can help.