IQONEX

AI consulting · Compliance

AI compliance: pragmatic, audit-ready, without buzzwords.

A practical view of what's actually required: which rules apply, which risk class your use case falls into, and how to document it so audits are routine rather than emergencies.

Short and honest

  • AI compliance is the intersection of GDPR, EU AI Act, sectoral rules and your existing ISMS.
  • Most generative use cases land in 'limited risk' — transparency duties, no conformity assessment.
  • High-risk systems (HR, critical infrastructure, medical) need conformity, logbook and risk management.
  • An audit trail that is kept ready continuously saves more time than any one-off audit exercise.

What 'AI compliance' actually means

AI compliance is not a separate discipline — it's the intersection of GDPR, the EU AI Act, sector-specific rules (BaFin, MDR/IVDR, BSI for critical infrastructure) and your existing ISMS or compliance system.

EU AI Act in practice

Risk class    | Examples                                            | Consequence
Prohibited    | Social scoring, real-time biometric surveillance.   | Not allowed.
High risk     | AI in HR, critical infrastructure, medical devices. | Conformity assessment, logbook, risk management.
Limited risk  | Chatbots, generative content, summaries.            | Transparency duty (label as AI).
Minimal risk  | Spam filters, recommendations in entertainment.     | No specific obligations.
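For limited-risk systems, the transparency duty mostly means making the AI involvement recognizable to the recipient. A minimal sketch of what that can look like in code; the function name and wording are our own illustration, not something the AI Act prescribes:

    def mark_as_ai_generated(text: str, model_name: str) -> str:
        """Append a plain-language AI disclosure to generated content.

        The wording is an example; the point is that recipients can
        recognize that the content was generated with AI.
        """
        return f"{text}\n\nNote: this text was generated with AI assistance ({model_name})."

    # Example: label a generated summary before it leaves the system.
    labelled = mark_as_ai_generated("Q2 revenue grew by 4 percent.", "internal-llm-v3")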

GDPR for AI

The familiar set of duties applies: lawful basis, transparency, purpose limitation, data minimization, integrity and confidentiality, accountability. On top of that come AI-specific aspects: training data origin, explainability, re-identification risk, automated decisions (Article 22). In practice that means at least the following (a minimal record sketch follows the list):

  • Lawful basis for input data and (separately) for any model fine-tuning.
  • DPIA per Article 35 where processing is likely to be high risk, which is almost always the case when personal data is involved.
  • Subject rights also for AI outputs (access, deletion, objection).
  • Subprocessor chain documented end-to-end.
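A minimal sketch of such a per-system record, assuming a simple Python dataclass; the field names and example values are our own illustration, not a prescribed schema:

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class AIGdprRecord:
        """One record per AI use case; fields mirror the duties listed above."""
        system_name: str
        lawful_basis_input: str                 # e.g. "Art. 6(1)(f) GDPR - legitimate interest"
        lawful_basis_finetuning: Optional[str]  # separate basis if the model is fine-tuned
        dpia_required: bool
        dpia_completed_on: Optional[date]
        subprocessors: list[str] = field(default_factory=list)  # chain documented end-to-end

    record = AIGdprRecord(
        system_name="support-summarizer",
        lawful_basis_input="Art. 6(1)(f) GDPR - legitimate interest",
        lawful_basis_finetuning=None,
        dpia_required=True,
        dpia_completed_on=date(2025, 3, 14),
        subprocessors=["EU-hosted LLM provider", "logging vendor"],
    )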

Sectoral rules (BaFin, MDR, KRITIS)

Banks and insurers face BaFin expectations on model risk management. Medical AI lands in MDR/IVDR. Critical infrastructure operators have BSI requirements on availability and integrity. Public administration has IT security law obligations. We know the requirements and design architectures that meet horizontal and sectoral rules at the same time.

Our compliance playbook

  1. Inventory: which AI systems already run, what risk class are they in?
  2. DPIA + AI Act assessment per system (combined where possible).
  3. Technical and organizational measures (TOMs, Article 32) and data processing agreements (DPAs, Article 28) tightened for AI specifics.
  4. Audit trail with automated evidence (logs, model versions, prompts); a minimal sketch follows this list.
  5. Quarterly review and event-driven re-checks.
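For step 4, a minimal sketch of automated evidence capture, assuming append-only JSONL storage; the schema and names are our own illustration:

    import hashlib
    import json
    from datetime import datetime, timezone

    def log_ai_event(path: str, system: str, model_version: str, prompt: str, output: str) -> None:
        """Append one audit-trail record per model call (illustrative schema).

        Prompts and outputs are stored as hashes here; whether to keep full
        text depends on your data-minimization and retention decisions.
        """
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "model_version": model_version,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # Example: record one call of a summarization system.
    log_ai_event("audit_trail.jsonl", "support-summarizer", "2025-06-01",
                 "Summarize this ticket ...", "The customer reports ...")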

Ready for a call?

30 minutes, free, no strings attached. We listen to your case and tell you honestly whether and how we can help.

Frequently asked

What does the EU AI Act change for my business?

The AI Act sorts AI systems into four risk classes: prohibited, high risk, limited risk, minimal risk. Most generative AI applications in everyday business (correspondence, research, summaries) fall under 'limited risk' (transparency duty). High-risk systems (e.g. AI in HR decisions or critical infrastructure) need conformity assessment, logbook and risk management.

Which industries have specific AI rules?

Finance (BaFin expectations on model risk management), healthcare (MDR/IVDR for medical AI products), critical infrastructure (BSI requirements on availability and integrity), public administration (IT security law). We know the requirements and design architectures that meet horizontal and sectoral rules simultaneously.

How do I run a data-protection impact assessment for AI?

We use the WP248 methodology of the Article 29 Working Party, extended with AI-specific aspects: data origin, model training data, explainability, bias, re-identification risk. The DPIA is coordinated with the data protection officer (DPO), documented, and archived in an audit-proof way.

Do I need my own compliance-management system for AI?

If you already run a compliance management system or ISMS (ISO 27001, BSI IT-Grundschutz, TISAX), an AI-specific extension is usually enough. Without an existing CMS we recommend setting up AI compliance properly from the start; the structures carry over to other IT security and data protection topics.

How often do I need to review AI compliance?

At least annually, plus event-driven on model updates, new use cases or regulatory changes. We build an audit trail that produces reports automatically during routine reviews — manual effort stays low.
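What automatic reporting can look like, as a minimal sketch that reads the illustrative JSONL audit trail from the playbook above and summarizes one quarter; the schema and names are assumptions, not a fixed product:

    import json
    from collections import Counter
    from datetime import datetime

    def quarterly_summary(path: str, year: int, quarter: int) -> dict:
        """Count audit-trail records per system and model version for one quarter."""
        start_month = 3 * (quarter - 1) + 1
        counts: Counter = Counter()
        with open(path, encoding="utf-8") as f:
            for line in f:
                rec = json.loads(line)
                ts = datetime.fromisoformat(rec["timestamp"])
                if ts.year == year and start_month <= ts.month < start_month + 3:
                    counts[(rec["system"], rec["model_version"])] += 1
        return {f"{system} @ {version}": n for (system, version), n in counts.items()}

    # Example: how many logged model calls ran per system in Q2 2025.
    print(quarterly_summary("audit_trail.jsonl", 2025, 2))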