IQONEX
AI for medical practices

AI in your practice — without giving up patient confidentiality.

We build AI workflows that hold up under §203 StGB, the MDR and the GDPR: pseudonymization, audit logging, mandatory review. In production, not just in theory.

Where the shoe pinches

Three conflicts we hear in every practice.

  • Patient data in standard ChatGPT — §203 violation

    §203 StGB

    Staff paste patient histories into ChatGPT for summarization. That breaches medical confidentiality — and the standard plan offers no DPA that meets §203.

  • Time pressure in anamnesis and documentation

    Practice efficiency

    First appointments cost 20–40 minutes of unfocused conversation. Better preparation could save 5–10 minutes per patient — but only if it's GDPR-clean.

  • MDR/IVDR uncertainty on diagnostic AI

    MDR / IVDR

    Some AI use cases (diagnostic suggestions, image analysis) fall under medical-device regulation. We tell you up front whether your case lands in a regulated category — and don't build what shouldn't be built.

What we actually build

Three use cases that run in production at our customers' practices.

Anamnesis prep with pseudonymization

The patient fills out a structured form, which is pseudonymized locally (name → Patient_42). The AI condenses it into bullet points for the doctor. Raw input and the mapping stay in the practice-management system.

Effect: 5–10 minutes saved per first appointment, no data-protection compromise.
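The local pseudonymization step can be sketched in a few lines. This is a minimal illustration, not our production code: the function name and the example mapping are made up, and a real deployment handles name variants, dates of birth and other identifiers as well.

```python
import re

def pseudonymize(text: str, mapping: dict[str, str]) -> str:
    """Replace each known patient name with its stable pseudonym.

    The mapping (e.g. {"Erika Mustermann": "Patient_42"}) never leaves
    the practice-management system; only the pseudonymized text is sent
    to the AI model.
    """
    for name, pseudonym in mapping.items():
        text = re.sub(re.escape(name), pseudonym, text)
    return text

mapping = {"Erika Mustermann": "Patient_42"}  # illustrative entry
form = "Erika Mustermann reports recurring migraines since March."
print(pseudonymize(form, mapping))
# The model only ever sees: "Patient_42 reports recurring migraines since March."
```

Because the mapping stays local, the doctor can re-identify "Patient_42" in the summary, while the AI provider never holds personal data.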

Correspondence drafts

The AI drafts referral letters, follow-up letters and patient communication. The doctor reviews and releases each one, and every outgoing message is recorded in the audit log.

Effect: Routine correspondence in half the time.
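The release step above pairs each doctor's sign-off with an audit entry. A minimal sketch, assuming a simple in-memory list as the log store; the function and field names are illustrative, and a real system would write to append-only storage:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_release(audit_log: list, doctor: str, recipient: str, draft: str) -> dict:
    """Append an audit entry for every released letter.

    The draft is hashed rather than stored, so the log can prove that a
    specific document was released by a specific doctor at a specific
    time, without duplicating patient-related content.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "doctor": doctor,
        "recipient": recipient,
        "draft_sha256": hashlib.sha256(draft.encode("utf-8")).hexdigest(),
        "action": "released",
    }
    audit_log.append(entry)
    return entry

log: list = []
entry = log_release(log, "Dr. Weber", "Radiology Practice Mitte", "Dear colleagues, ...")
print(json.dumps(entry, indent=2))
```

Hashing instead of storing the letter keeps the audit trail itself free of personal data.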

Internal knowledge base

Searchable AI assistant on your practice's guidelines, SOPs and training material. No patient data, no external transfer.

Effect: New staff productive in days, not weeks.
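At its core, the assistant retrieves the most relevant internal documents for a question. The toy keyword ranker below stands in for the embedding-based retrieval a real deployment would use; document titles and contents are invented for illustration:

```python
def search_sops(query: str, documents: dict[str, str]) -> list[str]:
    """Rank internal documents by keyword overlap with the query.

    Scores each document by how many query terms it contains, drops
    non-matches, and returns titles best-match first.
    """
    q_terms = set(query.lower().split())
    scored = [
        (len(q_terms & set(text.lower().split())), title)
        for title, text in documents.items()
    ]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

sops = {  # illustrative practice documents
    "Hygiene SOP": "hand disinfection protocol steps for treatment rooms",
    "Vacation policy": "how staff request vacation days",
}
print(search_sops("disinfection protocol", sops))
```

Since the corpus contains only guidelines, SOPs and training material, no patient data is ever indexed or transferred.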

Our approach for medical practices

Three stages — start where it fits.

Workshop

Half day. Use-case prioritization, §203 risk, DPO involvement.

Pilot

4–6 weeks. One use case (e.g. anamnesis or correspondence), documented, in trial.

Roll-out

Expansion across the practice, staff training, DPA bundle.

What our LiteLog customers say

Direct evidence that our compliance software runs in production, not just in PowerPoint.

"Thanks to LiteLog we could prove we were at the right place at the right time, and that we did our job. Without that proof, we'd have paid 30,000 € in damages."
Management · Sekuris Dienstleistungen GmbH & Co. KG
"The whole tool is well laid out and easy to use. Questions get answered quickly and without fuss. I can fully recommend it."
Marco Volderauer, Managing Partner · SAÖ Dienstleistungsunternehmen KG

Frequently asked from medical practices

Which AI applications are actually allowed in a medical practice?

Applications are allowed if they never see patient data (e.g. research, training material), or if they work with pseudonymized data via a processor with adequate technical and organizational measures. We design workflows so the practice remains the data controller and the AI sees only the minimum necessary data.

How do you build GDPR-compliant anamnesis prep with AI?

The patient fills out a structured form, which is pseudonymized locally (name → Patient_42); the AI then condenses it into bullet-point notes for the doctor. Raw input and the mapping stay in your practice-management system. Result: 5–10 minutes saved per first appointment, with no data-protection compromise.

Are US providers like OpenAI off-limits for a German practice?

For standard OpenAI, yes. Via Azure OpenAI in an EU region with a DPA (as of 2026), use in a §203-compatible setup is possible, provided the practice ensures pseudonymization and purpose limitation. We configure this in detail and deliver auditable documentation for your DPO.

We're a 30-person multi-doctor practice. What do you recommend as a starting point?

Half-day workshop for practice management and DPO (prioritize use cases), then a 4–6-week pilot on one clearly scoped use case (e.g. anamnesis or correspondence) before going broader.

What does the EU AI Act mean for medical practices?

Generative AI in patient correspondence typically falls under 'limited risk' (transparency duty). Diagnostic AI systems can become 'high risk' — those we don't build. We'll tell you honestly whether your use case lands in a regulated category.