IQONEX

February 12, 2026

AI · Security · KRITIS

AI in the security industry — what works, what doesn't

Field notes on AI use cases in security firms, gathered from running our own product LiteLog with KRITIS customers.

By Jan Bamesberger

Security firms are an interesting AI test case: high paperwork volume, strict compliance under §34a GewO and KRITIS regulations, geographically distributed staff, low-margin business. AI either lifts margins or it doesn't move the needle. There's no in-between.

We've been running our own product LiteLog in the security industry since 2020. The notes below come from real deployments, not slideware.

Where AI actually helps

Shift report drafting. Guards write shift reports by hand during patrol. AI structures them into the format the dispatcher needs (incidents, actions, follow-up). Saves 10–15 minutes per shift, every shift.
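
For illustration, a minimal Python sketch of that structuring step, assuming an Azure OpenAI chat deployment. The deployment name, endpoint and prompt wording are placeholders, and the three output fields simply mirror the dispatcher format above; nothing here is LiteLog's actual code.

    import json
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://example-frankfurt.openai.azure.com",  # placeholder endpoint
        api_key="...",            # read from a secrets store, never hard-coded
        api_version="2024-06-01",
    )

    SYSTEM_PROMPT = (
        "You structure raw security shift reports. "
        "Return JSON with exactly these keys: incidents, actions, follow_up. "
        "Do not add information that is not in the report."
    )

    def structure_shift_report(raw_report: str) -> dict:
        """Turn a guard's free-text report into the dispatcher's three-field format."""
        response = client.chat.completions.create(
            model="gpt-4o-litelog",  # deployment name is an assumption
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": raw_report},
            ],
            response_format={"type": "json_object"},  # force parseable output
            temperature=0,
        )
        return json.loads(response.choices[0].message.content)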

Incident classification. Reports of unusual events are pre-classified (false alarm, customer issue, security incident, escalation needed). The dispatcher decides — but they don't read every line of every report from scratch.
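
A sketch of the review gate around that pre-classification, assuming the model returns one of the four labels above as a plain string. The enum and queue names are invented for the example, not LiteLog internals.

    from enum import Enum

    class Classification(Enum):
        FALSE_ALARM = "false alarm"
        CUSTOMER_ISSUE = "customer issue"
        SECURITY_INCIDENT = "security incident"
        ESCALATION_NEEDED = "escalation needed"

    def route_report(model_label: str) -> str:
        """Map a model label to a dispatcher queue; anything unexpected goes to review."""
        try:
            label = Classification(model_label.strip().lower())
        except ValueError:
            # Unknown label: never guess, hand it to the dispatcher unclassified.
            return "dispatcher-review"
        if label in (Classification.SECURITY_INCIDENT, Classification.ESCALATION_NEEDED):
            return "dispatcher-urgent"
        return "dispatcher-routine"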

Compliance documentation. §34a renewals, training certificates, equipment logs — most of this is structured paperwork that AI handles cleanly with pseudonymized data.

Where AI actively gets in the way

Live decisions. Real-time threat assessment is not an AI use case. The latency, the legal exposure and the pattern variance are all wrong. Humans on radio with a dispatcher remain the answer.

Customer-facing communication. Customers in KRITIS contexts (hospitals, infrastructure operators) expect named contacts, not AI replies. Even AI-drafted responses must be lawyer-approved and human-sent.

Replacing experienced staff. Tempting, doesn't work. Experienced security people read context that AI doesn't capture (familiar faces, regular patterns, "something feels off"). AI should free them from paperwork, not replace them.

A defensible architecture for the industry

For LiteLog and our consulting customers, we converged on:

  • Azure OpenAI in Frankfurt (Germany West Central).
  • Pseudonymization before every model call (guard names → IDs, customer locations → tokens); see the sketch after this list.
  • Mandatory dispatcher review on every customer-facing output.
  • Audit log per shift, retained per the customer's KRITIS requirements.
  • §34a-compatible role separation in the access model.
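
To make the pseudonymization and audit-log points concrete, here is a rough Python sketch of anonymizing text before a model call and writing a per-call audit record. The registries, token formats ("G-0042", "SITE-A7") and record fields are invented for the example, not the LiteLog schema.

    import hashlib
    from datetime import datetime, timezone

    GUARDS = {"Max Mustermann": "G-0042"}        # guard name -> stable pseudonym
    SITES = {"Klinikum Nordstadt": "SITE-A7"}    # customer location -> token

    def pseudonymize(text: str) -> tuple[str, dict]:
        """Replace guard names and site names before any model call; return the reverse map."""
        reverse = {}
        for mapping in (GUARDS, SITES):
            for clear, token in mapping.items():
                if clear in text:
                    text = text.replace(clear, token)
                    reverse[token] = clear
        return text, reverse

    def audit_entry(shift_id: str, model_input: str, model_output: str) -> dict:
        """One audit record per model call, hashed so the log itself holds no clear text."""
        return {
            "shift_id": shift_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "input_sha256": hashlib.sha256(model_input.encode()).hexdigest(),
            "output_sha256": hashlib.sha256(model_output.encode()).hexdigest(),
        }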

This passes BSI audits and keeps customers comfortable. It doesn't preclude using AI productively; quite the opposite, it gives staff the confidence that they're working inside the rules.

What's next

We're running pilots on incident prediction (which incidents tend to occur on which routes at which times) — but that's predictive, not generative AI, and falls under the EU AI Act's high-risk classification for critical infrastructure. So a different conversation entirely.
