Guide · March 13, 2026 · 10 min read

EU AI Act Risk Classification: Is Your AI System High-Risk?

The EU AI Act uses a risk-based framework with four tiers. Your compliance obligations depend entirely on which tier your AI system falls into. Getting this classification right is the single most important step in AI Act compliance.

The Four Risk Tiers

Tier 1: Prohibited AI Practices (Article 5)

Banned outright since February 2, 2025. Penalties up to EUR 35 million or 7% of annual worldwide turnover, whichever is higher.

  • Subliminal manipulation causing harm
  • Exploitation of vulnerable groups
  • Social scoring (by public or private actors)
  • Real-time remote biometric identification in public spaces by law enforcement (with limited exceptions)
  • Emotion recognition in workplaces and schools
  • Untargeted facial recognition scraping

Tier 2: High-Risk AI Systems (Annex III)

Full compliance required by August 2, 2026. High-risk systems must meet the requirements of Articles 9-15 and undergo conformity assessment, EU database registration, and post-market monitoring.

  • Biometrics (where permitted): Remote biometric identification, emotion recognition, biometric categorization
  • Critical infrastructure: Safety components of road traffic, water, gas, heating, electricity supply, digital infrastructure
  • Education: Admissions scoring, learning outcome assessment, examination proctoring
  • Employment: CV screening, candidate ranking, interview analysis, performance evaluation, promotion/termination decisions
  • Essential services: Credit scoring, emergency services dispatch, insurance risk assessment
  • Law enforcement: Risk assessment of individuals, polygraphs, evidence reliability, profiling
  • Migration: Risk assessment, document authenticity verification, asylum application assessment
  • Justice: Research and interpretation of facts and law, application of law to facts

Tier 3: Limited Risk (Transparency Obligations)

Must clearly inform users they are interacting with AI. Lower compliance burden.

  • Chatbots and conversational AI (must disclose AI nature)
  • AI-generated content including deepfakes (must be labeled)
  • Emotion recognition systems not in high-risk categories
  • Biometric categorization not in high-risk categories

Tier 4: Minimal Risk

No specific AI Act obligations beyond general AI literacy (Article 4). Encouraged to adopt voluntary codes of conduct.

  • AI-powered spam filters
  • AI in video games
  • Inventory management AI
  • Content recommendation algorithms (non-manipulative)
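To make the tier-to-obligation mapping above easier to use in an internal AI inventory, here is a minimal Python sketch. It is illustrative only: the RiskTier labels and the one-line obligation summaries are our own shorthand for the sections above, not terms defined by the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Shorthand labels for the four AI Act risk tiers (our own naming, not legal terms)."""
    PROHIBITED = "prohibited"       # Article 5 practices, banned since February 2, 2025
    HIGH_RISK = "high_risk"         # Annex III categories (and Annex I safety components)
    LIMITED_RISK = "limited_risk"   # transparency obligations (chatbots, deepfakes, ...)
    MINIMAL_RISK = "minimal_risk"   # no specific obligations beyond AI literacy (Article 4)

# Headline obligations per tier, summarising the sections above.
TIER_OBLIGATIONS = {
    RiskTier.PROHIBITED: "Discontinue; penalties up to EUR 35M or 7% of worldwide turnover.",
    RiskTier.HIGH_RISK: "Articles 9-15, conformity assessment, EU database registration, "
                        "post-market monitoring; deadline August 2, 2026.",
    RiskTier.LIMITED_RISK: "Inform users they are interacting with AI; label generated content.",
    RiskTier.MINIMAL_RISK: "No specific obligations; voluntary codes of conduct encouraged.",
}

if __name__ == "__main__":
    for tier, obligations in TIER_OBLIGATIONS.items():
        print(f"{tier.value}: {obligations}")
```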

How to Classify Your AI System

Step 1: Check the prohibited list first

Review Article 5 carefully. If your system falls here, it must be discontinued immediately. This has been enforceable since February 2025.

Step 2: Check Annex III categories

Review all eight high-risk categories in Annex III. If your AI system's intended purpose falls within any of these categories, it is high-risk.
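If you keep an inventory of AI systems, one lightweight way to apply this step is to record which of the eight Annex III areas, if any, each system's intended purpose touches. The sketch below uses paraphrased area labels taken from the list earlier in this guide; they are internal shorthand, not legal definitions.

```python
# The eight Annex III areas, paraphrased from the high-risk list above (internal labels only).
ANNEX_III_AREAS = [
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration",
    "justice",
]

def annex_iii_matches(intended_purpose_areas: set[str]) -> list[str]:
    """Return the Annex III areas a system's intended purpose falls into, if any."""
    return [area for area in ANNEX_III_AREAS if area in intended_purpose_areas]

# Example: a CV-screening tool touches the employment area, so it is a high-risk candidate.
print(annex_iii_matches({"employment"}))  # -> ['employment']
```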

Step 3: Check Annex I (safety components)

If your AI system is a safety component of a product covered by the EU product safety legislation listed in Annex I (and that product must undergo third-party conformity assessment), it is high-risk regardless of Annex III.

Step 4: Consider the exception clause

Even if your system matches an Annex III category, it may escape high-risk classification under Article 6(3) if it does not pose a significant risk of harm to health, safety, or fundamental rights, for example because it only performs a narrow procedural or preparatory task. This exception is interpreted narrowly, never applies to systems that profile natural persons, and your assessment must be documented.

Step 5: Check transparency obligations

If your system interacts directly with people (chatbots), generates synthetic content (including deepfakes), or performs emotion recognition or biometric categorization, it likely has transparency obligations under Article 50 regardless of risk tier.
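Taken together, the five steps form an ordered decision procedure, and it can help to encode that order explicitly when triaging a portfolio of systems. The sketch below is a simplification under stated assumptions: the Assessment fields are hypothetical yes/no answers you would have to determine through legal analysis; the code only enforces the order of the checks, it does not replace that analysis.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Hypothetical answers to the five classification questions for one AI system."""
    matches_article_5: bool            # Step 1: prohibited practice?
    matches_annex_iii: bool            # Step 2: intended purpose in an Annex III category?
    is_annex_i_safety_component: bool  # Step 3: safety component under Annex I legislation?
    article_6_3_exception: bool        # Step 4: documented "no significant risk" exception?
    interacts_or_generates: bool       # Step 5: chatbot, generated content, emotion/biometrics?

def classify(a: Assessment) -> tuple[str, list[str]]:
    """Apply the five steps in order and return a tier label plus cross-cutting notes."""
    notes = []
    if a.interacts_or_generates:
        notes.append("Transparency obligations likely apply regardless of tier.")  # Step 5

    if a.matches_article_5:
        return "prohibited", notes                      # Step 1: discontinue immediately
    if a.is_annex_i_safety_component:
        return "high_risk", notes                       # Step 3: high-risk regardless of Annex III
    if a.matches_annex_iii:
        if a.article_6_3_exception:
            notes.append("Document the Article 6(3) assessment.")
            return "not_high_risk_by_exception", notes  # Step 4: narrow, documented exception
        return "high_risk", notes                       # Step 2: Annex III match
    return ("limited_risk" if a.interacts_or_generates else "minimal_risk"), notes

# Example: a CV-screening system (Annex III employment area, no exception, no chatbot interface).
print(classify(Assessment(False, True, False, False, False)))  # -> ('high_risk', [])
```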

Need a tool to help classify your AI systems?

Several platforms offer automated risk classification aligned with the EU AI Act. Compare them in our directory.

Stay ahead of the AI Act deadline

Get compliance updates, new tool listings, and practical guides delivered to your inbox. No spam, unsubscribe anytime.

Join compliance professionals preparing for August 2026.