EU AI Act Compliance Checklist: 15 Steps Before August 2026
This checklist breaks down EU AI Act compliance into 15 actionable steps. It covers what you need to do now, what can wait, and what tools can help accelerate each step.
Phase 1: Assessment (Start Now)
Inventory all AI systems in your organization
Create a complete register of every AI system you develop, deploy, or procure. Include third-party AI tools used by employees. Many organizations discover "shadow AI" during this step.
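A register can be as simple as one structured record per system, kept somewhere queryable. A minimal sketch in Python (the field names are our own suggestion, not mandated by the Act; the vendor and system names are hypothetical):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the organization's AI system register (illustrative fields)."""
    name: str
    vendor: str                        # "internal" for systems you develop yourself
    business_owner: str                # who is accountable for this system
    purpose: str                       # what the system is actually used for
    role: str                          # "provider", "deployer", or "both"
    risk_class: str = "unclassified"   # filled in during step 2
    first_deployed: date | None = None

# Example entry, including a third-party tool surfaced during the shadow-AI audit
register = [
    AISystemRecord(
        name="resume-screening-assistant",
        vendor="ExampleVendor Inc.",
        business_owner="HR Operations",
        purpose="Ranks incoming job applications",
        role="deployer",
    ),
]
```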
Classify each system by risk level
Determine whether each AI system is prohibited (Article 5), high-risk (Article 6, covering Annex III use cases and safety components of Annex I regulated products), limited-risk, or minimal-risk. This classification drives all subsequent compliance obligations. When in doubt, consult Annex III directly.
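Recording the legal basis alongside the outcome makes the classification auditable later. One possible shape for that record (the categories mirror the Act; the structure is our own):

```python
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited"      # Article 5 practices
    HIGH_RISK = "high-risk"        # Article 6 / Annex III
    LIMITED_RISK = "limited-risk"  # transparency obligations (Article 50)
    MINIMAL_RISK = "minimal-risk"  # no specific obligations

classification = {
    "system": "resume-screening-assistant",
    "risk_class": RiskClass.HIGH_RISK,
    "basis": "Annex III, point 4 (employment, worker management)",
    "assessed_by": "Legal + Engineering",
    "assessed_on": "2026-01-15",
}
```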
Determine your role: provider, deployer, or both
The AI Act distinguishes between providers (who develop AI systems) and deployers (who use them). Your obligations differ significantly based on your role. Many organizations are both.
Complete AI literacy training (already required)
As of February 2, 2025, all organizations must ensure staff working with AI have sufficient AI literacy (Article 4). This obligation is already in effect. Document your training efforts.
Conduct a gap analysis
Compare your current AI governance practices against the requirements in Articles 9-15. Identify where you need new processes, documentation, or tooling. This tells you how much work lies ahead.
Phase 2: Build Compliance Infrastructure (Q1-Q2 2026)
Implement a risk management system (Article 9)
Establish a continuous, iterative risk management process for each high-risk AI system. This must cover the entire lifecycle from design through deployment and monitoring.
Set up data governance (Article 10)
Ensure training, validation, and testing datasets meet quality requirements. Document data sources, collection methods, potential biases, and preprocessing steps.
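In practice, much of Article 10 reduces to documenting every dataset in a consistent, reviewable form. A minimal sketch (field names, dataset name, and contact are illustrative):

```python
dataset_record = {
    "dataset": "applicant-history-2020-2024",       # hypothetical dataset name
    "used_for": ["training", "validation"],
    "source": "internal ATS export",
    "collection_method": "batch export, quarterly",
    "known_biases": [
        "underrepresents applicants from non-EU universities",
    ],
    "preprocessing": [
        "dropped rows with missing salary fields",
        "names and addresses removed before training",
    ],
    "reviewed_by": "data-governance@example.com",   # hypothetical contact
    "reviewed_on": "2026-02-01",
}
```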
Prepare technical documentation (Article 11 + Annex IV)
Create comprehensive documentation covering system design, development methodology, capabilities, limitations, testing results, and monitoring. Annex IV specifies exactly what must be included.
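Because Annex IV is effectively a table of contents, keeping your documentation in a matching structure makes completeness trivial to check. A sketch of such a skeleton (the section titles paraphrase Annex IV from memory; consult the Annex itself for the authoritative list):

```python
# Paraphrased outline of what Annex IV asks for; not the authoritative text
annex_iv_skeleton = {
    "1_general_description": None,      # intended purpose, versions, hardware
    "2_development_process": None,      # design choices, data, training, validation
    "3_monitoring_and_control": None,   # capabilities, limitations, human oversight
    "4_performance_metrics": None,      # why the chosen metrics are appropriate
    "5_risk_management": None,          # per Article 9
    "6_lifecycle_changes": None,        # changes made through the lifecycle
    "7_standards_applied": None,        # harmonised standards or alternatives
    "8_declaration_of_conformity": None,
    "9_post_market_monitoring_plan": None,
}

missing = [section for section, content in annex_iv_skeleton.items() if content is None]
print(f"Sections still to write: {len(missing)}")
```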
Implement automatic logging (Article 12)
High-risk AI systems must automatically record events throughout their lifecycle. Ensure your logging captures inputs, outputs, decisions, and system performance metrics.
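Concretely, every inference call should write a structured, timestamped event. A minimal sketch using Python's standard logging module (the event fields are our choice, not prescribed by the Act):

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
handler = logging.FileHandler("ai_audit.log")  # production: append-only, tamper-evident storage
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_inference(system: str, model_version: str, inputs: dict, output, latency_ms: float):
    """Record one inference event as a single JSON line."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "model_version": model_version,
        "inputs": inputs,        # consider redacting personal data before logging
        "output": output,
        "latency_ms": latency_ms,
    }
    logger.info(json.dumps(event))

# Example call
log_inference(
    system="resume-screening-assistant",
    model_version="2.3.1",
    inputs={"applicant_id": "A-1042"},
    output={"rank": 7, "score": 0.81},
    latency_ms=42.5,
)
```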
Design transparency measures (Article 13)
High-risk AI systems must be transparent to deployers. Provide clear instructions for use, system capabilities, limitations, intended purpose, and known risks.
Phase 3: Operationalize (Q2-Q3 2026)
Establish human oversight mechanisms (Article 14)
Define who oversees each AI system, how they can intervene, and under what conditions. Implement the ability to override, interrupt, or shut down the system.
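The Act mandates the capability, not the design. One possible pattern is a thin wrapper that gives a named overseer a halt switch and an override path (a sketch; the model and decisions are stand-ins):

```python
class OverseenSystem:
    """Wrap an AI system so a named human overseer can halt or override it."""

    def __init__(self, predict_fn, overseer: str):
        self.predict_fn = predict_fn
        self.overseer = overseer
        self.halted = False

    def halt(self, reason: str):
        """Overseer interrupts the system; subsequent calls are refused."""
        self.halted = True
        print(f"System halted by {self.overseer}: {reason}")

    def predict(self, inputs):
        if self.halted:
            raise RuntimeError("System halted; human review required")
        return self.predict_fn(inputs)

    def override(self, inputs, human_decision):
        """Record that a human decision replaced the model output."""
        model_output = self.predict_fn(inputs)
        print(f"Override by {self.overseer}: model said {model_output!r}, "
              f"human decided {human_decision!r}")
        return human_decision

# Usage with a stand-in model
system = OverseenSystem(predict_fn=lambda x: "reject", overseer="HR Operations lead")
print(system.predict({"applicant_id": "A-1042"}))
system.override({"applicant_id": "A-1042"}, human_decision="advance to interview")
system.halt("unexpected rejection rate spike")
```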
Ensure accuracy, robustness, and cybersecurity (Article 15)
Validate AI system accuracy through appropriate testing. Implement resilience against errors, faults, and adversarial attacks. Apply cybersecurity measures proportionate to risk.
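Accuracy and robustness claims are easiest to defend when they are enforced as release-gating tests rather than one-off reports. A sketch with a stand-in model (the thresholds and noise perturbation are illustrative, not values from the Act):

```python
import random

def model(features: dict) -> str:
    """Stand-in for the real model under test (hypothetical)."""
    return "approve" if features["score"] > 0.5 else "reject"

def check_accuracy(test_set, threshold=0.90):
    """Gate the release on a documented accuracy threshold."""
    correct = sum(model(x) == label for x, label in test_set)
    accuracy = correct / len(test_set)
    assert accuracy >= threshold, f"accuracy {accuracy:.2%} below {threshold:.0%}"

def check_noise_robustness(test_set, max_flip_rate=0.05):
    """Small input perturbations should rarely flip the decision."""
    flips = sum(
        model({**x, "score": x["score"] + random.uniform(-0.01, 0.01)}) != model(x)
        for x, _ in test_set
    )
    assert flips / len(test_set) <= max_flip_rate

test_set = [
    ({"score": s}, "approve" if s > 0.5 else "reject")
    for s in (random.random() for _ in range(200))
]
check_accuracy(test_set)
check_noise_robustness(test_set)
print("Accuracy and robustness gates passed")
```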
Set up a quality management system
Providers must implement a QMS covering AI system development, testing, and deployment. Consider ISO/IEC 42001, the AI management system standard, as a framework to build on.
Complete conformity assessment
For most Annex III high-risk systems, providers can self-assess under the internal control procedure (Annex VI). Biometric systems require assessment by a notified body unless harmonised standards or common specifications have been applied in full; high-risk systems embedded in Annex I products follow the conformity procedures of their sectoral legislation.
Register in the EU AI database and maintain ongoing compliance
Register your high-risk AI system in the EU database before placing it on the market. Establish post-market monitoring and incident reporting procedures. Compliance is ongoing, not one-time.
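Incident reporting is far easier under deadline pressure if the record format exists before the first incident. A sketch of a serious-incident record (the fields and scenario are illustrative; Article 73 governs what must be reported and when):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SeriousIncident:
    """Illustrative incident record for post-market monitoring."""
    system: str
    model_version: str
    occurred_at: str
    detected_at: str
    description: str
    harm_category: str             # e.g. health, safety, fundamental rights
    corrective_action: str
    reported_to_authority: bool = False

incident = SeriousIncident(
    system="resume-screening-assistant",
    model_version="2.3.1",
    occurred_at="2026-09-03T14:20:00Z",
    detected_at="2026-09-04T09:00:00Z",
    description="Systematic downranking of a protected group found in monitoring",
    harm_category="fundamental rights",
    corrective_action="Model rolled back to 2.2.0; root-cause analysis opened",
)
print(json.dumps(asdict(incident), indent=2))
```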
Need help choosing a compliance tool?
We compare 28 EU AI Act compliance platforms across categories, pricing, and coverage depth. Find the right tool for your organization.
Stay ahead of the AI Act deadline
Get compliance updates, new tool listings, and practical guides delivered to your inbox. No spam, unsubscribe anytime.
Join compliance professionals preparing for August 2026.