EU AI Act Risk Classification: A Step-by-Step Guide for Compliance Teams

The EU AI Act creates five risk tiers with fines from €7.5M to €35M. Classification is the most consequential compliance task — get it wrong and you either miss obligations or waste resources on requirements that do not apply. 53.8% of German enterprises have implemented no concrete compliance measures (Deloitte, November 2025). Full enforcement begins 2 August 2026. The five-step process in this guide covers every system — from prohibited practices to Minimal Risk productivity tools. When uncertain, default to High-Risk: the documentation burden is lower than the regulatory exposure from misclassification.
The most consequential task in EU AI Act compliance is classification. Get it wrong and you either under-comply — missing obligations you should have met — or over-comply, spending resources on documentation requirements that do not apply. Either error is expensive. An unclassified High-Risk AI system operating without the required conformity documentation carries a fine of up to €15 million or 3% of global annual turnover under Article 99 of Regulation (EU) 2024/1689.
A Deloitte survey of German enterprises found that 53.8% had implemented no concrete compliance measures as of November 2025, with only 26.2% actively engaging with the regulation's requirements (Deloitte, November 2025). Germany has some of the most compliance-aware corporates in Europe — the picture elsewhere is worse, and the August 2026 deadline is now less than three months away.
This guide walks through the five-step classification process for each risk tier — the key questions, the specific domains covered, and what the classification outcome requires next. For the broader compliance picture — timeline, enforcement context, and the shadow AI problem that makes inventory discovery the necessary first step — see our guide to what every enterprise must do before the August 2026 deadline.
“Correct risk classification is the foundation of AI Act compliance. An organisation that has classified its systems accurately — even if documentation is still in progress — is demonstrably further along than one that has not. Classification is the task that unlocks every subsequent compliance action.”
What Qualifies as an "AI System" Under the Act?
The EU AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from input how to generate outputs — predictions, content, recommendations, or decisions — that can influence real or virtual environments.
This definition is intentionally broad. It includes large language models used for text generation, recommender systems, scoring models used in HR or credit decisioning, and any software that uses machine learning to generate outputs that influence real-world decisions. It does not include simple rule-based systems with no learning component — a decision tree with fixed thresholds is not an AI system under the Act. When in doubt, treat the system as in scope and let the classification process determine its tier.
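In registry terms, that scope test reduces to two booleans: does the system infer its outputs rather than follow fixed rules, and can those outputs influence decisions or environments? A minimal sketch in Python, where the field names are illustrative rather than terms from the Act:

```python
# Illustrative first-pass scope check. Field names are assumptions,
# not Act terminology.
def is_in_scope(system: dict) -> bool:
    """A system meets the Act's 'AI system' definition only if it
    infers outputs from inputs (rather than applying fixed,
    hand-written rules) and those outputs can influence decisions
    or real/virtual environments."""
    infers_outputs = system.get("uses_learning_or_inference", False)
    influences_env = system.get("outputs_influence_decisions", False)
    return infers_outputs and influences_env

# A decision tree with fixed thresholds and no learning component:
legacy_rules = {"uses_learning_or_inference": False,
                "outputs_influence_decisions": True}
assert not is_in_scope(legacy_rules)  # not an AI system under the Act
```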
Step 1 — Check for Prohibited Practices (Article 5)
Before classifying by risk level, verify whether any AI system in your inventory falls under Article 5 — the absolute prohibitions. These practices are banned regardless of context, purpose, or safeguards applied. There is no conformity documentation that makes a prohibited practice compliant. The fine is up to €35 million or 7% of global annual turnover — the highest in the Act.
Any AI system that performs any of the following is prohibited and must be decommissioned:
- Real-time remote biometric identification of natural persons in publicly accessible spaces (with narrow, authorised law enforcement exceptions)
- Social scoring of individuals or groups based on social behaviour or personal characteristics, where it leads to unjustified or disproportionate detrimental treatment; the final Act applies this prohibition to private actors as well as public authorities
- AI that exploits psychological vulnerabilities of specific groups — based on age, disability, or socioeconomic situation — to manipulate behaviour in ways that cause harm
- Subliminal techniques that affect behaviour without the person's conscious awareness, causing harm
- Predictive policing of individuals based on profiling alone, without a reasonable basis grounded in objective, verifiable facts
- Biometric categorisation systems that infer sensitive characteristics (political opinions, religious beliefs, sexual orientation) from biometric data
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
- Emotion recognition systems in the workplace or in education institutions, except where used for medical or safety reasons
If a system touches any of these practices, decommission it. Document the decommissioning date and rationale. Do not attempt to re-architect the system to minimise the prohibited function — the category prohibits the purpose, not a specific technical implementation.
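At inventory scale, the Article 5 check can run as a scan over registry entries. A minimal sketch, assuming each entry carries capability tags assigned during discovery (the tag names are hypothetical):

```python
# Hypothetical capability tags mapped to the Article 5 prohibitions above.
PROHIBITED_TAGS = {
    "realtime_remote_biometric_id_public",
    "social_scoring",
    "vulnerability_exploitation",
    "subliminal_manipulation",
    "predictive_policing_profiling_only",
    "biometric_inference_sensitive_traits",
    "untargeted_facial_scraping",
    "emotion_recognition_workplace_education",
}

def flag_prohibited(inventory: list[dict]) -> list[dict]:
    """Return every system whose tags intersect the Article 5 set.
    These systems must be decommissioned, not re-architected."""
    return [s for s in inventory
            if PROHIBITED_TAGS & set(s.get("capability_tags", []))]
```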
Step 2 — Identify High-Risk Systems (Annex III)
Annex III defines eight domains where AI is permitted but subject to full compliance obligations before deployment. For each AI system in your inventory, the primary question is: does this system operate in one of these eight domains? If yes, it is High-Risk regardless of the system's sophistication, the size of the organisation, or whether it was commercially procured or built internally.
| Annex III Domain | Common Enterprise Examples | Why High-Risk |
|---|---|---|
| 1. Biometric identification & categorisation | Facial recognition for building access; emotion recognition outside the prohibited workplace and education contexts; biometric time-and-attendance systems | Affects natural persons based on physical or behavioural characteristics |
| 2. Critical infrastructure | AI in energy grid management; water system automation; transport safety systems; critical digital infrastructure | Failure could disrupt or endanger critical services for large populations |
| 3. Education & vocational training | Automated essay grading; admission decision tools; exam proctoring AI; learning pathway recommendation | Determines access to educational opportunities and credentials |
| 4. Employment & HR | CV screening tools; interview scoring AI; performance management systems; promotion and termination decision support | Affects workers' access to employment and advancement |
| 5. Essential services access | Credit scoring models; insurance underwriting AI; mortgage approval systems; social benefit eligibility tools | Determines access to financial resources and social support |
| 6. Law enforcement | Risk assessment tools for recidivism; lie detection AI; predictive crime analysis beyond individual profiling | Affects fundamental rights and personal liberty |
| 7. Migration, asylum & border control | Automated document verification; risk profiling for border checks; asylum case processing AI | Affects fundamental rights in irreversible, high-stakes decisions |
| 8. Justice & democratic processes | AI assisting in judicial decisions or sentence recommendations; electoral campaign targeting AI; voter analysis tools | Affects the rule of law and democratic participation |
The employment domain is the most commonly misclassified in enterprise compliance programmes. Any AI used in CV screening, candidate ranking, interview assessment, employee performance monitoring, or promotion and termination decision support is High-Risk — even if the tool was purchased as a productivity feature from an HR software vendor, even if a human makes the final decision, and even if the AI's output is framed as a "recommendation" rather than a decision. The classification criterion is the domain and impact of the output, not the vendor's framing of the product.
When uncertain whether a system falls within an Annex III domain, default to High-Risk. The documentation burden of treating a borderline system as High-Risk is manageable. The regulatory exposure of misclassifying an employment AI as Minimal Risk and operating it without conformity documentation is not.
A High-Risk classification triggers seven obligations, all of which must be complete before the system is deployed (a checklist sketch follows the list):
- Risk management system — documented and maintained throughout the system's operational lifecycle
- Data governance — documentation of training data, testing data, data quality measures, and bias assessment
- Technical documentation — model design, architecture, training methodology, and performance characteristics
- Automatic logging — event and decision logging sufficient to enable post-hoc review of every output
- Human oversight — measures designed in before deployment; humans must be able to understand, monitor, and override the system
- Accuracy, robustness & cybersecurity testing — completed and documented before deployment
- EU AI database registration — system registered in the EU's public AI database before it is put into service
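A sketch of the checklist record referenced above, assuming the registry stores completion dates rather than booleans so the evidence trail survives an audit:

```python
from dataclasses import dataclass, field
from datetime import date

# The seven High-Risk conformity elements from the list above.
ELEMENTS = (
    "risk_management_system",
    "data_governance",
    "technical_documentation",
    "automatic_logging",
    "human_oversight",
    "accuracy_robustness_cybersecurity_testing",
    "eu_database_registration",
)

@dataclass
class ConformityChecklist:
    system_name: str
    # Element -> completion date; None means still outstanding.
    completed: dict[str, date | None] = field(
        default_factory=lambda: {e: None for e in ELEMENTS})

    def outstanding(self) -> list[str]:
        return [e for e, d in self.completed.items() if d is None]

    def ready_to_deploy(self) -> bool:
        # Do not deploy until every element is documented.
        return not self.outstanding()
```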
Step 3 — Identify Limited Risk Systems (Article 50)
Limited Risk covers AI systems that interact directly with people or generate synthetic content. The primary obligation is transparency — users must know they are interacting with AI. Non-compliance with the Article 50 transparency obligations falls under Article 99(4): fines of up to €15 million or 3% of global annual turnover.
Every enterprise chatbot is Limited Risk by default. If it interacts with users — employees, customers, or partners — it must disclose that it is AI before or at the start of the interaction. This applies whether the chatbot is internal-facing (HR queries, IT helpdesk) or external-facing (customer service), and regardless of how sophisticated or simple the bot's underlying technology is.
Synthetic content also falls under Limited Risk transparency obligations. AI-generated text presented as human-written, deepfakes, synthetic media, and AI-generated images used in communications must be labelled appropriately. The disclosure requirement applies to the content itself, not just to the system that produced it.
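A minimal sketch of the chatbot disclosure mechanism, assuming a session object your application already maintains; the notice wording here is illustrative, the requirement is only that it arrives before or at the start of the interaction:

```python
AI_DISCLOSURE = ("You are chatting with an AI assistant, "
                 "not a human agent.")

def start_of_interaction(session: dict) -> str | None:
    """Emit the AI disclosure once, at the start of the interaction,
    before the bot answers anything else. Returns None on later
    turns so the notice is not repeated."""
    if not session.get("disclosed"):
        session["disclosed"] = True
        return AI_DISCLOSURE
    return None
```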
Step 4 — Classify GPAI Models (Articles 51–56)
General Purpose AI covers foundation models — LLMs and other large models designed for broad task ranges rather than a specific deployment. If your organisation uses foundation models via API (OpenAI, Anthropic, Google, Meta, Mistral, or any other provider), you have GPAI deployer obligations distinct from the provider's obligations:
- Registry entry — document the GPAI model in your AI system registry with provider name, model version, and intended use
- Provider documentation — confirm the provider's technical documentation satisfies the Act's disclosure requirements; obtain and retain a copy
- Copyright policy — document a policy for training data copyright compliance
- Systemic-risk models — for models trained with more than 10²⁵ floating-point operations, a threshold generally understood to capture the largest frontier models: verify that your provider has completed adversarial testing, has cybersecurity assessment documentation, and has established incident reporting procedures
If you are using GPAI models via a vendor API, the provider handles their GPAI provider-level obligations. Your organisation's obligation as a deployer is to document and register — not to re-create the provider's technical documentation from scratch.
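The systemic-risk screen itself is a single threshold comparison, assuming a training-compute estimate can be obtained from the provider's documentation:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51 presumption threshold

def is_presumed_systemic_risk(training_flops: float) -> bool:
    """A GPAI model trained with more than 10**25 FLOPs is presumed
    to carry systemic risk, triggering the extra verification steps
    (adversarial testing, cybersecurity assessment, incident
    reporting) listed above."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# Example: a frontier-scale model estimated at 4e25 training FLOPs
assert is_presumed_systemic_risk(4e25)
```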
Step 5 — Document Minimal Risk Systems
The majority of productivity AI tools — spam filters, content recommendation engines, AI writing assistance, AI-powered search — fall into Minimal Risk. There are no mandatory compliance requirements. Voluntary codes of conduct apply.
"No mandatory requirements" does not mean "skip the registry entry." Every AI system must appear in the registry, including a brief classification rationale documenting why it was assessed as Minimal Risk. A blank registry entry is not compliant even for Minimal Risk systems — the Act's documentation requirements cover all deployed AI systems, and an auditor who finds an AI system with no registry entry cannot distinguish an intentionally minimal-risk classification from a system that was simply never assessed.
Classification decision flow based on Regulation (EU) 2024/1689, Articles 5, 6, 50, 51–56; Annex III. Source: EU AI Act, Official Journal of the EU.
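Expressed in code, the decision flow is an ordered series of checks in which earlier steps take precedence. A minimal sketch, using hypothetical boolean fields that the questionnaire below would populate:

```python
def classify(s: dict) -> str:
    """One-pass sketch of the five-step flow. Note that GPAI
    obligations attach to the underlying model and can apply in
    addition to a deployment tier; this sketch returns the first
    matching tier only. Uncertainty at the High-Risk boundary
    defaults to High-Risk."""
    if s["prohibited_practice"]:                      # Step 1 (Art. 5)
        return "prohibited"
    if s["annex_iii_domain"] or s["borderline_annex_iii"]:
        return "high_risk"                            # Step 2 (Annex III)
    if s["interacts_with_users"] or s["generates_synthetic_content"]:
        return "limited_risk"                         # Step 3 (Art. 50)
    if s["general_purpose_model"]:
        return "gpai"                                 # Step 4 (Arts. 51-56)
    return "minimal_risk"                             # Step 5
```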
The 15-Question Classification Questionnaire
Working through the classification process for every AI system in an organisation requires a consistent set of questions. The EU AI Office has published compliance guidance that makes the decision logic explicit. The questions that determine classification fall into a consistent pattern across all five tiers:
- Does the system affect natural persons — does its output influence decisions about people?
- Is it deployed in one of the eight Annex III domains?
- Does it make or materially influence consequential decisions about individuals — credit, employment, education?
- Does it interact with users directly, without identifying itself as AI?
- Does it process biometric data?
- Does it generate content that could be mistaken for human-generated output?
- Is it a general-purpose foundation model, or a purpose-built model for a specific task?
- Does it operate in a regulated sector — financial services, healthcare, critical infrastructure?
- Was it procured through formal IT channels, or adopted by employees without IT review?
- Who is the intended user — internal employees, external customers, or regulated individuals?
- Does it make safety-critical decisions in infrastructure or public safety contexts?
- Does it use biometric categorisation to infer sensitive personal characteristics?
- Is there a human in the loop who can review and override every output before it takes effect?
- Is it a third-party vendor product, a self-built system, or an API integration?
- Does the output of this system influence the output of a higher-risk system downstream?
For systems discovered through automated AI discovery scanning, an LLM can pre-fill all 15 answers based on what the system actually does — reducing classification time from hours to minutes per system. The compliance officer reviews and confirms rather than answering from scratch. This is the only approach that scales across a large AI estate under the August 2026 time constraint. Manual classification of 50+ systems, one by one from zero, is not feasible in the time remaining.
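A sketch of that review-and-confirm workflow, assuming answers arrive pre-filled from an LLM pass over discovery data and a compliance officer signs off on each one:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    question: str
    value: bool
    prefilled_by: str                # e.g. "llm-discovery-pass"
    confirmed_by: str | None = None  # compliance officer sign-off

def unconfirmed(answers: list[Answer]) -> list[Answer]:
    """Classification is only final once every pre-filled answer
    carries a human confirmation."""
    return [a for a in answers if a.confirmed_by is None]
```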
What to Do Immediately After Classification
| Classification | Immediate Next Steps | Deadline |
|---|---|---|
| Prohibited | Decommission immediately. Document decommission date, rationale, and confirmation that the system is no longer operational. | Immediate — already in breach since February 2025 |
| High-Risk | Generate conformity checklist. Begin documentation of all 7 required elements. Do not deploy until documentation is complete. Register in EU AI database. | Before 2 August 2026 for existing deployments |
| Limited Risk | Implement AI disclosure mechanism at point of interaction. Document the disclosure implementation. Add to registry. | Before 2 August 2026 |
| GPAI | Obtain and retain provider technical documentation. Document copyright policy. Register in registry. Verify systemic-risk status. | GPAI obligations active since August 2025 |
| Minimal Risk | Add to registry with classification rationale. No documentation package required. Schedule annual review. | Before 2 August 2026 — registry must be complete |
For organisations that have not yet built their AI system inventory — the prerequisite for classification — our guide to shadow AI and the enterprise governance gap covers why standard IT discovery methods miss most AI deployments and what automated discovery requires. For the full registry structure — what a regulator-ready AI system registry contains and what auditors look for — see our guide to building a regulator-ready AI system registry.
