EU AI Act: What Every Enterprise Must Do Before August 2026

Full EU AI Act enforcement begins 2 August 2026 (Article 113, Regulation 2024/1689). Fines reach €35M or 7% of global turnover for prohibited AI. A proposed extension to December 2027 failed in April 2026 trilogue negotiations. A Deloitte survey found 53.8% of German enterprises have implemented no concrete compliance measures. Complete AI inventory, risk classification, and High-Risk documentation are the three non-negotiable actions.
Figure: EU AI Act enforcement timeline — three phases from February 2025 through August 2026. Source: Regulation (EU) 2024/1689, Article 113.
EU AI Act Regulation 2024/1689 is binding law across all 27 EU member states. Article 113 sets 2 August 2026 as the date full enforcement begins — covering High-Risk AI systems, transparency obligations, and general-purpose AI model requirements. The prohibited practices provisions have already been enforceable since February 2025.
Three months out, most enterprises are behind. A Deloitte survey of German companies found that 53.8% had implemented no concrete compliance measures (Deloitte, November 2025), and only 26.2% had actively engaged with the regulation's requirements. Germany has some of the most compliance-aware corporates in Europe; the picture in other member states is unlikely to be better.
A proposal called the Digital Omnibus floated deferring some High-Risk obligations to December 2027. The April 2026 trilogue negotiations on that proposal failed to reach agreement. August 2026 remains the operative date, and organisations that paused their compliance work pending that outcome are now critically short of runway.
“Organisations cannot wait until the last moment. The documentation requirements for High-Risk AI systems are substantive — they require evidence of testing, data governance, and human oversight controls that must be built and recorded over time, not generated retroactively.”
- February 2025: Prohibited AI practices (Article 5) became enforceable. Any deployment touching these practices is already in breach.
- August 2025: GPAI model obligations under Articles 51–56 took effect. Foundation model deployments require technical documentation and copyright policy compliance from this date.
- 2 August 2026: Full enforcement — High-Risk AI system requirements (Annex III), transparency obligations for Limited Risk systems, and all remaining provisions. This is the deadline most enterprises are working toward.
What Does the EU AI Act Actually Require from Enterprises?
The EU AI Act creates obligations in three distinct areas, and most compliance programmes stall because they conflate them. First: obligations around AI systems you deploy or use — classification, documentation, and governance. Second: obligations around AI systems you develop and sell — more relevant to technology vendors. Third: obligations around general-purpose AI models — relevant when your organisation uses foundation models via API as components of your own AI systems.
For a typical enterprise — one that uses AI tools built by others — the practical obligations centre on three things: knowing what AI you use, classifying each system by risk tier, and applying the compliance requirements that match each classification. None of this is technically complex. The complication is that 49% of employees admit to using unsanctioned AI tools at work (BlackFog, March 2025). Before you can classify and govern your AI systems, you need to know they exist.
That first step — complete, accurate AI system discovery — is where most compliance programmes fail. We cover this problem in detail in our analysis of shadow AI and the governance gap it creates inside enterprises.
The Five Risk Tiers — and What Each One Demands
The Act organises AI systems into five compliance categories: four risk tiers plus a separate regime for general-purpose AI models. Where a system sits determines what obligations apply. Under-classifying a High-Risk system as Minimal Risk saves compliance effort in the short term and creates direct regulatory exposure when enforcement begins. Over-classifying creates unnecessary documentation burden. Classification must be accurate and documented.
| Risk Tier | Examples | Key Obligation | Max Fine |
|---|---|---|---|
| Prohibited | Real-time public biometric surveillance; social scoring; subliminal manipulation | Decommission immediately. Document decommissioning. | €35M or 7% turnover |
| High-Risk | CV screening; credit decisioning; employee performance monitoring; insurance underwriting | Full conformity documentation: data governance, technical docs, human oversight, accuracy testing, EU AI database registration. | €15M or 3% turnover |
| Limited Risk | Chatbots; virtual assistants; synthetic content generators; customer service bots | Disclose AI identity before or at the start of every interaction. Label AI-generated content. | €7.5M or 1% turnover |
| GPAI Models | GPT-4o, Claude, Llama deployments; foundation model API integrations | Registry entry with provider documentation confirmed; copyright policy maintained. Systemic-risk models: adversarial testing + incident reporting. | €7.5M or 1% turnover |
| Minimal Risk | Spam filters; content recommendations; AI-powered search; productivity assistants | Registry entry documenting classification decision. Voluntary codes of conduct encouraged. | No mandatory fine tier |
Prohibited Practices — What Is Already Illegal
Certain AI systems have been banned outright since February 2025, regardless of purpose, context, or safeguards applied. Article 99 sets the fine at €35 million or 7% of global annual turnover — whichever is higher. The categories are specific: real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions); social scoring by public authorities; AI that exploits psychological vulnerabilities or uses subliminal techniques to manipulate behaviour; and predictive policing based on profiling alone.
The compliance action here is straightforward: verify no deployed system touches these categories. If one does, decommission it and document the decommissioning. There is no conformity pathway that makes a prohibited practice legal — the category exists because certain AI applications are considered incompatible with fundamental rights regardless of how well they are governed.
High-Risk Systems — Where Most Compliance Work Concentrates
Annex III defines eight domains where AI use is permitted but subject to full documentation and governance requirements before deployment. The fine for non-compliance is €15 million or 3% of global annual turnover. Several categories capture common enterprise AI uses that organisations may not have flagged as high-risk.
Any AI used in recruiting — CV screening, candidate ranking, interview scheduling tools that score applicants — is High-Risk under the employment domain. AI used in credit decisioning or insurance underwriting falls under essential services access. BlackFog's 2025 research found that 63% of employees believe using unapproved AI without IT oversight is acceptable if no approved alternative exists (BlackFog, March 2025) — which makes it likely that High-Risk AI is being used for employment decisions in organisations that have no knowledge of it.
For each High-Risk system, the regulation requires: a maintained risk management system; data governance documentation covering training data quality, testing data, and bias assessment; technical documentation of how the system works; automatic logging sufficient for post-hoc output review; designed-in human oversight measures; accuracy and robustness testing results; and registration in the EU AI database before deployment. These requirements have substance — auditors will look at the underlying evidence, not just the presence of a document.
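That list is easier to audit as a structure than as prose. Below is a minimal sketch of tracking evidence status per system, assuming nothing beyond the requirements just listed; every field name is illustrative shorthand, not an official label from the regulation.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative shorthand for the evidence areas listed above; paraphrases
# of the Articles 9-15 and 49 obligations, not official labels.
HIGH_RISK_EVIDENCE_AREAS = [
    "risk_management_system",       # maintained over time, not one-off
    "data_governance_records",      # training/testing data quality, bias assessment
    "technical_documentation",      # how the system works
    "automatic_logging",            # sufficient for post-hoc output review
    "human_oversight_measures",     # designed in, not bolted on
    "accuracy_robustness_testing",  # results, not just a test plan
    "eu_database_registration",     # submitted before deployment
]

@dataclass
class HighRiskConformityRecord:
    """Evidence status for one High-Risk system."""
    system_name: str
    # Maps each evidence area to the date its evidence was last reviewed;
    # None means no evidence exists yet.
    evidence_reviewed: dict[str, date | None] = field(
        default_factory=lambda: {area: None for area in HIGH_RISK_EVIDENCE_AREAS}
    )

    def gaps(self) -> list[str]:
        """Areas with no recorded evidence: what an auditor flags first."""
        return [a for a, d in self.evidence_reviewed.items() if d is None]
```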
Limited Risk — The Disclosure Gap Most Enterprises Miss
Limited Risk systems are permitted but carry mandatory transparency obligations. Any AI system that interacts directly with people — chatbots, virtual assistants, AI-powered customer service — must disclose it is AI before or at the start of the interaction. Any system that generates synthetic content must label that content. The fine for failing to meet these obligations is €7.5 million or 1% of global annual turnover.
This tier is systematically underestimated. Every enterprise chatbot — internal HR bot, customer service bot, IT helpdesk assistant — sits here. If your employees or customers interact with AI that does not identify itself as AI, you have a Limited Risk compliance gap even if the system's actual outputs are entirely benign.
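The remediation is usually small. A minimal sketch, assuming a bot whose replies pass through one rendering function; the disclosure wording is an example, not regulator-approved text.

```python
# Example disclosure text; the Act requires the AI interaction to be
# disclosed, but does not prescribe exact wording.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant. "
    "Ask at any time to be transferred to a human colleague."
)

def render_reply(bot_reply: str, is_first_turn: bool) -> str:
    """Prepend the AI-identity disclosure to the first message, so the
    disclosure happens at the start of the interaction."""
    if is_first_turn:
        return f"{AI_DISCLOSURE}\n\n{bot_reply}"
    return bot_reply
```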
General Purpose AI — Foundation Model Deployer Obligations
Foundation models — the large language models that underpin most enterprise AI deployments — fall under the GPAI tier. If your organisation uses GPT-4o, Claude, Llama, Mistral, or any other foundation model via API and builds functionality on top of it, you have GPAI deployer obligations. The provider carries provider-level obligations. Your organisation carries deployer-level obligations: documenting the GPAI model in your AI system registry, confirming the provider's technical documentation satisfies the Act's disclosure requirements, and maintaining a copyright policy.
For models that cross the systemic risk threshold — trained with more than 10²⁵ floating-point operations, which captures the largest frontier models — additional requirements apply: adversarial testing, cybersecurity assessments, and incident reporting. Most enterprises are deployers rather than providers of these models, but the systemic risk provisions directly affect how you document and govern foundation model usage.
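A sketch of the deployer-side items as a checklist, assuming the provider's technical documentation discloses a training-compute figure. The 10²⁵ FLOP threshold comes from the Act; the field names are illustrative.

```python
# Systemic-risk threshold from the Act: training compute above 10**25 FLOPs.
SYSTEMIC_RISK_FLOPS = 1e25

def gpai_deployer_checklist(provider_training_flops: float | None) -> dict[str, bool]:
    """Deployer-side items to confirm for one foundation model. The
    training-compute figure should come from the provider's documentation."""
    checklist = {
        "registry_entry_created": False,
        "provider_documentation_confirmed": False,
        "copyright_policy_maintained": False,
    }
    if provider_training_flops is not None and provider_training_flops > SYSTEMIC_RISK_FLOPS:
        # Extra governance items when the model crosses the systemic-risk line.
        checklist["adversarial_testing_evidence_reviewed"] = False
        checklist["incident_reporting_route_documented"] = False
    return checklist
```

Calling it with a frontier-scale figure such as 4e25 adds the two systemic-risk items; a purpose-built model below the threshold returns only the three baseline items.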
Why Shadow AI Changes Your Compliance Exposure
Figure: Shadow AI in the Enterprise — Employee Behaviour. Share of employees surveyed (BlackFog Research, March 2025).
There is a persistent assumption in compliance planning that your AI inventory is approximately complete — the tools IT approved, the vendor platforms you signed contracts with, plus whatever engineering has built. The data suggests otherwise. BlackFog's 2025 research found that 60% of employees consider unsanctioned AI tools worth the security risk if they work faster, and 34% have shared sensitive financial or employee data through unapproved AI tools (BlackFog, March 2025).
The regulatory consequence of shadow AI is registry incompleteness. If your AI system registry does not include the tools your HR team is using for candidate screening — because that team adopted the tool without IT involvement — you have unclassified High-Risk AI systems in active operation. The EU AI Act does not provide a good-faith defence for systems you were not aware of. The obligation is to know, not to have been told.
Automated AI system discovery — scanning GitHub repositories, platform configurations, and organisational documents for AI system references — is the only scalable path to closing this gap before August 2026. Manual inventory processes catch what people report. Automated discovery catches what people actually use. We cover the full scope of this problem in our detailed post on shadow AI and the governance gap it creates.
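A minimal sketch of the repository-scanning half of that discovery. The signature list is an assumption that needs extending for your stack, and a real programme would also scan platform configurations and documents.

```python
import re
from pathlib import Path

# Package and API references that suggest an AI system. This list is an
# assumption, not exhaustive; extend it for the vendors your teams use.
AI_SIGNATURES = [
    r"\bopenai\b", r"\banthropic\b", r"\blangchain\b",
    r"\btransformers\b", r"\bapi\.openai\.com\b", r"\bclaude\b",
]
PATTERN = re.compile("|".join(AI_SIGNATURES), re.IGNORECASE)

def scan_repo(root: str) -> dict[str, list[str]]:
    """Return {file: [matched signatures]} for code and config files under root."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".ts", ".js", ".yaml", ".yml", ".toml", ".json", ".md"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file or directory with a file-like name
        found = sorted(set(PATTERN.findall(text)))
        if found:
            hits[str(path)] = found
    return hits
```

Every hit is a candidate registry entry to profile and classify, not proof of an AI system; the point is that nothing the scan finds can stay invisible.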
What Does a Compliant AI System Registry Look Like?
Meeting the EU AI Act's obligations in practice means maintaining a live registry of your AI systems, updated as systems are adopted, modified, or decommissioned. Regulators auditing for compliance will look for: system name; description of what it actually does (not the vendor's marketing description); provider; business domain; deployment date; risk classification; date of classification; and who made the classification decision.
For High-Risk and GPAI systems, the registry entry must link to or contain the conformity documentation: data governance records, technical description, human oversight procedures, and accuracy testing results. The workflow that makes this manageable is staged: discover, profile, classify, generate the conformity checklist for that tier, complete each item, and mark approved before use. An AI that pre-fills the classification questionnaire based on what each system actually does reduces the human review burden to confirmation rather than original research.
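As a sketch, one registry entry maps naturally onto a small data structure; the field names mirror the audit list above and are illustrative rather than prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RegistryEntry:
    """One AI system in the registry; fields mirror the audit list above."""
    system_name: str
    description: str            # what it actually does, not the vendor's pitch
    provider: str
    business_domain: str        # e.g. "HR / recruitment"
    deployment_date: date
    risk_tier: str              # "prohibited" | "high" | "limited" | "gpai" | "minimal"
    classified_on: date
    classified_by: str          # a named person, not a team alias
    conformity_docs_uri: str | None = None  # required for High-Risk and GPAI entries
```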
The same workflow — and what regulators will want to see in an evidence package — is covered in detail in our guide to building a regulator-ready AI system registry.
How to Classify an AI System in 15 Questions
The EU AI Act does not provide a single official classification questionnaire, but the EU AI Office has published guidance that makes the decision logic clear. The questions that determine classification fall into a consistent pattern:
- Does the system affect natural persons — does its output influence decisions about people?
- Is it deployed in one of the eight Annex III domains (employment, credit, education, biometrics, infrastructure, law enforcement, migration, justice)?
- Does it make or materially influence consequential decisions — credit approvals, employment decisions, educational assessments?
- Does it interact with users directly without identifying itself as AI?
- Does it process biometric data?
- Does it generate synthetic content — text, images, audio, video — that could be mistaken for human-generated output?
- Is it a general-purpose foundation model (via API), or a purpose-built model for a specific task?
- Does it make safety-critical decisions in infrastructure, healthcare, or public safety contexts?
- Was it built by your organisation, or deployed from a third-party vendor?
- Who is the intended user — internal employees, external customers, or regulated individuals?
A system that processes biometric data and makes employment-related decisions hits two separate High-Risk triggers independently. A system that does none of the above and produces no output that affects natural persons in a consequential way lands in Minimal Risk by default. The difficulty is not the classification logic — it is gathering accurate information about what each system does, particularly for tools adopted without IT involvement.
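The precedence of those triggers can be written down directly. A sketch assuming boolean answers keyed by illustrative shorthand for the questions above; the full 15-question flow adds granularity, but the ordering is the same, and the biometric branch is a simplification (some biometric uses are prohibited outright).

```python
def classify(answers: dict[str, bool]) -> str:
    """Map questionnaire answers to a tier. Precedence:
    prohibited > high > gpai > limited > minimal."""
    if answers.get("prohibited_practice"):            # Article 5 category
        return "prohibited"
    if (answers.get("annex_iii_domain")               # employment, credit, ...
            or answers.get("consequential_decisions")
            or answers.get("biometric_data")          # simplification; see lead-in
            or answers.get("safety_critical")):
        return "high"
    if answers.get("general_purpose_model"):          # foundation model via API
        return "gpai"
    if (answers.get("interacts_with_people")          # must disclose AI identity
            or answers.get("generates_synthetic_content")):
        return "limited"
    return "minimal"

# A CV-screening tool hits two High-Risk triggers independently:
assert classify({"annex_iii_domain": True, "consequential_decisions": True}) == "high"
# A system affecting no consequential decisions lands in Minimal Risk by default:
assert classify({}) == "minimal"
```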
The step-by-step risk classification guide covers the full 15-question process for each tier.
With Three Months Remaining, What Should You Do First?
- Immediately (May 2026): Run AI system discovery across your platform, codebase, and documents. Build a complete inventory — not just IT-approved tools. Log every discovered system regardless of perceived risk.
- Within 4 weeks: Complete risk classification for every discovered system using the 15-question framework. Prioritise any system touching Annex III domains — employment, credit, biometrics, education, infrastructure, law enforcement.
- By end of June: Conformity documentation for all High-Risk systems complete. GPAI registry entries with provider documentation confirmed. EU AI database registration submitted for applicable systems.
- July 2026: Disclosure mechanisms for Limited Risk chatbots and assistants live. Full registry reviewed and signed off. Evidence package exportable for regulator review in CSV or JSON format (a minimal export sketch follows this timeline).
- 2 August 2026: Full enforcement begins. All High-Risk systems must be documented, classified, and governed before this date.
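For the July milestone's exportable evidence package, a minimal sketch, assuming registry entries shaped like the RegistryEntry dataclass sketched earlier.

```python
import csv
import json
from dataclasses import asdict

def export_evidence_package(entries: list, json_path: str, csv_path: str) -> None:
    """Write the registry to JSON and CSV for regulator review."""
    rows = [asdict(e) for e in entries]
    if not rows:
        return
    with open(json_path, "w") as f:
        json.dump(rows, f, indent=2, default=str)  # default=str serialises dates
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```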
The prioritisation logic matters because the available time is finite. An organisation that has not started cannot complete a comprehensive compliance programme for every AI system by August. The practical approach is triage: identify and classify all systems first to understand the actual risk landscape, then concentrate documentation effort on High-Risk systems and GPAI deployments. Minimal Risk systems require registry entries but not substantive documentation work.
The worst outcome is spending the available time building detailed documentation for Minimal Risk systems while High-Risk systems remain unclassified. Regulators evaluating compliance will look first at whether High-Risk systems were identified, classified, and governed before deployment. An incomplete Minimal Risk registry is a far smaller problem than an undocumented AI system used in recruitment decisions.
What Happens After 2 August 2026?
August 2026 is a compliance threshold, not a finish line. The EU AI Act is a continuous obligation — the registry must stay current, newly adopted AI systems must be classified before deployment, and High-Risk systems require ongoing monitoring and annual documentation review. National Market Surveillance Authorities in each member state are empowered to investigate and fine. Enforcement actions will likely target organisations with no visible preparation first, then move toward those with demonstrable governance gaps.
A live, maintained registry with current classifications and completed documentation is the most concrete evidence of good-faith compliance effort. For organisations that have not started, AI-assisted discovery and classification significantly compresses the timeline — what would take 2–4 weeks of manual consultant work can be reduced to days. The documentation work still takes time, but it starts from an accurate base rather than an incomplete one.
For the broader governance picture — what content policies, audit logging, and real-time enforcement look like beyond the registry — the AI governance versus compliance post covers why documentation alone is insufficient and what ongoing enforcement requires.
