Building an AI System Registry That Actually Satisfies Regulators

TL;DR

The EU AI Act requires a live AI system registry covering every AI system the organisation uses — including shadow AI. A manual IT-sourced inventory typically finds 30–50% of what automated LLM Discovery finds. A compliant entry spans eleven fields: six required for every system, plus five additional fields and a conformity checklist for High-Risk and GPAI systems. The five-stage workflow runs: Discovered → Profile Done → Classified → Documented → Approved. The regulator-ready export is a structured CSV or JSON file producible on demand. Enforcement begins August 2, 2026. Maximum penalty: €35M or 7% of global turnover.

€35M
EU AI Act maximum penalty for prohibited practice violations — 7% of global annual turnover if higher. A live registry is a core enforcement evidence requirement.
2–3×
More AI systems found by automated LLM Discovery versus manual IT inventory — shadow AI, SaaS-embedded AI, and developer tools are systematically missed by manual processes
Aug 2026
EU AI Act full enforcement deadline — regulators gain authority to request registry documentation, inspect systems, and impose penalties from this date
5 stages
Discovered → Profile Done → Classified → Documented → Approved — the complete compliance lifecycle tracked per AI system in the registry

When an EU AI Act regulator asks to see your AI system registry, you need to produce an accurate, current record of every AI system your organisation uses — with classification, documentation status, and compliance evidence for each. A PDF report dated six months ago, covering only the AI tools IT approved through procurement, will not satisfy the requirement.

The EU AI Act registry is not a one-time compliance artefact. It is a live document. Systems are added as they are adopted. Statuses change as documentation progresses. Risk levels are updated when system purposes change. Enforcement authorities have the right under Article 74 to request registry access at any time — and the burden of proof that a system has been properly assessed falls on the deployer, not the regulator.

What the EU AI Act Actually Requires

The EU AI Act does not specify a registry format, but the compliance documentation requirements for High-Risk systems define what information must be available and auditable. Articles 9, 11, 13, 14, and 17 collectively require deployers to maintain documentation that covers:

  • A complete inventory of AI systems in use, with risk classification for each
  • Technical documentation for High-Risk systems — training data, architecture, accuracy assessments, known limitations
  • Human oversight procedures — how a human can intervene in or override AI outputs
  • Post-market monitoring records — ongoing accuracy and performance tracking
  • Incident and malfunction logs for High-Risk systems
  • Registration in the EU AI database for High-Risk systems in certain categories

The registry is the index that connects all of this documentation to specific systems. Without a maintained registry, none of the underlying documentation is findable or auditable under time pressure. When an enforcement officer arrives, the registry is the first document they will request.

Official Guidance

"The first question any national competent authority will ask an organisation subject to the EU AI Act is: show me your inventory of AI systems and their risk classifications. Organisations that cannot answer this question immediately are demonstrating that their governance programme exists on paper only. The registry is not a compliance output — it is the foundation on which all other compliance work rests."

— EU AI Office, Implementation Guidance for Deployers of AI Systems, European Commission, Q4 2025

What a Compliant Registry Entry Must Contain

A complete registry entry covers eleven fields: six required for every system, and five more for High-Risk and GPAI systems:

Field | Required For | What Regulators Use It For
System name and description | All systems | Identifying what the system does — not just its commercial name
Provider / vendor | All systems | Establishing the supply chain and third-party risk scope
Deployment domain | All systems | Mapping to Annex III High-Risk categories by use context
Risk classification | All systems | Verifying the correct tier was assigned (Prohibited / High / Limited / Minimal / GPAI)
Classification date | All systems | Establishing when the assessment was performed and whether it is current
User population and use case | High Risk, GPAI | Assessing scope of impact and whether oversight requirements are proportionate
15-question assessment answers | High Risk, GPAI | Auditing the classification rationale — was the right tier assigned for the right reasons?
Compliance obligations checklist | High Risk | Verifying each specific Article 9–17 obligation has been addressed with evidence
Technical documentation reference | High Risk | Locating the stored documentation for detailed review
EU AI database registration status | High Risk (certain categories) | Confirming mandatory registration has been completed
Current workflow status | All systems | Understanding where in the compliance lifecycle the system sits
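The field structure above can be sketched as a simple data model. This is an illustrative sketch, not Govarix's actual schema — the class, field names, and the completeness check are assumptions drawn from the table:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    PROHIBITED = "Prohibited"
    HIGH = "High Risk"
    LIMITED = "Limited Risk"
    MINIMAL = "Minimal Risk"
    GPAI = "GPAI"

@dataclass
class RegistryEntry:
    # Six fields required for all systems
    name: str
    description: str
    provider: str
    deployment_domain: str
    risk_tier: RiskTier
    classification_date: date
    workflow_status: str = "Discovered"
    # Five additional fields for High-Risk and GPAI systems
    user_population: Optional[str] = None
    assessment_answers: dict = field(default_factory=dict)      # 15-question answers
    obligations_checklist: dict = field(default_factory=dict)   # obligation -> evidence ref
    technical_doc_ref: Optional[str] = None
    eu_db_registration: Optional[str] = None  # High-Risk, certain categories only

    def tier_fields_complete(self) -> bool:
        """High-Risk and GPAI entries must carry the additional fields."""
        if self.risk_tier not in (RiskTier.HIGH, RiskTier.GPAI):
            return True
        return bool(self.user_population) and len(self.assessment_answers) == 15
```

The point of the model is that completeness is tier-dependent: a Minimal Risk entry is complete with six fields, while a High-Risk entry is not auditable until the assessment answers and checklist exist.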

The Shadow AI Problem: Why Manual Inventories Fail

A manual registry building process starts with what IT knows about and works outward. The tools that went through procurement, the licences in the SaaS catalogue, the APIs the engineering team requested access to — these are findable by asking IT. Everything else is invisible.

The invisible portion is substantial. Shadow AI — personal ChatGPT, Copilot, Gemini, and Perplexity subscriptions used for work — accounts for 63% of enterprise AI usage according to BlackFog research (January 2026). AI capabilities embedded in approved SaaS platforms (Notion AI, Salesforce Einstein, HubSpot AI, Slack AI) are in the environment without anyone having specifically adopted them as AI systems. AI in developer codebases — GitHub Copilot integrations, LLM API calls in internal tools, model-backed scripts — is invisible to non-technical compliance teams.

A manual inventory process catches the tip of the iceberg and files it as the whole picture. The EU AI Act registry obligation covers all AI systems the organisation deploys or uses — including systems nobody explicitly decided to adopt. For a full analysis of the shadow AI problem and how to govern it, see our guide to the shadow AI governance gap.

The Five-Stage Compliance Workflow

Every AI system in the registry moves through five stages. The stage tracks compliance progress at the individual system level — so an organisation can see at a glance which systems are fully documented, which are in progress, and which have not yet been assessed.

Stage 1 — Discovered: The system has been identified and added to the registry through LLM Discovery, the pre-loaded catalog, or manual entry. The system exists in the registry with basic information — vendor, name, initial description. Its profile has not yet been completed.

Stage 2 — Profile Done: The 15-question EU AI Act classification questionnaire has been answered for this system. LLM Discovery pre-fills all 15 answers based on what the system actually does — its purpose, use context, affected persons, decision-making role, and data inputs. A human reviewer confirms or corrects the pre-filled answers. Profile Done means the organisation understands what the system does and has the information needed to classify it accurately.

Stage 3 — Classified: The risk tier has been assigned: Prohibited, High Risk, Limited Risk, Minimal Risk, or GPAI. The classification is generated from the questionnaire answers against the 15-point decision tree aligned with Annex III categories. Once classified, the compliance obligations for that tier are generated automatically — the specific documentation, testing, oversight, and registration requirements that apply to this system. For a guide to the classification logic itself, see our EU AI Act risk classification guide.

Stage 4 — Documented: For High-Risk and GPAI systems, the conformity obligations checklist is being completed. Each obligation is tracked individually with a completion percentage. The compliance team can see exactly what is complete, what is outstanding, and what documentation has been attached. Vendor documentation is linked. Domain mapping is finalised. Human oversight procedures are written.

Stage 5 — Approved: All compliance obligations for the system's risk tier are satisfied. The system is approved for continued use. The full evidence package — registry entry, questionnaire answers, conformity checklist with completion evidence, and classification rationale — is exportable as CSV or JSON for regulatory review.
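The five stages form a strictly ordered lifecycle, which can be modelled as a minimal state machine. The stage names come from the workflow above; the helper functions are an illustrative sketch, not Govarix's implementation:

```python
# Ordered compliance lifecycle stages
STAGES = ["Discovered", "Profile Done", "Classified", "Documented", "Approved"]

def advance(current: str) -> str:
    """Move a system to the next compliance stage, in order."""
    i = STAGES.index(current)  # raises ValueError for an unknown stage
    if i == len(STAGES) - 1:
        raise ValueError("Already Approved: no further stage")
    return STAGES[i + 1]

def portfolio_summary(systems: dict) -> dict:
    """Count systems at each stage for an at-a-glance compliance view."""
    counts = {stage: 0 for stage in STAGES}
    for stage in systems.values():
        counts[stage] += 1
    return counts
```

A summary view like `portfolio_summary` is what lets an organisation see at a glance which systems are fully documented and which have not yet been assessed.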

Keeping the Registry Live

The EU AI Act does not specify an update frequency, but the intent of "live" is clear: the registry must reflect the organisation's current AI posture, not its posture at the time of the last annual compliance review. Practical maintenance cadence:

  • New AI systems: Added within 30 days of adoption — whether through formal procurement or discovered through shadow AI governance channels
  • Classification reviews: Triggered when a system's purpose changes materially — an AI tool adopted for administrative tasks that begins influencing hiring decisions may change risk tier
  • Checklist updates: Reviewed quarterly — particularly for High-Risk systems where post-market monitoring obligations require ongoing documentation
  • Full discovery re-scan: Semi-annual — catching newly adopted tools, AI features added to existing SaaS platforms, and new LLM integrations in developer codebases
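The cadence rules above reduce to a small set of maximum-age thresholds that can be checked automatically. A minimal sketch, assuming hypothetical action names and a record of when each action was last performed:

```python
from datetime import date

# Maximum days between actions, per the cadence list above (action names are illustrative)
MAX_DAYS = {
    "new_system_registration": 30,   # add new systems within 30 days of adoption
    "checklist_review": 91,          # quarterly checklist updates
    "discovery_rescan": 182,         # semi-annual full discovery re-scan
}

def overdue_actions(last_done: dict, today: date) -> list:
    """Return maintenance actions whose cadence window has lapsed.

    An action never performed (missing from last_done) is always overdue.
    """
    return [
        action
        for action, limit in MAX_DAYS.items()
        if (today - last_done.get(action, date.min)).days > limit
    ]
```

Classification reviews are deliberately omitted here: they are event-triggered by material changes in a system's purpose, not calendar-driven.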

Govarix's pre-loaded system catalog makes ongoing additions straightforward. When a new common enterprise AI tool is released, it is added to the catalog with a pre-filled profile. Adding it to your registry is one click. LLM Discovery can be re-run against an updated codebase or document set to catch tools adopted since the last scan.

What Happens During a Regulatory Inspection

EU AI Act enforcement is conducted by national competent authorities designated by member states. The enforcement process under Articles 74–78 gives authorities the right to request documentation, access premises, inspect AI systems, and demand evidence of compliance.

Based on the EU AI Office's published guidance, a typical first-contact inspection sequence is:

  1. Request for the organisation's AI system inventory and risk classifications
  2. Review of High-Risk system technical documentation for one or two systems selected by the authority
  3. Verification that human oversight procedures exist and are operationalised — not just documented
  4. Evidence of post-market monitoring for High-Risk systems
  5. Verification of EU AI database registration for applicable High-Risk systems

An organisation with a complete Govarix registry produces the regulator-ready export at step one — a structured file with every system, its risk level, classification date, compliance status, vendor, and documentation status. The authority can identify which High-Risk systems to inspect at step two, and the full conformity checklist with attached evidence for those systems is immediately accessible.

An organisation with a manual spreadsheet registry spends days reconstructing what should be instantly available. An organisation with no registry has no first-line response to the inspection's opening request — and that absence is itself evidence of a governance programme failure.
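The regulator-ready export described above is, at its core, a flat serialisation of the registry. A minimal sketch of producing it in either format — the column names are assumptions for illustration, not Govarix's actual export schema:

```python
import csv
import io
import json

# Columns a regulator-facing export would carry (illustrative subset)
EXPORT_COLUMNS = ["name", "provider", "risk_tier", "classification_date",
                  "workflow_status", "eu_db_registration"]

def export_registry(entries: list, fmt: str = "csv") -> str:
    """Serialise registry entries as structured CSV or JSON for regulatory review."""
    if fmt == "json":
        return json.dumps(entries, indent=2, default=str)
    buf = io.StringIO()
    # extrasaction="ignore" drops internal-only fields not in the export schema
    writer = csv.DictWriter(buf, fieldnames=EXPORT_COLUMNS, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(entries)
    return buf.getvalue()
```

Either output satisfies the "producible on demand" requirement: a structured file a national competent authority can filter by risk tier at step one of an inspection.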

Registry and Governance Layer Integration

A registry that is separate from the AI governance platform is incomplete governance. The registry tells you what AI systems exist and what risk tier they belong to. The governance platform tells you what controls are applied to those systems in real time.

In Govarix, the registry and the governance layer are integrated: the risk classification assigned to an AI system in the registry determines which content policy templates are applied to it, which access controls govern who can use it, and which audit log fields are captured for interactions with it. A High-Risk system classification automatically triggers stricter governance controls than a Limited Risk classification — without manual reconfiguration by the compliance team.

This integration is the model the EU AI Act's risk-based approach was designed to incentivise: classification drives proportionate controls, controls generate audit evidence, and evidence supports the registry's documented compliance status. For a guide to the full relationship between governance controls and compliance documentation, see our analysis of AI governance vs AI compliance. For what the resulting audit trail must contain to serve as regulatory evidence, see our guide to AI audit logs as compliance evidence.

Frequently Asked Questions

What must an EU AI Act AI system registry contain?
Eleven fields in total. Six for all systems: name and description, provider/vendor, deployment domain, risk classification, classification date, and current workflow status. Five additional fields for High-Risk and GPAI systems: user population and use case, 15-question assessment answers, compliance obligations checklist, technical documentation reference, and EU AI database registration status.

Is a spreadsheet registry sufficient for EU AI Act compliance?
No. The EU AI Act requires a live, maintained registry that can be produced on demand for regulatory inspection. A spreadsheet cannot generate a structured regulator-ready export, track per-obligation completion status across multiple systems, or run automated discovery to catch shadow AI systems outside IT's visibility.

Does the EU AI Act registry need to include shadow AI tools?
Yes. The obligation covers all AI systems the organisation deploys or uses — including shadow AI tools accessed via personal accounts, AI embedded in approved SaaS platforms, and AI in developer codebases. Automated LLM Discovery typically finds 2–3× as many systems as a manual IT-sourced inventory for the same organisation.

When must the EU AI Act AI system registry be ready?
Full enforcement begins August 2, 2026. Regulators have authority to request the registry from that date. Organisations that cannot produce a current, complete registry on request face enforcement exposure regardless of their other compliance activities.

How do you keep an AI system registry current?
New systems added within 30 days of adoption; risk classifications reviewed when system purpose changes materially; compliance checklists updated quarterly; full discovery re-scan semi-annually. A pre-loaded tool catalog makes new additions a one-click operation for common enterprise AI tools.

What does a regulator-ready AI registry export contain?
Every registered system with risk level, classification date, compliance status, vendor, deployment domain, documentation status, EU AI database registration status, and full 15-question questionnaire answers for High-Risk and GPAI systems. Structured as CSV or JSON for regulatory review without manual formatting.
