
Shadow AI: The Enterprise Governance Gap That Regulators Are Coming For

TL;DR — Key Takeaways

70% of enterprise AI operates outside IT oversight (Lenovo, April 2026).
49% of employees admit using unsanctioned AI tools at work (BlackFog, January 2026).
Under the EU AI Act (August 2026), an incomplete AI inventory is a compliance breach, not just a security gap.
Shadow AI breaches cost $670,000 more than standard breaches (IBM, 2025).
Standard IT discovery methods cannot find most shadow AI. The fix requires scanning codebases, configurations, and documents, not asking employees to self-report.

70%
of enterprise AI operates outside IT oversight (Lenovo, April 2026)
$4.63M
average cost of a shadow AI data breach (IBM, 2025)
22%
of paste operations into AI tools contain PII or payment card data (LayerX, 2025)
40%
of firms will suffer a shadow AI compliance incident by 2030 (Gartner, 2025)

In early 2023, three Samsung semiconductor engineers used ChatGPT to help with their work. One pasted proprietary source code to fix a bug. Another transcribed internal meeting notes. A third used it to optimise test sequences. None did anything unusual by the standards of how employees use AI today — they found a tool that helped them work faster and used it. Samsung banned generative AI across company devices within weeks, but the proprietary data had already been processed by OpenAI's systems.

Samsung's response — a blanket ban — was a short-term reaction to a structural problem. Employees adopt AI tools that make their work better, and they do it faster than governance frameworks can track. Lenovo's April 2026 research, drawn from 6,000 enterprise employees, put the share of enterprise AI operating outside IT oversight at 70%. A blanket ban does not change that number — it just makes employees less likely to disclose what they are using.

For most of the past three years, this was primarily a security and data management concern. The EU AI Act changes the category entirely. From August 2026, an incomplete AI system inventory is not just an IT governance gap — it is a compliance violation under binding European law. We cover the full August 2026 enforcement picture in our guide to what every enterprise must do before the EU AI Act deadline.

Shadow AI is not just a security problem — it is a governance problem with direct legal consequences. Organisations that cannot demonstrate a complete, classified inventory of their AI systems will not be able to demonstrate compliance with the AI Act. The two problems are inseparable.
Gartner — Managing the Risks of Shadow AI in the Enterprise, Gartner Research, October 2025. (Coverage: Infosecurity Magazine, October 2025)

What Does Shadow AI Actually Look Like Inside an Enterprise?

The term calls to mind employees sneaking around IT policy, but the reality is more mundane. Most shadow AI use is motivated by productivity, adopted openly by people who simply did not think to ask for approval, and concentrated in a handful of tools that have become as normalised as Google Docs.

LayerX's 2025 Enterprise AI and SaaS Data Security Report, which analysed telemetry from dozens of global enterprises, found that ChatGPT accounts for over 90% of unsanctioned GenAI usage at companies where it is not the approved tool — with Google Gemini at roughly 15% and Claude at 5%. Eighty-two percent of paste operations into generative AI tools come from personal, unmanaged accounts.

The data these tools receive is not innocuous. LayerX found that 77% of enterprise AI users regularly copy and paste data into chatbots, 22% of those operations contain PII or payment card information, and 40% of file uploads to generative AI platforms include PII or PCI data (LayerX, 2025). BlackFog's survey found that 33% of employees using unsanctioned tools had shared research data or datasets, 27% had shared employee data including salaries and performance records, and 23% had inputted company financial statements (BlackFog, January 2026).

What Corporate Data Employees Share Via Unsanctioned AI

Share of employees who admitted sharing each category — BlackFog Research, January 2026 (n=2,000)

Research data or internal datasets: 33%
Employee data including salaries and performance records: 27%
Company financial statements or sales figures: 23%
Paste operations containing PII or payment card data (LayerX): 22%

Heavy users — those pasting into AI tools multiple times per day — average 6.8 paste operations daily, with more than half containing corporate information (LayerX, 2025). These are not rare events or deliberate exfiltration attempts. They are the normal working patterns of people who have integrated AI into how they get things done.

Why Is the Shadow AI Governance Gap Getting Wider, Not Smaller?

IT departments have known about shadow AI for several years and have largely failed to contain it. Komprise's 2025 survey of 200 IT directors at US enterprises with over 1,000 employees found that 90% are concerned about shadow AI from a privacy and security standpoint, and 79% have already experienced negative outcomes from employees sending corporate data to AI systems (Komprise, 2025). Thirteen percent reported financial, customer, or reputational damage as a direct result.

Gartner research published in late 2025 found that 69% of cybersecurity leaders have evidence — or at least suspect — that employees are using public generative AI without authorisation. Gartner warned that 40% of firms will suffer a security or compliance incident by 2030 specifically attributable to unsanctioned AI tools (Gartner, October 2025). The analysts also flagged that unmanaged AI usage generates technical debt that 50% of enterprises will eventually need to address.

Part of why containment has failed is that the tools have become genuinely invisible to traditional IT discovery methods. Browser extensions, personal API calls embedded in developer code, AI features added to approved SaaS platforms through vendor update cycles — none of these appear in network traffic analysis or application whitelisting the way a new software installation does. The Register documented in October 2025 how employees routinely paste company secrets into ChatGPT with no awareness that their actions create a discoverable data trail. LayerX found that 20% of enterprise users have installed AI-enabled browser extensions, 58% of which carry high or critical permission levels, and 5.6% of which were classified as outright malicious (LayerX, 2025).

How Does the EU AI Act Make Shadow AI a Legal Liability?

Before the EU AI Act, shadow AI consequences were security incidents, data exposure, and IT policy violations — serious, but manageable through detection and response. The EU AI Act introduces a structurally different consequence: an incomplete AI system inventory is a compliance violation regardless of whether the organisation knew the tools existed.

Article 6 of the EU AI Act establishes classification requirements for AI systems — but classification requires discovery first. You cannot classify a system you do not know about. The regulation provides no exemption for systems adopted without IT knowledge. If an employee has been using an AI tool for HR-related tasks — screening CVs, assessing performance, scheduling interviews — that tool is a High-Risk AI system under Annex III, and the organisation's obligation to classify and govern it applies from the moment of use, not from when IT discovers it.

The IBM 2025 Cost of a Data Breach Report found that shadow AI breaches cost organisations an average of $4.63 million — $670,000 more than the $3.96 million average for standard breaches (IBM, 2025). Sixty-five percent of shadow AI breaches involved compromised customer PII, compared to 53% for breaches generally. Under the EU AI Act, that financial exposure now also includes regulatory fines of up to €15 million or 3% of global annual turnover for each unclassified High-Risk system.

Why Standard IT Discovery Methods Cannot Find Shadow AI

| Discovery Method | What It Finds | What It Misses | Coverage |
| --- | --- | --- | --- |
| Network traffic analysis | AI tools accessed from corporate devices on corporate networks | Personal devices; personal accounts on corporate devices; API calls embedded in application code; BYOD | Low–Medium |
| Application whitelisting | Installed software that was not approved | Web-based tools; personal API subscriptions; AI features added to already-installed applications through vendor updates | Low |
| Employee self-reporting | Tools employees remember and feel comfortable disclosing | Tools employees expect IT would block; tools used by senior staff who bypass process; AI features employees do not identify as "AI" | Very Low |
| Browser extension audits | AI-enabled extensions on managed devices; extension permission levels | Personal device usage; extensions installed and removed; non-browser AI integrations | Medium |
| Codebase + document scanning (LLM-assisted) | AI library dependencies in code; API client integrations; AI tool references in documents, config files, and communications; systems described by what they do rather than what they are called | Purely verbal/informal usage with no digital trace | High |

Network traffic analysis catches AI tool usage from corporate devices on corporate networks. It misses personal devices used for work, personal accounts accessed from corporate devices through browser sessions, and API calls embedded in application code that look like any other HTTPS request.

Application whitelisting manages what software is installed. It has no visibility into web-based tools accessed through a browser, personal API subscriptions, or AI capabilities added to already-installed applications through vendor update cycles — which is where most shadow AI lives. Employee self-reporting produces inventories that reflect what people remember and feel comfortable disclosing. BlackFog found that 69% of C-level respondents prioritise speed over security when using AI (BlackFog, January 2026), which means the executives who should be modelling compliance behaviour are among the least likely to self-report.

Effective discovery requires reading the organisation's own artefacts — codebases, documents, and configuration files — to identify AI systems from what they do rather than what they are called. This surfaces AI systems that no manual inventory process would find and builds the EU AI Act registry from a complete picture rather than a curated one.
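As a minimal sketch of what artefact-level discovery can look like, the snippet below walks a source tree and flags files that reference well-known AI libraries. The signature list and file extensions are illustrative assumptions, not an exhaustive set; a production scanner would rely on a maintained database of package names and API endpoint patterns, and would also parse dependency manifests rather than matching substrings.

```python
from pathlib import Path

# Illustrative signature list -- a real scanner would use a maintained
# database of AI package names and API endpoint patterns.
AI_SIGNATURES = ["openai", "anthropic", "langchain", "transformers", "cohere"]

# File types worth reading: source code plus common dependency/config files.
SCAN_SUFFIXES = {".py", ".txt", ".toml", ".cfg", ".yaml", ".yml"}

def scan_codebase(root):
    """Return a {signature: [files]} map of AI library references under root.

    Naive substring matching: good enough to surface candidates for
    review, not to classify them.
    """
    findings = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in SCAN_SUFFIXES or not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore").lower()
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        for sig in AI_SIGNATURES:
            if sig in text:
                findings.setdefault(sig, []).append(str(path))
    return findings
```

The output is a candidate list, not an inventory: each hit still needs a human (or LLM-assisted) pass to decide what the system does and which risk tier it falls into.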

What a Complete AI Inventory Actually Requires

An inventory that satisfies the EU AI Act's documentation requirements has four properties that distinguish it from a typical IT asset register. It covers all AI systems regardless of how they were adopted — approved, shadow, or embedded in approved tools. It describes what each system does in terms relevant to the Act's classification criteria, not just what the system is called. It reflects current state, updated continuously rather than annually. And for each system, it contains the classification rationale — evidence that the system was assessed against the Act's five risk tiers and assigned correctly.
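The four properties above can be made concrete as a record schema. The field and tier names below are illustrative assumptions about how one might structure such a registry entry, not a prescribed format from the Act itself:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    # The Act's five risk tiers, as discussed in the classification guide.
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"
    GENERAL_PURPOSE = "general_purpose"

@dataclass
class AISystemRecord:
    name: str
    what_it_does: str               # function, in the Act's terms, not product name
    adoption_route: str             # "approved", "shadow", or "embedded in approved tool"
    risk_tier: RiskTier
    classification_rationale: str   # evidence the tier was assessed and assigned correctly
    last_verified: date             # supports continuous, not annual, review
```

The point of `classification_rationale` and `last_verified` as first-class fields is that a bare list of tool names satisfies neither the classification nor the currency requirement.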

Building this for the first time requires three parallel work streams: active discovery (scanning for systems not yet documented), classification (working through the EU AI Act risk assessment for each discovered system), and documentation (producing conformity evidence for High-Risk systems). These streams can run simultaneously for different cohorts — High-Risk systems found early in discovery get classified and documented while discovery continues across the rest of the estate.

The step-by-step guide to building a regulator-ready AI system registry covers the full workflow in detail. For organisations that have not yet determined which of their systems are High-Risk, the EU AI Act risk classification guide walks through each tier and the questions that determine where a system sits.

Governance After Discovery — Keeping the Inventory Current

A complete inventory is the starting condition for compliance, not the end state. Once every AI system is visible and classified, the ongoing governance task is ensuring each system operates within the boundaries its classification requires — that High-Risk systems have human oversight procedures running, that Limited Risk chatbots and assistants are disclosing their AI nature at each interaction, and that content policies relevant to each team's regulatory obligations are applied consistently.

Shadow AI is not a problem that gets solved once. New tools appear constantly, employees adopt them before governance frameworks catch up, and AI capabilities embedded in approved SaaS platforms activate without IT review. Continuous discovery — re-scanning codebases and documents regularly, monitoring for new AI library references — is the operational model that keeps an inventory current rather than accurate only at the moment it was built.
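Operationally, continuous discovery reduces to diffing successive scans. Assuming each scan produces a mapping from AI library signature to the files that reference it (the shape is an assumption for illustration), the sketch below surfaces only what is new since the last scan, so that each alert is a candidate for classification review rather than a repeat of the known estate:

```python
def diff_inventories(previous, current):
    """Return signatures and files present in the current scan but absent
    from the previous one -- the candidates for classification review."""
    new_findings = {}
    for sig, files in current.items():
        added = set(files) - set(previous.get(sig, []))
        if added:
            new_findings[sig] = sorted(added)
    return new_findings
```

Wired into CI or a scheduled job, an empty diff means the inventory is still current; a non-empty one is the trigger for the classification workflow.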

The post on per-team content policy governance covers how message-level enforcement applies differently to different teams depending on which regulations apply to their work — a question that only becomes tractable once you know which AI systems each team is actually using.

Frequently Asked Questions

What counts as shadow AI?
Shadow AI refers to AI tools used within an organisation without IT approval, security review, or compliance assessment. This includes consumer tools like ChatGPT accessed through personal accounts, AI features added to approved SaaS platforms through vendor updates, AI browser extensions, and developer API integrations built without formal procurement. Lenovo's April 2026 research found 70% of enterprise AI now operates outside IT oversight.

Does the EU AI Act apply to tools employees adopted without IT approval?
The EU AI Act requires a complete inventory of all deployed AI systems, classified by risk tier. These obligations apply regardless of how a system was adopted — there is no exemption for tools employees adopted without IT approval. An AI tool used for CV screening without IT knowledge is still a High-Risk system under Annex III, and the obligation to classify and govern it applies from first use, not from when IT discovers it.

Can network monitoring and application whitelisting detect shadow AI?
Partially. Network monitoring catches AI tool usage from corporate devices on corporate networks — but misses personal devices, personal accounts accessed from corporate devices, and API calls embedded in application code. Application whitelisting has no visibility into web-based tools or AI features added to installed applications through vendor updates. Most shadow AI lives in categories these methods cannot see. Scanning codebases, document repositories, and configuration files is significantly more effective.

What corporate data do employees share with unsanctioned AI tools?
BlackFog's January 2026 survey found that among employees using unsanctioned tools: 33% had shared research data or datasets; 27% had shared employee records including salaries and performance data; 23% had inputted financial statements or sales figures. LayerX found that 22% of paste operations into AI tools contain PII or payment card information, and 40% of file uploads to AI platforms include PII or PCI data (LayerX, 2025).

How much does a shadow AI breach cost?
IBM's 2025 Cost of a Data Breach Report found shadow AI breaches average $4.63 million — $670,000 more than the $3.96 million average for standard breaches. 65% of shadow AI breaches involve compromised customer PII. Under the EU AI Act, an undiscovered High-Risk AI system additionally carries a regulatory fine of up to €15 million or 3% of global annual turnover — a separate exposure on top of breach costs.

How often should organisations scan for shadow AI?
Continuous monitoring is the only viable model given the pace at which new AI tools appear and the regularity with which vendors add AI features to existing platforms. At minimum: a full codebase and document scan whenever a new employee joins or a new project is initiated; a scheduled quarterly full-estate scan; and real-time alerting for new AI library dependencies added to version-controlled repositories. Annual or ad-hoc scanning creates compliance gaps the moment a new tool is adopted.
