Ask most compliance leaders how many AI tools their organization is running, and they will give you a number. Ask them how confident they are in that number, and the conversation usually changes.
What most organizations have is a list of the AI tools that IT approved. What they are missing is the 7 to 14 tools that Marketing, Customer Support, HR, and Operations deployed on their own. Tools with free tiers. Tools connected to company data. Tools nobody registered. Tools whose data processing terms nobody read.
That gap between the official count and the real count is shadow AI. It exists in nearly every organization we work with. And it is the single most common root cause of AI governance failures we see.
What Shadow AI Actually Is
Shadow AI is any AI system operating in your organization without formal approval, documentation, or governance oversight.
It is not a new problem. It is the AI version of shadow IT, the decades-old pattern of business teams adopting tools faster than IT and compliance can review them. The difference with AI is the risk surface. When a marketing team used an unapproved spreadsheet tool, the downside was limited. When that same team uses an unapproved AI tool connected to customer data, the downside includes regulatory exposure, data retention violations, and potential breach scenarios.
Shadow AI typically shows up in three forms:
Free-tier tools accessed with personal or work email accounts. AI writing assistants, summarization tools, and research platforms that may be processing company data without enterprise data agreements in place.
Departmental purchases made outside of IT review. Tools bought on a team budget or expensed individually, where no one evaluated the vendor's data handling practices.
AI embedded in tools you already approved. Features added after your initial review, sometimes quietly, sometimes announced in a changelog nobody read.
Why It's a Governance Problem, Not Just a Security Problem
Most organizations frame shadow AI as a security risk. It is. But that framing is too narrow, and it leads to the wrong response.
If shadow AI were only a security problem, the answer would be blocking unauthorized tools. But that does not work at scale, and it does not address the compliance exposure that already exists.
The governance problem is this: you cannot create policies for systems you do not know you have. You cannot conduct a risk assessment on tools that are not in your inventory. You cannot respond to a regulator's question about your AI practices if your answer is based on an incomplete picture.
Under the EU AI Act, your obligations apply to AI systems you deploy, including third-party tools your teams are using. Under emerging US state-level AI regulations and sector-specific guidance from financial and healthcare regulators, the question is increasingly not just what you built, but what you are using and what it is doing with your data.
Shadow AI is not a department's problem. It is an enterprise compliance problem, and it belongs in the same conversation as your broader AI governance program.
Where Shadow AI Tends to Hide
In the organizations we assess, shadow AI tends to concentrate in a few consistent areas.
Marketing and Communications are usually the highest-density areas. Content generation tools, AI writing assistants, and social media optimization platforms are widely adopted, often without IT review.
Customer-facing operations including Customer Success, Support, and Sales frequently use AI for call summaries, email drafting, and customer data analysis. These tools often access CRM data, customer communications, and transaction records.
HR and People Operations use AI for recruiting screening, performance documentation, and employee communications. These use cases frequently involve personal data and, in some jurisdictions, trigger additional requirements under employment and privacy law.
Finance and Legal use AI for document review, contract analysis, and research. The data involved, including financial records, legal documents, and deal terms, is typically the most sensitive in the organization.
The pattern is consistent: the higher the productivity gain from AI, the faster adoption outpaces governance.
What a Real AI Inventory Includes
An AI inventory is not a list of approved tools. It is a complete map of every AI system operating in your organization, regardless of approval status.
A complete inventory covers:
- Every tool that touches company data, including shadow tools not yet on the approved list
- What data each tool accesses, processes, and retains
- Who owns each tool at the department level and who approved it (or did not)
- What the vendor's data processing terms actually say
- What decisions or workflows each tool is influencing
- Whether any of those decisions affect individuals in ways that could trigger regulatory requirements
That last point matters more than most teams realize. An AI tool that screens resumes, scores leads, or makes recommendations about customer accounts may qualify as high-risk under the EU AI Act regardless of how it was deployed or whether IT approved it.
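To make the inventory concrete, it helps to give every tool the same structured record. Below is a minimal sketch of what one entry might capture, mirroring the fields above; the class, field names, and example values are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"          # e.g. tools that may fall into EU AI Act high-risk categories
    UNASSESSED = "unassessed"

@dataclass
class AIInventoryEntry:
    tool_name: str
    department_owner: str             # who owns it at the department level
    approved_by_it: bool              # shadow tools will be False here
    data_accessed: list[str]          # e.g. ["CRM records", "customer emails"]
    vendor_terms_reviewed: bool       # has anyone read the data processing terms?
    decisions_influenced: list[str]   # e.g. ["resume screening", "lead scoring"]
    affects_individuals: bool         # may trigger regulatory requirements
    risk_tier: RiskTier = RiskTier.UNASSESSED

# A hypothetical shadow tool surfaced during discovery
entry = AIInventoryEntry(
    tool_name="ExampleWriter",
    department_owner="Marketing",
    approved_by_it=False,
    data_accessed=["campaign briefs", "customer segments"],
    vendor_terms_reviewed=False,
    decisions_influenced=["content drafting"],
    affects_individuals=False,
)
```

A uniform record like this is what makes the later classification and remediation steps possible: every tool, approved or not, answers the same questions.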
How to Run an AI Inventory Without Disrupting Operations
The most common objection to AI inventories is that they are disruptive. Done poorly, they are. Done well, they take three to four weeks and require no operational changes.
The approach we use starts with signals, not surveys. Before asking any employee anything, pull data from systems that already exist: SSO logs, expense reports, SaaS management tools, and browser-based security tooling. These sources often reveal 60 to 70 percent of the AI footprint without a single interview.
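The signal pass can be as simple as matching existing log data against a watchlist of known AI tool domains. The sketch below shows the idea against SSO-style access logs; the signature list and log format are illustrative (real watchlists run to hundreds of domains, and each identity provider exports logs differently).

```python
# Illustrative watchlist: domain -> tool name. A real one is much longer
# and maintained continuously as new tools appear.
AI_TOOL_SIGNATURES = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "app.jasper.ai": "Jasper",
}

def scan_sso_logs(log_entries):
    """Surface shadow AI signals from SSO access logs.

    Each entry is assumed to look like {"user": ..., "domain": ...}.
    Returns a mapping of tool name -> set of users seen accessing it.
    """
    found = {}
    for entry in log_entries:
        tool = AI_TOOL_SIGNATURES.get(entry["domain"])
        if tool:
            found.setdefault(tool, set()).add(entry["user"])
    return found

# Example run against two fabricated log lines
logs = [
    {"user": "a.lee@example.com", "domain": "claude.ai"},
    {"user": "b.ortiz@example.com", "domain": "intranet.example.com"},
]
signals = scan_sso_logs(logs)  # {"Claude": {"a.lee@example.com"}}
```

The same matching logic extends naturally to expense reports (vendor names instead of domains) and SaaS management exports, which is how the bulk of the footprint surfaces before any interviews happen.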
From there, structured interviews with department leads fill in the gaps. The goal is not to catch anyone using unauthorized tools. It is to get a complete picture so that you can govern what you actually have, not what you approved.
Once the inventory is complete, each tool gets a basic classification: what data it accesses, what it does with that data, and what risk tier it belongs to. This classification drives the remediation roadmap, identifying which tools need vendor assessments, which need policy coverage, and which need to be replaced or sunset.
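The triage step above can be reduced to a few deterministic rules. This is a deliberately simplified sketch of that kind of classification; the categories and thresholds are assumptions for illustration, and a real program would use a fuller rubric mapped to its own regulatory obligations.

```python
# Illustrative set of data categories treated as sensitive
SENSITIVE_DATA = {"personal data", "financial records", "health data", "legal documents"}

def classify_risk_tier(data_categories, affects_individuals):
    """Assign a first-pass risk tier to an inventoried tool.

    Tools whose outputs affect individuals (resume screening, lead scoring,
    account recommendations) are tiered highest, since they may fall into
    regulated high-risk use cases regardless of how they were deployed.
    """
    if affects_individuals:
        return "high"
    if SENSITIVE_DATA & set(data_categories):
        return "medium"
    return "low"

# Examples: a resume screener, a contract-review tool, a copy generator
tier_screener = classify_risk_tier(["personal data"], affects_individuals=True)
tier_contracts = classify_risk_tier(["legal documents"], affects_individuals=False)
tier_copywriter = classify_risk_tier(["marketing copy"], affects_individuals=False)
```

Even a crude tiering like this is enough to sequence the remediation roadmap: high-tier tools get vendor assessments first, low-tier tools get policy coverage.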
What to Do with What You Find
The inventory is not the end. It is the foundation.
Once you know what you have, the work splits into three tracks.
The first is remediation. Tools that cannot meet your governance requirements need to be addressed, whether that means enterprise agreements, data handling amendments, or replacement.
The second is policy. Your acceptable use policy, data classification policy, and AI procurement process all need to reflect your actual AI footprint, not an idealized version of it.
The third is ongoing monitoring. Shadow AI is not a one-time problem. New tools appear constantly. The inventory process needs to become a standing practice, not a project.
Organizations that treat the inventory as a project close the gap once. Organizations that treat it as a practice stay ahead of it.
If you are not sure where to start, our AI Governance Maturity Framework is a free tool that walks your team through five governance domains and helps you identify where your program has gaps. Or reach out at hello@revoya.ai if you would like to talk through what a full inventory would look like for your organization.
