AI-Guardian · Enterprise Security · 8 min read

Shadow AI: How to Detect and Control Unsanctioned AI Tool Usage

65% of enterprise employees use at least one unsanctioned AI tool. Here's how to find out what's happening in your organisation — and how to bring it under control without killing productivity.

AI-Guardian Security Team

Shadow AI is the generative AI equivalent of Shadow IT — and it's growing faster than most security teams realise. When organisations block ChatGPT at the network layer, employees don't stop using AI. They find a way around it. This guide explains how to detect Shadow AI usage, quantify your exposure, and implement controls that actually work.

What Is Shadow AI?

Shadow AI refers to the use of AI tools — generative AI assistants, AI-powered plugins, AI coding tools — by employees without explicit IT or security approval. It includes:

  • Using personal ChatGPT accounts on company devices
  • Accessing Claude, Gemini, or Perplexity through mobile hotspots to bypass proxies
  • Installing AI browser extensions (Grammarly AI, Monica, Merlin) without IT approval
  • Using AI features embedded in approved tools (Notion AI, Slack AI, Figma AI)
  • Running local AI models to process sensitive documents "safely"

The last item deserves attention: employees who believe they're doing the right thing by using a local model may still be exposing sensitive data if the model was downloaded from an untrusted source, or if they subsequently share the output with unsanctioned cloud services.

How Widespread Is Shadow AI?

The data is striking. Independent research from multiple cybersecurity firms in 2025 converged on similar findings:

  • 65–75% of enterprise employees who use AI tools use at least one tool that is not officially sanctioned by their IT or security department
  • 38% of AI usage in enterprise environments occurs outside corporate-monitored network channels (personal hotspots, home networks, split-tunnelled VPNs)
  • Of employees who know their company has blocked specific AI tools, 52% report actively circumventing the block

The policy-circumvention statistic is the most important one. It demonstrates that network-level blocking is not a solution — it's a friction generator that drives usage underground and makes it harder to audit.

Why Shadow AI Is More Dangerous Than Shadow IT

Traditional Shadow IT — an employee installing Dropbox to share files, or using a personal Trello board for project management — created data governance risks. Shadow AI creates the same risks, but with a critical difference: AI systems consume and process data in ways that file sync tools don't. Every interaction with a generative AI model potentially exposes the input data to:

  • The AI vendor's training data pipeline (unless enterprise opt-out is in place)
  • The AI vendor's security posture (which the organisation has not vetted)
  • Potential regulatory violations if the data includes personal data without a DPA
  • IP exposure if the data includes proprietary algorithms or business logic

A file in an unmanaged Dropbox account is a data governance problem. A source code file processed by an unvetted AI model is an IP exposure, a potential training data leak, and a possible regulatory violation — all at once.

How to Detect Shadow AI in Your Organisation

1. Network traffic analysis (limited but useful)

For traffic that flows through your corporate network or VPN, analyse DNS resolution and outbound HTTPS connections to known AI service domains:

  • api.openai.com, chat.openai.com (ChatGPT / OpenAI API)
  • claude.ai, api.anthropic.com (Anthropic / Claude)
  • generativelanguage.googleapis.com (Gemini API)
  • api.perplexity.ai, copilot.microsoft.com, cursor.sh

Cross-reference this list against your approved AI tool inventory. Any domain with significant traffic that isn't on the approved list is a Shadow AI indicator.
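The cross-referencing step above can be sketched as a small script. This is a minimal illustration, assuming a simplified DNS log format (one queried domain at the end of each line) and a hypothetical approved-tool inventory; real resolver logs and inventories will differ.

```python
# Sketch: flag DNS queries to known AI service domains that are not on the
# approved list. Log format and the approved set are illustrative assumptions.
from collections import Counter

AI_SERVICE_DOMAINS = {
    "api.openai.com", "chat.openai.com",
    "claude.ai", "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.perplexity.ai", "copilot.microsoft.com", "cursor.sh",
}

# Hypothetical approved inventory — substitute your organisation's own.
APPROVED_DOMAINS = {"chat.openai.com"}

def shadow_ai_hits(dns_log_lines):
    """Return (domain, count) pairs for unapproved AI domains, most frequent first."""
    counts = Counter()
    for line in dns_log_lines:
        # Assume one queried domain per line, e.g. "2025-06-01T10:00Z claude.ai"
        domain = line.strip().split()[-1].lower()
        if domain in AI_SERVICE_DOMAINS and domain not in APPROVED_DOMAINS:
            counts[domain] += 1
    return counts.most_common()

log = ["t1 claude.ai", "t2 chat.openai.com", "t3 claude.ai", "t4 example.com"]
print(shadow_ai_hits(log))  # claude.ai flagged; chat.openai.com is approved
```

In practice you would feed this from your resolver's query logs and schedule it as a recurring report rather than a one-off run.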

Limitation: This only covers on-network traffic. Employees on mobile data, split-tunnel VPNs, or home networks are invisible here.

2. Endpoint software inventory

Use your MDM (Jamf, Intune, etc.) to audit installed browser extensions and desktop applications across managed devices. Flag any extension or app that communicates with AI service domains. This is more comprehensive than network analysis but still misses browser-based access through standard web interfaces.
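The audit step can be approximated by scanning an MDM inventory export. The sketch below assumes a simplified CSV export with `device_id` and `extension_name` columns and an illustrative keyword list; real Jamf or Intune exports use different schemas.

```python
# Sketch: scan an MDM software-inventory export (CSV) for AI browser
# extensions. Column names and the keyword list are illustrative assumptions.
import csv
import io

AI_EXTENSION_KEYWORDS = {"grammarly", "monica", "merlin", "chatgpt", "copilot"}

def flag_ai_extensions(csv_text):
    """Return (device, extension) pairs whose name matches an AI keyword."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        name = row["extension_name"].lower()
        if any(kw in name for kw in AI_EXTENSION_KEYWORDS):
            flagged.append((row["device_id"], row["extension_name"]))
    return flagged

export = """device_id,extension_name
LAPTOP-01,Merlin AI Assistant
LAPTOP-02,uBlock Origin
LAPTOP-03,Grammarly
"""
print(flag_ai_extensions(export))
```

Keyword matching is deliberately loose here; in production you would match on extension IDs or the domains the extension communicates with, not display names.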

3. Endpoint AI DLP with Shadow AI reporting

The most comprehensive approach is deploying an endpoint agent that monitors AI tool usage at the input layer — regardless of network path. AI-Guardian's desktop agent observes which applications are submitting text to AI endpoints and surfaces this in the admin dashboard as a Shadow AI report, categorised by:

  • Tool (ChatGPT, Claude, Gemini, Copilot, etc.)
  • Access method (browser, desktop app, API call)
  • Frequency and volume per user (anonymised)
  • Data categories involved (credentials, PII, source code)

This gives security teams a complete picture of AI usage across the organisation — including tools that were never officially approved — without the visibility gaps that network-only monitoring creates.
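Conceptually, a report like this is an aggregation over per-event usage records. The sketch below shows the shape of that aggregation; the event fields mirror the categories listed above but are illustrative, not AI-Guardian's actual schema.

```python
# Sketch: aggregating endpoint AI-usage events into a Shadow AI report,
# grouped by tool and access method. Field names are illustrative.
from collections import defaultdict

def shadow_ai_report(events):
    """Group usage counts by (tool, access_method)."""
    report = defaultdict(int)
    for e in events:
        report[(e["tool"], e["method"])] += 1
    return dict(report)

events = [
    {"tool": "ChatGPT", "method": "browser"},
    {"tool": "Claude", "method": "desktop app"},
    {"tool": "ChatGPT", "method": "browser"},
]
print(shadow_ai_report(events))
```

A real report would add per-user anonymised volumes and data-category tags as further grouping keys.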

Controls That Actually Work

The most important insight from organisations that have successfully reduced Shadow AI risk is counterintuitive: the answer is not blocking. Blocking creates resentment, drives usage underground, and eliminates visibility. The organisations with the lowest Shadow AI risk are those that have made approved AI tools easy to access and have added transparent controls that don't impede productivity.

Tier 1: Make approved tools the path of least resistance

  • Provide employees with enterprise-tier AI licences (ChatGPT Enterprise, Claude for Work, Gemini for Workspace) that include DPA coverage and training data opt-out
  • Pre-configure approved AI tools in the browser and IDE — make them available in one click
  • Publish a clear, short list of approved tools with their use case scope (e.g. "ChatGPT Enterprise: drafting and research. Not for source code or customer data.")

Tier 2: Add a transparent safety layer, not a blocker

  • Deploy AI-Guardian across all managed devices. It protects employees from accidental data leaks without blocking any AI tool or workflow
  • Configure redaction rules to match your organisation's data classification policy
  • Use the audit dashboard to build a baseline of AI usage patterns — you can't manage what you can't measure
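To make the redaction idea concrete, here is a minimal regex-based sketch of the kind of rule a data classification policy might translate into. The patterns and placeholder tokens are illustrative assumptions, not AI-Guardian's configuration format.

```python
# Sketch: pattern-based redaction of sensitive strings before they reach an
# AI endpoint. Patterns and tokens are illustrative, not a product schema.
import re

REDACTION_RULES = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED:AWS_KEY]"),   # AWS access key IDs
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED:SSN]"),      # US SSN format
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED:EMAIL]"),  # email addresses
]

def redact(text):
    """Apply each redaction rule in order, replacing matches with tokens."""
    for pattern, token in REDACTION_RULES:
        text = pattern.sub(token, text)
    return text

print(redact("Contact alice@example.com, key AKIAABCDEFGHIJKLMNOP"))
```

Regexes catch well-structured secrets; broader categories such as source code or customer names need classifier-based detection layered on top.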

Tier 3: Govern, don't just monitor

  • Conduct a quarterly Shadow AI review: compare the current AI usage baseline against approved tool inventory and escalate any high-volume unapproved tools
  • Run annual AI security awareness training — focus on specific, concrete examples of what not to paste (not abstract "be careful with sensitive data" messaging)
  • Create a fast-track process for employees to submit new AI tool requests — if the approval process takes 3 months, employees will route around it
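The quarterly review step amounts to diffing the usage baseline against the approved inventory and ranking what's left by volume. A sketch, with hypothetical tool names and an arbitrary escalation threshold:

```python
# Sketch: diff the observed AI-usage baseline against the approved-tool
# inventory and rank unapproved tools by volume. Names and the threshold
# are illustrative assumptions.

APPROVED_TOOLS = {"ChatGPT Enterprise", "Claude for Work"}

def escalation_list(usage_baseline, min_events=50):
    """Return unapproved tools at or above the threshold, busiest first."""
    return sorted(
        ((tool, n) for tool, n in usage_baseline.items()
         if tool not in APPROVED_TOOLS and n >= min_events),
        key=lambda item: item[1],
        reverse=True,
    )

baseline = {"ChatGPT Enterprise": 900, "Perplexity": 120, "Merlin": 30}
print(escalation_list(baseline))  # Merlin stays below the threshold
```

High-volume entries on this list are exactly the tools worth fast-tracking through the approval process rather than blocking outright.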

If you're building out a Shadow AI governance programme and want a starting point, request our Shadow AI Risk Assessment template — a free resource we provide to enterprise prospects during onboarding conversations.
