• 6D At-Risk Analysis
At Risk · Shadow IT · Unauthorized SaaS & AI

The Shadow Stack: 65% of SaaS Apps Are Unsanctioned and AI Just Added a New Layer

65% of enterprise SaaS applications are used without IT approval. 69% of organisations have evidence employees use prohibited AI tools. GenAI traffic surged 890% in 2024. Shadow IT accounts for 30–40% of enterprise IT spending. 90% of security leaders themselves use unapproved AI tools. Only 37% of organisations have governance policies. Every unauthorized tool is an unmonitored data exit point. The AI shadow layer is the newest, fastest-growing, and hardest-to-detect threat to enterprise data control.

65%
SaaS Unsanctioned
69%
Use Prohibited AI
890%
GenAI Traffic Surge
37%
Have AI Governance
6/6
Dimensions Hit
2,234
FETCH Score
01

The Insight

Shadow IT has been a known risk for over a decade. What changed is the AI layer. Traditional shadow IT — employees installing unauthorized software or SaaS tools — required downloads, accounts, and configurations that left traces. Shadow AI requires nothing more than opening a browser tab. An employee pastes a confidential document into ChatGPT, uploads a spreadsheet of customer data to Claude, or feeds source code into an unauthorized Copilot instance. To the employee, it feels like using a search engine. To the organisation, it is an unmonitored data exfiltration event.[1][2]

The scale is measurable. Gartner surveyed 302 cybersecurity leaders and found that 69% have evidence or suspect employees are using prohibited public generative AI. GenAI traffic surged more than 890% in 2024 (Menlo Security). CrowdStrike’s 2026 Global Threat Report found adversaries exploited generative AI tools at 90+ organisations, with ChatGPT mentioned 550% more frequently in criminal forums. 98% of organisations report some form of unsanctioned AI use.[2][1]

The Paradox

Employees adopt shadow tools because official tools do not meet their needs. The tools work. They save time. They solve real problems. Blocking them without providing alternatives makes the problem worse.

vs

The Risk

Every unauthorized AI interaction is an unmonitored data pathway. GDPR, SOC2, HIPAA compliance assumes IT controls data flow. Shadow AI breaks that assumption at the speed of a paste command.

The most alarming statistic is not about employees. It is about security professionals. UpGuard research found 90% of security leaders themselves use unapproved AI tools at work, with 69% of CISOs incorporating them into daily workflows. Next DLP found 73% of cybersecurity professionals have used unsanctioned applications, including AI tools, in the past year. The people responsible for preventing shadow IT are its most prolific practitioners.[3][4]

90%
Security Leaders Use Unapproved AI
The guardians are the practitioners. 90% of security leaders use unapproved AI tools. 69% of CISOs incorporate them into daily workflows. When the people responsible for governance are the ones circumventing it, the compliance framework is not just incomplete — it is structurally compromised.
02

The Two Shadow Layers

Traditional Shadow IT

65%

65% of all SaaS apps are unsanctioned. Shadow IT accounts for 30–40% of enterprise IT spending (Gartner). Average company wastes $135,000 annually on unnecessary SaaS tools. Nearly 1 in 2 cyberattacks stem from shadow IT, costing $4.2M+ to remediate.[5]

Shadow AI Layer

890%

GenAI traffic surged 890% in 2024. 69% of organisations have evidence of prohibited AI use. Shadow AI requires only a browser — no installation, no account in some cases. 233 documented AI-related incidents in 2024 involving governance failures.[1][6]

Governance Gap

37%

Only 37% of organisations have policies to manage or detect shadow AI (IBM 2025). 63% are operating without guardrails. Only 23% require staff training on approved AI usage. The governance gap is wider for AI than it ever was for traditional shadow IT.[1]

Data Exfiltration

75%

59% of employees use unapproved AI apps (Cybernews). 75% of those shared potentially sensitive information. Tesla employees shared proprietary manufacturing data with ChatGPT. Samsung banned ChatGPT after engineers leaked source code. Free-tier services retain prompts as training data.[4]

Breach Cost

$4.35M

Average data breach cost: $4.35 million (IBM). 60% of breaches lead to increased prices passed to consumers. 1 in 10 cybersecurity professionals admit shadow AI or SaaS use led to a data breach at their organisation.[5][4]

Gartner 2030 Prediction

40%

By 2030, more than 40% of enterprises will experience security or compliance incidents linked to unauthorized shadow AI. By 2027, 75% of employees will acquire, modify, or create technology outside IT’s visibility. The shadow is growing, not shrinking.[2]

When network engineers use unauthorized coding assistants, they risk exposing infrastructure configurations, implementing vulnerable automation scripts with hard-coded credentials, or leaking network architecture that maps your entire environment.

— Andrius Girėnas, security researcher[4]
03

The 6D At-Risk Cascade

The cascade originates from two dimensions simultaneously: Operational (D6) and Regulatory (D4). Unauthorized tools create unmonitored data pathways (D6) while breaking compliance assumptions (D4). This dual origin flows through Quality (D5, tool inconsistency), Revenue (D3, redundant costs and breach risk), Employee (D2, friction between productivity and policy), and Customer (D1, data exposure risk).

Dimension · Score · At-Risk Evidence

Operational (D6) · Origin — 65 · Unmonitored Pathways
65% of SaaS unsanctioned. Every tool is an unmonitored pathway. Shadow IT proliferates because IT cannot keep pace: only 12% of IT departments can keep up with new technology requests. Each unauthorized application creates a data exit point that GDPR, SOC2, and HIPAA compliance cannot account for. Shadow AI compounds this: browser-based, invisible, and retaining data as training input.[1][5]

Regulatory (D4) · Origin — 62 · Compliance Breach
Compliance assumes IT controls data flow. Shadow AI breaks that assumption. GDPR requires data processing visibility. HIPAA requires BAAs for tools handling PHI. SOC2 assumes controlled environments. Shadow AI bypasses all of these assumptions at the speed of a paste command. By 2028, 65% of governments will enforce data sovereignty rules restricting cross-border AI use. The regulatory net is tightening around a shadow that is growing.[2][6]

Quality (D5) · L1 — 55 · Tool Inconsistency
Different teams using different tools for the same task. No standardisation, no quality baseline, no audit trail. AI-generated outputs from different models produce inconsistent results. Shadow AI creates organisational knowledge that lives outside enterprise systems and cannot be searched, audited, or governed.[1]

Revenue (D3) · L1 — 52 · Dual Cost Exposure
Shadow IT accounts for 30–40% of IT spending. Average company wastes $135K/year on unnecessary SaaS. Breach cost averages $4.35M. The cost is dual: redundant licence spending on tools IT doesn’t know about, plus catastrophic breach risk from data flowing through channels IT doesn’t monitor.[5]

Employee (D2) · L2 — 48 · Productivity vs Policy
The paradox: employees adopt shadow tools because official alternatives fail them. Blocking without providing alternatives increases frustration and drives usage underground. Healthcare proof: one system that provided approved AI tools saw 89% reduction in unauthorized use and 32 minutes daily time savings per clinician. The fix is substitution, not prohibition.[1]

Customer (D1) · L2 — 45 · Data Exposure
Customer data flowing through unauthorized channels. Samsung source code leaked to ChatGPT. Tesla manufacturing data shared with AI tools. Healthcare clinicians processing PHI without BAAs. The customer does not know their data transited through an unauthorized AI service — until the breach notification arrives.[4]
6/6
Dimensions Hit
10×–15×
Multiplier (Extreme)
2,234
FETCH Score

FETCH Score Breakdown

Chirp (avg cascade score across 6D): (65 + 62 + 55 + 52 + 48 + 45) / 6 = 54.5
|DRIFT| (methodology - performance): |85 - 35| = 50 — Default DRIFT. Shadow IT governance methodology is well-established: SaaS management platforms, DLP policies, CASB solutions, approved tool catalogues, employee training. Performance is poor: 37% have governance, 63% are flying blind, and even security leaders circumvent their own policies.
Confidence: 0.82 — Gartner (302 cybersecurity leaders, 2025), CrowdStrike (2026 Global Threat Report), Menlo Security (GenAI traffic data), IBM (governance survey), UpGuard (security leader behaviour), Zluri/Zylo (SaaS management data). Multiple independent institutional sources.
FETCH = 54.5 × 50 × 0.82 = 2,234  ->  EXECUTE — HIGH PRIORITY (threshold: 1,000)
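The breakdown above can be reproduced in a few lines. This is an illustrative sketch, not an official FETCH implementation; the variable names and the MONITOR fallback label are assumptions of the sketch, not part of the methodology as stated.

```python
# Illustrative FETCH calculation, following the breakdown in this section.
# Names and the "MONITOR" fallback are assumptions; inputs come from the text.

cascade_scores = {"D6": 65, "D4": 62, "D5": 55, "D3": 52, "D2": 48, "D1": 45}
methodology, performance, confidence = 85, 35, 0.82

chirp = sum(cascade_scores.values()) / len(cascade_scores)  # average cascade score: 54.5
drift = abs(methodology - performance)                      # |85 - 35| = 50
fetch = chirp * drift * confidence                          # 54.5 * 50 * 0.82 = 2,234.5

THRESHOLD = 1000
decision = "EXECUTE" if fetch >= THRESHOLD else "MONITOR"
print(int(fetch), decision)  # 2234 EXECUTE
```

The 10×–15× multiplier shown in the stat cards is a separate severity band for a 6/6-dimension hit and does not enter this arithmetic.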
Origin: D6 Operational + D4 Regulatory
L1: D5 Quality + D3 Revenue
L2: D2 Employee + D1 Customer
CAL Source: Cascade Analysis Language — shadow IT/AI at-risk
-- The Shadow Stack: Shadow IT/AI At-Risk
-- Sense -> Analyze -> Measure -> Decide -> Act

FORAGE shadow_stack_risk
WHERE unsanctioned_saas_pct > 60
  AND prohibited_ai_usage_pct > 65
  AND genai_traffic_growth > 500
  AND governance_policy_pct < 40
  AND security_leaders_using_shadow_ai > 85
ACROSS D6, D4, D5, D3, D2, D1
DEPTH 3
SURFACE shadow_stack

DIVE INTO ai_shadow_layer
WHEN browser_based_ai_access = true  -- no install, no trace
  AND data_retention_by_ai_provider = true  -- free tiers retain prompts
  AND compliance_assumption_broken = true  -- GDPR/HIPAA assumes controlled flow
TRACE shadow_stack  -- D6+D4 -> D5+D3 -> D2+D1
EMIT unauthorized_data_pathway_cascade

DRIFT shadow_stack
METHODOLOGY 85  -- SaaS management, DLP, CASB, approved catalogues — all exist
PERFORMANCE 35  -- 37% have governance, 90% of security leaders circumvent it

FETCH shadow_stack
THRESHOLD 1000
ON EXECUTE CHIRP critical "6/6 dimensions, AI adds invisible data exit layer, guardians are practitioners"

SURFACE analysis AS json
SENSE · Origin: D6 + D4 (Operational + Regulatory). 65% of SaaS unsanctioned. 69% of organisations have evidence of prohibited AI use. GenAI traffic up 890%. Shadow IT = 30–40% of IT spend. 90% of security leaders themselves use unapproved AI. Only 37% have governance policies. The AI shadow layer requires nothing more than a browser tab — invisible to DLP, invisible to CASB, invisible to IT.
ANALYZE · D6+D4→D5: tool inconsistency, no quality baseline across shadow tools. D6+D4→D3: 30–40% shadow IT spend plus $4.35M average breach cost. D5+D3→D2: employee frustration when productive tools are blocked without alternatives. D2→D1: customer data flowing through unmonitored channels. Cross-references: UC-142 (Stack Tax), UC-141 (Compliance Cliff), UC-083 (Toxic Flow), UC-201 (Zero Trust Paradox).
MEASURE · DRIFT = 50 (default). Governance methodology exists: SaaS discovery platforms, DLP policies, CASB solutions, approved tool catalogues, employee training programmes. Performance is poor because the governance targets the wrong layer — traditional shadow IT controls do not detect browser-based AI interactions that look like normal web usage.
DECIDE · FETCH = 2,234 → EXECUTE — HIGH PRIORITY (threshold: 1,000)
ACT · Cascade alert — shadow IT/AI at-risk. The insight is that AI created a new shadow layer that is qualitatively different from traditional shadow IT. It requires no installation. It leaves minimal traces. It retains data on the provider’s servers. And the people best positioned to prevent it — security leaders — are its most active users. The fix is not prohibition (which drives usage underground) but substitution: provide approved tools that actually work. Healthcare evidence: 89% reduction in unauthorized use when approved alternatives were offered.
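The CAL listing ends with `SURFACE analysis AS json`, but the shape of that JSON is not specified anywhere in this section. A minimal sketch of what such a payload might contain, with field names chosen here for illustration only:

```python
import json

# Hypothetical payload for `SURFACE analysis AS json`.
# Field names are assumptions of this sketch, not part of any CAL specification;
# the values are the figures quoted in the FETCH breakdown above.
analysis = {
    "origin": ["D6", "D4"],
    "cascade": {"L1": ["D5", "D3"], "L2": ["D2", "D1"]},
    "chirp": 54.5,
    "drift": 50,
    "confidence": 0.82,
    "fetch": 2234,
    "threshold": 1000,
    "decision": "EXECUTE",
}
payload = json.dumps(analysis, indent=2)
print(payload)
```

A structured payload like this is what would let downstream tooling act on the cascade (ticketing, alert routing) rather than a prose summary.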
04

Key Insights

The Guardians Are the Practitioners

90% of security leaders use unapproved AI tools. 69% of CISOs incorporate them into daily workflows. 73% of cybersecurity professionals have used unsanctioned applications. When the people responsible for security governance are the most prolific shadow AI users, the problem is not employee disobedience — it is that the approved toolset fails to meet professional needs. The governance framework is structurally compromised by the behaviour of its own enforcers.

AI Shadow IT Is Qualitatively Different

Traditional shadow IT required installation, configuration, and accounts — leaving traces. Shadow AI requires a browser tab. Data leaves the organisation at the speed of a paste command. Free-tier AI services retain prompts as training data, converting proprietary information into third-party datasets. The attack surface is not an application boundary — it is the browser itself. DLP and CASB tools designed for application-level monitoring cannot detect conversational AI interactions that look like normal web traffic.
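The DLP and CASB blind spot does not leave defenders with nothing: web-proxy logs still record the destination host, so a first-pass triage can at least tag requests to known GenAI endpoints. Below is a hedged sketch, assuming a toy log format (`<user> <url>`) and a hand-maintained domain list; a real deployment would use a vendor-maintained URL category feed, and even then this yields visibility only, not content-level inspection of what was pasted.

```python
# Illustrative proxy-log triage for browser-based GenAI traffic.
# The log format and domain list are assumptions for this sketch.
from urllib.parse import urlparse

GENAI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def flag_genai(log_lines):
    """Yield (user, host) for requests to known GenAI endpoints."""
    for line in log_lines:
        user, url = line.split()  # assumed format: "<user> <url>"
        host = urlparse(url).hostname or ""
        if host in GENAI_DOMAINS or any(host.endswith("." + d) for d in GENAI_DOMAINS):
            yield user, host

logs = [
    "alice https://claude.ai/chat/abc123",
    "bob https://example.com/docs",
    "carol https://chat.openai.com/c/xyz",
]
print(list(flag_genai(logs)))  # [('alice', 'claude.ai'), ('carol', 'chat.openai.com')]
```

This catches the destination, not the data: it tells you who is reaching GenAI services, which is enough to target substitution with approved tools, but not what left the organisation.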

Prohibition Drives Usage Underground

Samsung banned ChatGPT after source code was leaked. The ban did not eliminate AI use — it eliminated visibility into AI use. The healthcare proof point is instructive: providing approved AI tools reduced unauthorized use by 89% while saving 32 minutes per clinician per day. The structural fix is substitution, not suppression. Organisations that provide tools meeting employee needs regain both productivity and governance.

The Enterprise Version of UC-141

UC-141 (Compliance Cliff) mapped how compliance burden cascades at SMB scale. UC-204 maps the same D4→D6 cascade at enterprise scale, with a new dimension: the AI shadow layer that SMBs and enterprises share. The pattern is identical. The scale is different. The AI layer makes both worse because it is invisible to the controls that each has built.

Sources

Tier 1 — Analyst Research
[1]
Vectra AI — Shadow AI Explained: Risks, Costs, and Enterprise Governance. GenAI traffic surged 890% in 2024. 98% of organisations report unsanctioned AI use. 37% have governance policies. Healthcare: 89% reduction with approved tools. CrowdStrike: ChatGPT 550% more frequent in criminal forums.
vectra.ai
February 2026
[2]
Gartner — Identifies Critical GenAI Blind Spots. 302 cybersecurity leaders surveyed (Mar–May 2025). 69% suspect or have evidence of prohibited AI use. Predicts 40% of enterprises hit by shadow AI incidents by 2030. 50% will face delayed AI upgrades from unmanaged GenAI technical debt by 2030.
gartner.com
November 2025
[3]
Fortra — Shadow AI Security Breaches Will Hit 40% of Companies by 2030. UpGuard: 90% of security leaders use unapproved AI. 69% of CISOs incorporate shadow AI into daily workflows. Pasting documents into AI chatbots is no safer than uploading to social media.
fortra.com
November 2025
Tier 2 — Industry Data
[4]
SDxCentral — Keeping Shadow AI From the Enterprise End Zone. Next DLP: 73% of cybersecurity professionals used unsanctioned apps. 1 in 10 admit shadow AI led to breach. Cybernews: 59% use unapproved AI apps, 75% shared sensitive information. Technical staff with privileged access create organisation-wide risk.
sdxcentral.com
November 2025
[5]
Zluri — Shadow IT Statistics: Key Facts 2025. 65% of SaaS apps unsanctioned. Shadow IT = 30–40% of IT spend (Gartner), up to 50% (Everest Group). Average breach cost $4.35M. 57% of SMBs experiencing high-impact shadow IT. 85% have a team using it.
zluri.com
2025
[6]
Knostic — Detect and Control: Shadow AI in the Enterprise. HAI AI Index: 233 AI-related incidents in 2024. ISACA: less than one-third have comprehensive governance. Only 23% require AI usage training. Shadow AI creates knowledge that lives outside enterprise systems.
knostic.ai
January 2026
[7]
IT Pro / Microsoft — 71% of UK Workers Use Shadow AI. 22% used unauthorized tools for risky finance-related tasks. Gartner: by 2027, 75% of employees will use technology outside IT oversight.
itpro.com
November 2025

The headline is the trigger. The cascade is the story.

One conversation. We’ll tell you if the six-dimensional view adds something new — or confirm your current tools have it covered.