AI-Guardian
Compliance · 10 min read

EU AI Act Compliance Checklist: What Enterprises Must Do in 2026

A complete, actionable checklist for enterprise deployers of generative AI. Covers risk classification, governance obligations, technical controls, and key compliance deadlines.

AI-Guardian Security Team

The EU AI Act entered into force on 1 August 2024, and its obligations are rolling out in phases through 2026 and beyond. For enterprises deploying generative AI tools — whether ChatGPT, Copilot, or internal LLM-powered systems — the time to prepare is now. This checklist covers the obligations most relevant to companies using third-party AI systems internally.

Who This Applies To

The EU AI Act applies to any organisation that deploys AI systems in the EU, regardless of where the organisation or the AI provider is headquartered. "Deployment" includes using AI tools internally — not just selling AI products to customers. If your employees in Frankfurt use ChatGPT to draft customer emails, your organisation is a "deployer" under the Act.

Key distinction

The Act distinguishes between providers (who develop AI systems) and deployers (who use AI systems in their business). Most enterprises are deployers. Obligations differ — deployers have fewer technical requirements but significant governance and documentation duties.

Risk Classification: Where Do Your AI Use Cases Fall?

The EU AI Act uses a risk-tiered framework. Before completing any compliance checklist, classify each AI use case in your organisation:

Unacceptable Risk — Prohibited

  • Social scoring systems by public authorities
  • Real-time biometric surveillance in public spaces (with narrow exceptions)
  • AI that exploits psychological weaknesses to manipulate behaviour
  • Predictive policing based solely on profiling

High Risk — Strict Obligations

  • AI used in hiring and employment decisions
  • AI used to evaluate creditworthiness
  • AI in education (exam grading, student assessment)
  • AI for access to essential services

Limited Risk — Transparency Requirements

  • Chatbots (must disclose they are AI)
  • AI-generated content (must be labelled)
  • Emotion recognition systems
  • Deep fake generation

Minimal Risk — No Mandatory Obligations

  • AI spam filters
  • AI-assisted code completion (Copilot, Cursor)
  • Internal productivity tools
  • Most general-purpose LLM usage for drafting / summarisation

Most enterprises using ChatGPT or Copilot for productivity purposes fall into the Minimal Risk tier. However, any use case involving HR decisions, financial assessments, or customer scoring requires a High Risk compliance programme.
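Maintaining the classification in code rather than a spreadsheet makes it auditable and lets governance tooling enforce it. The sketch below is illustrative: the use-case names and the default-to-High rule for unknown cases are assumptions for this example, not part of the Act, and real classification requires legal review of each use case.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping based on the categories above; every entry and its
# tier must be reviewed by counsel before it drives any enforcement.
USE_CASE_TIERS = {
    "hiring screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "exam grading": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "deepfake generation": RiskTier.LIMITED,
    "code completion": RiskTier.MINIMAL,
    "email drafting": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; unknown cases default to HIGH pending review."""
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.HIGH)
```

Defaulting unknown use cases to High Risk is a deliberately conservative choice: it forces a documented review before a new tool slips into the Minimal tier by omission.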

The Compliance Checklist for AI Deployers

Governance and Policy

Appoint an AI compliance owner (CISO, DPO, or dedicated AI governance role)
Create an AI system inventory — document every AI tool used internally
Classify each AI use case by risk tier and document the classification rationale
Establish an AI usage policy covering approved tools, prohibited use cases, and data handling rules
Define a process for evaluating new AI tools before adoption
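The inventory item above is easiest to keep current as structured records rather than free text. A minimal sketch, with fields chosen for this example (the Act does not prescribe a schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    risk_tier: str       # "unacceptable" | "high" | "limited" | "minimal"
    rationale: str       # documented classification rationale
    dpa_signed: bool     # Data Processing Agreement in place
    approved: bool       # passed the pre-adoption evaluation process
    reviewed_on: date

inventory: list[AIToolRecord] = [
    AIToolRecord(
        name="ChatGPT Enterprise", vendor="OpenAI", risk_tier="minimal",
        rationale="Drafting and summarisation only; no decisions about individuals",
        dpa_signed=True, approved=True, reviewed_on=date(2025, 6, 1),
    ),
]

# Surface tools that are unapproved or missing a DPA for follow-up
gaps = [t.name for t in inventory if not (t.approved and t.dpa_signed)]
```

A record like this doubles as the classification-rationale documentation the checklist calls for, and the `gaps` query gives compliance owners a standing work list.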

Data Protection and Privacy

Ensure a valid GDPR legal basis exists for all personal data processed by AI systems
Execute Data Processing Agreements (DPAs) with every AI vendor processing personal data
Verify AI vendors' data retention policies and configure enterprise training data opt-out
Conduct Data Protection Impact Assessments (DPIAs) for high-risk AI use cases
Implement technical controls to prevent personal data from being pasted into AI prompts (DLP)

Technical Controls

Deploy endpoint DLP to prevent credentials and PII from reaching AI APIs
Implement audit logging for AI tool usage (what was submitted, by whom, when)
Configure Shadow AI monitoring to detect unsanctioned AI tool usage
Establish a credential rotation procedure for any secrets exposed to AI systems
Block or restrict access to AI tools that lack enterprise data processing agreements
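The audit-logging item above (what was submitted, by whom, when) can be sketched as an append-only JSONL log. Storing a hash of the prompt rather than its text is a design choice assumed here, so the audit trail itself does not become a second copy of sensitive data; the function name and file path are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_request(user_id: str, tool: str, prompt: str,
                   logfile: str = "ai_audit.jsonl") -> None:
    """Append one audit record per AI submission: who, what, when.
    Only a SHA-256 digest of the prompt is stored, not the prompt itself."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
```

The digest still lets investigators confirm whether a specific known document was submitted, without the log retaining the content.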

Human Oversight (High-Risk Systems Only)

Ensure humans can override AI decisions affecting individuals
Document the human review process for AI-assisted decisions
Train staff who operate high-risk AI systems on their obligations
Maintain logs sufficient to reconstruct AI decisions for audit purposes

Timeline: Key Dates for Enterprise Compliance

Date        Obligation
Feb 2025    Prohibited AI practices banned (Article 5)
Aug 2025    GPAI (General Purpose AI) model obligations apply — transparency, copyright summaries
Aug 2026    High-risk AI system obligations in full effect
Aug 2027    Extended deadline for certain existing high-risk AI systems

The Technical Control Most Organisations Are Missing

The most commonly overlooked EU AI Act obligation for deployers is Article 26(1): deployers must implement appropriate technical and organisational measures to ensure AI systems are used in accordance with their intended purpose and the Act's requirements.

For generative AI tools, this means actively preventing employees from submitting prohibited categories of data — personal data without a legal basis, biometric data, data from children — to AI APIs. "We told employees not to" is not a technical measure. A browser-level and desktop-level DLP agent that intercepts and redacts prohibited data categories before they reach AI endpoints is the compliant implementation.
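The intercept-and-redact step described above can be sketched with pattern matching. This is a toy illustration, not AI-Guardian's implementation: the three patterns are assumptions for the example, and production DLP needs far broader coverage (names, addresses, contextual detection, vendor-specific key formats).

```python
import re

# Illustrative detectors only — real DLP uses many more categories.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "AWS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace prohibited data categories with placeholders before the
    prompt leaves the endpoint; return the findings for the audit log."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

clean, hits = redact(
    "Refund anna.schmidt@example.com, IBAN DE89370400440532013000"
)
# hits == ["EMAIL", "IBAN"]
```

Returning the list of findings alongside the cleaned prompt matters: the redaction event itself is the evidence of an "appropriate technical measure" that an audit will ask for.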

AI-Guardian provides exactly this layer, with a compliance dashboard that generates the evidence documentation needed for regulatory audits. If your organisation is working toward EU AI Act readiness, contact us for a compliance gap assessment.
