The EU AI Act entered into force on 1 August 2024, and its obligations are rolling out in phases through 2026 and beyond. For enterprises deploying generative AI tools — whether ChatGPT, Copilot, or internal LLM-powered systems — the compliance window is now. This checklist covers the obligations most relevant to companies using third-party AI systems internally.
Who This Applies To
The EU AI Act applies to any organisation that deploys AI systems in the EU, regardless of where the organisation or the AI provider is headquartered. "Deployment" includes using AI tools internally — not just selling AI products to customers. If your employees in Frankfurt use ChatGPT to draft customer emails, your organisation is a "deployer" under the Act.
Key distinction
The Act distinguishes between providers (who develop AI systems) and deployers (who use AI systems in their business). Most enterprises are deployers. Obligations differ — deployers have fewer technical requirements but significant governance and documentation duties.
Risk Classification: Where Do Your AI Use Cases Fall?
The EU AI Act uses a risk-tiered framework. Before completing any compliance checklist, classify each AI use case in your organisation:
Unacceptable Risk — Prohibited
- Social scoring systems by public authorities
- Real-time biometric surveillance in public spaces (with narrow exceptions)
- AI that exploits psychological weaknesses to manipulate behaviour
- Predictive policing based solely on profiling
High Risk — Strict Obligations
- AI used in hiring and employment decisions
- AI used to evaluate creditworthiness
- AI in education (exam grading, student assessment)
- AI for access to essential services
Limited Risk — Transparency Requirements
- Chatbots (must disclose they are AI)
- AI-generated content (must be labelled)
- Emotion recognition systems
- Deepfake generation
Minimal Risk — No Mandatory Obligations
- AI spam filters
- AI-assisted code completion (Copilot, Cursor)
- Internal productivity tools
- Most general-purpose LLM usage for drafting / summarisation
Most enterprises using ChatGPT or Copilot for productivity purposes fall into the Minimal Risk tier. However, any use case involving HR decisions, financial assessments, or customer scoring requires a High Risk compliance programme.
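A practical first step is an inventory that maps each internal AI use case to a risk tier, so the high-risk ones surface early. The sketch below is illustrative only — the use-case names and tier assignments are hypothetical examples, not a legal determination; actual classification is governed by the Act's annexes.

```python
# Minimal sketch of an AI use-case inventory keyed to the Act's risk tiers.
# Tier assignments here are illustrative, not a legal determination.
USE_CASE_INVENTORY = {
    "cv_screening_assistant": "high",       # employment decisions
    "credit_scoring_model": "high",         # creditworthiness evaluation
    "customer_support_chatbot": "limited",  # must disclose it is AI
    "email_drafting_llm": "minimal",        # general productivity use
}

def high_risk_use_cases(inventory):
    """Return the use cases that trigger the high-risk compliance programme."""
    return sorted(name for name, tier in inventory.items() if tier == "high")

print(high_risk_use_cases(USE_CASE_INVENTORY))
# prints ['credit_scoring_model', 'cv_screening_assistant']
```

Keeping the inventory in version control gives you a dated record of classification decisions, which doubles as audit evidence.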
The Compliance Checklist for AI Deployers
Governance and Policy
Data Protection and Privacy
Technical Controls
Human Oversight (High-Risk Systems Only)
Timeline: Key Dates for Enterprise Compliance
| Date | Obligation |
|---|---|
| Feb 2025 | Prohibited AI practices banned (Article 5) |
| Aug 2025 | GPAI (General Purpose AI) model obligations apply — transparency, copyright summaries |
| Aug 2026 | High-risk AI system obligations in full effect |
| Aug 2027 | Extended deadline for certain existing high-risk AI systems |
The Technical Control Most Organisations Are Missing
The most commonly overlooked EU AI Act obligation is Article 26(1): deployers of high-risk AI systems must implement appropriate technical and organisational measures to ensure those systems are used in accordance with their instructions for use.
For generative AI tools, this means actively preventing employees from submitting prohibited categories of data — personal data without a legal basis, biometric data, data from children — to AI APIs. "We told employees not to" is not a technical measure. A browser-level and desktop-level DLP agent that intercepts and redacts prohibited data categories before they reach AI endpoints is one defensible implementation.
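To make the interception idea concrete, here is a minimal sketch of a pre-submission redaction filter. The detection patterns are deliberately simplified and hypothetical — a production DLP layer would use maintained detectors (NER models, validated per-category patterns) rather than two regexes.

```python
import re

# Illustrative redaction filter applied before a prompt leaves the network.
# Patterns are simplified examples, not production-grade detectors.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(prompt: str) -> str:
    """Replace detected data categories with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

text = "Contact hans.mueller@example.de, account DE89370400440532013000."
print(redact(text))
# prints Contact [REDACTED-EMAIL], account [REDACTED-IBAN].
```

The key design point is where this runs: at the browser or desktop layer, before the request reaches the AI endpoint, so redaction cannot be bypassed by policy alone.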
AI-Guardian provides exactly this layer, with a compliance dashboard that generates the evidence documentation needed for regulatory audits. If your organisation is working toward EU AI Act readiness, contact us for a compliance gap assessment.