TL;DR: The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force on 1 August 2024 and uses a risk-based approach: the higher the risk your AI system poses to health, safety, or fundamental rights, the stricter the obligations. High-risk AI provisions apply from 2 August 2026. Non-compliance penalties reach up to EUR 35 million or 7% of global annual turnover, whichever is higher.
What Is the EU AI Act?
The EU AI Act is the European Union's landmark regulation governing the development, deployment, and use of artificial intelligence systems. Formally published as Regulation (EU) 2024/1689, it represents the first binding, horizontal legal framework for AI anywhere in the world.
Unlike voluntary guidelines, it establishes enforceable obligations across all industries. It applies regardless of where the provider is established, as long as the AI system affects individuals within the EU. The regulation entered into force on 1 August 2024, with high-risk provisions taking effect on 2 August 2026.
Why Was the EU AI Act Created?
The EU AI Act was driven by four key objectives:
- Protecting fundamental rights – AI systems increasingly make or influence decisions on loans, hiring, public services, and law enforcement. The regulation ensures these decisions respect human dignity, non-discrimination, and due process.
- Addressing safety concerns – AI deployed in critical infrastructure, medical devices, and transport systems can pose direct risks to health and safety. The Act imposes strict requirements on such systems.
- Creating legal certainty for businesses – Instead of a patchwork of national rules, companies benefit from a single harmonised framework across all 27 EU Member States.
- The “Brussels Effect” – Like the GDPR before it, the EU AI Act is designed to set a global standard. Companies operating internationally will likely adopt its requirements as their baseline.
Who Does the EU AI Act Apply To?
The regulation defines several roles with distinct obligations:
- Providers (Art. 3(3)) – Those who develop or place an AI system on the market. Providers bear the heaviest obligations, including conformity assessment (Art. 43) and technical documentation (Art. 11).
- Deployers (Art. 3(4)) – Those who use an AI system under their own authority. Deployers of high-risk systems must use them in accordance with the provider's instructions and assign human oversight to competent persons (Art. 26); certain deployers, such as public bodies and providers of essential private services, must also conduct a Fundamental Rights Impact Assessment (Art. 27).
- Importers and Distributors – Entities that bring AI systems into the EU market or make them available have verification duties to ensure the provider has met its obligations.
Territorial scope (Art. 2): The regulation applies to non-EU companies if the output of their AI system is used within the EU. This extraterritorial reach mirrors the GDPR's approach.
The Risk-Based Framework
The EU AI Act classifies AI systems into four risk tiers, each with different regulatory requirements:
Unacceptable Risk (Art. 5) – Prohibited
Certain AI practices are banned outright because they pose an unacceptable threat to fundamental rights:
- Social scoring that leads to detrimental or unjustified treatment (the ban covers public and private actors alike)
- Subliminal manipulation or deceptive techniques
- Exploitation of vulnerabilities due to age, disability, or a specific social or economic situation
- Real-time remote biometric identification in publicly accessible spaces (with narrow exceptions for law enforcement)
- Biometric categorisation by sensitive attributes (race, political opinions, sexual orientation)
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Emotion recognition in workplaces and educational institutions
These prohibitions have been enforceable since 2 February 2025.
High Risk (Art. 6) – Comprehensive Obligations
High-risk AI systems are subject to the most comprehensive set of requirements. They include AI used in:
- Critical infrastructure (energy, transport, water supply)
- Education and vocational training (admissions, assessments)
- Employment (recruitment, promotion, termination decisions)
- Essential private and public services (credit scoring, insurance, social benefits)
- Law enforcement (risk assessments, predictive policing)
- Migration and border control (visa processing, asylum applications)
- Administration of justice and democratic processes
Limited Risk (Art. 50) – Transparency Only
AI systems that interact with people or generate content must meet transparency obligations:
- Chatbots must disclose that the user is interacting with an AI system (see the sketch after this list)
- Deepfakes and AI-generated content must be clearly labelled
- Emotion recognition systems must inform the affected person
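The chatbot rule is straightforward to honour in practice. Below is a minimal sketch, assuming a hypothetical chat wrapper of our own naming; the Act requires the disclosure itself but does not prescribe its wording or placement.

```python
# Minimal sketch of the Art. 50 chatbot disclosure duty. The Act requires
# informing users that they are talking to an AI system; the notice text
# and the function below are illustrative, not prescribed by the Act.

AI_DISCLOSURE = "You are chatting with an AI system, not a human."

def reply_with_disclosure(bot_answer: str, is_first_turn: bool) -> str:
    """Prepend the AI disclosure to the first message of a conversation."""
    if is_first_turn:
        return f"{AI_DISCLOSURE}\n\n{bot_answer}"
    return bot_answer

print(reply_with_disclosure("Hello! How can I help?", is_first_turn=True))
```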
Minimal Risk – No Obligations
The majority of AI systems (spam filters, AI-powered video games, inventory management) fall into this category and face no additional requirements under the Act beyond the general AI literacy duty (Art. 4). Providers may voluntarily adhere to codes of conduct (Art. 95).
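To recap the four tiers, here is an illustrative Python sketch that maps the example use cases from this section to their tiers. The mapping mirrors this article's examples only; a real classification requires legal analysis of Art. 5, Art. 6, and Annex III.

```python
# Illustrative mapping of the Act's four risk tiers to the example use
# cases discussed above. Tier assignments follow this article's examples;
# they are not a substitute for a legal assessment.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "comprehensive obligations (Art. 6, Annex III)"
    LIMITED = "transparency obligations (Art. 50)"
    MINIMAL = "no additional obligations"

EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def lookup_tier(use_case: str) -> RiskTier:
    """Look up the illustrative tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case.lower()]

print(lookup_tier("Credit scoring"))  # RiskTier.HIGH
```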
High-Risk AI Systems: What You Need
If your AI system qualifies as high-risk, you must comply with a comprehensive set of requirements:
- Fundamental Rights Impact Assessment (FRIA) (Art. 27) – Obligation for certain deployers (notably public bodies and providers of essential private services) to assess the impact on affected persons' fundamental rights before deployment.
- Technical Documentation (Art. 11 + Annex IV) – Provider obligation to document the system's design, development, testing, and validation comprehensively.
- Transparency Notice (Art. 13) – Clear instructions for deployers on how the system works, its capabilities, and its limitations.
- Conformity Assessment (Art. 43) – Formal procedure to verify that the system meets all applicable requirements before market placement.
- Risk Management System (Art. 9) – A continuous, iterative process to identify, analyse, and mitigate risks throughout the AI system's lifecycle.
- Data Governance (Art. 10) – Requirements for training, validation, and testing data sets, including measures to detect and address bias.
- Automatic Logging (Art. 12) – The system must automatically record events to ensure traceability of its functioning (a minimal logging sketch follows this list).
- Human Oversight (Art. 14) – Measures that allow natural persons to effectively oversee the AI system's operation and intervene when necessary.
- Accuracy, Robustness & Cybersecurity (Art. 15) – The system must achieve appropriate levels of accuracy, be resilient to errors, and be protected against security threats.
- EU Database Registration (Art. 49) – High-risk AI systems must be registered in the EU public database before being placed on the market.
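As promised above, here is a minimal logging sketch for the Art. 12 traceability requirement: structured, timestamped records of each inference that can be audited later. The event schema and names are our own illustration; the Act mandates automatic event recording but not this particular format.

```python
# Sketch of Art. 12-style automatic logging: structured, timestamped
# records of each inference so the system's behaviour can be traced.
# The event schema below is illustrative, not mandated by the Act.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def log_inference(system_id: str, input_ref: str, output_ref: str) -> None:
    """Record one inference event as a JSON line for later traceability."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,    # reference to stored input, not raw data
        "output_ref": output_ref,  # reference to the stored output/decision
    }
    audit_log.info(json.dumps(event))

log_inference("credit-scoring-v2", "req-1042", "decision-1042")
```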
Key Deadlines
| Date | Milestone | What Applies |
|---|---|---|
| 1 August 2024 | Entry into force | Regulation becomes legally binding |
| 2 February 2025 | Prohibitions & AI literacy | Banned AI practices (Art. 5) and AI literacy duties (Art. 4) enforceable |
| 2 August 2025 | GPAI rules | General-purpose AI model obligations and governance provisions apply |
| 2 August 2026 | High-risk obligations | Full high-risk AI requirements enforceable |
| 2 August 2027 | Annex I products | AI systems embedded in regulated products (machinery, medical devices, etc.) |
The 2 August 2026 deadline is the most critical for most companies, as it triggers the full range of high-risk obligations including FRIA, technical documentation, and conformity assessments.
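If you want to check programmatically which obligations already apply on a given date, a small sketch using the dates from the table might look like this (the milestone labels are ours):

```python
# Report which of the Act's milestones already apply on a given date,
# using the application dates from the table above.

from datetime import date

MILESTONES = [
    (date(2024, 8, 1), "Entry into force"),
    (date(2025, 2, 2), "Prohibitions and AI literacy apply"),
    (date(2025, 8, 2), "GPAI rules apply"),
    (date(2026, 8, 2), "High-risk obligations apply"),
    (date(2027, 8, 2), "Annex I product rules apply"),
]

def applicable_milestones(on: date) -> list[str]:
    """Return every milestone whose application date has passed."""
    return [label for deadline, label in MILESTONES if on >= deadline]

print(applicable_milestones(date(2026, 9, 1)))
```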
Penalties for Non-Compliance
The EU AI Act establishes three penalty tiers under Art. 99. In each tier, the ceiling is the fixed amount or the percentage of total worldwide annual turnover, whichever is higher (for SMEs, whichever is lower):
| Tier | Maximum Fine | % of Turnover | Applies To |
|---|---|---|---|
| Tier 1 | €35 million | 7% | Prohibited AI practices (Art. 5) |
| Tier 2 | €15 million | 3% | High-risk non-compliance (Art. 8–15, 27, 43, 49) |
| Tier 3 | €7.5 million | 1% | Supply of incorrect, incomplete, or misleading information to authorities |
National competent authorities enforce the regulation in each Member State. At EU level, the European AI Office oversees general-purpose AI models (Art. 64–68).
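The fine arithmetic is easy to get wrong, so here is a worked sketch of the Art. 99 caps. The tier names and function are our own; the "whichever is higher" rule (and "whichever is lower" for SMEs under Art. 99(6)) comes from the regulation.

```python
# Worked example of the Art. 99 fine ceilings: for each tier the cap is
# the higher of the fixed amount and the turnover percentage; for SMEs,
# Art. 99(6) applies the lower of the two instead.

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),     # Art. 99(3)
    "high_risk_noncompliance": (15_000_000, 0.03),  # Art. 99(4)
    "incorrect_information": (7_500_000, 0.01),     # Art. 99(5)
}

def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the fine ceiling for a penalty tier and company size."""
    fixed_cap, pct = TIERS[tier]
    turnover_cap = annual_turnover_eur * pct
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A company with EUR 1 billion turnover: 7% (EUR 70m) exceeds EUR 35m.
print(f"{max_fine('prohibited_practices', 1_000_000_000):,.0f}")  # 70,000,000
```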
How to Get Started
Compliance with the EU AI Act is a structured process. Here are the five key steps:
- Classify your AI system – Use our free risk classifier to determine whether your system falls under the high-risk category. This takes less than 2 minutes.
- Identify your role – Are you a provider or deployer? Your obligations differ significantly based on your role in the AI value chain.
- Conduct a gap analysis – Compare your current practices against the requirements of Art. 8–15 and Art. 27. Identify where documentation, processes, or technical measures are missing (a simple checklist sketch follows this list).
- Prepare your documentation – Generate compliance document drafts including FRIA, Technical Documentation, and Transparency Notices tailored to your specific AI system.
- Establish ongoing compliance – The EU AI Act requires continuous risk management (Art. 9). Set up regular reviews, monitoring, and update procedures.
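For the gap analysis in step 3, even a plain checklist keyed to the articles above goes a long way. A minimal sketch, with placeholder statuses:

```python
# Simple gap-analysis checklist keyed to the high-risk requirements
# discussed in this article. The statuses are illustrative placeholders.

REQUIREMENTS = {
    "Art. 9 risk management system": "in place",
    "Art. 10 data governance": "partial",
    "Art. 11 technical documentation": "missing",
    "Art. 12 automatic logging": "in place",
    "Art. 13 instructions for deployers": "partial",
    "Art. 14 human oversight": "in place",
    "Art. 15 accuracy, robustness, cybersecurity": "partial",
    "Art. 27 FRIA (if applicable)": "missing",
    "Art. 43 conformity assessment": "missing",
    "Art. 49 EU database registration": "missing",
}

gaps = [req for req, status in REQUIREMENTS.items() if status != "in place"]
print(f"{len(gaps)} open gaps:")
for req in gaps:
    print(f"  - {req} ({REQUIREMENTS[req]})")
```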
Frequently Asked Questions
When does the EU AI Act take effect?
The EU AI Act entered into force on 1 August 2024 with a staggered timeline: prohibitions and AI literacy duties since February 2025, GPAI rules from August 2025, high-risk obligations from 2 August 2026, and Annex I product rules from August 2027.
Does it apply outside the EU?
Yes. Under Art. 2, the regulation has extraterritorial scope. It applies to any provider or deployer, regardless of where they are established, if their AI system's output is used within the EU.
What is a high-risk AI system?
High-risk AI systems are defined in Art. 6 and listed in Annex III. They include AI used in critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice.
What documents do I need?
For high-risk AI systems, you typically need a Fundamental Rights Impact Assessment (FRIA), Technical Documentation, a Transparency Notice, and a conformity assessment. The exact requirements depend on whether you are a provider or deployer.
What happens if I don't comply?
Penalties can reach up to €35 million or 7% of global annual turnover for prohibited practices. For high-risk non-compliance, fines go up to €15 million or 3%. Authorities can also order market withdrawal of non-compliant AI systems.