
EU AI Act Glossary

Key terms of the EU AI Act (Regulation (EU) 2024/1689) – from Annex III to transparency notice. With article references.

A

Annex I

List of EU harmonisation legislation (e.g. MDR, Machinery Regulation). AI systems that are products, or safety components of products, covered by this legislation are classified as high-risk. Transition deadline: August 2027.

Articles: Art. 6(1)

Annex III

List of eight high-risk application areas: biometrics, critical infrastructure, education and vocational training, employment, access to essential services, law enforcement, migration and border control, and administration of justice and democratic processes.

Articles: Art. 6(2)

Annex IV

Sets out the mandatory content of the technical documentation for high-risk AI systems.

Articles: Art. 11

AI Office

European AI Office within the European Commission, responsible for the oversight of GPAI models and the coordination of EU AI Act enforcement.

Articles: Art. 64

AI literacy

Obligation to ensure that staff operating or overseeing AI systems possess sufficient AI competence. Applies to providers and deployers of AI systems.

Articles: Art. 4

AI system

A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

Articles: Art. 3(1)

C

CE marking

Conformity marking confirming that a high-risk AI system meets all EU AI Act requirements. Must be affixed before placing the system on the market.

Articles: Art. 48

Conformity assessment

Procedure to demonstrate that a high-risk AI system meets all requirements of the EU AI Act. May be conducted internally or by a notified body.

Articles: Art. 43

D

Deployer

A natural or legal person that uses an AI system under its own authority – except for personal, non-professional use.

Articles: Art. 3(4), Art. 26

Data governance

Requirements for training, validation and testing datasets: relevance, representativeness, accuracy and completeness. Measures for the detection and correction of bias.

Articles: Art. 10

E

EU database for high-risk AI

Publicly accessible EU database in which providers – and certain deployers, such as public authorities – must register high-risk AI systems before placing on the market or putting into service, respectively.

Articles: Art. 49, Art. 71

F

FRIA (Fundamental Rights Impact Assessment)

Assessment of the impact of a high-risk AI system on the fundamental rights of affected persons. Mandatory for certain deployers before putting the system into service.

Articles: Art. 27

G

GPAI (General-Purpose AI)

AI models with general-purpose capabilities (e.g. large language models) that can be used for a variety of tasks. Subject to specific transparency obligations.

Articles: Art. 3(63), Art. 51–56

GPAI with systemic risk

GPAI models with high-impact capabilities, presumed when cumulative training compute exceeds 10^25 FLOP. Trigger additional obligations: model evaluation, adversarial testing (red teaming) and serious-incident reporting.

Articles: Art. 51(2), Art. 55
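The 10^25 FLOP presumption is a threshold on cumulative training compute. As a rough illustration only: the 6·N·D estimate below is a common scaling heuristic, not part of the Act, and the function names are invented for this sketch.

```python
SYSTEMIC_RISK_FLOP = 1e25  # presumption threshold, Art. 51(2)

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    # ~6 FLOP per parameter per training token -- a common heuristic,
    # not a method prescribed by the Act.
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flop(n_params, n_tokens) >= SYSTEMIC_RISK_FLOP

# 1e12 parameters trained on 1e13 tokens: 6e25 FLOP, above the threshold.
print(presumed_systemic_risk(1e12, 1e13))  # True
```

In practice, providers must track actual cumulative compute rather than rely on such estimates.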

H

High-risk AI system

An AI system used in one of the areas listed in Annex III that poses significant risks to health, safety or fundamental rights. Subject to comprehensive compliance obligations.

Articles: Art. 6, Annex III

Human oversight

Measures ensuring that an AI system can be effectively overseen by natural persons. Encompasses human-in-the-loop, human-on-the-loop and human-in-command approaches.

Articles: Art. 14

M

Market surveillance

Official monitoring of whether AI systems on the market comply with EU AI Act requirements. Carried out by national market surveillance authorities.

Articles: Art. 74–84

Minimal risk

AI systems with no specific regulation under the EU AI Act. Only voluntary codes of conduct apply. Examples: spam filters, AI in video games.

Articles: Art. 95

P

Provider

A natural or legal person that develops or commissions the development of an AI system and places it on the market or puts it into service under its own name or trademark.

Articles: Art. 3(3), Art. 16

Placing on the market

The first making available of an AI system on the Union market. Triggers the provider's compliance obligations.

Articles: Art. 3(9)

Penalties

Fines for non-compliance: up to EUR 35 million or 7% of worldwide annual turnover (prohibited practices), up to EUR 15 million or 3% (most other obligations), up to EUR 7.5 million or 1% (supply of incorrect information) – whichever is higher; for SMEs and start-ups, whichever is lower.

Articles: Art. 99
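The caps operate as a "whichever is higher" rule for most undertakings and "whichever is lower" for SMEs and start-ups (Art. 99(6)). A minimal sketch for the two largest tiers; the function and tier names are invented for illustration:

```python
def max_fine_eur(tier: str, worldwide_turnover_eur: float, sme: bool = False) -> float:
    """Illustrative fine ceiling per Art. 99: a fixed amount or a percentage
    of worldwide annual turnover -- the higher of the two, unless the
    undertaking is an SME/start-up, in which case the lower applies."""
    tiers = {
        "prohibited_practices": (35_000_000, 0.07),   # Art. 99(3)
        "high_risk_obligations": (15_000_000, 0.03),  # Art. 99(4)
    }
    fixed_cap, pct = tiers[tier]
    turnover_cap = pct * worldwide_turnover_eur
    return min(fixed_cap, turnover_cap) if sme else max(fixed_cap, turnover_cap)

# Large undertaking, EUR 1 bn turnover, prohibited practice:
# max(EUR 35 m, 7% of 1 bn = EUR 70 m) -> EUR 70 m ceiling
print(max_fine_eur("prohibited_practices", 1_000_000_000))  # 70000000.0
```

Note these are ceilings; actual fines are set by national authorities case by case.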

Prohibited AI practices

Absolutely prohibited AI applications: social scoring, subliminal manipulation, exploitation of vulnerabilities, untargeted scraping of facial images, and emotion recognition in the workplace and in education (with medical and safety exceptions).

Articles: Art. 5

R

Remote biometric identification

Identification of natural persons at a distance through biometric data (e.g. facial recognition). Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is prohibited, subject to narrowly defined exceptions.

Articles: Art. 5(1)(h), Annex III(1)

Risk management system

Continuous iterative process throughout the entire lifecycle of a high-risk AI system: risk identification, analysis, evaluation and mitigation.

Articles: Art. 9

Regulatory sandbox

Controlled framework established by national authorities to enable the development and testing of innovative AI systems under regulatory supervision.

Articles: Art. 57–63

S

Social scoring

Evaluation or classification of natural persons based on their social behaviour or personal characteristics, leading to detrimental or unfavourable treatment. A prohibited AI practice under Art. 5, for public and private actors alike.

Articles: Art. 5(1)(c)

T

Technical documentation

Comprehensive documentation of a high-risk AI system in accordance with the mandatory content of Annex IV. Must be prepared before placing on the market and kept up to date.

Articles: Art. 11, Annex IV

Transparency notice

Mandatory information for users and affected persons about the use of an AI system: purpose, functionality, limitations and contact details.

Articles: Art. 13 (providers), Art. 26 (deployers), Art. 50 (certain AI systems)

Transparency obligation (Art. 50)

Obligation for providers and deployers of certain AI systems (e.g. chatbots, deepfakes, emotion recognition) to inform persons that they are interacting with, or exposed to, AI.

Articles: Art. 50

V

Vulnerable groups

Particularly vulnerable groups of persons (minors, elderly persons, persons with disabilities) that must be given special consideration in the risk analysis of AI systems.

Articles: Art. 5(1)(b), Art. 27(3)(c)
