What is a FRIA?
The Fundamental Rights Impact Assessment (FRIA) is an assessment required under Art. 27 of Regulation (EU) 2024/1689 (EU AI Act) for certain deployers of high-risk AI systems – bodies governed by public law, private providers of public services, and deployers of specific Annex III systems.
The FRIA assesses the impact of an AI system on the fundamental rights of affected persons – in particular non-discrimination, data protection, human dignity and procedural rights under the EU Charter of Fundamental Rights.
Who must create a FRIA?
Under Art. 27(1) EU AI Act, the following deployers must carry out a FRIA prior to first deploying a high-risk AI system (systems in the critical-infrastructure area of Annex III point 2 are exempt):
- Bodies governed by public law – public authorities, administrations, government agencies
- Private deployers providing public services – e.g. in healthcare, education, infrastructure
- Deployers of AI systems under Annex III point 5(b) – creditworthiness assessment and credit scoring of natural persons (except fraud detection)
- Deployers of AI systems under Annex III point 5(c) – risk assessment and pricing for life and health insurance
Mandatory contents of a FRIA per Art. 27
Art. 27(1)(a)–(f) EU AI Act defines the minimum contents; a data-structure sketch follows the list:
- Description of the deployer's processes – How is the AI system embedded in existing processes?
- Period and frequency of use – How often and for how long is the system used?
- Categories of affected persons – Who is directly or indirectly affected? Special attention to vulnerable groups.
- Specific harm risks – What concrete risks exist for the fundamental rights of those affected?
- Description of human oversight – Who monitors the system? What intervention options exist?
- Risk mitigation measures – Technical and organisational measures for identified risks
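If you maintain FRIAs for several systems, it helps to capture these six elements in a structured record. A minimal Python sketch – the field names are our own shorthand, not official terminology:

```python
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    """One FRIA per high-risk AI system, mirroring Art. 27(1)(a)-(f) EU AI Act."""
    deployer_processes: str         # (a) how the system is embedded in existing processes
    period_and_frequency: str       # (b) intended period and frequency of use
    affected_categories: list[str]  # (c) categories of affected persons and groups
    specific_harm_risks: list[str]  # (d) specific risks of harm to those categories
    human_oversight: str            # (e) implementation of human oversight measures
    mitigation_measures: list[str] = field(default_factory=list)  # (f) measures if risks materialise
```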
Step by step: Creating a FRIA
Step 1: Identify and classify the AI system
Describe your AI system precisely: What purpose does it serve? In which area of application is it used? Which Annex III category does it fall under?
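To keep this classification traceable, the eight Annex III areas can be enumerated explicitly. A sketch with abbreviated labels (the numbering follows Annex III):

```python
from enum import Enum

class AnnexIIIArea(Enum):
    """High-risk areas of Annex III EU AI Act (labels abbreviated)."""
    BIOMETRICS = 1                # biometric identification and categorisation
    CRITICAL_INFRASTRUCTURE = 2   # exempt from the FRIA duty under Art. 27(1)
    EDUCATION = 3                 # education and vocational training
    EMPLOYMENT = 4                # employment, workers management, self-employment
    ESSENTIAL_SERVICES = 5        # incl. 5(b) credit scoring, 5(c) life/health insurance
    LAW_ENFORCEMENT = 6
    MIGRATION_ASYLUM_BORDER = 7
    JUSTICE_DEMOCRACY = 8         # administration of justice and democratic processes
```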
Step 2: Identify affected fundamental rights
Systematically examine which fundamental rights of the EU Charter of Fundamental Rights could be affected:
- Art. 1 – Human dignity: Is the dignity of affected persons preserved?
- Art. 8 – Data protection: Is personal data being processed?
- Art. 21 – Non-discrimination: Is there a risk of discrimination based on gender, ethnicity, age, disability?
- Art. 47 – Right to an effective remedy: Can affected persons contest decisions?
Additionally, consult the relevant EU anti-discrimination directives: Directive 2000/43/EC (Racial Equality Directive), Directive 2000/78/EC (Employment Equality Framework Directive) and Directive 2006/54/EC (recast Gender Equality Directive on equal treatment in employment and occupation).
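To make this screening repeatable across systems, the Charter articles can be kept as a checklist. A minimal sketch – the questions simply condense the list above:

```python
CHARTER_SCREENING = {
    "Art. 1 (human dignity)": "Is the dignity of affected persons preserved?",
    "Art. 8 (data protection)": "Is personal data being processed?",
    "Art. 21 (non-discrimination)": "Risk of discrimination by gender, ethnicity, age, disability?",
    "Art. 47 (effective remedy)": "Can affected persons contest decisions?",
}

def flagged_articles(answers: dict[str, bool]) -> list[str]:
    """Return the Charter articles answered 'yes', i.e. those needing deeper analysis."""
    return [article for article, at_risk in answers.items() if at_risk]
```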
Step 3: Analyse affected groups of persons
Identify all affected groups of persons. Art. 27(1)(c) requires identifying the categories of persons and groups likely to be affected – pay special attention to vulnerable groups:
- Minors and elderly persons
- Persons with disabilities
- Ethnic and religious minorities
- Persons in economically or socially disadvantaged situations
Estimate the number of persons affected annually – this figure is relevant for the risk assessment and for the notification to the market surveillance authority.
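A back-of-the-envelope example with purely hypothetical figures for a credit-scoring system:

```python
# Hypothetical figures for a credit-scoring deployment (not real data):
applications_per_working_day = 120
working_days_per_year = 250
directly_affected = applications_per_working_day * working_days_per_year  # 30,000 applicants

co_applicant_share = 0.3  # assumed share of applications with a co-applicant
indirectly_affected = int(directly_affected * co_applicant_share)         # 9,000 co-applicants

print(f"Estimated persons affected annually: {directly_affected + indirectly_affected:,}")
# Estimated persons affected annually: 39,000
```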
Step 4: Conduct the risk assessment
For each identified fundamental right, assess the following dimensions (a scoring sketch follows the table):
| Dimension | Description |
|---|---|
| Likelihood | How likely is a fundamental rights violation? (Low / Medium / High) |
| Severity | How serious would the impact be? (Low / Medium / Severe) |
| Reversibility | Can the effects be reversed? |
| Scope of affected persons | How many persons are potentially affected? |
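Art. 27 does not prescribe a scoring scheme, so the following sketch is only one plausible way to combine the four dimensions into a single rating – the weights and thresholds are assumptions:

```python
from dataclasses import dataclass

SCALE = {"low": 1, "medium": 2, "high": 3, "severe": 3}  # "severe" maps to the top level

@dataclass
class RightRisk:
    charter_article: str
    likelihood: str        # low / medium / high
    severity: str          # low / medium / severe
    reversible: bool
    persons_affected: int

    def score(self) -> int:
        """Assumed scheme: likelihood x severity, +1 if irreversible, +1 if large-scale."""
        base = SCALE[self.likelihood] * SCALE[self.severity]
        return base + (not self.reversible) + (self.persons_affected > 10_000)

risk = RightRisk("Art. 21 (non-discrimination)", "medium", "severe",
                 reversible=False, persons_affected=39_000)
print(risk.score())  # 2 * 3 + 1 + 1 = 8 (maximum is 11)
```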
Step 5: Define risk mitigation measures
Describe concrete measures for each identified risk:
- Technical measures: bias testing, fairness metrics, data quality checks (see the sketch after this list)
- Organisational measures: training, internal policies, complaints procedures
- Human oversight: human-in-the-loop, human-on-the-loop, override mechanisms
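As an example of one such technical measure, the widely used four-fifths rule compares positive-outcome rates between demographic groups. A minimal sketch with hypothetical decisions – the 0.8 threshold is a common heuristic, not a legal threshold under the EU AI Act:

```python
def selection_rate(decisions: list[bool]) -> float:
    """Share of positive decisions (e.g. credit approvals) in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower to the higher selection rate; values below 0.8
    (the 'four-fifths rule') are a common trigger for a bias review."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical approval decisions for two demographic groups:
ratio = disparate_impact_ratio(
    [True] * 62 + [False] * 38,  # group A: 62% approval rate
    [True] * 44 + [False] * 56,  # group B: 44% approval rate
)
print(f"{ratio:.2f}")  # 0.71 -> below 0.8, flag for review
```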
Step 6: Establish monitoring processes
The FRIA is not a one-off document – Art. 27(2) requires deployers to update the assessment whenever any of its elements has changed or is no longer up to date.
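In practice this means tying the FRIA to a change log and re-checking on every material change. A minimal sketch – the trigger list is our own example, not taken from the Act:

```python
# Example change events that should trigger a FRIA review under Art. 27(2):
UPDATE_TRIGGERS = {
    "new_model_version",   # retraining or model swap
    "new_user_group",      # new categories of affected persons
    "new_purpose",         # intended purpose extended
    "oversight_change",    # human oversight process modified
}

def fria_update_required(logged_changes: set[str]) -> bool:
    """True if any logged change matches a trigger, so the FRIA must be revisited."""
    return bool(logged_changes & UPDATE_TRIGGERS)

print(fria_update_required({"new_model_version", "ui_redesign"}))  # True
```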
Step 7: Notification to the supervisory authority
Under Art. 27(3) EU AI Act, deployers must notify the results of the FRIA to the competent market surveillance authority, using the template to be developed by the AI Office under Art. 27(5). In Germany, this is expected to be primarily BaFin (financial sector) or the Federal Network Agency (general).
Common mistakes in the FRIA
- Too generic – The FRIA must be specifically tailored to your AI system
- Forgetting vulnerable groups – Art. 27 explicitly requires their consideration
- No quantitative estimate – Number of affected persons is missing
- Measures without assignment – Each risk needs concrete countermeasures
- One-time creation – The FRIA must be regularly updated
Create a FRIA automatically with AIvunera
AIvunera generates a customised FRIA draft based on your system details – with article references, risk assessments and flagged assumptions. The draft covers all mandatory contents per Art. 27(1) and serves as an 80% foundation for legal review.