
Your Roadmap to Fundamental Rights Compliance


For privacy professionals and legal consultants navigating the expanding landscape of EU digital law, understanding how the AI Act protects fundamental rights has become essential. Our CEO Joost Gerritsen’s comprehensive webinar “AI Act – Safeguarding Fundamental Rights” provides the practical roadmap deployers need to ensure compliance while safeguarding human rights. This webinar represents an excellent starting point for your AI literacy journey, offering concrete guidance on what the AI Act requires from organisations deploying high-risk AI systems.

Understanding Fundamental Rights in the AI Context

The intersection of AI and fundamental rights extends far beyond data protection. Drawing from both the European Convention on Human Rights and the EU Charter of Fundamental Rights, the AI Act recognises multiple rights at risk from AI systems:

  • Privacy and family life – From surveillance to facial recognition technology
  • Freedom of thought – Increasingly relevant with emerging neurotechnologies
  • Freedom of expression – Impacted by content moderation algorithms
  • Non-discrimination – Perhaps the most challenged right, as AI systems inherently categorise and differentiate

As Gerritsen explains, most fundamental rights are relative, requiring careful balancing against other rights. However, freedom of thought stands as an absolute right – when infringed, no balancing test applies. This distinction becomes crucial for businesses assessing their AI systems’ impact as part of compliance.

High-Risk AI: When Fundamental Rights Protection Becomes Mandatory

Not every AI system triggers the Act’s fundamental rights obligations. The determination follows a structured analysis that every deployer must understand:

First, confirm you’re dealing with an AI system as defined by the Act. Then, check whether your system either serves as a safety component of products listed in Annex I or falls within Annex III’s predefined areas – biometrics, critical infrastructure, education, workplace management, essential services access, law enforcement, migration control, or judicial administration.

Even systems listed in Annex III might escape high-risk classification through exceptions in Article 6(3), unless they perform profiling. This nuanced approach means supervisory authorities and deployers must carefully analyse each system’s specific implementation.
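The stepwise analysis above can be sketched as a simplified decision function. This is a rough illustration of the logic flow only, not legal advice: the area names, parameter names, and boolean flags are our own shorthand, and the real Article 6 analysis involves far more nuance than a few booleans can capture.

```python
# Simplified sketch of the high-risk classification steps described above.
# All names are illustrative shorthand, not statutory terms.
from typing import Optional

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education",
    "workplace_management", "essential_services", "law_enforcement",
    "migration_control", "judicial_administration",
}

def is_high_risk(is_ai_system: bool,
                 is_annex_i_safety_component: bool,
                 annex_iii_area: Optional[str],
                 article_6_3_exception_applies: bool,
                 performs_profiling: bool) -> bool:
    """Return True if the system would likely be classified high-risk."""
    if not is_ai_system:
        return False                  # Step 1: must be an AI system at all
    if is_annex_i_safety_component:
        return True                   # Step 2a: safety component of Annex I product
    if annex_iii_area in ANNEX_III_AREAS:
        # Step 2b: Annex III area – an Article 6(3) exception may apply,
        # but never where the system performs profiling.
        if article_6_3_exception_applies and not performs_profiling:
            return False
        return True
    return False

# Example: an Annex III workplace-management tool that performs profiling
print(is_high_risk(True, False, "workplace_management", True, True))  # True
```

The example shows why the profiling carve-out matters: even where an Article 6(3) exception would otherwise apply, profiling keeps the system in the high-risk category.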

Provider Obligations: What Deployers Should Expect

The AI Act establishes clear expectations for high-risk AI providers that directly benefit deployers seeking AI Act compliance. Providers must examine training, validation, and testing datasets for biases that could affect health, safety, or fundamental rights. Article 10 specifically requires identification of biases likely to lead to prohibited discrimination.

Transparency obligations under Article 13 prove particularly valuable for deployers. Providers must deliver comprehensive instructions for use that include:

  • System capabilities and limitations
  • Known or foreseeable circumstances that may lead to fundamental rights risks
  • Appropriate risk mitigation measures
  • Training requirements for deployer personnel

The risk management system required by Article 9 creates a continuous improvement cycle. Providers must identify fundamental rights risks and implement mitigation measures, including training for deployers where necessary. Importantly, the Act acknowledges residual risks may remain but requires them to be reduced to acceptable levels.

Deployer Responsibilities: Beyond Simple Implementation

Article 26 establishes comprehensive obligations for organisations deploying high-risk AI systems. These extend well beyond following instructions for use to encompass active monitoring, reporting, and oversight responsibilities.

Monitoring and reporting obligations require deployers to watch for serious incidents – the AI Act’s equivalent of GDPR data breaches. When an incident directly or indirectly leads to fundamental rights infringements, deployers must inform providers and relevant market surveillance authorities without undue delay.

Human oversight represents a shared responsibility between providers and deployers. Deployers must assign oversight to individuals with necessary authority, competence, and support. This goes beyond appointing someone – it requires ensuring they can effectively intervene, override, or stop the AI system when needed.

For government institutions and certain private sector use cases, Article 27 mandates fundamental rights impact assessments. These assessments must describe:

  • Processes where the AI system will be used
  • Categories of affected persons and groups
  • Specific risks and potential harms
  • Human oversight implementation
  • Measures to be taken if risks materialise
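The assessment elements above can be captured as a simple structured record with a completeness check. This is a hypothetical sketch – the field names are our own shorthand for the elements listed, not the statutory wording of Article 27, and the sample entries are invented.

```python
# Hypothetical sketch: the Article 27 assessment elements as a record,
# with a helper that flags elements left empty. Field names and sample
# data are illustrative only.
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    processes: list            # processes where the AI system will be used
    affected_groups: list      # categories of affected persons and groups
    risks: list                # specific risks and potential harms
    oversight_measures: list   # how human oversight is implemented
    mitigation_measures: list  # measures to be taken if risks materialise

    def missing_elements(self) -> list:
        """Return the names of assessment elements left empty."""
        return [name for name, value in vars(self).items() if not value]

fria = FundamentalRightsImpactAssessment(
    processes=["CV pre-screening"],
    affected_groups=["job applicants"],
    risks=[],
    oversight_measures=["HR reviewer can override"],
    mitigation_measures=[],
)
print(fria.missing_elements())  # ['risks', 'mitigation_measures']
```

A completeness check of this kind is a useful internal gate before sign-off: an assessment with empty elements is not yet ready to be relied on.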

Addressing Bias: Technical and Human Challenges

The webinar provides crucial insights into two distinct bias types that legal professionals must understand. Technical bias occurs when AI systems show systematic differences in treating certain groups. The classic example involves CV screening systems that inadvertently disadvantage candidates with maternity leave gaps.
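As a rough illustration of how such technical bias might be surfaced in practice, selection rates across groups can be compared numerically. The "four-fifths" threshold below is a common fairness heuristic, not an AI Act requirement, and the outcome data is entirely invented.

```python
# Minimal sketch: compare selection rates between two applicant groups
# screened by a hypothetical CV-screening system. Data is invented;
# the 0.8 threshold is the common "four-fifths rule" heuristic.

def selection_rate(outcomes):
    """Fraction of applicants selected (1 = selected, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

# Invented outcomes for applicants without and with a career gap
no_gap   = [1, 1, 0, 1, 1, 0, 1, 1]  # 6/8 selected -> 0.75
with_gap = [1, 0, 0, 0, 1, 0, 0, 0]  # 2/8 selected -> 0.25

ratio = selection_rate(with_gap) / selection_rate(no_gap)
print(round(ratio, 2))  # 0.33 -- well below the 0.8 heuristic threshold
flagged = ratio < 0.8   # flag the system for further bias investigation
print(flagged)          # True
```

A disparity of this size would not by itself prove prohibited discrimination, but it is exactly the kind of systematic difference the dataset-examination duties described above are meant to surface.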

Automation bias presents a different challenge – the human tendency to over-rely on AI outputs. Article 14 explicitly requires oversight measures to counter this tendency, particularly for systems providing decision recommendations. Equally problematic is algorithmic aversion, where humans reject accurate AI predictions in favour of less reliable human judgment.

The Act provides limited tools for bias mitigation, including special authorisation to process sensitive data for bias detection under Article 10(5). However, recent European Parliament analysis suggests this provision may be too narrowly worded to cover all necessary bias mitigation activities, potentially creating GDPR compliance challenges.

The Fundamental Rights Authority Landscape

Article 77 introduces fundamental rights authorities – a new layer of oversight specifically for high-risk AI systems. Our research identifies over 160 such authorities across Europe, ranging from data protection authorities to anti-discrimination bodies, patient rights organisations, and labour inspectorates.

This fragmented landscape creates challenges for businesses operating across borders. Different member states have designated different authorities, with some countries appointing over 20 while others designate just a few. These authorities possess investigative powers and must be notified of serious incidents and systems presenting risks, though they cannot directly impose AI Act fines.

Practical Takeaways for Immediate Action

With high-risk AI provisions taking effect in August 2026, deployers have limited time to prepare. Gerritsen’s key recommendations provide a practical action plan:

  1. Articulate specific safeguards – Move beyond vague commitments to “non-discrimination” by defining concrete bias thresholds and mitigation strategies
  2. Verify provider alignment – Ensure shared understanding of fundamental rights requirements through detailed contractual provisions
  3. Request existing assessments – Providers may have already conducted fundamental rights impact assessments you can leverage
  4. Define incident procedures – Establish clear protocols for who does what when serious incidents or risks emerge
  5. Prepare oversight structures – Identify and empower individuals who will exercise human oversight with appropriate authority and resources

Resources for Compliance Preparation

The webinar highlights valuable tools for organisations preparing for AI Act implementation. The Public Buyers Community has developed draft contractual clauses for engaging with high-risk AI providers. The Dutch Ministry of Infrastructure provides an AI Impact Assessment template covering fundamental rights considerations.

For fundamental rights impact assessments, Utrecht University’s Data School offers a practical, battle-tested template that predates the AI Act but has been refined to align with its requirements. These resources transform abstract obligations into actionable compliance steps.

Master AI Act Compliance with Expert Guidance

The complexity revealed in this analysis – from distinguishing bias types to navigating 160+ fundamental rights authorities – demonstrates why AI literacy has become essential for legal professionals. Watch the complete webinar on Safeguarding Fundamental Rights to gain the detailed understanding needed for effective AI Act compliance.

Digibeetle provides the comprehensive European legal intelligence platform you need to navigate this evolving landscape. Our expert-curated resources don’t just track regulatory developments – they reveal how AI Act requirements intersect with GDPR obligations, court interpretations, and supervisory guidance. From daily-updated case law to cross-referenced authority decisions, we transform information overload into actionable compliance insights.

Whether you’re conducting fundamental rights assessments, negotiating with AI providers, or building oversight structures, we provide the expert knowledge required by the AI Act. Start your 30-day free trial to access our complete knowledge platform, or book a consultation to discuss your organisation’s specific AI compliance challenges.
