CPDP 2025: AI Act Enforcement Reality Check

Key insights from Europe's premier privacy conference. Learn how AI Act implementation challenges and fundamental rights assessments shape compliance.

CPDP 2025: Where AI Act Theory Meets Enforcement Reality

Joost Gerritsen, CEO of Digibeetle and affiliate researcher at Utrecht University, attended the 2025 Computers, Privacy & Data Protection conference – Europe’s premier gathering for privacy professionals, supervisory authorities, and legal researchers. His participation in key panels revealed critical insights about AI Act enforcement and fundamental rights assessments.

As both a legal technology entrepreneur and academic researcher, Joost brings a unique perspective to understanding how theoretical frameworks translate into practical tools for EU digital law compliance. Here are his key takeaways from two pivotal sessions.

AI Act in Action: Beyond Innovation Rhetoric

The Data School’s panel on AI Act compliance, oversight, and enforcement revealed significant tensions in how Europe approaches AI regulation. Key insights from the distinguished speakers include:

Angela Müller (AlgorithmWatch CH) emphasised that effective oversight requires:

  • Focus on automation impacts regardless of technical terminology
  • Strategic storytelling to drive public awareness
  • Strong collaboration between journalism, civil society, and researchers to reveal AI issues

Arnika Zinke (European Parliament) highlighted concerning political dynamics:

  • Politicians frequently frame AI solely as innovation without questioning safety implications
  • Policy discussions have shifted away from fundamental societal questions
  • The EU is building AI Gigafactories for European generative AI without debating whether such systems are socially desirable
  • Current anti-NGO movements threaten evidence-based policymaking

Rob Heyman (Vrije Universiteit Brussel) warned about institutional challenges:

  • “Co-ethics washing” – organisations delegate AI ethics to DPOs without structural changes
  • AI engineers lack incentives for critical questioning
  • The AI Act inadvertently pushes researchers outside moral thinking frameworks
  • Pressure for researchers to publish freely so companies can “valorise” their work

Katja Mayer (University of Vienna) addressed the research exemption paradox:

  • The science exemption in the AI Act creates contradictions since AI is inherently scientific
  • Social science perspectives remain largely missing despite inventing key concepts (statistics, bias, behavioural profiling)
  • Open science structures offer models for transparency through trusted research environments
  • Framing safety versus regulation as opposites creates a false dichotomy

Mirko Tobias Schäfer observed that organisations are developing policies that go beyond the AI Act’s minimum standards, indicating that compliance requirements are driving more comprehensive governance frameworks than the law itself mandates.

FRAIA: Four Years of Practical Fundamental Rights Assessment

The Data School’s workshop on their Fundamental Rights and Algorithm Impact Assessment (FRAIA) tool demonstrated what practical AI governance looks like. After four years of development and testing, FRAIA stands out as one of the few operational assessment tools – possibly the only one – for deploying new technologies, including AI and algorithmic systems.

This tool represents exactly what supervisory authorities and compliance professionals need: a tested, practical framework that moves beyond theoretical discussions to actionable assessment criteria. The workshop highlighted how organisations can conduct meaningful fundamental rights assessments that satisfy both AI Act requirements and broader ethical obligations.

Implications for Legal Research and Practice

These CPDP insights reveal several critical challenges for privacy professionals, legal researchers, and supervisory authorities:

  1. The enforcement gap: While the AI Act provides a baseline, practical implementation requires tools like FRAIA and comprehensive organisational policies beyond minimum compliance
  2. Research access barriers: Academic researchers lack sufficient access to data and systems needed for meaningful AI governance research
  3. Ethics washing risks: Organisations may delegate ethical responsibilities without structural changes, creating compliance facades rather than genuine protection
  4. Innovation versus safety: The false dichotomy between technological advancement and regulatory safeguards continues to dominate political discourse

Bridging Theory and Practice in AI Governance

The CPDP 2025 discussions highlight a crucial need: connecting high-level regulatory frameworks with practical implementation tools. For university-based legal researchers, this means access not just to legislation but also to:

  • Supervisory authority interpretations and enforcement decisions
  • Practical assessment tools and methodologies
  • Cross-jurisdictional approaches to fundamental rights protection
  • Emerging case law on AI governance and algorithmic accountability

Empowering Academic Research with Comprehensive Legal Intelligence

As an affiliate researcher at Utrecht University, Joost Gerritsen understands firsthand the challenges academics face when conducting European legal research. The insights from CPDP 2025 underscore how rapidly the regulatory landscape evolves and how crucial it is to track not just legislation but also enforcement perspectives, practical tools, and emerging interpretations.

This is why we built Digibeetle specifically with researchers in mind. Our expert-curated legal database provides academics with comprehensive access to:

  • Cross-referenced AI Act enforcement decisions and guidance
  • Fundamental rights assessment methodologies from supervisory authorities
  • Youth rights cases and digital safety precedents
  • Daily updates on how concepts like “AI ethics washing” and “safety by design” are interpreted in practice

Our platform transforms the scattered insights from conferences like CPDP into searchable, interconnected knowledge that supports rigorous academic research. Whether you’re analysing AI governance frameworks, researching fundamental rights protection, or exploring youth perspectives in digital policy, Digibeetle ensures you have access to the complete regulatory picture.

Ready to enhance your research with comprehensive European legal intelligence? Start your 30-day free trial and discover why leading academic institutions trust Digibeetle for their EU digital law research. Or book a consultation to learn how we can support your specific research needs with tailored access to cross-referenced legal databases that save months of research time.

Try Digibeetle with your team for free

Start your discovery of data protection documents with Digibeetle.