Podcasts

Deepfakes: Europe’s Next Legal Frontier

CEO Joost Gerritsen reveals why deepfakes threaten democracy and how the AI Act falls short. Expert legal analysis inside.

In this compelling interview on the Legal4Tech podcast, our CEO Joost Gerritsen joins hosts Rosalia Pellegrino and Giacomo Bossa for an in-depth exploration of one of technology’s most pressing challenges: deepfakes and their regulation under European law. The conversation reveals critical gaps in the legal framework that every privacy professional, supervisory authority, and legal consultant needs to understand.

The Evolution from Forums to AI: A Legal Perspective

Gerritsen’s journey from computer enthusiast to European data protection expert mirrors the evolution of digital law itself. Starting with a transformative course on “Internet and the Law” at Utrecht University, he discovered how technology and legal frameworks intersect – a realization that shapes how we approach AI Act compliance today.

His advice for data protection professionals facing questions beyond their immediate expertise? “Just say yes, then buy yourself some time.” This approach becomes increasingly vital as EU digital law expands beyond GDPR into territories like the Data Act, NIS2 Directive, and the AI Act. The message is clear: siloed thinking no longer works when regulatory complexity demands cross-domain expertise.

Deepfakes: From Science Fiction to Legal Reality

The discussion takes a sobering turn when examining deepfakes’ real-world impact. Gerritsen shares a chilling example from an English divorce case where AI-generated voice recordings were submitted as false evidence – and this was already happening years ago. Today’s technology has become so sophisticated that even public figures struggle to distinguish real from fake.

Consider the implications:

  • Voice cloning can require as little as 30 seconds of audio
  • Fraudsters already use deepfakes to impersonate family members requesting money
  • Political deepfakes can influence elections and undermine democratic processes
  • Courts face challenges verifying evidence authenticity

As Gerritsen notes, drawing from his work with the Rathenau Institute for the European Parliament report on deepfakes: “This technology can have such a negative impact” that it represents one of the greatest threats to democracy.

The AI Act’s Disappointing Response

Despite recognizing deepfakes as a fundamental threat, the AI Act’s approach appears surprisingly weak. The regulation merely requires labeling of deepfakes – a response Gerritsen finds “disappointing” given the stakes involved.

The hosts raise an intriguing question: Could unlabeled deepfakes fall under Article 5’s prohibited practices as manipulative or deceptive AI systems? The analysis reveals the complexity of applying these provisions, with multiple conditions that must be met including “materially distorting behavior” and causing “significant harm.”

While political deepfakes might qualify as prohibited practices – especially when they influence voting decisions – the high threshold and numerous conditions create uncertainty. This gap between the threat’s magnitude and regulatory response highlights a critical challenge in AI governance.

The Ethics Gap in AI Regulation

One of the episode’s most thought-provoking segments addresses how the AI Act handles ethical considerations. Rather than providing clear guidance on fundamental rights protection, the legislation delegates crucial decisions to standardization bodies. Over 120 experts are developing harmonized standards that will determine how organizations implement safeguards against discrimination and bias.

This approach creates significant challenges for businesses navigating compliance:

  • Legal uncertainty while awaiting standards publication
  • Risk of investing in ISO norms that may differ from eventual EU standards
  • Difficulty balancing multiple fundamental rights without clear frameworks
  • The challenge of making ethical decisions without regulatory guidance

As Gerritsen observes, telling organizations to “just do your best and see what happens” regarding fundamental rights compliance creates an untenable situation for those trying to develop responsible AI systems.

160 Authorities: The Coordination Challenge

Perhaps the most striking revelation involves the sheer number of authorities involved in AI Act enforcement. Gerritsen’s research has identified over 160 fundamental rights authorities across Europe that will play roles in supervising high-risk AI systems. From data protection authorities to election committees, from non-discrimination bodies to defense organizations, the regulatory landscape becomes extraordinarily complex.

Austria alone has appointed between 20 and 30 authorities, while other countries have designated just one or two. This fragmentation raises critical questions about regulatory consistency and how organizations can navigate potentially conflicting interpretations across jurisdictions.

Education as the First Line of Defense

When asked about solutions, Gerritsen emphasizes education over prohibition. Drawing parallels with Finland’s successful approach to combating fake news through critical thinking education, he argues that teaching people to question and verify information offers more promise than blanket bans on technology.

This educational imperative extends to legal professionals themselves. The conversation highlights how software engineers designing credit assessment algorithms must now understand GDPR requirements for explainability – a perfect example of why AI literacy has become essential across disciplines.

Looking Ahead: Neurotechnology and Beyond

The episode concludes with a glimpse into the future. Just as deepfakes seemed like science fiction years ago, neurotechnology represents the next frontier in digital law. These emerging challenges underscore why maintaining current knowledge of EU digital regulation has become critical for legal professionals.

Master the Complexity of EU Digital Law

This interview on Legal4Tech illustrates the mounting challenges facing privacy professionals and legal teams. With deepfakes threatening democratic foundations, the AI Act creating new compliance obligations, and 160+ authorities interpreting regulations, the need for comprehensive legal intelligence has never been greater.

Digibeetle transforms this complexity into clarity. Our expert-curated platform doesn’t just track regulatory developments – we reveal the connections between court decisions, authority interpretations, and emerging standards. While others struggle with information overload, our hand-picked, cross-referenced legal database delivers the insights that matter for your practice.

Whether you’re grappling with AI Act implementation, monitoring authority decisions, or advising on deepfake policies, we provide the daily-updated intelligence you need. Start your 30-day free trial to experience how comprehensive legal research should work, or book a consultation to discuss your organization’s specific challenges in navigating the EU digital rulebook.


Try Digibeetle with your team for free

Start your discovery of data protection documents with Digibeetle.