Generative AI Amplifies Digital Society Risks Beyond Current Regulatory Capacity
A comprehensive analysis by the Rathenau Institute, co-authored by our CEO Joost Gerritsen, reveals that generative AI systems like ChatGPT and Bard are introducing unprecedented challenges that current and proposed EU digital law frameworks may be unable to address. The “Generative AI” report, commissioned by the Dutch Ministry of the Interior and Kingdom Relations, provides crucial insights for privacy professionals, supervisory authorities, and legal consultants navigating this rapidly evolving technological landscape.
Since ChatGPT’s launch in November 2022, millions worldwide have embraced generative AI, yet the technology’s impact extends far beyond productivity gains. The report identifies systemic risks to fundamental rights, democratic processes, and human autonomy that require urgent attention from data protection professionals and policymakers alike.
What Makes Generative AI Different
Unlike traditional “narrow AI” systems designed for specific tasks, generative AI (GAI) presents unique regulatory challenges due to three distinctive features:
- Advanced language capabilities: GAI systems demonstrate unprecedented proficiency in understanding and generating human language
- Multi-modal functionality: These systems work seamlessly across different formats—images, sound, video, speech, and even protein structures
- General-purpose training: A single system can perform countless tasks, from writing code to creating art to solving complex scientific problems
The report identifies four primary roles GAI systems currently fulfil: learning tools for information retrieval, production tools for content creation, complex problem solvers in scientific research, and experience creators serving as companions or imitating deceased loved ones. This versatility makes traditional regulatory approaches, designed for single-purpose technologies, increasingly inadequate.
Three Categories of Risk to Public Values
The research reveals that generative AI poses risks across three critical dimensions, each with significant implications for GDPR compliance and broader data protection frameworks:
1. Safety Risks
GAI systems can violate user privacy, exhibit biases, and provide incorrect information. Most concerning, these systems are so complex that even their developers cannot fully understand how they function, making risk prevention extremely challenging. This opacity directly conflicts with GDPR requirements for transparency and explainability in automated decision-making.
2. Human-Centric Concerns
The report raises questions about GAI’s impact on cognitive, social, and cultural development. As people increasingly interact with chatbots for emotional support, creative tasks, and learning, fundamental questions arise about human autonomy and dignity—core values protected under European fundamental rights frameworks.
3. Fairness and Justice Issues
The concentration of GAI development among a handful of tech companies raises critical questions about who benefits from these systems and who bears the burdens. Creative professionals face job displacement, workers experience changing conditions, and environmental impacts mount—all while regulatory frameworks struggle to keep pace.
Democracy Under Threat
A common thread throughout the report is GAI’s impact on democratic processes. The technology can hamper public debate, influence political decision-making, and concentrate power among tech companies, undermining democratic control over digital technology. This echoes the concerns about political microtargeting raised in the online tracking report, but with far greater sophistication and scale.
The report warns that GAI systems can generate convincing disinformation at unprecedented scale, manipulate public opinion through personalised content, and create echo chambers that fragment democratic discourse. For supervisory authorities tasked with protecting democratic values, these challenges require entirely new enforcement approaches.
The AI Act’s Limitations
While much hope rests on the upcoming AI Act, the report questions whether its abstract standards for respecting human rights will translate effectively into practice. Critical questions remain unanswered:
- When is discrimination risk reduced to an “acceptable” level, and who determines acceptability?
- How can transparency requirements apply to systems that even their developers don’t fully understand?
- Can traditional impact assessments capture the societal effects of general-purpose AI?
- How will resource-constrained supervisory authorities enforce regulations against global tech giants?
The report suggests that current and proposed policies may prove insufficient to address GAI’s impact on non-discrimination, security, disinformation, competition, and worker exploitation. This regulatory gap creates urgent challenges for compliance consultants and law firms advising clients on AI implementation.
Five Strategic Options for Policymakers
The Rathenau Institute presents five policy options for improving society’s control over generative AI:
- Create capacity to remove harmful GAI applications: Develop mechanisms for swift market intervention when systems pose unacceptable risks
- Ensure future-proof legal frameworks: Update regulations to address GAI’s unique characteristics and evolving capabilities
- Invest in international AI policy: Coordinate global efforts to steer the innovation processes of technology companies
- Draft ambitious agenda for socially responsible GAI: Establish clear principles and standards for ethical AI development
- Encourage public debate on GAI desirability: Foster democratic discussion about which AI applications society wants to permit
Implications for Legal and Compliance Professionals
For professionals working with EU digital regulations, the report highlights several critical considerations. First, traditional compliance frameworks designed for deterministic systems struggle with GAI’s probabilistic nature. Second, the speed of GAI development outpaces regulatory adaptation, creating constant compliance uncertainty. Third, the concentration of GAI capabilities among a small number of providers creates new dependencies and vulnerabilities for organisations across all sectors.
The report emphasises that GAI risks are being taken seriously worldwide, and that Dutch citizens and authorities should do the same. For privacy professionals and data protection experts, this means preparing for a fundamentally different regulatory landscape where traditional approaches to consent, transparency, and accountability may no longer suffice.
Stay Ahead of AI Regulatory Developments with Digibeetle
As this report demonstrates, generative AI is reshaping the regulatory landscape faster than frameworks can adapt. Technologies like ChatGPT, Bard, and emerging AI systems create new compliance challenges daily, with supervisory authorities scrambling to interpret how existing regulations apply to capabilities that didn’t exist when laws were drafted.
At Digibeetle, our expert-curated platform helps you navigate this rapidly evolving intersection of AI technology and regulation. Search our cross-referenced database for specific AI technologies, enforcement actions, and supervisory guidance to understand how authorities across Europe are approaching GAI challenges. Our daily updates track everything from EDPB opinions on AI processing to national authority decisions on automated decision-making.
Whether you’re a supervisory authority developing AI enforcement strategies, a law firm advising on AI Act compliance, or a business implementing generative AI systems, Digibeetle provides the regulatory intelligence you need. We track how different authorities interpret AI transparency requirements, document emerging enforcement patterns, and identify regulatory trends before they become obligations.
Ready to master the complex regulatory landscape of generative AI? Start your 30-day free trial to access comprehensive AI regulatory intelligence, or book a consultation to discuss how we can support your organisation’s AI compliance journey.