Biometric Surveillance Demands EU-Wide Ban

Legal analysis shows facial recognition in public spaces violates fundamental rights. Licensing system needed for verification uses.

Biometric Technology Brings Mass Surveillance Closer to European Reality

A comprehensive legal analysis published in Policy and Society, co-authored by our CEO Joost Gerritsen alongside researchers from the Rathenau Institute, reveals that biometric applications in public spaces pose such severe risks to fundamental rights that a legal ban is warranted. The article “Better protection against biometrics” provides crucial guidance for privacy professionals, supervisory authorities, and compliance consultants grappling with the rapid deployment of facial recognition, fingerprint scanning, and emerging biometric technologies.

The analysis demonstrates that despite GDPR protections, biometric data collection has expanded far beyond traditional law enforcement into swimming pools, airports, schools, and public squares. This proliferation occurs even as courts and data protection authorities repeatedly rule such applications illegal, highlighting a dangerous gap between technological deployment and regulatory enforcement.

From Fingerprints to Emotional States: The Expanding Scope

The research identifies alarming trends in biometric data collection across public sectors and spaces:

  • Distance capabilities expanding: Commercial applications can recognise faces from 15+ metres; military technology works from over 1 kilometre; infrared systems identify individuals by heartbeat from 150 metres
  • Behavioural tracking intensifying: Beyond physical features, systems now analyse walking patterns, typing behaviour, voice characteristics, and micro-expressions
  • Intimate data extraction: New applications derive health information from voices, detect mental states from facial expressions, and diagnose conditions from fingerprint irregularities
  • Function creep accelerating: Data collected for one purpose gets repurposed—security systems become marketing tools, access control becomes behaviour monitoring

Each expansion raises distinct GDPR compliance challenges, particularly regarding special category data processing, purpose limitation, and the fundamental requirement for freely given consent.

Verification vs. Identification: Critical Legal Distinctions

The article establishes crucial distinctions between biometric applications that current EU digital law struggles to address adequately:

Verification (1-to-1 comparison): Checking whether someone is who they claim to be. While potentially justified for high-security contexts like nuclear facilities or sensitive government databases, even verification raises concerns about exclusion, instrumentalisation, and privacy violations.

Identification (1-to-N comparison): Comparing individuals against entire databases to identify or exclude them. This application, particularly in public spaces, enables mass surveillance, destroys anonymity, and creates chilling effects on freedom of movement and assembly.
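The two comparison modes can be sketched in a few lines of code. The following is an illustrative Python toy example (the similarity threshold, embedding size, and function names are assumptions for demonstration, not drawn from the article): verification compares a probe against the single template of the claimed identity, while identification searches an entire database.

```python
from typing import Dict, Optional

import numpy as np

# Illustrative decision threshold; real systems tune this per deployment.
THRESHOLD = 0.6


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(probe: np.ndarray, claimed_template: np.ndarray) -> bool:
    """1-to-1: does the probe match the one template the person claims to be?"""
    return cosine_similarity(probe, claimed_template) >= THRESHOLD


def identify(probe: np.ndarray, database: Dict[str, np.ndarray]) -> Optional[str]:
    """1-to-N: search the whole database for the best match above threshold."""
    best_id: Optional[str] = None
    best_score = THRESHOLD
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

The sketch makes the legal asymmetry concrete: `verify` only ever touches one stored template, whereas `identify` must compare the probe against every person enrolled, which is precisely why 1-to-N matching in public spaces scales into mass surveillance.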

The Dutch GDPR Implementation Act permits biometric data use when “necessary for verification or security purposes”—language that dangerously opens the door to facial recognition for surveillance, despite European Data Protection Board statements that such use never meets proportionality requirements.

Current Legal Framework Failures

The analysis exposes systematic failures in legal protection:

  • Disproportionate applications proliferate: Swimming pools use facial recognition to identify “troublemakers”; municipalities deploy biometric access control where simple alternatives suffice
  • Consent becomes meaningless: Citizens cannot refuse biometric processing when accessing essential government services or public spaces
  • Enforcement remains sporadic: Despite courts in Sweden, Poland, and France ruling against school biometric systems, similar applications continue elsewhere
  • Sensitive data escapes protection: Emotion recognition and health diagnostics from biometric sources don’t automatically qualify as “special category data” under GDPR

The article cites multiple enforcement actions: employers reprimanded for mandatory fingerprint scans, schools fined for attendance tracking via facial recognition, and the UK tax authority forced to destroy millions of illegally collected voice prints. Yet deployment continues to outpace enforcement.

Technology Perfection Amplifies Rather Than Reduces Risks

Crucially, the analysis argues that even perfect biometric systems pose unacceptable risks:

Loss of anonymity: Perfect recognition eliminates the possibility of moving through public spaces without surveillance, fundamentally altering the relationship between citizens and state.

Chilling effects intensify: Knowledge of pervasive identification restricts freedom of movement, assembly, and expression—as demonstrated in Russia and Hong Kong where facial recognition identifies protesters.

Discrimination shifts forms: While imperfect systems discriminate through higher error rates for certain groups, perfect systems enable targeted discrimination—those accepting biometric scanning receive preferential treatment at airports and stadiums.

Power imbalances solidify: Governments and corporations gain unprecedented knowledge about individuals who cannot meaningfully resist or correct errors in automated systems.

The Clearview AI Warning

The article highlights Clearview AI as exemplifying the dangers of function creep. The company scraped three billion photos from social media to create a facial recognition database, enabling anyone to identify strangers for stalking, manipulation, or tracking. Data collected in one context (social sharing) gets weaponised for entirely different purposes, illustrating why strong legal protections are essential.

Two-Pronged Legal Solution Proposed

The authors propose specific legal reforms to address biometric threats:

  1. Ban on identification in public spaces: Prohibit biometric identification (1-to-N matching) in public spaces, whether by government or private entities. This aligns with positions from the Council of Europe and European Data Protection Board.
  2. Licensing requirement for verification: Require prior licensing from the Data Protection Authority for biometric verification applications. This would allow authorities to assess necessity, attach conditions, and maintain oversight of deployments.

The proposal would eliminate consent as a legal basis for biometric processing, recognising that meaningful consent is impossible when dealing with such intimate, unchangeable characteristics in contexts of structural power imbalance.

Emerging Threats: Beyond Traditional Biometrics

The article warns about “new generation” applications that escape current legal frameworks:

  • Health diagnostics: Voice analysis detecting COVID-19; fingerprint patterns indicating cancer; autism indicators derived from YouTube videos
  • Emotion recognition: Despite meta-studies showing unreliable connections between expressions and emotions, deployment continues
  • Invisible monitoring: Micro-blushes revealing heart rate; walking patterns indicating mood; typing behaviour exposing stress levels

These applications process intimate data yet may not qualify as “biometric data” or “health information” under GDPR’s narrow definitions, leaving them largely unregulated despite severe privacy implications.

International Context and Municipal Action

The analysis notes growing international momentum for restrictions. San Francisco, Boston, and Portland have banned facial recognition, and Belgium has voiced criticism of its use. Yet deployment accelerates, with Poland requiring quarantined individuals to submit selfies for facial recognition verification, and Serbia installing Chinese surveillance cameras despite its EU candidacy.

Research shows that only 6% of Dutch respondents are willing to provide facial data to private parties for identification, and just 24% would accept identification by the government. This public scepticism contrasts sharply with the pace of deployment, highlighting a democratic deficit in biometric adoption.

Navigate the Complex Biometric Regulatory Landscape with Digibeetle

As biometric technologies proliferate beyond regulatory frameworks’ ability to control them, understanding how supervisory authorities interpret and enforce existing protections becomes crucial. The gap between technological capability and legal protection creates unprecedented challenges for privacy professionals and compliance consultants.

At Digibeetle, our expert-curated platform helps you track the rapidly evolving biometric regulatory landscape. Search our cross-referenced database for specific technologies like facial recognition, emotion detection, or behavioural biometrics to instantly access court rulings, supervisory decisions, and enforcement actions across Europe. Our daily updates ensure you stay ahead of developments in how authorities apply GDPR, interpret necessity and proportionality, and address emerging biometric applications.

Whether you’re a supervisory authority developing biometric enforcement strategies, a law firm advising on compliant deployments, or a business evaluating biometric systems, Digibeetle provides essential intelligence on this critical privacy battleground. Start your 30-day free trial to explore comprehensive biometric jurisprudence and regulatory guidance, or book a consultation to discuss navigating biometric compliance in your specific context.

Try Digibeetle with your team for free

Start your discovery of data protection documents with Digibeetle.