Age Verification for Humans

One of the largest experiments in online age verification is now being implemented in Europe, and it has prompted me to think hard, especially about the implementation details.

The EU approach to age verification
The European Commission is working towards an EU-harmonised approach to age verification.

I've written extensively about the zero-knowledge algorithm to be used in the EUDI wallet prototype, as well as about how the privacy-preserving potential of this elegant solution is undermined by the way it is deployed.

I've just returned, in good spirits, from leading the workshop on digital identity at a focused retreat held in Berlin, which featured many excellent experts on techno-politics and the ethical concerns surrounding identity.

Cypherpunk Retreat
This retreat brings together the top builders and front-line organizers in digital privacy to align on roadmaps, drive technical coordination, and accelerate progress toward a more private and resilient internet. The goal is not just to discuss, but to ship: to create tangible advancements that can be implemented and adopted rapidly.

There are many takeaways from this meeting; I will start with the one I believe is most urgent, which emerged from a dialogue with Kyle den Hartog during our panel session titled:

Can browsers get more than passkeys for identity?

Let's first state that wiring Digital ID checks into access to online content (through web browsers) risks centralising control and reducing user agency: age-based access control could become a proxy for censorship.

Speculations aside, one mistake has already been made by age verification pilots today: confusing content moderation with guardianship.

This insight is well explored by Kyle's article "Decentralizing Age Verification with SSI" and should not be overlooked. Current age verification laws combine two separate issues: content moderation (determining what content is safe/unsafe) and guardianship (ensuring children are protected).

The two problems should be separated!
  • Content moderation → handled locally (on-device or browser-based filtering)
  • Guardianship → enforced by parents and teachers, not centralised authorities

Content Moderation via Client-Side Filtering

This would work much like Adblock or Safe Browsing lists, and could evolve into real-time, on-device classification of unsafe content; users could self-moderate or subscribe to trusted third-party lists. It preserves privacy, since filtering happens on the device, not on a server.

Guardianship via Device/OS-Level Controls

Schools and parents can enforce filtering policies at the operating system level. Browsers and apps block unsafe content by default, but guardians can override that or log access. Guardianship becomes flexible, reflecting individual family or school values, and guardians (parents, teachers) can issue temporary digital credentials to grant access.

For example: a parent approves a child’s request to access a blocked site → the OS/browser verifies the parent’s credentials. This avoids reliance on a few centralised issuers and keeps control distributed.

The elephant in the room

Much of the research necessary to firmly ground DG CNECT's pilot in human values is lacking. In this case, it is more than justifiable to adopt a decentralised approach that empowers parents, teachers and IT admins in the field, rather than relying on centralised governance and ID issuers.

Age verification is not solely about blocking access to gambling or porn websites.

It also enables contextual policies, such as protecting children from social media trends that promote self-harm during specific periods, and addressing other phenomena that affect particular linguistic and geographic communities at particular times. We should also consider that future cyber-attacks targeting our societies might leverage such methods.

Today, the technocratic vision driving its development only widens the EUDI project's problems, and clearly overlooks societal and human factors.

The absence of interdisciplinary expertise is apparent when rooms are filled solely with engineers.

And so this pilot will be just another crack in the EUDI project: a technical implementation sporting shiny cryptography and millions in investment to build an "identity wallet to access websites".

I think Europe deserves something better than this; yet, it seems all doors are closed to contribution, and the only way to get involved is to rush the development of a flawed plan.

Who am I, and further reading

If you would like to read further, I've written more about this and other EUDI problems:

The Seven Sins of European Digital Identity (EUDI)
EUDI as it stands today presents big problems and disregards criticism, warnings, and requests to review technical details, with results that harm the fairness of the system and the privacy of its participants, while also limiting infrastructure security and scalability.

I've been working on privacy and identity technology for over a decade, primarily in projects funded by the European Commission. Among my efforts are decodeproject.eu and reflowproject.eu, as well as various academic papers, including SD-BLS, recently published by IEEE. Additionally, with our team at The Forkbomb Company, we've developed products such as DIDROOM.com and CREDIMI.io.

Jaromil


Inventor, Ph.D. Dyne.org think &do tank.
Mastodon