Why zero-knowledge tech is not enough, even though it takes us most of the way there.
In this article, I aim to update my perspective on how privacy is protected within the European Digital Identity system (EUDI).
Privacy is one of seven significant shortcomings I have identified in EUDI.

Recent updates suggest some progress has been made, moving the framework closer to a more privacy-supporting design. While the improvements are not insignificant, critical questions remain unresolved. Without further concrete changes, there is a real risk that EUDI will become a new channel for private surveillance, serving corporate interests and handing over to them enormous amounts of citizen data previously managed by nation-states, data to which the states themselves will then lose access.
When we see that online age verification will utilise privacy-preserving technologies like zero-knowledge proofs, we should not assume that every problem has been solved. I will explain in detail why, without proper sandboxing, the APIs provided by proprietary mobile operating systems will be able to spy on every credential presentation without anyone else being able to detect it.
I have previously analysed in detail the zero-knowledge proof algorithm that Google has released as open-source, noting its virtues as a core component for privacy-first designs. However, as is often the case with technology, its effectiveness depends on how it is used: no tech component will ever be a silver bullet that provides a solution to a much more complex problem.
Privacy and identity
Privacy stands as a cornerstone of digital identity systems, shielding individuals from undue surveillance and manipulation. Without it, these systems open doors to risks such as data breaches, identity theft, and pervasive tracking, which erode personal autonomy, democratic agency, and civil rights, and take a toll on people’s psychological well-being.
Christopher Allen, in his foundational work on self-sovereign identity, emphasises the importance of privacy for establishing trust in digital interactions. His principles outline how users must control their data without intermediaries, ensuring protection against censorship and unauthorised access. Drawing from this, privacy in digital identity prevents the concentration of power, where a single entity holds the keys to personal lives. If mishandled, digital identity becomes a tool for control rather than empowerment, amplifying inequalities and silencing dissent. For democracy, privacy fosters free expression and association, allowing citizens to engage without fear of reprisal.
Digital identity, when implemented correctly, verifies claims without exposing complete profiles, and in the future, it may evolve to support democratic processes such as voting or petitioning. Yet, poor implementation hinders this by enabling mass data collection, turning a liberating technology into an instrument of oversight that undermines collective freedom. We should be cautious here, as even well-intentioned systems like Self Sovereign Identity (SSI) face adoption barriers and potential flaws in practice. Not all claims of privacy enhancement hold up under scrutiny, especially when tied to profit-driven models.
Christopher’s recent account of the Global Digital Collaboration Conference, titled “When Technical Standards Meet Geopolitical Reality”,

says a lot about the looming dangers, despite the good faith of the event organisers, who I am sure will improve on this in the second edition, announced for 24-26 June in Lausanne, which I plan to attend again.
Zero knowledge
Zero-knowledge cryptography is a sophisticated form of privacy-preserving technology that safeguards digital identity by proving facts without disclosing the underlying data and without allowing different presentations to be linked to each other (a property referred to as unlinkability). At its core, it allows one party to convince another of a statement’s truth while keeping the evidence for that statement secret. This cryptography, rooted in proofs of knowledge, ensures verifiers learn nothing beyond the validity of the claim.
In digital identity, zero-knowledge proofs also enable selective disclosure. For instance, consider proving you are over 18 to access a service without needing to show your birthdate or complete ID. The system verifies the age threshold cryptographically, without leaking any extra details.
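To make the idea concrete, here is a minimal sketch of a classic zero-knowledge proof of knowledge: the Schnorr identification protocol, made non-interactive with the Fiat–Shamir heuristic. This is not Longfellow-zk, and the tiny group parameters are purely illustrative; real deployments use elliptic-curve groups of around 256 bits.

```python
import hashlib
import secrets

# Toy group parameters (illustrative only -- never use numbers this small).
p = 167   # safe prime: p = 2*q + 1
q = 83    # prime order of the subgroup of squares mod p
g = 4     # generator of that subgroup

def fiat_shamir(*values) -> int:
    """Non-interactive challenge derived by hashing the transcript."""
    data = ",".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int):
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)   # fresh randomness hides the secret
    t = pow(g, r, p)           # commitment
    c = fiat_shamir(g, y, t)   # challenge
    s = (r + c * x) % q        # response
    return y, (t, s)

def verify(y: int, proof) -> bool:
    """Check g^s == t * y^c mod p; learns nothing beyond validity."""
    t, s = proof
    c = fiat_shamir(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, proof = prove(42)
print(verify(y, proof))   # True
```

The verifier learns only that the prover knows some secret x behind y; the pair (t, s) reveals nothing about x itself. This is the same property that allows a wallet to disclose “over 18” while keeping the birthdate secret.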
We developed a zero-knowledge system in 2018 for the DECODE project.

We ran pilots in cooperation with the cities of Amsterdam and Barcelona to minimise data exposure in participatory budgeting petitions and in age-proof verifications for individuals aged 18 and above.
When I learned that Google was working on zero-knowledge technology, well before their press release about it, I couldn’t resist getting involved. Today, I’m pleased that the Longfellow-zk system has been released as free and open-source software (MIT/Apache license) and recognise its great value, as well as the ethical concerns of its authors, the well-known developers Matteo Frigo and Abhi Shelat.
Here, I’ve conducted an extensive technical analysis of the system, complete with benchmarks, privacy, and security considerations:

This component alone, however, does not make every system embedding it privacy-preserving: that depends on the system it is embedded into and on the way it is integrated. Zero-knowledge empowers people, but only if built on open, auditable foundations.
The foundation being used by the Sparkasse pilot in EUDI is not privacy-preserving.
Do one thing and do it well
When a technical component is integrated into a product, framework, or platform, the way it is integrated is crucial to preserving the properties that the component offers. In the case of privacy-preserving technology used to minimise the data being shared, such components must be isolated from the environment in which they run, because their task is to transform privacy-sensitive data into something safe to share: they should be the only ones receiving sensitive data, and we should be assured that such data is not shared with any other component before its transformation.
This concept is called “process isolation” and is also commonly expressed as a verb: “to sandbox” a component. It is an important security principle as well, since it helps ensure that bugs and vulnerabilities occurring in one component do not propagate to the entire system. It is the approach the UNIX philosophy recommends when it says “do one thing and do it well”: when developing software, it is good practice to restrict its operations to the strictly necessary and to interact with other components through safe channels.
Despite being developed with the best intentions, the Longfellow-zk component will lose its privacy properties when embedded inside the “Google Play” API or Apple’s iOS frameworks, which provide no guarantees that the data used to create a zero-knowledge proof will not be shared with other components of the Android or iOS operating systems. This way, Google’s or Apple’s frameworks will be the only ones able to observe such data invisibly, potentially matching it to other information, such as geolocation, time, and any other data already known by the system or provided by attached sensors.
To solve this problem and uphold the virtues of a zero-knowledge system, the API embedding it should provide absolute assurance of process isolation, similar to what we do in Zenroom for Longfellow-zk and any other algorithm we implement: we ensure there is no access to any network or filesystem during the computation taking place between input and output, de facto isolating it into a space for “secure computation”.
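The principle can be sketched in a few lines: the privacy-sensitive transformation runs in a separate child process whose only channel to the caller is a pipe, so only the derived result ever crosses the boundary. This is a deliberately simplified illustration (the names and the fixed year are my own inventions), not Zenroom’s actual mechanism: a real sandbox must additionally deny the child any network or filesystem access, for example via seccomp, namespaces, or a WASM runtime.

```python
import json
import subprocess
import sys

# The child process is the only component that ever sees the
# sensitive value; its single channel to the parent is stdin/stdout.
CHILD = r"""
import hashlib, json, sys
birth_year = json.load(sys.stdin)      # sensitive input enters here
is_adult = (2025 - birth_year) >= 18   # the only fact being disclosed
# Stand-in for a real zero-knowledge proof: a commitment to the claim.
tag = hashlib.sha256(f"over18:{is_adult}".encode()).hexdigest()
json.dump({"over18": is_adult, "tag": tag}, sys.stdout)
"""

def run_isolated(birth_year: int) -> dict:
    """Send the sensitive value into the child; receive only the result.

    NOTE: process separation alone does not block network or
    filesystem access -- real isolation needs OS-level confinement.
    """
    out = subprocess.run(
        [sys.executable, "-c", CHILD],
        input=json.dumps(birth_year),
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

print(run_isolated(1990)["over18"])   # True
```

The parent never handles the birthdate after sending it: it only receives the minimised claim, which is exactly the property that breaks when the proof generation runs inside an opaque operating-system framework.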

Such a technology also exists in browsers, though it was developed with the worst of intentions: to monetise content playback by restricting it through software means, the infamous Encrypted Media Extensions. I argue that this technology, and perhaps more generally WASM execution engines inside browsers, may be adopted for the ethical purpose of isolating computations performed on privacy-sensitive data.
Until process isolation is guaranteed for every execution of zero-knowledge algorithms, the shield of privacy-preserving technology will have a gap in it: it will protect against everyone except the operating system manufacturers, which, in the case of mobile technologies, are an oligopoly of mega-corporations operating outside any state jurisdiction.
Who am I
I am Denis Roio, also known as Jaromil. I hold a PhD in philosophy with a focus on technology, and my background encompasses software development, cybersecurity, and counter-espionage. Over the years, I have founded Dyne.org, a nonprofit organisation dedicated to promoting free software and social innovation, where we address issues of data ownership and ethical technology. Through our spin-off, the Forkbomb Company, we have developed 100% free and open-source products for identity management, such as DIDROOM.com and the identity marketplace for conformance and interoperability, CREDIMI.io.