What do I think about biometric proof of personhood?

Author: Vitalik

 

Special thanks to the Worldcoin team, the Proof of Humanity community, and Andrew Miller for discussions.

 

One of the trickier, but potentially most valuable, gadgets that people in the Ethereum community have been trying to build is a decentralized proof-of-personhood solution. Proof-of-personhood, aka the “unique human problem,” is a limited form of real-world identity that asserts that a given registered account is controlled by a real person (and a different real person than other registered accounts), ideally without revealing which real person it is.

 

There have been several efforts to solve this problem: Proof of Humanity, BrightID, Idena, and Circles are examples. Some of these come with their own applications (usually UBI tokens), and some are used in Gitcoin Passport to verify which accounts are valid for quadratic voting. Zero-knowledge technologies like Sismo add privacy to many of these solutions. Recently, we’ve seen the rise of a larger and more ambitious proof-of-personhood project: Worldcoin.

 

Worldcoin was co-founded by Sam Altman, best known for his role as CEO of OpenAI. The idea behind the project is simple: AI will create massive and abundant wealth for humanity, but it will also likely take away a lot of people’s jobs, and it will become nearly impossible to tell who is human and who is a robot. So we need to fill this gap by (i) creating a really good proof-of-personhood system so that humans can prove they are actually human, and (ii) providing UBI for everyone. Worldcoin is unique in that it relies on highly sophisticated biometrics, using a specialized piece of hardware called an “Orb” to scan each user’s iris:

[Image: the Worldcoin Orb]

The goal of Worldcoin is to produce a large number of these Orbs, distribute them around the world, and place them in public places so that anyone can easily get an ID. To its credit, Worldcoin has also committed to decentralize over time. First, this means technical decentralization: becoming an L2 on Ethereum built with the Optimism stack, and using ZK-SNARKs and other cryptographic techniques to protect users' privacy. Later, it includes decentralizing governance of the system itself.

 

Worldcoin has been criticized for privacy and security issues with the Orb, for design problems with its “coin,” and for the ethics of some of the choices the company has made. Some of the criticism has been very specific, focusing on decisions the project made that could easily have been made another way — and indeed, decisions that the Worldcoin project itself might be willing to change. Others, however, have raised more fundamental concerns about whether biometrics — not just Worldcoin’s eye-scanning biometrics, but also the simpler face-video upload-and-verification games used in Proof of Humanity and Idena — are a good idea at all. Still others have criticized proof of personhood in general. Risks include unavoidable privacy leaks, further erosion of people’s ability to browse the internet anonymously, coercion by authoritarian governments, and the potential impossibility of being secure and decentralized at the same time.

 

 

This post will discuss these questions and go through some arguments to help you decide whether it’s a good idea to bow before our new spherical overlords and have your eyes (or face, or voice, or…) scanned, and whether the natural alternatives — using social graph-based proof of personhood or abandoning proof of personhood altogether — are better.

What is proof of personhood and why is it important?

The simplest way to define a proof-of-personhood system is: it creates a list of public keys where the system guarantees that each key is controlled by a unique human. In other words, if you are a human, you can put one key on the list, but you can't put two keys on the list, and if you are a bot, you can't put any keys on the list.

 

Proof of personhood is valuable because it solves many of the anti-spam and anti-concentration-of-power problems that people face, in a way that avoids dependence on centralized authorities and reveals as little information as possible. If proof of personhood is not solved, decentralized governance (including "micro-governance" such as votes on social media posts) becomes much more vulnerable to capture by very wealthy actors, including hostile governments. Many services can only prevent denial-of-service attacks by setting a price for access, and sometimes a price high enough to deter attackers is also too high for many low-income legitimate users.

Many major applications in the world today deal with this problem by using government-backed identity systems such as credit cards and passports. This solves the problem, but it makes large and arguably unacceptable sacrifices on privacy, and it can be trivially attacked by governments themselves.

In many proof-of-personhood projects — not just Worldcoin, but also Proof of Humanity, Circles, and others — the “flagship app” is a built-in “N tokens per person” currency (sometimes called a “UBI token”). Every user registered in the system receives some fixed quantity of tokens per day (or hour, or week). But there are plenty of other applications:

● Token distribution airdrops

● Token or NFT sales with more favorable terms for less wealthy users

● Voting in DAOs

● A way to “seed” graph-based reputation systems

● Quadratic voting (and funding, and attention payments)

● Protection against bot/sybil attacks in social media

● An alternative to CAPTCHAs for preventing DoS attacks

In many of these cases, the common thread is the desire to create open and democratic mechanisms that avoid both centralized control by project operators and domination by the wealthiest users. The latter is especially important in decentralized governance. In many of these cases, today’s existing solutions rely on (i) highly opaque AI algorithms that leave a lot of room to imperceptibly discriminate against users that the operators simply don’t like, and (ii) centralized IDs, aka “KYC”. An effective proof-of-personhood solution would be a much better alternative, achieving the security properties these applications need without the pitfalls of the existing centralized approaches.

What were some early attempts at proof of personhood?

There are two main forms of proof of personhood: social-graph-based and biometric. Social-graph-based proof of personhood relies on some form of vouching: if Alice, Bob, Charlie, and David are all verified humans, and they all say Emily is a verified human, then Emily is probably also a verified human. Vouching is often reinforced with incentives: if Alice says Emily is human, but it turns out she is not, then both Alice and Emily can be penalized (see the sketch below). Biometric proof of personhood involves verifying some physical or behavioral trait of Emily that distinguishes humans from bots (and individual humans from one another). Most projects use a combination of these two techniques.
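To make the vouching-plus-penalties pattern concrete, here is a minimal Python sketch assuming a simple deposit-and-slash rule. The class name, threshold, and deposit amount are illustrative inventions, not the design of any specific protocol:

```python
# A minimal sketch of social-graph vouching with incentives, assuming a
# deposit-and-slash rule. All names and numbers here are illustrative.

DEPOSIT = 100  # stake posted by each candidate and each voucher

class VouchRegistry:
    def __init__(self, initial_humans):
        self.verified = set(initial_humans)  # bootstrap set of known humans
        self.vouches = {}                    # candidate -> list of vouchers

    def vouch(self, voucher, candidate):
        assert voucher in self.verified, "only verified humans may vouch"
        self.vouches.setdefault(candidate, []).append(voucher)
        # A candidate is verified once enough verified humans vouch for them.
        if len(self.vouches[candidate]) >= 3:
            self.verified.add(candidate)

    def successful_challenge(self, candidate):
        # If the candidate is later shown to be fake, both the candidate and
        # everyone who vouched for them forfeit their deposits.
        self.verified.discard(candidate)
        slashed = [candidate] + self.vouches.pop(candidate, [])
        return {who: DEPOSIT for who in slashed}

reg = VouchRegistry({"alice", "bob", "charlie", "david"})
for voucher in ("alice", "bob", "charlie"):
    reg.vouch(voucher, "emily")
assert "emily" in reg.verified
print(reg.successful_challenge("emily"))  # everyone involved gets slashed
```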

The four systems I mentioned at the beginning of the article work roughly like this:

Proof of Humanity: You upload a video of yourself and provide a deposit. To be approved, an existing user must vouch for you, and a period of time passes during which you can be challenged. If there is a challenge, the Kleros decentralized court decides whether your video was authentic; if it was not, you lose your deposit and the challenger gets a reward.

BrightID: You join a video call "verification party" with other users, and everyone verifies each other. A higher level of verification is available through Bitu, where you can get verified if enough other Bitu-verified users vouch for you.

Idena: You play a captcha game at specific points in time (to prevent people from participating multiple times); part of the captcha game involves creating and validating captchas, which are then used to verify other users.

Circles: Existing Circles users vouch for you. Circles is unique in that it does not attempt to create a “globally verifiable ID”; instead, it creates a trust graph where someone’s trustworthiness can only be verified from the perspective of your own position in that graph.

How does Worldcoin work?

Each Worldcoin user installs an app on their phone that generates a private and public key, much like an Ethereum wallet. They then visit an “Orb” in person. The user gazes into the Orb’s camera while showing the Orb a QR code generated by their Worldcoin app, which contains their public key. The Orb scans the user’s eyes and uses sophisticated hardware scanning and machine-learned classifiers to verify that:

1. The user is a real person

2. The user’s iris does not match the iris of any other user who has used the system before

If both checks pass, the Orb signs a message approving a specialized hash of the user’s iris scan. The hash is uploaded to a database — currently a centralized server, which is intended to be replaced with a decentralized on-chain system once it’s clear that the hashing mechanism works. The system doesn’t store full iris scans; it stores only the hashes, which are used to check for uniqueness. From that point on, the user has a “World ID”.

A World ID holder can prove they are a unique human by generating a ZK-SNARK proving that they hold the private key corresponding to a public key in the database, without revealing which key they hold. So even if someone re-scans your iris, they won’t be able to see any actions you have taken.
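As a conceptual illustration of what such a proof attests to, here is a toy Python sketch using a Merkle tree of registered public keys plus a per-application “nullifier” — the pattern used by systems like Semaphore, and an assumption here, not a description of Worldcoin’s actual circuit. In a real deployment, the checks in `check_statement` would run inside a ZK-SNARK, so the verifier never sees the secret key, the public key, or the Merkle path:

```python
import hashlib

# Toy sketch of the statement behind a "unique human" proof, assuming a
# Semaphore-style design: Merkle tree of registered keys + nullifier.

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def merkle_root(leaves):
    layer = list(leaves)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])          # duplicate last node if odd
        layer = [H(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_path(leaves, index):
    path, layer, i = [], list(leaves), index
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        path.append((layer[i ^ 1], i % 2))   # (sibling, am-I-the-right-child)
        layer = [H(layer[j], layer[j + 1]) for j in range(0, len(layer), 2)]
        i //= 2
    return path

def check_statement(root, app_id, nullifier, sk, path):
    node = H(b"pk", sk)                      # public key derived from secret
    for sibling, is_right in path:           # prove pk is in the registry
        node = H(sibling, node) if is_right else H(node, sibling)
    assert node == root
    # The nullifier reveals nothing about which key is yours, but proving
    # twice to the same app yields the same nullifier, so duplicates are
    # rejected without deanonymizing anyone.
    assert nullifier == H(b"null", sk, app_id)

secrets = [b"sk-alice", b"sk-bob", b"sk-carol", b"sk-dave"]
keys = [H(b"pk", sk) for sk in secrets]
root = merkle_root(keys)
sk = secrets[1]                              # Bob proves he is registered
check_statement(root, b"some-app", H(b"null", sk, b"some-app"),
                sk, merkle_path(keys, 1))
```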

What are the major issues with Worldcoin’s construction?

Four main risks immediately come to mind:

● Privacy: A registry of iris scans could potentially reveal information. At a minimum, if someone else scans your irises, they could check against the database to determine if you have a World ID. Iris scans could potentially reveal more information.

● Accessibility: World IDs will not be reliably accessible unless there are so many Orbs that anyone in the world can easily get to one.

● Centralization: The Orb is a hardware device, and we have no way of verifying that it was constructed correctly and has no backdoors. Hence, even if the software layer is perfect and fully decentralized, the Worldcoin Foundation still has the ability to insert a backdoor into the hardware, letting it create as many fake human identities as it wants.

● Security: Users’ phones could be hacked, users could be forced to scan their irises while presenting a public key belonging to someone else, and it would be possible to 3D print a “dummy” that could be iris-scanned and given a World ID.

It’s important to distinguish between (i) problems specific to choices Worldcoin made, (ii) problems that any biometric proof of personhood will inevitably have, and (iii) problems that any proof of personhood in general will have. For example, signing up to Proof of Humanity means publishing your face on the internet. Joining a BrightID verification party doesn’t quite do that, but it still exposes who you are to a lot of people. Joining Circles publicly exposes your social graph. Worldcoin is significantly better at protecting privacy than either of those. On the other hand, Worldcoin relies on specialized hardware, which creates the challenge of trusting the Orb manufacturer to have constructed the Orbs correctly — a challenge that has no analogue in Proof of Humanity, BrightID, or Circles. It’s even conceivable that in the future, someone other than Worldcoin will create a different specialized-hardware solution with different tradeoffs.

How do biometric proof-of-personhood systems address privacy issues?

The most obvious and biggest potential privacy leak of any proof-of-personhood system is linking every action a person takes to their real-world identity. This data leak is very large, arguably unacceptably large, but fortunately it is easy to fix with zero-knowledge proofs. Instead of directly signing with the private key whose corresponding public key is in the database, a user can make a ZK-SNARK proving that they own a private key whose corresponding public key is somewhere in the database, without revealing which specific key they own. This can be done generically with tools like Sismo, and Worldcoin has its own built-in implementation. It’s important to give “crypto-native” proofs of personhood credit here: they actually care about taking this basic step to provide anonymization, something that essentially all centralized identity solutions fail to do.

A more subtle but still important privacy leak is the existence of a public registry of biometric scans. In the case of Proof of Humanity, this is a lot of data: you get a video of every Proof of Humanity participant, which makes it very clear to anyone in the world willing to investigate who all the Proof of Humanity participants are. In the case of Worldcoin, the leak is much more limited: the Orb locally computes and publishes only a “hash” of each person’s iris scan. This hash is not a regular hash like SHA256; instead, it’s a specialized algorithm based on machine-learned Gabor filters that handles the inexactness inherent in any biometric scan and ensures that successive hashes of the same person’s iris have similar outputs.

Blue: the percentage of bits that differ between two scans of the same person's iris. Orange: the percentage of bits that differ between two scans of two different people's irises.
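A quick sketch of how such fuzzy iris codes would be compared, assuming codes are fixed-length bit vectors (as Gabor-filter pipelines produce). The 0.3 threshold is illustrative: it sits between the “same iris” distribution (roughly 10-20% differing bits) and the “different irises” one (roughly 45-55%):

```python
import numpy as np

def hamming_fraction(a: np.ndarray, b: np.ndarray) -> float:
    # Fraction of bit positions where the two iris codes disagree.
    return float(np.mean(a != b))

def same_person(a: np.ndarray, b: np.ndarray, threshold: float = 0.3) -> bool:
    return hamming_fraction(a, b) < threshold

rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048)                    # stored iris hash
noise = (rng.random(2048) < 0.15).astype(int)
rescan = enrolled ^ noise                              # same eye, ~15% noise
stranger = rng.integers(0, 2, 2048)                    # unrelated iris

print(same_person(enrolled, rescan))     # True: flagged as a duplicate
print(same_person(enrolled, stranger))   # False: registers as a new person
```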

 

These iris hashes do leak a small amount of data. If an adversary can forcibly (or covertly) scan your iris, they can compute your iris hash themselves and check it against the database of iris hashes to determine whether you are participating in the system. This ability to check whether someone is registered is necessary for the system itself to prevent people from registering multiple times, but it always has the potential to be abused. Additionally, iris hashes may leak a certain amount of medical data (sex, ethnicity, perhaps medical conditions), but this leakage is far smaller than what is captured by almost any other mass data-collection system in use today (e.g., even street cameras). Overall, storing only iris hashes seems sufficiently privacy-protecting to me.

If others disagree with this judgment and decide to design a system with more privacy, there are two ways to do it:

1. If the iris hash algorithm can be improved to make the difference between two scans of the same person much lower (e.g. reliably below 10% bit flips), then instead of storing full iris hashes, the system can store a smaller number of error-correction bits for iris hashes (see: fuzzy extractors; a minimal sketch follows after this list). If the difference between two scans is below 10%, the number of bits that need to be published would be at least 5x less.

2. If we want to go a step further, we can store the iris hash database in a multi-party computation (MPC) system that can only be accessed by Orbs (with rate limits), making the data completely inaccessible, but at the cost of the protocol complexity and social complexity of managing the set of MPC participants. The benefit of this is that even if a user wants to, they cannot prove the connection between two different World IDs they have at different times.
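Here is a minimal sketch of the “code-offset” fuzzy extractor idea from option 1, using a 5x repetition code for readability; a real design would use a much stronger code (e.g. BCH), and all parameters here are illustrative. Only the helper bits would ever be published; the full iris code and the secret stay private:

```python
import numpy as np

R, K = 5, 64                        # repetition factor, secret size in bits
rng = np.random.default_rng(1)

def enroll(iris_bits):
    secret = rng.integers(0, 2, K)          # random key bound to this iris
    codeword = np.repeat(secret, R)         # encode with the repetition code
    helper = codeword ^ iris_bits           # public "error-correction" bits
    return secret, helper

def recover(helper, noisy_iris_bits):
    noisy_codeword = helper ^ noisy_iris_bits     # = codeword ^ scan noise
    # Majority vote inside each R-bit block corrects up to 2 flipped bits.
    return (noisy_codeword.reshape(K, R).sum(axis=1) > R // 2).astype(int)

iris = rng.integers(0, 2, K * R)
secret, helper = enroll(iris)

rescan = iris.copy()
rescan[::R] ^= 1        # re-scan noise: one flipped bit per block (20%)
assert np.array_equal(recover(helper, rescan), secret)
print("secret recovered despite 20% differing bits")
```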

Unfortunately, these techniques are not suitable for Proof of Humanity, which requires that the full video of every participant be made public so that if there are signs that it is a fake (including AI-generated fakes), it can be challenged and, in such cases, investigated in more detail.

Overall, despite the "dystopian vibes" of staring into an Orb and having it scan deep into your eyeballs, specialized hardware systems do seem to do a pretty good job of protecting privacy. The flip side, however, is that specialized hardware systems bring much greater centralization concerns. So we cypherpunks seem to be stuck in a bind: we have to trade off one deeply-held cypherpunk value against another.

What are the accessibility issues in biometric proof-of-personhood systems?

Specialized hardware introduces accessibility concerns because, well, specialized hardware is not very accessible. Somewhere between 51% and 64% of sub-Saharan Africans own a smartphone today, and this is projected to increase to 87% by 2030. But while there are billions of smartphones, there are only a few hundred Orbs. Even with much more distributed manufacturing, it would be hard to get to a world where there is an Orb within five kilometers of everyone.

It’s also worth noting that many other forms of proof of personhood have even worse accessibility issues. It’s very difficult to join a social-graph-based proof-of-personhood system unless you already know someone in the graph. This makes it easy for such systems to remain limited to a single community in a single country.

Even centralized identity systems have learned this lesson: India’s Aadhaar ID system is based on biometrics because it was the only way to quickly onboard its large population while avoiding massive fraud caused by duplicates and fake accounts (thus saving huge costs), though of course the Aadhaar system as a whole is much weaker in terms of privacy than anything proposed at scale within the crypto community.

From an accessibility perspective, the best performing system is actually one like Proof of Humanity, where you can sign up using nothing more than your smartphone - though, as we have seen and as we will see, such systems come with all sorts of other trade-offs.

What are the centralization issues in biometric proof of personhood?

There are three:

1. Centralization risk in the system’s top-level governance (especially, whatever makes the final top-level decisions when different participants disagree on subjective judgments).

2. Centralization risk unique to systems that use specialized hardware.

3. Centralization risk due to the use of proprietary algorithms to determine who is an authentic participant.

Any proof of personhood must deal with (1), except perhaps systems where the set of “accepted” IDs is completely subjective. If a system uses incentives denominated in outside assets (e.g. ETH, USDC, DAI), then it cannot be fully subjective, and so governance risk becomes unavoidable.

(2) is a much greater risk for Worldcoin than for Proof of Humanity (or BrightID), because Worldcoin depends on specialized hardware and the other systems do not.

(3) is a risk, especially in “logically centralized” systems where there is only a single system doing the verification, unless all of the algorithms are open-source and we have assurance that they are actually running the code they claim to be. For systems that rely purely on users verifying other users (like Proof of Humanity), it is not a risk.

How does Worldcoin address Orb hardware centralization?

Currently, the Worldcoin-affiliated entity Tools for Humanity is the only organization making Orbs. However, the Orb's source code is mostly public: you can see the hardware specs in this github repository, and the rest of the source code is expected to be released soon. The license is one of those "shared source, but technically not open source until four years from now" licenses similar to the Uniswap BSL, except that in addition to preventing forking, it also prevents what they consider unethical behavior - they specifically list mass surveillance and three international civil rights declarations.

The team’s stated goal is to allow and encourage other organizations to create Orbs, and over time transition from Orbs created by Tools for Humanity to having some kind of DAO that approves and governs which organizations can create system-approved Orbs.

This design can fail in two ways:

1. It fails to actually decentralize. This can happen because of the common trap of federated protocols: one manufacturer ends up dominating in practice, causing the system to re-centralize. Presumably, governance could limit how many valid Orbs each manufacturer can produce, but this would need to be managed carefully, and it puts a lot of pressure on governance to be both decentralized and to monitor the ecosystem and respond to threats effectively: a much harder task than, say, a fairly static DAO that only handles top-level dispute resolution.

2. It turns out that it is impossible to make such a distributed manufacturing mechanism secure. Here, I see two risks:

○ Vulnerability to bad Orb manufacturers: if even one Orb manufacturer is malicious or hacked, it can generate an unlimited number of fake iris scan hashes and give them World IDs.

○ Government restriction of Orbs: a government that does not want its citizens participating in the Worldcoin ecosystem can ban Orbs from the country. Worse, it can even force citizens to scan their irises while giving the government access to their accounts, with no way for citizens to respond.

To make the system more robust against bad Orb manufacturers, the Worldcoin team recommends that Orbs be audited regularly, verifying that they are built correctly, that key hardware components are built to spec, and that they have not been tampered with after the fact. This is a challenging task: it is basically akin to the IAEA’s nuclear inspection bureaucracy, but for Orbs. The hope is that even a very imperfect implementation of the auditing system will significantly reduce the number of fake Orbs.

To limit the damage done by any bad Orbs that do slip through, a second mitigation makes sense: World IDs registered by different Orb manufacturers (and ideally, by different Orbs) should be distinguishable from one another. It's fine if this information is private and stored only on the World ID holder's device, but it does need to be provable on demand. This lets the ecosystem respond to (inevitable) attacks by removing individual Orb manufacturers, or even individual Orbs, from the whitelist on demand. If we see some government going around forcing people to scan their eyeballs, those Orbs, and any accounts they produced, could be immediately and retroactively disabled.
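A minimal sketch of what this per-Orb accountability could look like in software. For simplicity, this toy keeps the orb-of-origin in a public map, whereas the proposal above keeps it private on the holder's device and only provable on demand; all names are illustrative:

```python
# Sketch of per-Orb accountability: each World ID records its issuing Orb,
# and revoking an Orb retroactively disables every account it produced.

class OrbAccountability:
    def __init__(self, whitelisted_orbs):
        self.whitelisted = set(whitelisted_orbs)
        self.issued = {}                     # orb_id -> set of World IDs

    def register(self, orb_id, world_id):
        assert orb_id in self.whitelisted, "unknown or revoked Orb"
        self.issued.setdefault(orb_id, set()).add(world_id)

    def is_valid(self, world_id):
        # An ID remains valid only while its issuing Orb is whitelisted.
        return any(world_id in ids
                   for orb, ids in self.issued.items()
                   if orb in self.whitelisted)

    def revoke(self, orb_id):
        # Respond to a detected attack by dropping the Orb from the whitelist.
        self.whitelisted.discard(orb_id)

sys = OrbAccountability({"orb-1", "orb-2"})
sys.register("orb-2", "id-123")
sys.revoke("orb-2")                  # orb-2 found to be coercing users
assert not sys.is_valid("id-123")    # its accounts are disabled with it
```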

Security issues in proof of personhood in general

Beyond the issues specific to Worldcoin, there are a number of issues that affect proof-of-personhood designs in general. The main ones I can think of are:

1. 3D-printed fake people: people could use AI to generate photographs of fake people, or even 3D-print fake people, convincing enough to be accepted by the Orb software. If even one group does this, they can generate an unlimited number of identities.

2. Selling IDs: someone can provide someone else's public key instead of their own when registering, giving that person control of the registered ID in exchange for money. This appears to be happening already. In addition to selling, IDs can also be rented for short-term use in a single application.

3. Phone Hacking: If a person’s phone is hacked, the hacker can steal the key that controls their World ID.

4. Coerced ID theft: a government can coerce its citizens to verify while showing a QR code that belongs to the government, letting a malicious government acquire millions of IDs. In biometric systems, this can even be done covertly: governments could use obfuscated Orbs to extract a World ID from everyone entering the country at passport control.

(1) is specific to biometric systems. (2) and (3) are common to both biometric and non-biometric designs. (4) is also common to both, although the techniques required are quite different in the two cases; in this section, I will focus on the issues in the biometric case.

These are all fairly serious weaknesses. Some have already been addressed in the existing protocol, some can be addressed through future improvements, and some appear to be fundamental limitations.

What can we do about fake people?

This is much less of a risk for Worldcoin than for Proof of Humanity-like systems: an in-person scan can examine many features of a person and is much harder to fake than a mere deepfake video. Specialized hardware is inherently harder to fool than commodity hardware, which in turn is harder to fool than the digital algorithms that verify pictures and videos sent remotely.

Could someone eventually fool even specialized hardware with 3D printing? Probably. I expect that at some point we’ll see growing tensions between the goals of keeping mechanisms open and keeping them secure: open-source AI algorithms are inherently more vulnerable to adversarial machine learning. Black-box algorithms are more protected, but it’s hard to verify that a black-box algorithm wasn’t trained to include backdoors. Perhaps ZK-ML techniques could give us the best of both worlds, although at some point in the more distant future, even the best AI algorithms may be fooled by the best 3D-printed fake people.

However, from my discussions with the Worldcoin and Proof of Humanity teams, it seems that neither protocol has seen serious deepfake attacks yet, for the simple reason that it is fairly cheap and easy to hire real low-wage workers to register on your behalf.

Can we prevent the sale of IDs?

In the short term, preventing this kind of outsourced registration is difficult, because most of the world doesn’t even know about proof-of-personhood protocols, and if you tell people to hold up a QR code and scan their eyes for $30, they’ll do it. Once more people understand what proof-of-personhood protocols are, a fairly simple mitigation becomes possible: allowing people who have a registered ID to re-register, canceling the previous ID. This makes “ID selling” much less credible, because the person who sold you an ID can simply re-register, canceling the ID they just sold. However, getting to this point requires the protocol to be very widely known, and Orbs to be very widely accessible, to make on-demand registration practical.
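A toy sketch of this re-registration rule: the registry is keyed by the (fuzzy-matched) iris hash, so scanning the same iris again replaces the previously registered key and the sold ID stops validating. Structure and names here are illustrative, not any protocol's actual schema:

```python
# Sketch of re-registration keyed by iris hash: scanning the same iris
# again replaces the registered key, so a sold or coerced ID can always
# be revoked by its real owner.

class PersonhoodRegistry:
    def __init__(self):
        self.key_by_iris = {}    # iris_hash -> currently valid public key

    def register(self, iris_hash, public_key):
        old_key = self.key_by_iris.get(iris_hash)
        self.key_by_iris[iris_hash] = public_key   # old ID is canceled
        return old_key                             # now-invalid key, if any

    def is_valid(self, iris_hash, public_key):
        return self.key_by_iris.get(iris_hash) == public_key

reg = PersonhoodRegistry()
reg.register(b"alice-iris", "key-sold-to-buyer")   # sold or coerced signup
reg.register(b"alice-iris", "alices-real-key")     # later re-registration
assert not reg.is_valid(b"alice-iris", "key-sold-to-buyer")
assert reg.is_valid(b"alice-iris", "alices-real-key")
```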

This is one reason why it’s valuable for a proof-of-personhood system to be tied to a UBI coin: a UBI coin provides an easy-to-understand incentive for people to (i) learn about the protocol and register, and (ii) immediately re-register if they previously registered on behalf of someone else. Re-registration also mitigates phone hacking.

Can we prevent coercion in biometric systems?

It depends on what kind of coercion we are talking about. Possible forms of coercion include:

● The government scans people’s eyes (or faces, or…) at border controls and other routine government checkpoints, and uses it to register (and often re-register) its citizens

● Government bans Orbs in the country to prevent people from independently re-registering

● Individuals buy IDs, then threaten to harm the seller if they find that the ID has been invalidated by re-registration

● A (possibly government-run) app requires people to “log in” by signing directly with their public key, letting the app see the corresponding biometric scan, and hence the link between the user’s current ID and any future IDs they obtain through re-registration. A common concern is that this makes it too easy to create a “permanent record” that follows a person throughout their life.

Especially in the hands of unsophisticated users, completely preventing these situations seems quite difficult. Users could leave their country and (re-)register on an Orb in a safer country, but this is a difficult and costly process. In a truly hostile legal environment, finding an independent Orb seems too difficult and risky.

What can be done is to make this kind of abuse more annoying and easier to detect. The Proof of Humanity approach of requiring a person to speak a specific phrase when registering is a good example: it may be enough to prevent hidden scanning, forcing coercion to be much more blatant, and the registration phrase could even include a statement confirming that the person knows they have the right to re-register independently and may receive a UBI coin or other reward. If coercion is detected, the devices used to perform coerced registrations en masse could have their access rights revoked. To prevent apps from linking people’s current and previous IDs and trying to leave a “permanent record,” the default proof-of-personhood app could lock the user’s key in trusted hardware, preventing any app from using the key directly without the anonymizing ZK-SNARK layer in between. If a government or app developer wants to get around this, they would need to mandate the use of their own custom app.

By combining these techniques with active vigilance, it seems possible to lock out regimes that are truly hostile, and to keep honest those that are merely mediocre (as much of the world is). This could be done either by a project like Worldcoin or Proof of Humanity maintaining its own bureaucracy for the task, or by revealing more information about how an ID was registered (e.g., in Worldcoin, which Orb it came from) and leaving this classification task to the community.

Can we prevent IDs from being rented (e.g., to sell votes)?

Renting out your ID is not prevented by re-registration. This is fine in some applications: the cost of renting out your right to collect the day's share of UBI coin will just be the value of that day's share of UBI coin. But in applications like voting, the ease of selling votes is a huge problem.

Systems like MACI can prevent you from credibly selling your vote, by letting you later cast another vote that invalidates your previous one, in such a way that no one can tell whether or not you actually cast such a vote. However, if the briber controls the key you receive at registration time, this does not help.

I see two solutions here:

1. Run the entire application inside an MPC. This would also cover the re-registration process: when a person registers with the MPC, the MPC assigns them an ID that is separate from, and unlinkable to, their proof-of-personhood ID, and when a person re-registers, only the MPC knows which account to deactivate. This prevents users from proving anything about their actions, because every important step is done inside the MPC using private information known only to the MPC.

2. Decentralized registration ceremonies. Basically, implement something like an in-person key-issuance protocol that requires four randomly selected local participants to work together to register someone. This can ensure that registration is a “trusted” procedure that an attacker cannot snoop on during the process.

Social graph-based systems may actually perform better here, as they can automatically create native decentralized registration processes as a byproduct of the way they work.

How do biometrics compare to the other leading candidate for proof of personhood, social-graph-based verification?

Besides biometrics, the main other contender for proof of personhood so far is social-graph-based verification. Social-graph-based verification systems all operate on the same principle: if a bunch of existing verified identities attest to the validity of your identity, then you are probably valid and should also be given verified status.

Proponents of social-graph-based verification often describe it as a better alternative to biometrics for the following reasons:

● It does not rely on dedicated hardware, making it easier to deploy

● It avoids a perpetual arms race between manufacturers of fake people and the Orbs that must be updated to reject such fakes

● It does not require collecting biometric data, which is better for privacy

● It’s potentially more friendly to pseudonymity, because if someone chooses to split their internet life into multiple identities separate from each other, both identities can potentially be verified (but maintaining multiple authentic and independent identities sacrifices network effects and is costly, so it’s not something an attacker can do easily)

● Biometric approaches give a binary score of “is human” or “is not human”, which is fragile: people who are accidentally rejected will end up having no UBI at all, and may not be able to participate in online life. Social graph-based approaches can give a more nuanced numerical score, which of course may be a little unfair to some participants, but is less likely to completely “depersonalize” someone.

My take on these arguments is that I basically agree with them! These are real strengths of social graph-based approaches and should be taken seriously. However, it is also worth considering the weaknesses of social graph-based approaches:

● Onboarding: To join a social-graph-based system, a user must know someone who is already in the graph. This makes large-scale adoption difficult, and risks excluding entire regions of the world that do not get lucky in the initial bootstrapping process.

● Privacy: While social-graph-based approaches avoid collecting biometric data, they often end up leaking information about a person’s social relationships, which may well be even worse. Of course, zero-knowledge techniques can mitigate this (see, for example, Barry Whitehat’s proposal), but the interdependency inherent in graphs, and the need to perform mathematical analyses on the graph, make it harder to achieve the same level of data hiding that biometrics allow.

● Inequality: Each person can only have one biometric ID, but a wealthy and well-connected person can leverage their connections to generate many IDs. Essentially, the same flexibility that might allow a social graph-based system to offer multiple pseudonyms to people who really need that functionality (e.g. activists) could also mean that more powerful and well-connected people can get more pseudonyms than less powerful and well-connected people.

● Risk of slipping into centralization: Most people are too lazy to spend time reporting to an internet app who is a real person and who is not. As a result, there is a risk that over time the system will come to favor “easy” onboarding paths that rely on centralized authorities, and the “social graph” of the system’s users will de facto become the social graph of which countries recognize which people as citizens - giving us centralized KYC with needless extra steps.

Is proof of personhood compatible with pseudonymity in the real world?

In principle, proof of personhood is compatible with all kinds of pseudonyms. Applications could be designed so that a person with a single proof-of-personhood ID can create up to five profiles within the application, leaving room for pseudonymous accounts. One could even use quadratic formulas: N accounts for a cost of $N². But will they?
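As a quick worked illustration of that quadratic formula (a sketch, not any deployed mechanism): total cost grows with the square of the number of accounts, so each extra pseudonym costs more than the last, making identity-hoarding expensive without banning pseudonyms outright.

```python
# Quadratic pricing for pseudonyms: N accounts cost $N^2 in total, so the
# marginal cost of each additional account rises (1, 3, 5, 7, ... dollars).

def total_cost(n: int) -> int:
    return n ** 2

for n in range(1, 6):
    marginal = total_cost(n) - total_cost(n - 1)
    print(f"{n} accounts: total ${total_cost(n)}, marginal ${marginal}")
```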

A pessimist, however, might argue that it is naive to try to create a more privacy-friendly form of ID and hope that it actually gets adopted in the right way, because the powers that be are not privacy-friendly, and if a powerful actor gets a tool that can be used to extract more information about a person, they will use it that way. In such a world, the argument goes, the only realistic approach is unfortunately to throw sand in the gears of any identity solution and defend a world of full anonymity and digital islands of high-trust communities.

I understand the reasoning behind this way of thinking, but I worry that even if it succeeds, it will lead to a world where no one can do anything to counteract wealth concentration and governance centralization, because one person can always pretend to be ten thousand. Such points of centralization would, in turn, be easy for the powers that be to capture. Instead, I favor a moderate approach, where we vigorously advocate for proof-of-personhood solutions with strong privacy, potentially even including “N accounts for $N²” mechanisms at the protocol level if desired, and create something with privacy-friendly values that has a chance of being accepted by the outside world.

So...what do I think?

There is no ideal form of proof of personhood. Instead, we have at least three different paradigms of approaches, each with its own unique strengths and weaknesses. A comparison chart might look like this:

[Image: comparison table of the three approaches]

Ideally, we should treat these three techniques as complementary, and combine them all. Specialized-hardware biometrics have the advantage of being secure at scale, as India’s Aadhaar has demonstrated. They are very weak on decentralization, though this can be addressed by holding individual Orbs (and Orb manufacturers) accountable. General-purpose biometrics can be adopted easily today, but their security is rapidly declining, and they may only keep working for another one to two years. Social-graph-based systems bootstrapped from a few hundred people socially close to the founding team are likely to face constant tradeoffs between completely missing large parts of the world and being vulnerable to attacks within communities they have no visibility into. A social-graph-based system bootstrapped from tens of millions of biometric ID holders, however, could actually work. Biometric bootstrapping may work better in the short term, while social-graph-based techniques may be more robust in the long term and could take on a larger share of the responsibility over time as their algorithms improve.

All of these teams are in a position to make many mistakes, and there are inevitable tensions between business interests and the needs of the broader community, so it's important to remain vigilant. As a community, we can and should push all participants' comfort zones on open-sourcing their technology, demand third-party audits and even third-party-written software, and other checks and balances. We also need more alternatives in each of the three categories.

At the same time, it’s also important to recognize the work that’s been done: many of the teams running these systems have demonstrated a willingness to take privacy more seriously than almost any identity system run by a government or major enterprise, and this is a success we should build on.

The problem of making a proof-of-personhood system that is effective and reliable, especially in the hands of people distant from the existing crypto community, seems quite challenging. I definitely don't envy those attempting the task, and it may take years to find a formula that works. In principle, the concept of proof of personhood seems very valuable, and while the various implementations have their risks, having no proof of personhood at all has risks too: a world without proof of personhood seems more likely to be a world dominated by centralized identity solutions, money, small closed communities, or some combination of all three. I look forward to seeing more progress on all types of proof of personhood, and hope to see the different approaches eventually come together into a coherent whole.
