Today, a new concept quietly emerged in the Ethereum research forum: Proof of Validator.
This protocol mechanism allows network nodes to prove that they are Ethereum validators without revealing their specific identities.

What does this have to do with us?
Generally speaking, the market tends to latch onto the surface narratives spun out of Ethereum's technical innovations, and rarely digs into the technology itself in advance. Take the Shanghai upgrade, the Merge from PoW to PoS, and the scaling roadmap: what the market remembers are the narratives of LSD, LSDFi, and restaking.
But don’t forget that performance and security are the top priorities of Ethereum. The former determines the upper limit, while the latter determines the bottom line.
It is clear that, on the one hand, Ethereum has been actively pushing various scaling plans to improve performance; on the other hand, along the road to scaling it must not only build up its own internal strength, but also guard against external attacks.
For example, if validator nodes are attacked and data becomes unavailable, every narrative and scaling plan built on Ethereum's staking logic could be affected. Yet this risk hides in the background: end users and speculators can hardly detect it, and sometimes they don't even care.
The Proof of Validator discussed in this article may be a key piece of the security puzzle on Ethereum's road to scaling.
Since scaling is inevitable, how to reduce the risks it introduces is an unavoidable security question, and one that concerns everyone in this industry.
It is therefore worth understanding the full picture of the newly proposed Proof of Validator. However, the original post in the research forum is fragmented and fairly hardcore, and touches on many scaling plans and concepts, so DeepChao Research Institute has consolidated the post, sorted out the relevant information, and interpreted the background, necessity, and possible impact of Proof of Validator.
Data Availability Sampling: A Breakthrough for Scaling
Don't be in a hurry: before formally introducing Proof of Validator, it is worth understanding the current logic of Ethereum scaling and the risks it may carry.
The Ethereum community is actively pursuing multiple scaling plans, among which Data Availability Sampling (DAS) is considered the most critical technology.
The principle is to divide the complete block data into many “samples”. A node in the network only needs to fetch a few samples relevant to itself to verify the availability of the whole block.
This greatly reduces the storage and computation required of each node. To use a more familiar analogy, it works like a sampling survey: by interviewing a subset of people, we can infer the situation of the entire population.

Specifically, DAS works roughly as follows:
Block producers split the block data into multiple samples.
Each network node fetches only the few samples it is responsible for, rather than the complete block data.
By randomly sampling different pieces, nodes can collectively verify whether the complete block data is available.

Through this sampling, even though each node processes only a small amount of data, the data availability of the whole chain can be verified collectively. This allows the block size to grow substantially and enables rapid scaling.
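To make the sampling intuition concrete, here is a minimal toy sketch in Python. It is not real DAS (there is no erasure coding and no KZG commitment), and the sample counts are made-up parameters; it only shows how nodes that each check a handful of random samples will, collectively, almost certainly notice when part of a block is withheld.

```python
import random

# Toy illustration of the DAS sampling intuition (not real DAS:
# no erasure coding, no KZG commitments, parameters are made up).

SAMPLES_PER_BLOCK = 256   # hypothetical number of samples a block is split into
QUERIES_PER_NODE = 8      # hypothetical number of random samples each node checks

def publish_block(withhold_fraction: float) -> set:
    """Return the sample indices the block producer actually makes available."""
    all_samples = set(range(SAMPLES_PER_BLOCK))
    withheld = random.sample(sorted(all_samples), int(SAMPLES_PER_BLOCK * withhold_fraction))
    return all_samples - set(withheld)

def node_accepts(available: set) -> bool:
    """A node queries a few random samples and accepts only if all of them respond."""
    queries = random.sample(range(SAMPLES_PER_BLOCK), QUERIES_PER_NODE)
    return all(q in available for q in queries)

# A block with 25% of its samples withheld: most nodes already reject it on
# their own, and across many nodes the withholding is detected almost surely.
available = publish_block(withhold_fraction=0.25)
accepting = sum(node_accepts(available) for _ in range(1000))
print(f"{accepting}/1000 sampling nodes accepted the partially withheld block")
```

With 8 queries and a quarter of the samples missing, a single node accepts the bad block with probability of roughly 0.75^8, about 10%, so with many independent samplers the withholding is detected essentially every time.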
However, this sampling scheme runs into a key question: where is this mass of samples stored? Answering it requires a full decentralized network underneath.
Distributed Hash Table: A Home for the Samples
This gives the distributed hash table (DHT) an opportunity to show its prowess.
A DHT can be seen as a huge distributed database: a hash function maps data into an address space, and different nodes are responsible for storing and serving the data in different address ranges. It can be used to quickly store and locate samples across a large number of nodes.
Specifically, after DAS divides the block data into multiple samples, it needs to distribute these samples to different nodes in the network for storage. DHT can provide a decentralized method to store and retrieve these samples. The basic idea is:
Using a consistent hash function, samples are mapped into a huge address space.
Each node in the network is responsible for storing and serving the samples within a certain address range.
When a sample is needed, its address is computed with the same hash, and the node responsible for that address range is located in the network to retrieve the sample.

For example, suppose each sample is hashed to an address according to some rule, node A is responsible for addresses 0-1000, and node B for addresses 1001-2000.
A sample that hashes to address 599 is then stored on node A. When the sample is needed, the same hash yields address 599, the network locates node A as the node responsible for that address, and the sample is fetched from it.
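Here is a minimal Python sketch of that lookup, using the same two-node layout as the example above. It is only an illustration of the principle: real DHTs such as Kademlia use a 256-bit key space and distance-based routing rather than two fixed address ranges.

```python
import hashlib

# Toy DHT lookup: hash a sample to an address, then find the node whose
# address range contains it. The two-node layout mirrors the example above;
# real DHTs (e.g. Kademlia) use a far larger key space and routing tables.

ADDRESS_SPACE = 2001  # addresses 0..2000, as in the example
NODES = {
    "node_A": range(0, 1001),     # node A serves addresses 0-1000
    "node_B": range(1001, 2001),  # node B serves addresses 1001-2000
}

def sample_address(sample: bytes) -> int:
    """Map a sample to an address in the address space with a hash function."""
    digest = hashlib.sha256(sample).digest()
    return int.from_bytes(digest, "big") % ADDRESS_SPACE

def responsible_node(address: int) -> str:
    """Find the node whose address range contains the given address."""
    for node, addr_range in NODES.items():
        if address in addr_range:
            return node
    raise ValueError("address outside the address space")

sample = b"block-123-sample"
addr = sample_address(sample)
print(f"sample maps to address {addr}, stored on and served by {responsible_node(addr)}")
```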
This method breaks the limitations of centralized storage and greatly improves fault tolerance and scalability. This is exactly the network infrastructure required for DAS sample storage.
Compared with centralized storage and retrieval, a DHT improves fault tolerance, avoids single points of failure, and enhances network scalability. In addition, a DHT can help resist attacks such as the "sample hiding" attack mentioned in the DAS context.
DHT pain point: Sybil attack
However, the DHT also has a fatal weakness: the threat of Sybil attacks. An attacker can create a large number of fake nodes in the network, "flooding" the honest nodes around them.
To use an analogy, an honest vendor is surrounded by rows of counterfeit goods, making it difficult for users to find the real thing. In this way, the attacker can control the DHT network and make the sample unavailable.

For example, to obtain the sample at address 1000, you must find the node responsible for that address. But once that node is surrounded by thousands of fake nodes created by the attacker, lookup requests keep being routed to the fakes and never reach the node actually responsible. The sample cannot be retrieved, and both storage and verification fail.
To solve this problem, a high-trust network layer needs to be built on top of the DHT, one that only validator nodes can join. The trouble is that the DHT network itself has no way to tell whether a node is a validator.
This seriously hinders DAS and, with it, Ethereum's scaling. Is there a way to resist this threat and keep the network trustworthy?
Proof of Validator: A ZK Solution to Safeguard Scaling
Now, let’s get back to the main point of this article: Proof of Validator.
Today in the Ethereum research forum, George Kadianakis, Mary Maller, Andrija Novakovic, and Suphanat Chunhapanya jointly proposed this scheme.
The overall idea: if the DHT design from the previous section can be restricted so that only genuine validators may join, then a malicious actor who wants to mount a Sybil attack must also stake a large amount of ETH, which dramatically raises the economic cost of misbehaving.
To put it in more familiar terms: I want to confirm that you are a legitimate participant, and keep the bad actors out, without ever learning your identity.

In this kind of proof scenario with limited information, zero-knowledge proof can obviously come in handy.
Therefore, Proof of Validator (PoV) can be used to build a high-trust DHT network made up only of validator nodes, effectively resisting Sybil attacks.
The basic idea is that each validator already has a public key registered on chain; a node then uses a zero-knowledge proof to show that it knows the private key behind one of those registered keys, which amounts to proving "I am a validator" without revealing which one.
In addition, to protect validators from DoS (denial-of-service) attacks, PoV also aims to hide validators' identities at the network layer: the protocol does not want an attacker to be able to tell which DHT node corresponds to which validator.
So how do we do it? The original post used a lot of mathematical formulas and derivations, so I won’t go into detail here. We’ll give a simplified version:

In the concrete implementation, either a Merkle tree or a lookup table is used. With the Merkle tree approach, for example, the node proves that its registered public key is a leaf of the Merkle tree built over the validator public-key list, and then proves that the network communication key it uses is correctly derived from that registered key. The whole process is wrapped in a zero-knowledge proof, so the actual identity is never disclosed.
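As a rough illustration of the statement being proven, here is a plain, non-zero-knowledge Python sketch: it builds a Merkle tree over a placeholder list of registered keys, verifies a membership path, and checks a stand-in key derivation. In the actual proposal these checks run inside a ZK circuit over real validator keys and a real derivation scheme, so none of the values below would ever be revealed.

```python
import hashlib

# Plain (non-ZK) sketch of the two facts a Proof of Validator attests to:
#   1. "my registered public key is a leaf of the validator-list Merkle tree"
#   2. "my network communication key is derived from that registered key"
# In the real scheme these checks run inside a zero-knowledge proof, so the
# registered key and the Merkle path stay hidden; here everything is visible.

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves):
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:                         # duplicate last node on odd layers
            layer.append(layer[-1])
        layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_path(leaves, index):
    """Collect the sibling hashes from the chosen leaf up to the root."""
    layer = [h(leaf) for leaf in leaves]
    path = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        path.append((layer[sibling], index % 2 == 0))
        layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return path

def verify_membership(leaf, path, root):
    node = h(leaf)
    for sibling, leaf_is_left in path:
        node = h(node, sibling) if leaf_is_left else h(sibling, node)
    return node == root

def derive_network_key(registered_key: bytes) -> bytes:
    """Stand-in derivation; the real proposal derives the DHT identity differently."""
    return h(b"network-key", registered_key)

# Placeholder registered validator keys and one validator proving the two facts.
validator_keys = [f"validator-pubkey-{i}".encode() for i in range(8)]
root = merkle_root(validator_keys)

my_index = 3
my_key = validator_keys[my_index]
network_key = derive_network_key(my_key)

assert verify_membership(my_key, merkle_path(validator_keys, my_index), root)  # fact 1
assert network_key == derive_network_key(my_key)                               # fact 2
print("membership and key-derivation checks pass (in the clear, not in ZK)")
```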
Setting those technical details aside, the end effect of PoV is this:
Only nodes that have proven validator status can join the DHT network, which greatly strengthens its security, effectively resists Sybil attacks, and prevents samples from being deliberately hidden or tampered with. PoV thus provides a reliable underlying network for DAS, indirectly helping Ethereum scale quickly.
That said, PoV is still at the theoretical research stage, and whether it will ultimately be deployed remains uncertain.
Encouragingly, the researchers have already run small-scale experiments, and the results show that PoV is quite efficient both for the prover generating the ZK proof and for the verifier checking it. Notably, their test machine was just a laptop with a five-year-old Intel i7 processor.

In any case, PoV represents an important step toward higher scalability for blockchains. As a key component of Ethereum's scaling roadmap, it deserves continued attention from the whole industry.
PoV original post address: link
