By: SHAYON SENGUPTA, TUSHAR JAIN
Compiled by: TechFlow
In April 2022, we published our paper on Proof of Physical Work (PoPW) networks (now more colloquially referred to as “Decentralized Physical Infrastructure Networks”, or “DePIN” for short). In that paper, we wrote:
PoPW networks incentivize people to do verifiable work to build real-world infrastructure. Compared to traditional forms of capital formation used to build physical infrastructure, these permissionless and credibly neutral protocols:
Can build infrastructure much faster – in many cases 10-100x faster;
Can more closely meet local market needs;
Can be more cost-effective.
We were among the first to invest behind this thesis, and since then we have seen DePIN networks emerge across broad categories such as energy, logistics, mapping, and telecom. More recently, we have also seen more targeted categories form around purpose-built resource networks, specifically for digital commodities such as compute, storage, bandwidth, and aggregated consumer data. Behind each network lies a structural cost or performance arbitrage that is uniquely enabled by crypto-native capital formation.
Design patterns and best practices overlap heavily across DePIN networks. There are several key questions for founders and communities to consider when thinking about network design. Should the network hardware be consumer-oriented, or should the network be bootstrapped with professional installers? How many nodes are needed to bring the first paying customer online? 10? 1,000? Should the network be fully decentralized, or should it be managed through trusted intermediaries?
These decisions must be made early in a network's design, and they need to be right. They often determine whether a DePIN network succeeds or fails: small changes at the hardware, token, distribution, or demand-activation layer can have an outsized impact on the outcome.
At Multicoin, we remain optimistic about DePINs and expect many new, category-defining networks to launch in the coming years. This article explores the trade-offs we see DePIN founders and communities consider most often, in the hope of helping the next generation of DePIN founders and communities design networks more successfully. We propose three essential considerations for building a DePIN: hardware, threshold scale, and demand generation. In each area, we explore the main questions that drive key design decisions and outline their broad implications for token design.
Hardware Considerations
Most DePIN networks coordinate physical infrastructure—real hardware. However, this is not always the case. Some networks manage virtual resources such as compute, storage, or bandwidth (these networks are sometimes called "decentralized virtual infrastructure networks" or "DeVIN"). But for the purposes of this section, we'll assume that your network has real hardware, so you'll need to answer some key network design questions.
Who makes the hardware?
DePIN networks that make and distribute hardware have more control over the supply side of the network. They can also build direct relationships with contributors (which sometimes leads to stronger communities). However, over time, these companies run the risk of becoming a bottleneck or single point of failure in the manufacturing and distribution process, which can limit the network's ability to scale.
An alternative to making and distributing your own hardware is to open source your hardware specifications and ask the community to build it for you. This allows the founders and community to scale the supply side of the network while spreading supply chain risk. The problem with this approach, of course, is that incentivizing third-party manufacturers to build hardware for a new market is difficult and expensive. You also have to consider hardware quality and support: even if you succeed in building a strong ecosystem of hardware manufacturers, you still need to maintain consistent device quality and customer support across them.
Helium is an interesting case study. They first built their own hotspots to help launch the network, then quickly open sourced their hardware specifications and incentivized a strong ecosystem of third parties to build hardware for them. Despite having a large number of third-party hardware manufacturers, Helium suffered severe supply chain bottlenecks during a critical growth phase of the network, and some manufacturers provided poor support.
On the other hand, Hivemapper (a decentralized mapping network) chose to build and distribute its own camera hardware. This gives them full control over hardware production, allowing for rapid iteration on the camera's firmware and enabling passive video uploads more quickly, which in turn accelerates map coverage and the commercial value of the data. As a trade-off, having a single company control hardware production centralizes the supply chain, which can make it more fragile.
Summary - We have noticed that DePIN networks scale much faster when hardware specifications are open source and deployment is permissionless. Once a network is mature enough, it makes sense to open up hardware development to decentralize and scale it. In the early stages, however, it is wise to control the hardware to ensure quality and support.
Is your hardware active or passive?
Some DePIN networks are set-and-forget, while others require more ongoing user engagement.
For example, in the case of Helium, it takes about 10 minutes from unboxing to setting up a hotspot. After that, the device sits there quietly and provides passive coverage to the network without much extra work from the user. On the other hand, networks like Geobyte (decentralized mapping of indoor spaces using smartphones) require users to actively do something to create value (capturing video of indoor spaces with their phone's sensors). For supply-side contributors, time invested in an active network directly sacrifices time that could go to other income-generating activities or simply to living. Therefore, contributors to active networks must (in most cases) earn more, through tokens or network design, to justify their time and opportunity cost. It also means that active networks, by design, reach threshold scale (which we discuss in detail below) more slowly than passive networks.
The upside is that because active DePIN networks require some level of ongoing participation, they typically have contributors who are more engaged and knowledgeable about the network. The flip side is that active networks are limited by the number of people willing and/or able to contribute.
Conclusion - We have noticed that it is easier to scale a DePIN network when contributors pay a one-time cost (time or money) up front rather than an ongoing cost; passive networks are easier to set up and therefore easier to scale.
Being an active network is not a death sentence; it just requires creative thinking and incentive design. For example, active networks like Geobyte, Dronebase, FrodoBots, and Veris function more like “perpetual games” than traditional infrastructure networks.
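To make the opportunity-cost point above concrete, here is a minimal, hypothetical sketch in Python of a contributor breakeven calculation; the hourly rate, token price, and risk premium are invented for illustration and are not drawn from any live network.

```python
def required_hourly_token_earnings(opportunity_cost_per_hour: float,
                                   expected_token_price: float,
                                   risk_premium: float = 0.5) -> float:
    """Rough breakeven for an active-network contributor.

    Hypothetical model: the expected dollar value of tokens earned per hour
    must exceed the contributor's hourly opportunity cost plus a premium
    for token volatility. All inputs are illustrative assumptions.
    """
    required_dollar_value = opportunity_cost_per_hour * (1 + risk_premium)
    return required_dollar_value / expected_token_price

# A contributor who values their time at $25/hour, with an expected token
# price of $0.50 and a 50% risk premium, needs ~75 tokens per active hour.
print(required_hourly_token_earnings(25.0, 0.50))  # 75.0
```

Passive networks sidestep this math almost entirely, because after setup the ongoing opportunity cost is close to zero.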
How difficult is your hardware to install?
DePIN networks vary widely in how easy their hardware is to install. At one end of the spectrum, installation can be as simple as plugging a device into a wall socket; at the other end, it may require a professional installer.
On the easy end of the spectrum, gamers can connect their GPUs to the Render Network, a distributed compute network, simply by running a bash script. This is ideal because compute networks need tens of thousands of geographically distributed GPUs to properly offload datacenter workloads.
In the middle of the spectrum, a Hivemapper camera takes 15-30 minutes to install. Hundreds of camera-equipped vehicles are needed in a given geographic area to build a robust, real-time map, so installation must be a simple upfront time investment with easy operation afterwards.
At the difficult end of the spectrum, XNET is building a carrier-grade CBRS wireless network. Its radios need to be installed by professionals at local ISPs and require opt-in from commercial property owners. Despite this, the network is expanding, because only a handful of such arrangements are needed to cover a metro area and serve both carrier-offload and data-roaming use cases.
In summary - how quickly your network scales is directly affected by how easy the hardware is to install. If your network requires thousands of devices around the world, you need to make hardware installation as simple as possible. If your network can scale quickly with only a few nodes, you may choose to focus on attracting professional rather than retail contributors. Generally speaking, DePIN networks scale fastest when installation complexity is low enough that ordinary people can easily become contributors.
Impact of Token Design on Hardware
When you think about building a network, early supply-side contributors are one of the most important stakeholders you need to consider. Depending on the hardware decisions you make, the configuration of supply-side contributors can tilt towards regular people, professionals, or "semi-professionals" somewhere in between.
We observe that professional contributors tend to think about their immediate dollarized gains in the early days of the network and are more likely to cash out their tokens early. On the other hand, early, regular retail contributors are more likely to focus on long-term results and are more likely to want to accumulate as many tokens as possible without considering short-term price fluctuations.
For networks with a large base of professional contributors, alternatives to traditional spot token incentives, such as locked tokens or dollarized revenue-share agreements, are worth exploring.
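As one hypothetical illustration of such an alternative, the sketch below (in Python) models a locked reward with a cliff and linear vesting; the parameter names and values are purely illustrative assumptions, not the design of any particular network.

```python
from dataclasses import dataclass

@dataclass
class LockedReward:
    """A contributor reward that vests over time instead of paying spot tokens.

    Hypothetical parameters for illustration only; a real network would tune
    the cliff and vesting length to its own contributor profile.
    """
    total_tokens: float   # tokens earned for the contribution
    lockup_epochs: int    # epochs before anything unlocks (cliff)
    vesting_epochs: int   # epochs over which the balance vests linearly

    def claimable(self, epochs_elapsed: int) -> float:
        """Tokens the contributor can withdraw after `epochs_elapsed` epochs."""
        if epochs_elapsed < self.lockup_epochs:
            return 0.0
        vested_fraction = (epochs_elapsed - self.lockup_epochs) / self.vesting_epochs
        return self.total_tokens * min(vested_fraction, 1.0)

# A professional installer earning 1,000 tokens with a 6-epoch cliff and a
# 24-epoch linear vest can claim nothing early and 50% at epoch 18.
reward = LockedReward(total_tokens=1_000, lockup_epochs=6, vesting_epochs=24)
print(reward.claimable(18))  # 500.0
```

A structure like this blunts the incentive for professional contributors to cash out immediately, at the cost of making the offer less attractive to contributors who need near-term dollar income.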
Regardless of the contributor set, as the network matures its supply side must cover capital investment and operating costs in dollar terms. Striking a balance between incentivizing early adopters during the launch phase and reserving token rewards for contributors in later stages of network maturity is a tricky but important endeavor.
Threshold scale considerations
We use the term “threshold scale” to describe the point at which the supply side of a network becomes commercially viable for the demand side. DePIN networks are inherently disruptive because tokens can be used to reward early contributors for deploying infrastructure until the network reaches threshold scale.
Some networks can serve demand from day one with one or a few nodes (e.g., storage and compute marketplaces), while others need to reach a certain scale before they can serve demand (e.g., wireless, logistics, and delivery networks). As demand grows by orders of magnitude, so does the minimum set of nodes required to serve that incremental demand.
Is location important?
Some DePIN networks do not benefit substantially from physical distribution, while others absolutely require it. In most cases, if a network coordinates physical resources, it is location sensitive, so reasoning about minimum viable coverage becomes a key factor in deciding when to begin demand generation.
Some networks are very location sensitive, and some are location agnostic. For example, energy marketplaces like Anode and mapping networks like Hivemapper are very location sensitive. Wireless networks like Helium IOT are less location sensitive because hotspots have long range. Bandwidth marketplaces like Filecoin Saturn, Fleek, or Wynd are even less location sensitive because they only require broad geographic coverage rather than nodes in specific locations.
On the other hand, compute marketplaces like Render Network and storage marketplaces like Filecoin are not location sensitive. In these networks, since entry is not geographically restricted, it is easier to bootstrap supply-side contributor resources to threshold scale.
Summary - We note that if a network is location sensitive, supply-side contributors should be incentivized to contribute in targeted areas to reach threshold scale, with the goal of unlocking a serviceable market. Once that is achieved, the network should take a “land and expand” approach and repeat the strategy in other areas.
Does network density matter?
Building on the concept of minimum viable coverage above, some DePIN networks have a notion of “network density”, usually defined as the number of hardware units (or nodes), or the total aggregate units of a particular resource, in a given area.
Helium Mobile is a web3 mobile operator that defines its network density as the number of mobile hotspots per neighborhood. Network density is very important to Helium Mobile because the network requires a high density of mobile hotspots to provide continuous coverage in an area.
Teleport is a permissionless ridesharing protocol that defines density as the number of active drivers within a 5-10 mile radius of an urban hotspot. Network density matters to Teleport because no one wants to wait more than 10 minutes for a ride. However, unlike Helium Mobile, Teleport's supply (drivers) can move to pick up riders, so Teleport does not need as high a network density as Helium Mobile.
Hivemapper defines network density as the number of mappers in a given city, because the network needs enough mappers in a city to provide constantly updated map data. But Hivemapper doesn't need the same density level as Teleport, because map refreshes can tolerate longer delays than ride hailing.
A simple way to think about density in the context of threshold scale is to ask: at what number of contributors in a given geographic area does the network make its first sale or attract its first demand-side customer? What about the tenth? The hundredth?
For example, XNET, a decentralized licensed mobile operator, might only need 100 large, professionally installed radios to serve a metropolitan area. However, Helium Mobile’s radios are smaller and require many more to cover the same metropolitan area — a Helium Mobile network with a few hundred small cells is of low value, but a network with a hundred thousand cells is of very high value. Helium Mobile’s threshold scale is higher than XNET’s due to its hardware design decisions.
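To make this gap concrete, here is a rough back-of-the-envelope sketch in Python of how per-node coverage radius drives the required node count; every number is hypothetical and none of them are actual XNET or Helium Mobile figures.

```python
import math

def nodes_for_coverage(metro_area_km2: float,
                       node_radius_km: float,
                       overlap_factor: float = 1.5) -> int:
    """Rough count of nodes needed to blanket an area.

    Purely illustrative: assumes circular coverage per node and a fixed
    overlap factor for real-world obstructions; it ignores terrain,
    spectrum, and backhaul, which dominate real deployments.
    """
    per_node_km2 = math.pi * node_radius_km ** 2
    return math.ceil(metro_area_km2 * overlap_factor / per_node_km2)

metro = 1_500  # hypothetical metro footprint in km^2
print(nodes_for_coverage(metro, node_radius_km=3.0))   # ~80 large radios
print(nodes_for_coverage(metro, node_radius_km=0.15))  # ~32,000 small cells
```

The point of the sketch is only that coverage-per-node scales with the square of range, so a hardware decision about radio size moves the threshold scale by orders of magnitude.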
Summary - We note that networks requiring higher density need more contributors to reach threshold scale. Conversely, networks with lower density requirements can lean on more sophisticated hardware and/or specialized contributors.
Impact of Token Design
We observe that networks with higher threshold scales require more token incentives to build out the supply side, due to some combination of location sensitivity and network density requirements. In contrast, networks with relatively lower threshold scales have more flexibility to set token incentives conservatively and can allocate them at later threshold-scale milestones.
Broadly speaking, there are two common strategies for token distribution: time-based and utilization-based. Time-based strategies work best for networks with a higher threshold scale, while utilization-based strategies work best for networks with a relatively lower threshold scale. Helium uses a time-based token issuance schedule, while Hivemapper uses an issuance schedule based on network utilization.
Time-based strategies issue a fixed number of tokens to contributors in a given period, in proportion to their contribution to the network. These strategies are more appropriate when time to market matters for infrastructure buildout and for reaching threshold scale faster than competitors. If the network is not the first mover in a winner-take-all market, a time-based strategy should be considered. (Note that this approach generally requires the network to be able to distribute hardware reliably through a resilient supply chain.)
Utilization-based token distribution is a more flexible mechanism that ties issuance to network growth. Reward mechanisms include offering more tokens for building out the network in a specific location, at a specific time, or for contributing a specific type of resource. The trade-off is that while this preserves the network's option to direct tokens to the most valuable participants, it creates revenue uncertainty for the supply side, potentially leading to lower conversion rates and higher churn.
For example, Hivemapper has mapped 10% of the US with less than 2% of total token issuance. As a result, they can now very deliberately structure bounty challenges to reach threshold scale in specific areas, continue to expand the map, and improve density in strategic regions.
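As a loose illustration of the two issuance styles, here is a toy sketch in Python; the emission amounts, halving interval, and bounty multiplier are invented for illustration and do not reflect Helium's or Hivemapper's actual schedules.

```python
def time_based_emission(epoch: int,
                        initial_emission: float = 10_000,
                        halving_every: int = 52) -> float:
    """Fixed schedule: a set number of tokens per epoch, halving periodically.
    Contributors split each epoch's emission pro rata to their contribution."""
    return initial_emission / (2 ** (epoch // halving_every))

def utilization_based_emission(epoch_demand_units: float,
                               tokens_per_unit: float = 5.0,
                               location_multiplier: float = 1.0) -> float:
    """Flexible schedule: emission scales with measured network usage, and can
    be boosted for targeted regions or resource types via a multiplier."""
    return epoch_demand_units * tokens_per_unit * location_multiplier

# Fixed schedule: the same emission regardless of usage, halving after a year.
print(time_based_emission(epoch=10))  # 10000.0
print(time_based_emission(epoch=60))  # 5000.0

# Utilization schedule: emission tracks demand, with a 2x bounty for an
# under-covered region.
print(utilization_based_emission(epoch_demand_units=1_200,
                                 location_multiplier=2.0))  # 12000.0
```

The fixed schedule gives supply-side contributors predictable revenue during the land grab; the utilization schedule conserves tokens until demand shows up, at the cost of that predictability.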
Demand generation considerations
When DePIN networks reach threshold scale, they can begin selling in earnest to the demand side of the network. This raises the question: who should be doing the selling?
Ultimately, a DePIN network is only valuable if customers can easily access the resources it aggregates. Consumers and businesses generally do not want to buy directly from a permissionless network; they prefer to buy from traditional companies. This creates an opportunity for value-added resellers (VARs) to package network resources into products and services that customers understand and are willing to buy.
The network's creator may also choose to operate a network VAR itself. Such a company builds on top of the network and owns the customer relationship and everything that goes with it: product development, sales, customer acquisition and retention, ongoing support, legal agreements for services, and so on. The advantage of operating a network VAR is that you capture the entire spread between the price customers pay for the product and the cost of the raw resources supplied by the network. This approach makes the network full stack and allows for tighter product iteration because of ongoing feedback from demand-side customers.
On the other hand, you don’t have to become a value-added reseller or build a business on top of the network. You can outsource demand-side relationships to the network ecosystem. This approach allows you to focus on core protocol development, but reducing touchpoints with customers may hinder product feedback and iteration.
Should you become a network value-added reseller or outsource?
Different DePIN teams have approached this question from many angles.
For example, Hivemapper Inc. is currently the primary value-added reseller of the Hivemapper Network. They build their business on top of the network’s mapping data and provide enterprise-grade logistics and mapping data through commercial APIs.
In Helium’s case, the Helium Mobile Network is served by a single value-added reseller, Helium Mobile, which was spun out of Helium Systems Inc., while Helium’s IoT Network is commercialized by a range of value-added resellers, such as Senet, which helps customers deploy hotspots, purchase sensors and coverage, verify packet delivery, and more.
Unlike Hivemapper or Helium, Render Network outsources the commercialization of network resources to public compute customers, who in turn resell those resources to institutions and artists for rendering and machine learning jobs. Render Network itself does not provide proofs of computational integrity, privacy guarantees, or the various orchestration layers that handle workloads for specific packages or libraries. These are all provided by third-party customers.
Summary — We note that adding services or trust guarantees can stimulate demand. The network can provide these services on its own, but investing in them too early — before reaching a critical mass — can lead to wasted time, effort, and money. At scale, these services are best handled by third parties who tailor their offerings to the customers they serve.
We have also observed that as networks begin to scale and commercialize their resources, they often progress through the following phases:
Phase 1: At or shortly after the first threshold scale milestone, the core team manages all aspects of the demand-side relationship. This ensures that early customers receive the highest quality product possible.
Phase 2: Beyond the first set of threshold scale milestones, the network can begin to build an ecosystem of third parties that resell the resources it aggregates. These third parties enter the network and mediate the relationship between demand and supply.
Phase 3: In a steady state, many participants package and sell resources to a wide range of customers. At this point the network becomes purely a resource layer: a platform on which other service companies build and serve customers directly.
Impact of Token Design
If your network relies on specific parties to scale demand generation, it may be helpful to dedicate protocol incentives to those participants. Token allocations for third-party demand generation are often milestone-based, rewarding those parties when the network and the third party achieve a shared goal. Token issuance to partners should always be structured so that the value they bring to the network matches the tokens they ultimately receive.
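Here is a minimal, hypothetical sketch in Python of what such a milestone-based allocation could look like; the milestone thresholds, grant sizes, and the choice of "revenue routed through the network" as the shared goal are illustrative assumptions, not a prescribed mechanism.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PartnerMilestones:
    """Milestone-gated token grants for a demand-side partner.

    Hypothetical structure: each milestone maps a shared goal (here,
    cumulative revenue routed through the network) to a token grant that
    unlocks only once the goal is met.
    """
    # (revenue threshold in dollars, token grant)
    schedule: List[Tuple[float, float]] = field(default_factory=list)

    def unlocked(self, revenue_to_date: float) -> float:
        """Total tokens the partner has earned so far."""
        return sum(grant for threshold, grant in self.schedule
                   if revenue_to_date >= threshold)

# Illustrative deal: grants unlock at $100k, $500k, and $2M of routed revenue.
deal = PartnerMilestones(schedule=[(100_000, 50_000),
                                   (500_000, 150_000),
                                   (2_000_000, 300_000)])
print(deal.unlocked(revenue_to_date=600_000))  # 200000.0 tokens unlocked
```

Gating grants on verifiable, shared outcomes keeps the partner's payoff aligned with the value they actually deliver to the network.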
Looking ahead
This article explores the most common questions and considerations we discuss with founders when exploring new DePIN networks.
We expect new, category-defining DePIN networks to emerge in the coming years, and believe that core attributes such as token distribution, hardware, threshold scale, and demand generation are critical and should be fully explored to effectively build supply-side resources and serve demand-side customers. These networks are essentially marketplaces, and each trade-off has a ripple effect that either strengthens their inherent network effects or creates gaps for new entrants to compete.
Ultimately, we see DePIN as a way to reduce the cost of building valuable infrastructure networks through crypto-native capital formation. We believe there is a broad design space for networks that make explicit tradeoffs and provide services in subsets of large-scale markets such as telecommunications, energy, data aggregation, carbon reduction, physical storage, logistics, and delivery.