This content is divided into three parts:

The first is a common mindset in token design.

The second is a classification of tokens, talking more specifically about what tokens are and how we think about developing and enhancing their capabilities.

Finally, there is the technology tree theory, which considers how the state of technology shapes which designs can succeed.

1. Mindset

First of all, tokens exist to serve the protocol. They are just a tool, one part of the design process; they should not be the goal. If you want to build something decentralized, a token can be part of it, because tokens work well for giving people ownership of the protocol and for keeping participants aligned.

Three stages of design

In my work with portfolio companies, I’ve identified three phases of a successful design process.

Phase 1: Define goals. A goal is a concise description of a desired protocol outcome, and it should be clear whether a given design achieves it, so there is a sharp distinction between success and failure. If it is not clear what our goals are, we need to go back to the beginning and forget about tokens for now. Ideally, goals are measurable, even if we are not yet sure how to measure success.

Phase 2: Introduce constraints. Generally speaking, there are two kinds: endogenous and exogenous. Endogenous constraints are ones we choose in order to simplify the design process, because some trade-offs need to be made, or because they are trade-offs in themselves; for example, we can choose to rule out features we find interesting. Endogenous constraints can come from many places, but they are usually set by the designer. Exogenous constraints are imposed on you by nature, the state of technology, regulation, and so on. I will come back to these later.

Phase 3: Design mechanisms. Once we have a goal and constraints, we can think explicitly about mechanisms that could satisfy the goal. Whenever we consider a mechanism, we should be clear about whether it violates the constraints and whether it moves us closer to the goal. A protocol, then, is a collection of mechanisms that all push toward a specific goal under a set of constraints.

Common pitfalls

(1) Over-emphasizing the token. I've touched on this already: if you are always thinking about rewards or token distribution instead of how to keep the participants in your system aligned, you are probably not thinking about the protocol; you are thinking about the token. The token is not the protocol, and the token should not be your goal. It should just be a tool.

How do you get out of this trap? Ask yourself: how does the system work without the token? If the system fails completely when the token is removed, you may have overemphasized the token's role. If only several key parts fail, the situation is better: your token is genuinely important to the overall balance, but the system remains coherent without it. Either way, you should think back to the goal of the system.

(2) An unlimited design space. You have too many ideas and too many possibilities; you don't even know where to start because there is so much you could do. This usually means the goal is not clear enough, so you need to refine the goal. It may also mean you lack understanding of, or have not yet accepted, the limitations the outside world imposes on you.

If you bring these constraints in, you'll find the design space shrinks and becomes much clearer. Two questions help constrain it. First: what is the powerful concept you want to build around? It could be a deep idea, an advantage, a shift in the zeitgeist. How can you maximize it and focus on it, rather than thinking about the whole system first? Second: what is the biggest weakness of the design? What keeps you up at night, the point you worry might not work, the key vulnerability? And what constraints can you accept to shore it up? These two questions can greatly narrow the design space.

(3) Letting the community do the heavy lifting. When you hit challenges designing parts of your system, it is tempting to push them all onto the community, or to expect invisible forces to fill the gaps. Expecting other people to find and solve your problems is very risky. Permissionless systems are popular and have produced amazing innovations, but you cannot predict what the community will do, and you should not expect it to solve the most obvious problems in your system.

There are a few key questions to ask yourself: what do we really expect from the community, and what are we giving them? Not "are we giving them enough tokens?" but: what power are we giving them? What capabilities? What ownership? Is that power enough to balance the responsibility we are handing them?

If you really expect someone else to be ambitious enough to add interesting extensions or fix components of the system, first ask yourself: would you build it there? If you wouldn't, because there isn't enough upside, enough power, or enough flexibility, then don't expect others to do it.

2. Token Taxonomy

This is not a complete list; I have been discussing it with team members and I am sure we will revise it soon. The point is just to enumerate the capabilities we have seen tokens demonstrate so far.

Tokens are a tool within protocols; more abstractly, they are a data structure. So how do we see this data structure being used across different protocols? Its uses can be divided into five very general categories: payment, ownership, voting, staking, and metadata. I expect each category to accumulate more examples over time, but this grouping feels intuitive to me, at least.

Payment

First, a token can serve as an internal currency for a community or project. It differs from a traditional currency like the US dollar because it exists within a specific community, and that community controls it. They can apply monetary policy to the internal currency: for example, the currency should be stable, or pegged to the value of some other asset, and perhaps it is minted or burned according to specific, community-wide goals.

Second, and probably the most common and easiest-to-understand use of cryptocurrency for payment, is paying for network resources; Ethereum and Bitcoin fall into this category. You pay for computation, storage, or some other resource of the crypto network. Mechanisms such as EIP-1559, staking, and liquidity provision determine how tokens price the different resources within the system, especially computational resources.
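EIP-1559 is a concrete example of a token pricing a network resource: the base fee rises when blocks are fuller than the target and falls when they are emptier. A simplified sketch of that update rule (the real specification adds integer-arithmetic details and edge cases):

```python
def next_base_fee(base_fee: int, gas_used: int, gas_target: int,
                  max_change_denominator: int = 8) -> int:
    """Simplified EIP-1559 base-fee update: move the fee toward equilibrium
    in proportion to how far the last block deviated from the gas target."""
    delta = base_fee * (gas_used - gas_target) // (gas_target * max_change_denominator)
    return base_fee + delta

# A completely full block (2x target) raises the fee by 12.5%;
# an empty block lowers it by 12.5%.
```

The denominator of 8 bounds how fast the price of block space can move, which is the kind of mechanism-level constraint discussed in the first section.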

The third type of payment token is a game currency. A game's resources, or certain protocol resources, need to be stably priced, because people want to use the system; if those resources are stable, the token's price also needs to be relatively stable. Whether the supply is fixed matters less, because the token is only used to implement a specific part of the application.

So where do stablecoins fit in? A stablecoin can, of course, be used for payment in all three of these ways. But what makes a stablecoin a stablecoin is the mechanism behind it that stabilizes it, so stablecoins generally fall into the ownership category.

Ownership

There are generally two types of ownership claims: on-chain (deposits) and off-chain (ownership of real-world assets).

The first type, on-chain deposit tokens, represent ownership of other tokens. One example is the Uniswap LP token, which is an ERC-20 in V2 and an NFT in V3. The stablecoin DAI from the Maker protocol is also an on-chain deposit, because vault holders use it to reclaim their underlying collateral. So a deposit token is one that can be redeemed for other tokens in an on-chain environment.
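The core of a deposit token is pro-rata share accounting over a pool of underlying tokens. A minimal sketch in the spirit of Uniswap V2 LP shares (illustrative class, not any protocol's actual contract):

```python
class DepositToken:
    """Sketch of a deposit token: shares are a pro-rata on-chain claim
    on a pool of underlying tokens held by the contract."""

    def __init__(self):
        self.pool = 0          # underlying tokens held by the contract
        self.total_shares = 0
        self.shares = {}       # holder -> share balance

    def deposit(self, who: str, amount: int) -> int:
        # First depositor sets the scale; later deposits mint proportionally.
        minted = amount if self.total_shares == 0 else amount * self.total_shares // self.pool
        self.pool += amount
        self.total_shares += minted
        self.shares[who] = self.shares.get(who, 0) + minted
        return minted

    def redeem(self, who: str, share_amount: int) -> int:
        # Burn shares and pay out the proportional slice of the pool.
        owed = share_amount * self.pool // self.total_shares
        self.shares[who] -= share_amount
        self.total_shares -= share_amount
        self.pool -= owed
        return owed
```

Because redemption pays a fraction of the current pool, anything that grows the pool (trading fees, for instance) accrues to every outstanding share.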

The second type of token represents ownership of some off-chain asset: a real-world asset token, a real-estate token, and so on. A more modern example is what are now called redeemables, where a token can be redeemed for a physical object. For example, you can exchange an NFT for a piece of art, where the NFT represents ownership of the physical work. There are even some fun variations if you want: a physical object can control an NFT, with ownership of the NFT tracked through some digital mechanism such as an embedded chip.

Voting

Voting can be used to fund projects, allocate resources, make payments or transfers as a group, and perform software upgrades. It can also be used as a measure of social consensus, such as selecting a leader to decide on the future plans of a project.

Staking

Tokens can be designed to earn rewards through smart contracts. There is no legal agreement here, but the operation of the mechanism means the token benefits from some kind of on-chain activity. An example is Chainlink: if Chainlink works well, the many LINK token holders do their jobs, and the system operates properly, then they earn rewards. That is how the smart contract is written; it is how the protocol is designed to reward good service by the community.
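A reward of this kind is typically split pro-rata over stakers by the contract itself, with no legal agreement involved. A minimal sketch (illustrative names, not any protocol's actual API):

```python
def distribute_rewards(stakes: dict, reward: int) -> dict:
    """Split a protocol reward pro-rata across stakers, the way a staking
    contract might pay token holders for keeping the service running.
    Integer division mirrors on-chain arithmetic (dust is left undistributed)."""
    total = sum(stakes.values())
    return {who: reward * amount // total for who, amount in stakes.items()}
```

The point is that the entitlement comes purely from the mechanism: whoever holds and stakes the token when rewards accrue gets paid, with no off-chain claim required.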

You can also create tokens whose entitlement to a return comes from a legal agreement. For example, a token can represent equity or a revenue share in a company, though this of course comes with various legal requirements and restrictions.

Tokens are also used to underwrite risk in exchange for a return. Maker uses this principle: if the Maker protocol takes losses, more MKR tokens are minted, which dilutes the value held by MKR holders. By holding MKR, holders take on some of the protocol's risk, which is part of what drives them to advance the community. If they want their investment to appreciate, they need to support the development of the system.
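The dilution backstop can be sketched in a few lines. This is a simplification of the Maker-style mechanism (the real system auctions the minted tokens; the function below just shows how a loss shrinks every existing holder's share):

```python
def dilute(total_supply: float, holder_balance: float,
           loss: float, token_price: float) -> tuple:
    """Sketch of a dilution backstop: when the protocol takes a loss,
    new governance tokens are minted and sold to cover it, so every
    existing holder's fractional ownership shrinks."""
    minted = loss / token_price          # tokens sold to cover the deficit
    new_supply = total_supply + minted
    share_before = holder_balance / total_supply
    share_after = holder_balance / new_supply
    return share_before, share_after
```

A holder with 10% of the supply before a loss equal to half the market cap of the outstanding tokens ends up with a meaningfully smaller share, which is exactly the risk they are paid (via the protocol's upside) to underwrite.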

Metadata

First, tokens can represent membership, which determines whether you can access a specific space, belong to a specific community, or join certain groups. The protocol, or tools written by third parties, can take advantage of this membership attribute in any way, permissionlessly. For example, an NFT community can decide that only token holders may join, or provide specific features to holders of the token. Membership is an interesting type of metadata provided by a token.
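Token gating is simple precisely because balances are public: any third party can read them and grant access without asking permission. A sketch of the check (hypothetical helper; on-chain this would be a `balanceOf` call):

```python
def can_access(balances: dict, member: str, min_balance: int = 1) -> bool:
    """Token-gated membership check: grant access if the address holds
    at least `min_balance` of the membership token. Because balances are
    publicly readable, anyone can build a gate like this permissionlessly."""
    return balances.get(member, 0) >= min_balance
```

The same one-line predicate can gate a chat server, a website feature, or an on-chain function, which is why membership composes so well across tools the original protocol never anticipated.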

Second, tokens can represent reputation. Some people are debating whether reputation should be transferable; I personally think it probably shouldn't be. It can be fungible in some cases and non-fungible in others: if it records your achievements, it is likely non-fungible; if it refers to a source of information, or credit, or different kinds of credit scoring systems, it may be fungible. Either way, it is data carried by the token, so it is a kind of metadata.

Third, tokens can represent identity or naming. ENS is an example of this: ENS names can point to addresses and can be updated, playing a role analogous to, though different from, the DNS system.

Off-chain data can also be a kind of metadata. One example is off-chain KYC or some kind of verifiable credential. Another good example is a diploma or academic qualification: someone issues you this certificate, and it is then publicly visible, traceable, and authentic.

We haven't seen many cases yet where permissions and capabilities are represented on-chain, for example, an entity explicitly granting you permission to call a function, change a piece of code, or transfer something on-chain.

You can even use tokens as interfaces, and we have seen examples of this. In the tokenURI, you can put not only SVG data but an entire HTML page, and even a little JavaScript. You can put an interface in the NFT and control the interface, or embed the interface into the objects people own and transfer.
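The token-as-interface idea rests on encoding a whole page inside the metadata itself. A sketch of how a tokenURI might embed an HTML interface as nested data: URIs (illustrative function; on-chain this would be assembled in the contract's tokenURI method):

```python
import base64
import json

def token_uri(token_id: int) -> str:
    """Build tokenURI metadata whose animation_url is itself a data: URI
    containing a tiny HTML page with inline JavaScript, so a wallet that
    renders animation_url displays a live interface rather than an image."""
    html = (f"<html><body><script>"
            f"document.write('token #{token_id}')"
            f"</script></body></html>")
    metadata = {
        "name": f"Interface Token #{token_id}",
        "animation_url": "data:text/html;base64,"
                         + base64.b64encode(html.encode()).decode(),
    }
    return ("data:application/json;base64,"
            + base64.b64encode(json.dumps(metadata).encode()).decode())
```

Everything the viewer needs ships inside the token itself, with no external server, which is what makes the interface as transferable and ownable as the NFT that carries it.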

An interesting example is BEEP3R: you mint an NFT with some text, and then by owning it you can broadcast text to other BEEP3R holders. The text is displayed on the small image of the BEEP3R pager. When you have a BEEP3R, you can also send messages directly to other BEEP3R holders, essentially by using XMTP.

So what does this token do? It's a membership token: with it, you can receive messages, and any wallet interface that correctly renders the animation URL can display the messages you receive, as long as it supports the standard.

It's also an identity token, because messages are addressed to your BEEP3R's token ID, and sending and receiving happen only within that set of holders. And it exists as an interface for viewing the information associated with that NFT.

3. Technology Tree Theory

We can see that some areas are already well developed, such as tokens as payment for network resources, while others are still underdeveloped, such as interfaces and metadata. Why is that? I don't have a complete answer, but I think it may be related to the technology tree, which is of course far from complete.

My question is: why do some products appear in certain periods, and why do some take longer to appear than others? Take lending protocols as an example. It is hard to imagine a lending protocol working without stablecoins, because when you take on debt in a lending protocol, you want it denominated in a stable asset whose price is predictable. So we needed stablecoins before we could really have lending protocols.

Similarly, lending protocols need AMMs, because if you want to use a lending protocol for leverage, especially the early, simple lending protocols, you need to be able to borrow an asset such as a stablecoin and then quickly swap it back into the asset you want more exposure to; for that you need an AMM. Lending protocols did not take off until we had functioning AMMs and stablecoins.
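The AMM primitive being relied on here is the constant-product market: reserves of two tokens are kept so that x * y = k, and a swap moves along that curve. A minimal sketch (integer math as on-chain, fee expressed in basis points):

```python
def swap(x_reserve: int, y_reserve: int, dx: int, fee_bps: int = 30) -> int:
    """Constant-product AMM (x * y = k) swap sketch: sell dx of token X,
    receive dy of token Y. This instant, permissionless exchange is what
    lets a borrower trade a borrowed stablecoin straight back into the
    collateral asset to build a leveraged position."""
    dx_after_fee = dx * (10_000 - fee_bps) // 10_000   # fee stays in the pool
    k = x_reserve * y_reserve
    new_x = x_reserve + dx_after_fee
    dy = y_reserve - k // new_x                         # keep x * y >= k
    return dy
```

Note how little the AMM needs to know about the tokens it trades: just balances and transfers, which is exactly the interface an interoperable token standard provides.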

But how do you get functioning AMMs and stablecoins? It's hard without an interoperable token standard, because stablecoins, AMMs, and all the systems around them need to understand how other projects interface with them. And to have ERC-20 tokens, you need fully programmable smart contracts. You might not strictly need them, but that is how they first appeared, because Ethereum launched without the ERC-20 token standard; full programmability left enough open design space for it to emerge, though that point can be debated further. In summary, I think there is a technology tree, and certain technologies are prerequisites for others.

There are two questions here: What are the key technologies that will unlock future applications and protocols? In other words, what technologies do we need to develop useful reputation systems or decentralized and trustless interfaces? And the second question is a bit like the first question in reverse, which applications and protocols will be unlocked by the upcoming technologies?

For example: account abstraction, EIP-4844, Verkle trees, zero-knowledge machine learning, and so on. These questions are interesting because if we can foresee the arrival of specific technologies that will relieve or introduce design constraints, how does that change our designs today? And if a specific technology would relieve a constraint, should we invest energy in developing it?

If you think of things as a tech tree, it can help you reason about what is coming, or about what you need in order to reach the set of constraints you want. Tying this back to my original point about limitations: I think new technologies mitigate the limitations we faced before. For example, without the ERC-20 standard, any AMM or stablecoin design would be constrained to either introduce its own standard or cope with many different token designs.

Imagine designing a general-purpose AMM without a common token standard; it would be very, very difficult, an almost insurmountable limitation. Having an interoperability standard means we can directly support ERC-20 tokens, which constrains the design space and makes the design possible.

If we can anticipate which technologies will emerge in the future, how does that affect the constraints on our protocol designs? And if we have specific goals or constraints, which technologies do we need: technologies that will relieve those constraints and make those goals achievable again through new mechanisms?