Vitalik Buterin: Breaking conventional thinking about the relationship between layer 1 and layer 2 of the blockchain

Foreword: Vitalik believes that although layer 1 should remain simple and stable in the long run, with layer 2 as the focus of innovation, in the short term layer 1 is not yet powerful enough. To achieve this ideal layered relationship between layer 1 and layer 2, layer 1 must first be made strong enough, which means the layer 1 protocol must reach a certain level of functionality. That level is the minimum functionality needed for layer 1 and layer 2 to complement each other; Vitalik uses the metaphor of "functionality escape velocity" for it. So what are the minimum features layer 1 must have? They include a programming language that can verify anything that might need to be verified; rich statefulness (such as the ability to authorize changes to the state of tokens without fully releasing them); and sufficient data scalability and low latency. This article was translated by "SIEN" of the "Blue Fox Notes" community.

There is a common view in the blockchain field: blockchains should be as simple as possible, because they are infrastructure that is difficult to change and very damaging to break, and more complex functionality should be built on top, in the form of layer 2 protocols: state channels, Plasma, rollups, and so on. Layer 2 should be the site of continuous innovation, while layer 1 should remain stable and be maintained conservatively, making major changes only in emergencies (for example, a one-time major change to prevent the base layer's cryptography from being broken by quantum computers).

This idea of separating layers is a very good one, and in the long run I strongly support it. However, it misses an important point: although layer 1 cannot be too powerful, since greater power implies greater complexity and therefore greater fragility, layer 1 must also be powerful enough that it is possible to build layer 2 protocols on top of it in the first place.

Once a layer 1 protocol reaches a certain level of functionality, which I call "functionality escape velocity", you can do everything else on top of it without further changes to the base layer. If layer 1 is not powerful enough, however, you can talk about filling the gaps with layer 2 systems, but in reality there is no way to build those systems without reintroducing the very trust assumptions that layer 1 was trying to get rid of.

This article discusses some of the implications of the minimum features that make up "functionality escape velocity".

A programming language

It must be possible to execute custom user-generated scripts on-chain. This programming language can be simple and does not actually need high performance, but it needs at least this level of functionality: the ability to verify anything that might need to be verified. This is important because the layer 2 protocols built on top of the base layer require some kind of verification logic, and that logic must be executed by the blockchain in some way.

You may have heard of Turing completeness. The "layman's intuition" for this term is that if a programming language is Turing-complete, it can do anything a computer can theoretically do: a program written in one Turing-complete language can be translated into an equivalent program in any other Turing-complete language. It turns out, however, that we only need something slightly lighter: it is fine to restrict programs to have no loops, or to guarantee that programs terminate after a bounded number of steps.
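To make the "guaranteed termination" idea concrete, here is a minimal, purely illustrative sketch (a toy stack machine, not any real blockchain VM): the validator meters every instruction and aborts once a hard step limit is hit, so even a script with an infinite loop cannot run forever.

```python
def run_bounded(program, max_steps=10_000):
    """Run a toy stack-machine program, aborting after max_steps.

    Each instruction costs one step, so termination is guaranteed even
    if the program would otherwise loop forever. Illustrative sketch of
    step-bounded execution; ops and encoding are made up.
    """
    stack, pc, steps = [], 0, 0
    while pc < len(program):
        if steps >= max_steps:
            raise RuntimeError("out of steps")  # analogous to running out of gas
        op = program[pc]
        if op[0] == "PUSH":
            stack.append(op[1])
        elif op[0] == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op[0] == "JUMP":
            steps += 1
            pc = op[1]  # backward jumps allow loops, but the meter stops them
            continue
        steps += 1
        pc += 1
    return stack

# ("JUMP", 0) alone would loop forever; the step limit guarantees it halts.
```

This is essentially the design choice behind gas metering: the language can stay expressive, as long as every execution is bounded.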

Rich state

It is important not only to have a programming language, but also how exactly that language is integrated into the blockchain. In one of the more constrained ways of integrating a language, it is used purely for transaction verification: when you send tokens to an address, the address represents a computer program P, which is used to verify transactions that send tokens from that address.

That is, when you send a transaction whose hash is h with signature S, the blockchain runs P(h, S); if the output is TRUE, the transaction is valid. Often P is a verifier for a cryptographic signature scheme, but it can perform more complex operations. It is important to note that in this model, P cannot access the destination of the transaction.
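A minimal sketch of this "pure function" model (the toy signature scheme here is made up for illustration; a real chain would use ECDSA or similar). The key property to notice is that P sees only (h, S) and has no view of where the tokens are going:

```python
import hashlib

def make_single_key_validator(secret: bytes):
    """Return a validator P(h, S) for a toy signature scheme in which a
    valid 'signature' for hash h is sha256(secret || h).

    Illustrative only: the point is that P is a pure function of (h, S)
    and cannot inspect the transaction's destination.
    """
    def P(h: bytes, S: bytes) -> bool:
        return S == hashlib.sha256(secret + h).digest()
    return P

# Spending flow: the chain looks up the program at the source address
# and accepts the transaction iff P(h, S) returns True.
P = make_single_key_validator(b"my-secret")
h = hashlib.sha256(b"some transaction").digest()
S = hashlib.sha256(b"my-secret" + h).digest()
```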

However, this "pure function" approach is not enough, because it cannot implement many of the types of layer 2 protocols that people actually want to build. It can handle channels (and channel-based systems such as the Lightning Network), but it cannot implement other scaling techniques with stronger properties, it cannot be used to bootstrap systems that have more complex notions of state, and so on.

To briefly illustrate what the pure-function paradigm cannot do, consider a savings account with the following behavior: there is a cryptographic key k that can initiate a withdrawal, and within 24 hours of a withdrawal being initiated, the same key k can cancel it. If a withdrawal is not cancelled within 24 hours, anyone can "poke" the account to complete it.

The idea is that if the key is stolen, the account holder can prevent the thief from withdrawing the funds. The thief can of course prevent the legitimate holder from ever getting the funds, but the attack is unprofitable for the thief, so they probably would not bother.

Unfortunately, this technique cannot be implemented with pure functions alone. The problem is that there needs to be some way to move the tokens from a "normal" state into an "awaiting withdrawal" state. Since the program P cannot access the destination, any transaction that can authorize moving the tokens into the awaiting-withdrawal state can equally authorize stealing them immediately; P cannot tell the difference.
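The savings account can be written down as a small state machine (an illustrative sketch; the class and method names are made up). The crucial part is the "normal → awaiting withdrawal" transition, which is exactly what a stateless validator P(h, S) cannot express:

```python
import time

class SavingsAccount:
    """Toy rich-state account: key k can start or cancel a withdrawal;
    after a 24-hour delay, anyone can finalize it. Illustrative sketch."""
    DELAY = 24 * 3600  # seconds

    def __init__(self, key, balance):
        self.key = key
        self.balance = balance
        self.pending_since = None  # None => "normal" state

    def start_withdrawal(self, key, now=None):
        assert key == self.key, "unauthorized"
        self.pending_since = time.time() if now is None else now

    def cancel(self, key):
        assert key == self.key, "unauthorized"
        self.pending_since = None  # back to "normal"; a thief gets nothing

    def poke(self, now=None):
        """Anyone may call this once the delay has elapsed."""
        now = time.time() if now is None else now
        assert self.pending_since is not None, "no pending withdrawal"
        assert now - self.pending_since >= self.DELAY, "still in delay window"
        amount, self.balance = self.balance, 0
        self.pending_since = None
        return amount
```

Note that `start_withdrawal` changes the account's state without releasing any funds; that separation is the whole point of the construction.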

This ability to change the state of tokens without fully releasing them is important for many kinds of applications, including layer 2 protocols. Plasma itself fits this "authorize, finalize, cancel" paradigm: a withdrawal from Plasma must be approved, then there is a 7-day challenge period during which the withdrawal can be cancelled if the right evidence is provided.

Rollups also need this property: the tokens inside a rollup must be controlled by a program that keeps track of a state root R, and changes R to R' if some validator P(R, R', data) returns TRUE; but even in that case, it only changes the state to R', it does not release the tokens.
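A sketch of that rollup pattern (the validator here is a hypothetical stand-in: a real rollup's P would verify a fraud proof or a SNARK, not a hash). Tokens stay locked in the contract while the root advances:

```python
import hashlib

def toy_validator(R: bytes, R_new: bytes, data: bytes) -> bool:
    """Stand-in for P(R, R', data): accepts R' iff it is the hash of the
    old root together with the posted batch data. Illustrative only."""
    return R_new == hashlib.sha256(R + data).digest()

class RollupContract:
    """Tracks only the state root R; funds stay locked in the contract
    while R changes -- rich state without releasing tokens."""
    def __init__(self, genesis_root: bytes):
        self.R = genesis_root

    def update(self, R_new: bytes, data: bytes):
        assert toy_validator(self.R, R_new, data), "invalid state transition"
        self.R = R_new  # state advances; no tokens are released
```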

This ability to authorize changes to state without fully releasing all the tokens in an account is what I call "rich statefulness". It can be implemented in many ways, some of them UTXO-based, but without it, a blockchain cannot support most layer 2 protocols without adding trust assumptions (for example, a group of participants collectively trusted to execute these rich-state programs). (Blue Fox Note: that is, without needing to trust third parties.)

Note: yes, I know that if P has access to h, then you can include the destination address as part of S, check it against h, and restrict state changes that way. But a programming language may be too resource-limited or otherwise restricted to actually perform this check; and surprisingly, this is often true of blockchain scripting languages. (Blue Fox Note: here h is the hash and S is the signature.)

Sufficient data scalability and low latency

It turns out that Plasma, channels, and other layer 2 protocols that are fully off-chain have some fundamental weaknesses that prevent them from fully replicating the capabilities of layer 1.

I explore the reasons in detail elsewhere; in summary, these protocols need a way of adjudicating situations in which some participants maliciously fail to provide the data they promised to provide, and because publishing data is not globally verifiable (you cannot know when data was published unless you have already downloaded it yourself), these adjudication games are not game-theoretically stable.

Channels and Plasma cleverly get around this instability by adding extra assumptions, in particular the assumption that for every piece of state there is a single participant who has an interest in that state not being incorrectly modified (usually because it represents tokens they own) and who can therefore be trusted to defend it. But this is far from universal: systems like Uniswap, for example, include a large "central" contract that is not owned by anyone, so this paradigm cannot effectively protect them.

There is a way around this problem: layer 2 protocols that publish a very small amount of data on-chain but do their computation entirely off-chain. If data is guaranteed to be available, then off-chain computation is feasible, because the game of adjudicating who computed correctly and who did not is game-theoretically stable (or the adjudication can be replaced entirely by SNARKs or STARKs).

This is the logic behind ZK rollups and optimistic rollups. If a blockchain allows publishing a reasonable amount of data and guarantees its availability, then even if its computational capacity is very limited, it can support these layer 2 protocols and achieve a high level of scalability and functionality.

How much data does the blockchain need to be able to process and guarantee? It depends on how many TPS you want. With a rollup, you can compress most activity down to roughly 10-20 bytes per transaction, so 1 kB/sec of data space gives you roughly 50-100 TPS, 1 MB/sec gives roughly 50,000-100,000 TPS, and so on.
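The arithmetic above, spelled out (using the 10-20 bytes-per-transaction compression figure from the text):

```python
def tps(bandwidth_bytes_per_sec: float, bytes_per_tx: float) -> float:
    """Transactions per second supportable at a given data bandwidth."""
    return bandwidth_bytes_per_sec / bytes_per_tx

# 1 kB/sec of data capacity: roughly 50-100 TPS
print(tps(1_000, 20), tps(1_000, 10))
# 1 MB/sec of data capacity: roughly 50,000-100,000 TPS
print(tps(1_000_000, 20), tps(1_000_000, 10))
```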

Fortunately, internet bandwidth continues to grow rapidly and does not seem to be slowing down the way Moore's law for computation is, so increasing data scalability without increasing computational load is a feasible path for blockchains.

Note that it is not just data capacity that matters, but also data latency, that is, having a low block time. Layer 2 protocols such as rollups (and, for that matter, Plasma) only give security guarantees once the data has actually been published on-chain. Therefore, the time it takes for data to be reliably included on-chain (ideally, "finalized") determines the delay between Alice sending a payment to Bob and Bob being convinced that the payment has been included.

The block time of the base layer sets a floor on the latency of anything whose confirmation depends on being included in the base layer. This can be worked around with on-chain security deposits (also known as "bonds") at the cost of capital inefficiency, but that approach is inherently imperfect, because a malicious actor can deceive an unlimited number of people by sacrificing a single deposit.

Conclusion

"Keep layer 1 simple and make up on layer 2" is not a universal answer to the blockchain's scalability and functionality issues. Because it doesn't take into account that the layer 1 blockchain itself must have sufficient scalability and functionality to make building it actually possible (unless your so-called "layer 2 protocol" is only a trusted intermediary).

However, beyond a certain point, any layer 1 functionality can indeed be replicated on layer 2, and in many cases it is a good idea to do so for scalability. Hence, in the short term we need layer 1 and layer 2 to develop in parallel, and in the long run we need to focus more on layer 2.


Risk warning: No article from Blue Fox Notes should be taken as investment advice or a recommendation. Investment carries risk; investors should consider their personal risk tolerance, investigate projects in depth, and make their own investment decisions carefully.