Vitalik: The blockchain's layered structure still has shortcomings; layer 1 and layer 2 need to be developed in parallel in the short term

Foreword:

Ethereum co-founder Vitalik Buterin argues in his recent post "Base Layers and Functionality Escape Velocity" that "keep layer 1 simple and make up for it on layer 2" is not a universal answer to the scalability and functionality problems of blockchains, because this idea fails to take into account that the layer 1 blockchain itself must have a sufficient level of scalability and functionality; otherwise, the so-called layer 2 protocols are just trusted intermediaries. In this article, Vitalik introduces the concept of "functionality escape velocity". He also argues that in the short term we need to develop layer 1 and layer 2 in parallel, and that in the long term we should focus more on layer 2 development.



Here is the translation:

There is a common idea in the blockchain world: blockchains should be maximally simple, because they are pieces of infrastructure that are difficult to change and would cause great harm if they break, and more complex functionality should be built on top of layer 1, in the form of layer 2 protocols: state channels, Plasma, rollups, and so on. Layer 2 should be the site of ongoing innovation, while layer 1 should be stable, with major changes made only in emergencies (for example, a large one-time change to prevent the base protocol's cryptography from being broken by quantum computers).

This idea of layer separation is a very good one, and in the long term I strongly support it. However, this line of thinking misses an important point: while layer 1 cannot be too powerful, since greater power implies greater complexity and therefore greater fragility, layer 1 must also be powerful enough for the layer 2 protocols built on top of it to actually be feasible.

Once a layer 1 protocol achieves a certain level of functionality, which I will call "functionality escape velocity", then yes, you can do everything else on top without further changing the base.

If layer 1 is not powerful enough, you can talk about filling in the gaps with layer 2 systems, but the reality is that there is no way to build those systems without reintroducing exactly the set of trust assumptions that layer 1 was meant to get rid of. This article discusses the minimal set of features that constitutes "functionality escape velocity".

A programming language

It must be possible to execute custom user-generated scripts on-chain. This programming language can be simple, and does not actually need high performance, but it needs at least enough functionality to verify anything that might need to be verified.

This is important because the layer 2 protocols that are going to be built on top need some kind of verification logic, and this verification logic must somehow be executed by the blockchain.

You may have heard of Turing completeness; the layman's intuition is that if a programming language is Turing complete, then it can do anything a computer can theoretically do, and any program in one Turing-complete language can be translated into an equivalent program in any other Turing-complete language. However, it turns out that we only need something slightly lighter: it is fine to restrict ourselves to programs without loops, or programs that are guaranteed to terminate within a specific number of steps.
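The weaker requirement of guaranteed termination can be made concrete with a toy example. Below is a minimal sketch, in Python, of a loop-free script interpreter; the opcode set is invented for illustration and is not any real blockchain VM. Because there is no jump opcode, every program halts after at most as many steps as it has instructions.

```python
def run(program, stack=None):
    """Execute a loop-free stack program. With no jump opcode, execution
    is a single forward pass: guaranteed termination in len(program) steps."""
    stack = [] if stack is None else list(stack)
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "EQ":
            b, a = stack.pop(), stack.pop()
            stack.append(a == b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack

# Check that 2 + 3 equals 5, in exactly five steps of execution:
assert run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 5), ("EQ",)]) == [True]
```

Such a language is not Turing complete, yet it is expressive enough to encode verification conditions, which is all the text above requires.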


What matters is not just the programming language, but also precisely how the programming language is integrated into the blockchain. If the language is available only for pure transaction verification, the way it is integrated is much more limited: when you send coins to some address, the address represents a computer program P, which is used to verify a transaction that sends coins from that address. That is, if you send a transaction whose hash is h, you supply a signature S, and the blockchain runs P(h, S); if the output is TRUE, the transaction is valid. Usually, P is a verifier for a cryptographic signature scheme, but it can perform more complex operations. Note that in this model, P does not have access to the transaction's destination.
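The P(h, S) model above can be sketched as follows. This is an illustrative toy, not any real client's code: an HMAC stands in for a real signature scheme such as ECDSA, purely to keep the example self-contained.

```python
import hashlib
import hmac

def make_address_program(secret: bytes):
    """Build the program P controlling an address: P(h, S) -> bool."""
    def P(h: bytes, S: bytes) -> bool:
        # P sees only the transaction hash h and the witness S.
        # Crucially, it has no access to the transaction's destination.
        expected = hmac.new(secret, h, hashlib.sha256).digest()
        return hmac.compare_digest(expected, S)
    return P

secret = b"owner-secret-key"
P = make_address_program(secret)
h = hashlib.sha256(b"spend 5 coins").digest()     # transaction hash
S = hmac.new(secret, h, hashlib.sha256).digest()  # owner's "signature"
assert P(h, S) is True              # valid spend accepted
assert P(h, b"\x00" * 32) is False  # forged witness rejected
```

The limitation the text goes on to discuss is visible right in the signature of P: it takes only the hash and the witness, so it cannot condition its answer on where the coins are going.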

However, this "pure function" approach is not enough, because it is insufficient to implement many of the layer 2 protocols that people actually want to implement. It can do channels (and channel-based systems such as the Lightning Network), but it cannot implement other scaling techniques with stronger properties, it cannot be used to bootstrap systems with more complicated notions of state, and so on.

As a simple example of what the pure-function paradigm cannot do, consider a savings account with the following feature: there is a cryptographic key k which can initiate a withdrawal, and if a withdrawal is initiated, then within the next 24 hours that same key k can cancel it. If a withdrawal remains uncancelled for 24 hours, anyone can "poke" the account to finalize that withdrawal. The goal is that if the key is stolen, the account holder can prevent the thief from withdrawing the funds. The thief can of course prevent the legitimate owner from getting the funds, but the attack is not profitable for the thief, so they probably will not bother (see the original paper for an explanation of this technique).
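A minimal sketch of that savings-account logic, with invented names (Vault, request_withdrawal, cancel, poke) and timestamps passed in explicitly for clarity:

```python
DELAY = 24 * 3600  # 24-hour cancellation window, in seconds

class Vault:
    def __init__(self, owner_key, balance):
        self.owner_key = owner_key
        self.balance = balance
        self.pending_since = None  # time of the open withdrawal request, if any

    def request_withdrawal(self, key, now):
        if key != self.owner_key:
            raise PermissionError("only k can initiate a withdrawal")
        self.pending_since = now

    def cancel(self, key):
        if key != self.owner_key:
            raise PermissionError("only k can cancel")
        self.pending_since = None

    def poke(self, now):
        """After 24h without cancellation, *anyone* may finalize the withdrawal."""
        if self.pending_since is None or now - self.pending_since < DELAY:
            raise RuntimeError("withdrawal not yet finalizable")
        amount, self.balance = self.balance, 0
        return amount

v = Vault(owner_key="k", balance=100)
v.request_withdrawal("k", now=0)
v.cancel("k")                    # owner notices a theft attempt and cancels
v.request_withdrawal("k", now=10)
assert v.poke(now=10 + DELAY) == 100  # finalized once the window has passed
```

Note that request_withdrawal moves the coins into a "waiting" state rather than releasing them; that intermediate state is exactly what the pure-function P(h, S) model cannot express.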

Unfortunately, this technique cannot be implemented with pure functions alone. The problem is: there needs to be some way to move coins from a "normal" state into an "awaiting withdrawal" state. But the program P does not have access to the destination! Hence, any transaction that could authorize moving the coins into the awaiting-withdrawal state could also authorize simply stealing them immediately; P cannot tell the difference. The ability to change the state of coins without completely releasing them is important for many kinds of applications, including layer 2 protocols.

Plasma itself fits this "authorize, finalize, cancel" paradigm: an exit from Plasma must first be approved, after which there is a 7-day challenge period, during which the exit can be cancelled if a challenger provides the right evidence.

Rollups also need this property: the coins in a rollup must be controlled by a program that keeps track of a state root R, and changes it to R' if some verifier P(R, R', data) returns TRUE; but even then, it only changes the state to R', it does not release the coins.
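The rollup pattern described here can be sketched like so. The hash-based transition function is a stand-in for real batch execution, and all names are illustrative:

```python
import hashlib

def transition(root: bytes, batch: bytes) -> bytes:
    """Stand-in for executing a batch of transactions against a state root."""
    return hashlib.sha256(root + batch).digest()

def P(R: bytes, R_new: bytes, data: bytes) -> bool:
    """The verifier P(R, R', data): does R' really follow from R and data?"""
    return transition(R, data) == R_new

class RollupContract:
    """Richly stateful: the root can change, but coins are never released."""
    def __init__(self, genesis_root: bytes):
        self.root = genesis_root

    def update(self, new_root: bytes, data: bytes):
        if not P(self.root, new_root, data):
            raise ValueError("invalid state transition")
        self.root = new_root

c = RollupContract(b"\x00" * 32)
good = transition(c.root, b"batch-1")
c.update(good, b"batch-1")   # valid transition accepted
assert c.root == good
try:
    c.update(b"\xff" * 32, b"batch-2")  # bogus claimed root
except ValueError:
    pass
else:
    raise AssertionError("bad transition should be rejected")
```

The contract's only state change is root -> new root under P's approval; there is no code path that sets the coins free, which is the "rich statefulness" property named below.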

This ability to authorize state changes, without the ability to completely set the coins free, is what I call "rich statefulness".

It can be implemented in many ways, some of which are UTXO-based, but without it, and without introducing trust assumptions (for example, a set of functionaries who are collectively trusted to execute those richly stateful programs), a blockchain is not powerful enough to implement most layer 2 protocols.

Note: yes, I know that if P has access to h, then you can include the destination address as part of S, check it against h, and restrict state changes that way. But it is possible for a programming language to be too resource-limited (or otherwise restricted) to actually do this; and surprisingly, this is often the case in blockchain scripting languages.

Sufficient data scalability and low latency

It turns out that Plasma, channels, and other layer 2 protocols that are fully off-chain all have some fundamental weaknesses that prevent them from fully replicating the capabilities of layer 1. I have discussed this issue in detail here; in summary, these protocols need a way to adjudicate situations where some parties maliciously fail to provide data they promised to provide, and because data publication is not globally verifiable (you do not know when data was published unless you already downloaded it yourself), these adjudication games are not game-theoretically stable.

Channels and Plasma cleverly get around this instability by adding additional assumptions, in particular assuming that for every piece of state, there is a single actor interested in that state not being incorrectly modified (usually because it represents coins they own), and so that actor can be trusted to fight on its behalf. However, this is far from universal; systems like Uniswap, for example, include a large "central" contract that is not owned by anyone, so they cannot be effectively protected by this paradigm.

There is one way around this problem: layer 2 protocols that publish a small amount of data on-chain, but do computation entirely off-chain.

If data is guaranteed to be available, then computation can be done off-chain, because games for adjudicating who computed correctly and who computed incorrectly are game-theoretically stable (or can be replaced entirely by SNARKs or STARKs). This is the logic behind ZK rollups and optimistic rollups. If a blockchain allows the publication of a reasonably large amount of data and guarantees its availability, then even if its capacity for computation remains very limited, it can support these layer 2 protocols and achieve a high level of scalability and functionality.
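Why does data availability make the adjudication game stable? Because once the data is on-chain, any observer can deterministically replay it and check a claimed result. A sketch, with a toy deterministic transition function standing in for real batch execution:

```python
import hashlib

def execute(state: bytes, published_data: bytes) -> bytes:
    """Toy deterministic state transition; stands in for replaying a batch."""
    return hashlib.sha256(state + published_data).digest()

def is_fraudulent(old_state: bytes, claimed_state: bytes,
                  published_data: bytes) -> bool:
    # With the data on-chain, *anyone* can recompute and compare;
    # there is no dispute about whether the data exists.
    return execute(old_state, published_data) != claimed_state

s0 = b"\x00" * 32
honest = execute(s0, b"published batch")
assert not is_fraudulent(s0, honest, b"published batch")   # honest claim passes
assert is_fraudulent(s0, b"\x11" * 32, b"published batch") # wrong claim caught
```

In an optimistic rollup this check backs a fraud proof; in a ZK rollup the same relation is established up front by a SNARK or STARK instead of by replay.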

How much data does the blockchain need to be able to process and guarantee? Well, it depends on what TPS you are looking for. With rollups, you can compress most activity to about 10-20 bytes per transaction, so 1 kB/sec gives you 50-100 TPS, 1 MB/sec gives you 50,000-100,000 TPS, and so on. Fortunately, internet bandwidth continues to grow quickly, and does not seem to be slowing down the way Moore's law for computation is, so increasing scalability for data without increasing computational load is a very viable path for blockchains to take!
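The throughput figures above are simple division, bandwidth over bytes per compressed transaction:

```python
def rollup_tps(bandwidth_bytes_per_sec: int, bytes_per_tx: int) -> int:
    """TPS achievable when on-chain data is the bottleneck."""
    return bandwidth_bytes_per_sec // bytes_per_tx

assert rollup_tps(1_000, 20) == 50           # 1 kB/s at 20 B/tx
assert rollup_tps(1_000, 10) == 100          # 1 kB/s at 10 B/tx
assert rollup_tps(1_000_000, 20) == 50_000   # 1 MB/s at 20 B/tx
assert rollup_tps(1_000_000, 10) == 100_000  # 1 MB/s at 10 B/tx
```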

It is also important to note that it is not just the data capacity that matters, but also the data latency (i.e. having low block times). Layer 2 protocols like rollups (or Plasma) only provide any guarantee of security once the data has actually been published on-chain; so the time it takes for data to be reliably included (ideally "finalized") on-chain is the time between Alice sending Bob a payment and Bob being assured that the payment will be included. The base layer's block time sets the latency for anything that depends on inclusion. This can be worked around with on-chain security deposits (also known as "bonds"), but such an approach is imperfect in itself, because a malicious actor can deceive an unlimited number of different people by sacrificing a single deposit.

Conclusion

"Keep layer 1 simple and make up for it on layer 2" is not a universal answer to the scalability and functionality problems of blockchains, because it fails to take into account that the layer 1 blockchain itself must have a sufficient level of scalability and functionality; otherwise, the so-called layer 2 protocols are just trusted intermediaries. However, it is true that beyond a certain point, any layer 1 functionality can be replicated on layer 2, and in many cases doing so is a good idea for improving scalability. Hence, in the short term we need to develop layer 1 and layer 2 in parallel, and in the long term we need to pay more attention to layer 2 development.
