Vitalik: The Dawn of Hybrid Layer 2 Protocols

Note: The original author is Ethereum founder Vitalik Buterin; what follows is a translation.

Special thanks to the Plasma Group team for review and feedback.

At present, layer 2 scaling approaches (essentially Plasma and state channels) are moving from theory into practice, but at the same time the inherent challenges of these techniques are becoming easier to see, and they still have a long way to go before they mature into Ethereum scaling solutions. Ethereum succeeded in large part because its development experience is very simple: you write a program, publish it, and anyone can interact with it. Designing a state channel or Plasma application, by contrast, relies on a great deal of explicit reasoning about incentives and on application-specific development. State channels work well for specific use cases, such as repeated payments between the same two parties and two-player games (as successfully implemented in Celer), but broader applications have proven challenging. Plasma, and especially Plasma Cash, works well for payments, but struggles with richer applications: even implementing a decentralized exchange requires clients to store much more history, and it seems very difficult to port Ethereum smart contracts to Plasma.

But at the same time, a forgotten category of "semi-layer-2" protocols has re-emerged: protocols that offer smaller scalability gains, but are easier to generalize and come with more favorable security models. In a long-forgotten blog post, I introduced the concept of "shadow chains", an architecture in which block data is published on chain, but blocks are not verified by default. Instead, blocks are tentatively accepted and finalized only after some period of time (say, two weeks). During those two weeks, a tentatively accepted block can be challenged; only then is the block verified, and if it proves invalid, the chain from that block onward is reverted and the original publisher's deposit is slashed. The contract does not track the full state of the system, only the state root; users compute the state themselves by processing the data submitted to the chain from start to finish. The recent ZK Rollup proposal does the same thing without a challenge period, by using ZK-SNARKs to prove block validity.

[Figure 1]

Profile of a ZK Rollup package posted on chain: hundreds of "internal transactions" that affect the ZK Rollup system state (i.e., account balances) are compressed into a single package. Each internal transaction takes roughly 10 bytes to specify its state transition, and the package carries a SNARK of roughly 100-300 bytes proving that the whole transition is valid.
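To make the roughly-10-bytes figure concrete, here is one possible toy encoding of an internal transfer. The field widths (3-byte account indices, 3-byte amount, 1-byte fee) are illustrative assumptions, not the actual ZK Rollup wire format:

```python
def pack_transfer(from_idx: int, to_idx: int, amount: int, fee: int) -> bytes:
    """Pack one internal transfer into exactly 10 bytes:
    3-byte sender index + 3-byte recipient index + 3-byte amount + 1-byte fee.
    (Illustrative field widths; signatures are omitted because the SNARK
    proves validity for the whole package.)"""
    assert from_idx < 2**24 and to_idx < 2**24
    assert amount < 2**24 and fee < 2**8
    return (from_idx.to_bytes(3, "big") + to_idx.to_bytes(3, "big")
            + amount.to_bytes(3, "big") + fee.to_bytes(1, "big"))

def unpack_transfer(data: bytes):
    """Inverse of pack_transfer."""
    assert len(data) == 10
    return (int.from_bytes(data[0:3], "big"),
            int.from_bytes(data[3:6], "big"),
            int.from_bytes(data[6:9], "big"),
            data[9])

tx = pack_transfer(42, 7, 1000, 3)
assert len(tx) == 10
assert unpack_transfer(tx) == (42, 7, 1000, 3)
```

With 2^24 possible account indices, such an encoding covers about 16 million accounts; a real system would tune these widths to its own needs.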

In both cases, the main chain is used to guarantee data availability, but it does not (directly) verify block validity or perform any significant computation unless a challenge is raised. This technique therefore does not bring astonishing scalability gains, because on-chain data overhead eventually becomes the bottleneck, but it is still very significant. Data is cheaper than computation, and there are ways to compress transaction data very substantially, especially because most of the data in a transaction is the signature, and many signatures can be compressed into one through various forms of aggregation. By compressing each transaction to roughly 10 bytes, ZK Rollup promises 500 transactions per second, more than 30 times the throughput of the Ethereum chain itself; signatures need not be included at all, because validity is verified by the zero-knowledge proof. With BLS aggregate signatures, similar throughput can be achieved in shadow chains (recently rebranded "optimistic rollup" to highlight the similarity to ZK Rollup). And the upcoming Istanbul hard fork will reduce the gas cost of data from 68 per byte to 16 per byte, increasing the throughput of these techniques by a further factor of about four (i.e., more than 2000 TPS).

——————————————————————————————————————————————

So what are the benefits of data-on-chain techniques (such as zk/optimistic rollup) over data-off-chain techniques (such as Plasma)? First, the former do not require semi-trusted operators. In ZK Rollup, because validity is verified by a cryptographic proof, a malicious package submitter cannot cause damage (depending on the setup, a malicious submitter may stall the system for a few seconds, but that is the worst they can do). In optimistic rollup, a malicious submitter can publish a bad block, but the next submitter will immediately challenge it before publishing their own block. In both ZK rollup and optimistic rollup, enough data is published on chain for anyone to compute the complete internal state by processing all the submitted deltas in order, and no "data withholding attack" can take this property away. Being an operator can therefore be fully permissionless; all that is needed is a deposit (say, 10 ETH) for anti-spam purposes.
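The accept-then-challenge flow can be sketched as a toy state machine. This is illustrative only; the class, the 10 ETH deposit, and the two-week window are assumptions drawn from the text, not any production contract:

```python
class OptimisticRollup:
    """Toy model of optimistic rollup batch submission: batches are
    accepted tentatively, and an invalid batch can be challenged within
    a window, reverting it and slashing the submitter's deposit."""
    CHALLENGE_WINDOW = 14 * 24 * 3600  # two weeks, in seconds
    DEPOSIT = 10                       # anti-spam bond, e.g. 10 ETH

    def __init__(self):
        self.batches = []   # list of (state_root, submitter, submit_time)
        self.deposits = {}  # submitter -> bonded amount

    def submit(self, state_root, submitter, now):
        """Anyone may submit: the role is permissionless, only bonded."""
        self.deposits[submitter] = self.deposits.get(submitter, 0) + self.DEPOSIT
        self.batches.append((state_root, submitter, now))

    def challenge(self, index, correct_root, now):
        """Challenge batch `index`; succeeds only if it is still inside
        the window and its claimed root is wrong."""
        root, submitter, t = self.batches[index]
        if now - t > self.CHALLENGE_WINDOW:
            return False  # already finalized
        if root == correct_root:
            return False  # batch was valid; challenge fails
        # Revert the bad batch and everything built on it; slash the bond.
        self.batches = self.batches[:index]
        self.deposits[submitter] -= self.DEPOSIT
        return True

rollup = OptimisticRollup()
rollup.submit(b"good-root", "alice", now=0)
rollup.submit(b"bad-root", "bob", now=100)
assert rollup.challenge(1, correct_root=b"actual-root", now=200)   # bob slashed
assert not rollup.challenge(0, correct_root=b"good-root", now=300) # alice safe
```

In a real contract the challenger would supply a fraud proof that the contract re-executes, rather than a trusted `correct_root`; the point here is only the accept/challenge/slash lifecycle.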

Second, optimistic rollup in particular is vastly easier to generalize; the state transition function in an optimistic rollup system can be essentially any function computable within the gas limit of a single block (including providing the Merkle branches for the parts of the state needed to verify the transition). ZK Rollup can in theory be generalized the same way, although in practice building ZK-SNARKs over general-purpose computation (such as EVM execution) is very difficult, at least for now. Third, optimistic rollup makes clients much easier to build, because it needs far less layer-2 network infrastructure; much more can be done simply by scanning the blockchain.

But where do these advantages come from? The answer lies in a deep technical issue: the data availability problem (see note, video). Roughly speaking, there are two ways to cheat in a layer-2 system. The first is to publish invalid data to the blockchain. The second is to not publish data at all (for example, in Plasma, publishing the root hash of a new Plasma block to the main chain while withholding the block's contents from everyone). Published-but-invalid data is easy to handle: once data is on chain, there are multiple ways to unambiguously determine whether it is valid, and an invalid submission is unambiguously invalid, so the submitter can be severely penalized. Unavailable data, by contrast, is much harder to deal with: even though unavailability can be detected by a challenge, it is impossible to reliably determine whose fault the non-publication is, especially if data is withheld by default and revealed on demand only when some verification mechanism attempts to check its availability. This is illustrated by the "fisherman's dilemma", which shows how a challenge-response game cannot distinguish between a malicious submitter and a malicious challenger:

[Figure 2]

The fisherman's dilemma: if you only start watching the given piece of data at time T3, you cannot tell whether you are in Case 1 or Case 2, and therefore cannot tell who is at fault.

Plasma and state channels both work around the fisherman's dilemma by pushing the problem onto users: if you, as a user, decide that another user you are interacting with (a state channel counterparty, or the operator of a Plasma chain) has not published data they should have published, it is your responsibility to exit and move to another counterparty/operator. The fact that you hold all the previous data, plus the data for every transaction you signed, lets you prove to the main chain which assets you hold inside the layer-2 protocol, so you can safely take them out of the system: you prove the (previously agreed) existence of an operation that gave you the asset, and no one can prove the existence of an operation in which you approved sending it to someone else, so you get the asset.
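The "prove to the chain which assets you hold" step rests on Merkle inclusion proofs against a committed root. A minimal sketch of that primitive (SHA-256, odd layers padded by duplicating the last node; deliberately simplified relative to any real Plasma implementation):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def _next_layer(layer):
    if len(layer) % 2:
        layer = layer + [layer[-1]]  # pad odd layers by duplication
    return [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]

def merkle_root(leaves):
    layer = [h(l) for l in leaves]
    while len(layer) > 1:
        layer = _next_layer(layer)
    return layer[0]

def merkle_proof(leaves, index):
    """Collect (sibling_hash, sibling_is_left) pairs up to the root."""
    layer = [h(l) for l in leaves]
    proof = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer = layer + [layer[-1]]
        sibling = index ^ 1
        proof.append((layer[sibling], sibling < index))
        layer = _next_layer(layer)
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# Hypothetical leaves recording coin ownership in a Plasma-Cash-like tree.
leaves = [b"coin:%d:owner:%d" % (i, i * 7) for i in range(8)]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 3)
assert verify(leaves[3], proof, root)
```

The user stores such branches for their own coins; the on-chain contract only ever stores roots and checks proofs like `verify` when an exit is claimed.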

This technique is elegant, but it relies on a key assumption: every state object has a logical "owner", and the object's state cannot be changed without the owner's consent. This works well for UTXO-based payments (but not for account-based payments, where you can edit someone else's balance upward without their consent; this is why account-based Plasma is so difficult), and it can even be made to work for decentralized exchange, but this "ownership" property is far from universal. Some applications (such as Uniswap) have no natural owner, and even in applications that do, there are often multiple parties who can legitimately edit an object. Moreover, there is no way to allow a third party to exit an asset on someone's behalf without opening up denial-of-service (DoS) attacks, precisely because it is impossible to prove whether the publisher or the submitter is at fault.

In addition, Plasma and state channels each have their own unique problems. State channels do not allow off-chain transactions with users who are not already part of the channel (reason: suppose there were a way to send $1 from inside a channel to an arbitrary new user; then this technique could be used many times in parallel to send $1 to more users than the channel actually has funds for, which already breaks the security guarantee). Plasma requires users to store large amounts of history, which grows even larger when different assets become intertwined (for example, when one asset is transferred on the condition that another asset is transferred, as happens in a decentralized exchange with single-step settlement of matched orders).

Because data-on-chain, computation-off-chain layer 2 techniques have no data availability problem, they have none of these weaknesses. ZK rollup and optimistic rollup both take great care to put enough data on chain for users to compute the complete state of the layer-2 system, ensuring that if any participant disappears, a new participant can easily step in to replace them. The only remaining problem is verifying the computation without performing it on chain, which is a much easier problem to solve. And the scalability gains are substantial: roughly 10 bytes per transaction in ZK Rollup, with a similar level achievable in optimistic rollup via BLS signature aggregation. That translates to a theoretical maximum of about 500 transactions per second today, and over 2000 TPS after the Istanbul hard fork.
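The "anyone can compute the complete state" property is just replay: process every published batch from genesis, in order. A minimal account-based sketch with made-up data:

```python
def apply_batch(balances, batch):
    """Apply one published batch of (sender, recipient, amount) deltas.
    Anyone holding the on-chain data can replay all batches from genesis
    and arrive at the same state, so no operator is ever irreplaceable."""
    for sender, recipient, amount in batch:
        assert balances.get(sender, 0) >= amount, "invalid transfer in batch"
        balances[sender] = balances.get(sender, 0) - amount
        balances[recipient] = balances.get(recipient, 0) + amount
    return balances

# Hypothetical batches as they would appear on chain, oldest first.
published_batches = [
    [(0, 1, 50)],
    [(1, 2, 20), (0, 2, 10)],
]

state = {0: 100}  # genesis state
for batch in published_batches:
    state = apply_batch(state, batch)

print(state)  # {0: 40, 1: 30, 2: 30}
```

In ZK rollup the replayed state is guaranteed to match the committed root by the SNARK; in optimistic rollup a mismatch is exactly what a challenger proves on chain.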

——————————————————————————————————————————————

But what if you want even more scalability? It turns out there is a large middle ground between data-on-chain layer 2 and data-off-chain layer 2, with many hybrid approaches that give you some of the benefits of both. As a simple example, the history-storage blowup of a decentralized exchange implemented on Plasma Cash can be prevented by publishing on chain a map of which orders match which (under 4 bytes per order):

[Figure 3]

Left: the history a Plasma Cash user needs to store if they hold one coin. Middle: the history they need to store if they hold one coin that was exchanged for another via an atomic swap. Right: the history they need to store if the order matches are published on chain.
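The on-chain order-match map can indeed be extremely compact. A sketch, under the assumption that orders are referenced by 16-bit indices so that each match costs exactly 4 bytes:

```python
import struct

def pack_matches(matches):
    """Encode a list of (buy_order_idx, sell_order_idx) pairs as
    two big-endian 16-bit integers each: 4 bytes per match.
    (Illustrative encoding; index width is an assumption.)"""
    return b"".join(struct.pack(">HH", buy, sell) for buy, sell in matches)

def unpack_matches(data):
    """Inverse of pack_matches."""
    return [struct.unpack(">HH", data[i:i + 4]) for i in range(0, len(data), 4)]

matches = [(12, 345), (13, 990)]
blob = pack_matches(matches)
assert len(blob) == 4 * len(matches)
assert unpack_matches(blob) == matches
```

With this map on chain, a user no longer needs the full history of a counterparty's coin to know their order was settled, which is what collapses the storage requirement in the right-hand panel of the figure.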

Even outside the decentralized exchange context, the amount of history users need to store in Plasma can be reduced by having the Plasma chain periodically publish a small amount of per-user data on chain. We could also imagine a platform that works like Plasma for state that has a logical "owner" and like ZK rollup or optimistic rollup for state that does not. Plasma developers have already started working on these kinds of optimizations.

For developers of layer 2 scalability solutions, then, this hybrid approach greatly improves ease of development, generality, and security, and reduces per-user load (for example, users need not store history). The efficiency loss is also often overstated: even in a fully off-chain layer-2 architecture, users depositing, withdrawing, and moving between counterparties and providers is inevitable and frequent, so there will be a significant amount of per-user on-chain data in any case. The hybrid route opens the door to rapid deployment of fully general Ethereum smart contracts in a semi-layer-2 architecture.

See also:

  1. Introducing the OVM
  2. Blog post by Karl Floersch
  3. Related ideas by John Adler