Vitalik Buterin: The dawn of the hybrid Layer 2 protocol

Written by: Vitalik Buterin

Chinese version compiled by ChainNews.

The current Ethereum Layer 2 scaling solutions (chiefly Plasma and state channels) are moving from theory to practice. But at the same time, we are becoming increasingly aware of the inherent challenges of these technologies, and there is still some distance to go before Ethereum scaling technology matures.

To a large extent, Ethereum succeeded because its developer experience is excellent: you write a program, publish it, and anyone can interact with it. Designing a state channel or Plasma application, by contrast, relies on a lot of explicit reasoning about incentives and application-specific development. State channels suit specific use cases, such as repeated payments between two parties or two-player games (successfully implemented in Celer), but more general use cases remain difficult. Plasma (especially Plasma Cash) works for payments, but generalizing it is challenging: even deploying a decentralized exchange requires clients to store much more history, and deploying Ethereum-style general-purpose smart contracts on Plasma seems harder still.

But at the same time, a forgotten category of "semi-Layer 2" protocols is reappearing – a category that offers only partial scalability gains but enables a far more general and more secure model.

In a blog post published in 2014, I introduced the concept of a "shadow chain": an architecture in which block data is published on chain, but blocks are not verified by default. Instead, a block is tentatively accepted and finalized only after some period of time (say, two weeks). During those two weeks, a tentatively accepted block can be challenged, and only then is it verified. If the block proves invalid, the chain is rolled back to before that block, and the original publisher's deposit is slashed as a penalty. The contract does not track the full state of the system, only the state root; users compute the state themselves by processing, from start to finish, the data submitted to the blockchain.
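The mechanics above can be sketched as a toy contract. This is an illustrative Python model, not the 2014 design itself; the class name, the two-week window, and the slashing logic are assumptions made for the sketch.

```python
CHALLENGE_PERIOD = 14 * 24 * 3600  # roughly two weeks, in seconds (assumed)
DEPOSIT = 10                       # publisher's bond (assumed units)

class ShadowChainContract:
    """Toy model: tracks only state roots; blocks are accepted optimistically."""

    def __init__(self, genesis_root):
        self.roots = [genesis_root]  # finalized state roots
        self.pending = []            # tentatively accepted blocks

    def submit_block(self, new_root, publisher, now):
        # Block data is published on chain but NOT verified here.
        self.pending.append({"root": new_root, "publisher": publisher,
                             "time": now, "deposit": DEPOSIT})

    def challenge(self, index, proof_of_invalidity):
        # In the real protocol the contract would re-execute the disputed
        # transition; a boolean stands in for that check here.
        if proof_of_invalidity:
            bad = self.pending[index]
            # Revert the bad block and everything built on top of it,
            # and slash the publisher's deposit.
            self.pending = self.pending[:index]
            return ("slashed", bad["publisher"], bad["deposit"])
        return ("challenge failed", None, 0)

    def finalize(self, now):
        # Blocks that survive the challenge period become permanent.
        while self.pending and now - self.pending[0]["time"] >= CHALLENGE_PERIOD:
            self.roots.append(self.pending.pop(0)["root"])
```

A user who wants the actual state, rather than just its root, replays the published block data locally.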

A recent proposal called ZK Rollup achieves the same effect, without a challenge period, by using ZK-SNARKs to verify block validity.

Above: anatomy of a ZK Rollup package published on chain. Hundreds of "internal transactions" that affect the ZK Rollup system's state (i.e. account balances) are compressed into a single package containing roughly 10 bytes per internal transaction, plus a SNARK (roughly 100-300 bytes) proving the validity of the entire state transition.
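To illustrate how an internal transaction can fit in about 10 bytes, here is one hypothetical packing. The field widths (3-byte account indices, 4-byte amount) are assumptions for the sketch; real ZK Rollup designs choose their own layouts.

```python
def pack_transfer(from_index, to_index, amount):
    """Pack one internal transfer into a 10-byte record:
    3-byte sender index + 3-byte recipient index + 4-byte amount."""
    return (from_index.to_bytes(3, "big")
            + to_index.to_bytes(3, "big")
            + amount.to_bytes(4, "big"))

def pack_batch(transfers):
    """Concatenate transfers into one on-chain package. A SNARK (not
    modeled here) proves validity, so no per-transaction signatures
    need to be posted at all."""
    return b"".join(pack_transfer(*t) for t in transfers)

batch = pack_batch([(1, 2, 500), (2, 3, 120)])
assert len(batch) == 20  # 10 bytes per internal transaction
```

Because accounts are referenced by short indices into the rollup's own state rather than by 20-byte addresses, and signatures are replaced by the proof, the per-transaction footprint shrinks dramatically.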

In both cases, the main chain is used to guarantee data availability, but it does not (directly) verify block validity or perform any significant computation (unless a challenge is raised). This technique is therefore not a slam dunk for scaling, because the data load on chain eventually becomes the bottleneck, but it is nonetheless very significant. Data is cheaper than computation, and there are many ways to compress transaction data substantially: most of the data in a transaction is the signature, and many signatures can be compressed into one through various forms of aggregation. ZK Rollup promises 500 transactions per second, a 30x gain over the Ethereum chain itself, by compressing each transaction to roughly 10 bytes; transaction validity is covered by the zero-knowledge proof, so no signature needs to be included at all. Using BLS aggregate signatures, similar throughput is achievable in a "shadow chain" (see Optimistic Rollup, a shadow-chain-like architecture recently published by Ethereum Casper developer Karl Floersch). The upcoming Istanbul hard fork will reduce the gas cost of data from 68 units per byte to 16 units per byte, increasing the throughput of these techniques by a further 4x (i.e. to more than 2,000 transactions per second).
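The "4x" figure follows directly from the calldata repricing. A quick back-of-envelope check, applying the ratio to the ~500 tx/s baseline:

```python
# Istanbul (EIP-2028) cuts calldata gas from 68 to 16 units per byte.
GAS_PER_BYTE_PRE_ISTANBUL = 68
GAS_PER_BYTE_POST_ISTANBUL = 16

# Same gas budget buys 4.25x more bytes, hence ~4x more transactions.
scale = GAS_PER_BYTE_PRE_ISTANBUL / GAS_PER_BYTE_POST_ISTANBUL
print(scale)               # 4.25
print(round(500 * scale))  # 2125, in line with the ">2,000 tx/s" figure
```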

So what are the benefits of on-chain-data techniques like ZK/Optimistic Rollup over off-chain-data scaling techniques like Plasma?

First, neither ZK Rollup nor Optimistic Rollup requires even a semi-trusted operator. In ZK Rollup, because validity is verified cryptographically, there is little a malicious package submitter can do (depending on the setup, they might stall the system for a few seconds, but that is about the worst of it). In Optimistic Rollup, a malicious submitter can publish a bad block, but the next submitter can immediately challenge that block before publishing their own. In both ZK Rollup and Optimistic Rollup, enough data is posted on chain for anyone to compute the complete internal state by processing all the submitted deltas in order, and no data withholding attack can take this property away. Hence, becoming an operator can be fully permissionless; the required deposit (only 10 ETH) serves purely anti-spam purposes.

Second, Optimistic Rollup in particular is vastly more generalizable: the state transition function in an Optimistic Rollup system can be almost any computation that fits within a single block's gas cap (including the Merkle branches that supply the portions of state needed to verify the transition). In theory, ZK Rollup could do the same, but in practice making ZK-SNARKs over general-purpose computation (e.g. EVM execution) is very difficult, at least for now.
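A fraud-proof verifier that holds only the state root can still check state accesses via Merkle branches. A minimal sketch, with SHA-256 standing in for whatever hash function a real system would use:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_branch(leaf, index, branch, root):
    """Check a Merkle branch: proves that `leaf` sits at position `index`
    under `root`, so a verifier contract can access the state portions it
    needs without storing the state itself."""
    node = h(leaf)
    for sibling in branch:
        if index % 2 == 0:
            node = h(node + sibling)   # current node is a left child
        else:
            node = h(sibling + node)   # current node is a right child
        index //= 2
    return node == root
```

During a challenge, the submitter of the disputed block (or any challenger) supplies these branches alongside the transition being verified.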

Third, Optimistic Rollup makes clients much easier to build, because it requires less Layer 2 network infrastructure; much of the work can be done simply by scanning the blockchain.

But where do these advantages of ZK/Optimistic Rollup come from? They all stem from one highly technical issue: the data availability problem. (See the write-up at https://github.com/ethereum/research/wiki/A-note-on-> and the explainer video at https://www.youtube.com/watch?v=OJT_fR7wexw .)

Basically, there are two ways to cheat in a Layer 2 system. The first is to publish invalid data to the blockchain. The second is to not publish data at all (for example, in Plasma, to post the root hash of a new Plasma block to the main chain without revealing the block's contents to anyone).

Published-but-invalid data is the easy case: once data is on chain, there are many ways to determine unambiguously whether it is valid, the submitter of invalid data can be conclusively identified, and they can be severely penalized. Unavailable data is much harder to handle: although unavailability can be detected when you try to fetch the data, you cannot reliably prove to others who is at fault, especially if the data is withheld by default and revealed on demand only when someone attempts to verify its availability. This is illustrated by the "fisherman's dilemma", in which a challenge-response game cannot distinguish a malicious submitter from a malicious challenger:

Above: the fisherman's dilemma. If you only start observing the given piece of data at time T3, you cannot tell whether you are in Case 1 or Case 2, and hence cannot tell who is at fault.

Plasma and state channels both deftly sidestep the fisherman's dilemma by pushing the problem onto users: if you, as a user, decide that another user you are interacting with (a counterparty in a state channel, an operator in Plasma) has not sent you the data they were supposed to publish, it is your responsibility to exit and move to a different counterparty/operator. The fact that you, as a user, hold all the previous data, along with the data of every transaction you signed, lets you prove to the blockchain what assets you hold inside the Layer 2 protocol and safely withdraw them from the system. You can prove a (previously agreed) operation in which the asset was sent to you, and no one else can prove any operation in which you sent that asset onward, so the asset is yours. The technique is thoroughly elegant, but it rests on a key assumption: every state object has a single logical "owner", and the object's state cannot change without the owner's consent.
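The exit logic this paragraph describes can be sketched as a simple ownership-chain check. This is a simplified stand-in for a real Plasma Cash exit game; the `check_sig` callback and the transfer-tuple format are assumptions made for illustration.

```python
def can_exit(deposit_owner, transfers, claimed_owner, check_sig):
    """Simplified Plasma Cash-style exit check: every transfer of the coin
    must be signed by its owner at the time, and the chain of transfers
    must end at the party attempting to exit."""
    owner = deposit_owner
    for frm, to, sig in transfers:
        # Each hop must come from the current owner and carry their signature.
        if frm != owner or not check_sig(frm, to, sig):
            return False
        owner = to
    return owner == claimed_owner
```

The user's obligation to store this entire transfer history themselves is exactly the burden that the hybrid techniques discussed later aim to reduce.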

Hence, this technique works well for payments based on unspent transaction outputs (UTXOs) (but not for account-model payments, where someone's balance can be edited upward without their consent, which is why account-model Plasma is so hard), and it can even be made to work for decentralized exchange, but this "ownership" property is far from universal. Some applications (such as Uniswap) have no natural owner, and even objects that do have owners can often legitimately be edited by multiple parties. And without opening up denial-of-service (DoS) vulnerabilities, there is no way to allow an arbitrary third party to exit an asset, because no one can prove which publisher or submitter is at fault.

Plasma and state channels share other problems as well: neither allows transactions that send funds to users who have not yet joined the system. Plasma requires users to store large amounts of history, which grows even larger when different assets become intertwined (for example, when the transfer of one asset is conditional on the transfer of another, as happens in a decentralized exchange with a single-stage order book mechanism).

Because on-chain-data, off-chain-computation Layer 2 techniques have no data availability problem, they have none of these weaknesses. ZK Rollup and Optimistic Rollup take great care to put enough data on chain to allow users to compute the complete state of the Layer 2 system, ensuring that if any participant disappears, a new participant can easily take their place. Their only remaining problem is verifying computation without performing it on chain, which is a much easier problem to solve, and the gains are considerable: each transaction takes about 10 bytes in ZK Rollup, and Optimistic Rollup can reach a similar level using BLS signature aggregation. That translates, in theory, into up to 500 transactions per second today, and more than 2,000 per second after the Istanbul upgrade.
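The claim that "new participants can easily take their place" amounts to replaying the on-chain batches from genesis. A toy sketch with balance-transfer deltas (the tuple format and function name are assumptions for illustration):

```python
def replay_state(genesis_balances, batches):
    """Reconstruct the full rollup state by applying every on-chain batch
    in order. A new operator needs nothing except the chain data itself."""
    balances = dict(genesis_balances)
    for batch in batches:
        for frm, to, amount in batch:
            # Invalid deltas cannot occur in practice: the SNARK (ZK Rollup)
            # or the fraud-proof game (Optimistic Rollup) excludes them.
            assert balances.get(frm, 0) >= amount, "invalid transfer"
            balances[frm] -= amount
            balances[to] = balances.get(to, 0) + amount
    return balances
```

Since every delta is on chain and must be processed in order, the state any honest party computes this way is unique and undisputed.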

But what if you want to achieve higher scalability?

In fact, there is a large middle ground between fully on-chain-data Layer 2 protocols and fully off-chain-data Layer 2 protocols, with many hybrid routes that combine the advantages of both. As a simple example, the history-storage blowup of a decentralized exchange deployed on Plasma Cash can be prevented by publishing a small (under 4 bytes per order) map of which orders matched with which on chain:

Left: the history a Plasma Cash user needs to store if they own one token.

Middle: the history a Plasma Cash user needs to store if they own one token that was obtained via an atomic swap for another token.

Right: the history a Plasma Cash user needs to store if the order-matching map is published on chain.

Beyond the decentralized exchange use case, a Plasma chain could periodically publish small amounts of per-user data on chain to reduce the history users need to store. We could also imagine a platform that works like Plasma for states that have a single logical "owner", and like ZK Rollup or Optimistic Rollup for states that do not. Plasma developers are already starting to work on these kinds of optimizations.

There is thus a strong case for developers of Layer 2 scaling solutions to be willing to publish a little of each user's data on chain (at least at some point): it greatly improves ease of development, generality, and security, and reduces per-user load (for example, users no longer need to store history). Doing so can even improve efficiency: even in a fully off-chain-data Layer 2 architecture, users will inevitably and frequently deposit, withdraw, and move between counterparties and providers, which itself puts a large amount of per-user data on chain. Hybrid routes clearly open the door to a relatively fast deployment of fully general, Ethereum-style smart contracts inside a quasi-Layer 2 architecture.


Source: ChainNews