In Depth | Validity Proofs vs. Fraud Proofs

Author: Avihu Levy & Uri Kolodny

Translation & Proofreading: Anzai Clint & A Jian

Source: Ethereum enthusiasts

(Photo courtesy of Artur Tumasjan on Unsplash)


In recent months, Optimistic Rollup (OR), a scaling framework based on Fraud Proofs, has been attracting growing attention. At StarkWare we use Validity Proofs (VP) because they are more secure than fraud proofs. Beyond the security discussion, this article lists some additional advantages of VP over OR, corrects common misunderstandings about validity proofs, and finally introduces StarkEx, a scalability engine based on STARKs and validity proofs developed by StarkWare.

Specifically, compared with OR, VP has the following advantages:

  • Fundamentally more secure
  • ~1000x higher capital efficiency on withdrawals
  • Better on-chain scalability
  • At least comparable computational efficiency


In a previous analysis (editor's note: linked at the end of the Chinese translation), we compared VP and OR. The former executes a state transition only after it has been rigorously proven valid, while the latter admits arbitrary state transitions (hence "Optimistic") and relies on participants to submit fraud proofs against invalid transitions. Our previous article focused on security analysis and described a concrete attack on OR that could drain all of the funds it holds (other feasible attacks are discussed in follow-ups). Blockchain infrastructure must be robust enough to carry the trillions of dollars of daily financial interactions of the traditional financial world. How do VP and OR measure up? Since the cost of attempting to steal funds in OR is unrelated to the potential gain, once the system carries enough capital to make an attack profitable, rational participants will certainly try to profit by attacking. VP, in contrast, never transitions to an invalid state, so no matter how much money it carries, none of it can be stolen. For large-scale financial systems, VP is robust where OR is fragile.

System security can also be analyzed from the perspective of data loss. The data in OR is more sensitive than in VP: once it is lost, funds in the OR can be stolen, which is why current OR designs focus on on-chain Data Availability. With VP, when on-chain data is used (as in ZK-Rollup), funds are as secure as if they were held on Layer 1; and for the off-chain-data variants of VP, funds can at worst be frozen, never stolen.

Capital efficiency

One of the major liquidity pain points in the digital-currency world is withdrawal delay. In OR, the standard withdrawal window is about one week: the validity window (a security parameter) during which fraud proofs can be submitted. In VP, the standard withdrawal window is about 10 minutes (and can be shortened further with additional software and hardware), the time needed to generate a validity proof for the latest batch of computation. OR's standard withdrawal window is therefore roughly 1000 times longer than VP's (1 week / 10 minutes ≈ 1000). Users of OR must bear this 1000-fold disadvantage, which is not only a delay in time but also a loss of capital efficiency.
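The ~1000x figure above is just the ratio of the two window lengths quoted in the text; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the ~1000x ratio between withdrawal windows.
OR_WINDOW_MIN = 7 * 24 * 60   # ~1 week fraud-proof dispute window, in minutes
VP_WINDOW_MIN = 10            # ~10 minutes to generate a validity proof

ratio = OR_WINDOW_MIN / VP_WINDOW_MIN
print(f"OR window / VP window = {ratio:.0f}x")  # ≈ 1008x, i.e. ~1000x
```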

We previously described a trustless fast-withdrawal mechanism: a user who wants to withdraw immediately gives a liquidity provider an IOU for off-chain assets, i.e. a signed conditional payment transaction; the liquidity provider then advances the funds from a "piggy bank" smart contract, transferring the IOU amount to the withdrawing user on-chain. The whole operation takes roughly as long as an ordinary on-chain transfer. The liquidity provider periodically moves the accumulated off-chain assets back into the on-chain piggy bank.

Both VP and OR can use this fast-withdrawal mechanism. In an OR system, however, the liquidity provider must keep 1000 times as much capital in the piggy bank, because the window before it receives the off-chain assets is 1000 times longer. This 1000x ratio is independent of the assumptions in the piggy-bank liquidity algorithm: whether reserves are sized by the expected withdrawal volume, the withdrawal-deposit balance, peak liquidity demand, or the average withdrawal amount, OR requires 1000 times the reserves that VP does.
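To see why the ratio survives any reasonable demand model, here is a minimal sketch (not StarkWare's actual algorithm, and the demand figures are made up): if the provider is only made whole once the withdrawal window closes, its idle reserve scales linearly with the window length.

```python
# Sketch: the liquidity provider's locked reserve under a simple
# constant-rate demand model. Any linear demand model gives the same
# window-length ratio between OR and VP reserves.
def required_reserve(withdrawals_per_hour: float,
                     avg_amount: float,
                     window_hours: float) -> float:
    """Funds that must stay locked to cover withdrawals initiated
    during one full window, before the provider is reimbursed."""
    return withdrawals_per_hour * avg_amount * window_hours

demand = (50, 2.0)  # hypothetical: 50 withdrawals/hour of 2 ETH each
vp_reserve = required_reserve(*demand, window_hours=10 / 60)  # ~10-min window
or_reserve = required_reserve(*demand, window_hours=7 * 24)   # ~1-week window
print(f"VP: {vp_reserve:.1f} ETH, OR: {or_reserve:.1f} ETH, "
      f"ratio {or_reserve / vp_reserve:.0f}x")
```

The demand parameters cancel out in the ratio, which is the point made in the text.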

However, fast withdrawals are sometimes simply unavailable. For non-fungible assets (and some less liquid fungible assets) they cannot be used, or are costly to use:

  • Non-Fungible Tokens (NFTs): as Vitalik pointed out earlier, if a valuable CryptoKitty named Mitzi lives off-chain, its owner cannot ask to receive "another Mitzi" on-chain, because there is only one CryptoKitty named Mitzi.
  • Hidden transactions: Zerocash-style commitments are also non-fungible to some extent. To quickly move funds from a hidden transaction to the main chain (while keeping them hidden there), the user would have to reveal the committed data to the liquidity provider, destroying the privacy.

In scenarios where the fast-withdrawal mechanism cannot be applied, users can only wait out the standard withdrawal window, and VP is 1000 times faster than OR.

Scalability (on-chain)

In this section we compare different rollup systems. Since only like-for-like comparisons are meaningful, we restrict ourselves to rollups that provide on-chain data availability, namely OR vs. STARK-based ZK-Rollup (StR). Like it or not, every rollup that stores data on-chain consumes resources that grow linearly with the number of rollup transactions. The on-chain data contains state (such as transaction details) and witness data (such as the digital signatures proving the parties consented to the transaction). The difference between OR and StR is that as transaction volume grows, OR's witness data grows linearly, while StR replaces all those witnesses with a single proof whose size grows only polylogarithmically. The key point: for large enough batches, StR's on-chain data footprint is much smaller than OR's.

Into the details: in StR, the witness data attests to the checks performed by the rollup operator, so a whole batch of computation needs only one witness (the ZK proof), with no per-transaction witnesses attached. Better still, in modern proof systems the size of this proof is essentially fixed (more precisely, as noted above, polylogarithmic in the batch size). So as transaction volume grows, the per-transaction overhead shrinks. In OR, each transaction must carry its own witness so that verifiers can check its correctness, so large batches enjoy no such amortization. More importantly, the witness in OR is much larger than the transaction itself: for example, an OR witness must include every user's signature¹, while VP does not (the proof attests that the signatures were already verified off-chain). For plain payments, the witness is 3 to 5 times larger than the payment's state data; for complex applications (such as hidden transactions) the witness is usually more than 10 times larger than the state data, sometimes far more.
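A toy model makes the amortization argument concrete. The byte counts below are illustrative assumptions, not measurements; the shape of the curves is what matters:

```python
# Toy model of on-chain bytes per batch. OR posts a witness per
# transaction; StR posts one proof per batch whose size is roughly
# independent of batch size. All constants are illustrative guesses.
STATE_BYTES = 32       # per-transaction state data (e.g. transfer details)
WITNESS_FACTOR = 4     # OR witness ~3-5x the state data for plain payments
PROOF_BYTES = 50_000   # one STARK proof; grows only polylogarithmically

def or_bytes(n_tx: int) -> int:
    # Every transaction carries both its state and its own witness.
    return n_tx * STATE_BYTES * (1 + WITNESS_FACTOR)

def str_bytes(n_tx: int) -> int:
    # State per transaction, plus one near-constant proof for the batch.
    return n_tx * STATE_BYTES + PROOF_BYTES

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} txs: OR {or_bytes(n):>10,} B, StR {str_bytes(n):>10,} B")
```

Under these assumptions StR is already smaller at 1,000 transactions, and the gap widens linearly with batch size.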

Overall, OR clearly consumes more on-chain resources, and will therefore hit the scalability ceiling sooner than StR.

General computation overhead (the more STARKs are used, the better)

People often compare the general-computation overhead of VP and OR: that is, how much extra work does each system need to do for a given off-chain computation task? In what follows we center on StarkWare's STARK, since that is the VP framework we currently use.

OR: since roughly 100 verifiers supervising one another are considered enough to guarantee the correctness of the computation, discussions of OR usually assume a validator set in the hundreds. During verification, each validator re-executes the computation, so the cost of general computation in OR is about 100 times that of the original task.

It is worth noting that the smaller, or the more pre-assigned, the validator set, the easier it is for validators to collude, or to be bribed or attacked by outside parties.

STARK: since the computational cost of verification is negligible, only one entity, the prover, needs to do heavy computation. How negligible is verification? Even an ordinary smartphone can now verify the result of a very large computation, so we can ignore the verification cost. It is often said that the prover's cost is 10,000 times that of the original task, because generating a proof consumes a lot of computation. In fact, the cost StR requires is only about 100 times, so its extra computational cost is roughly on par with OR's. StR's overhead is only about 100x for the following reasons:

  • For arithmetic and algebraic operations, we are already below a 100x overhead. The Pedersen hash function we currently use adds only about a 20x overhead: that is, proving the correctness of a Pedersen hash with a STARK is only 20 times slower than computing the hash directly.
  • For operations known to be expensive to prove, such as SHA-256, we are working to replace them with STARK-friendly alternatives. This research is funded by the Ethereum Foundation, and in the first quarter of 2020 many cryptographers will submit candidate alternatives to us. We estimate that a STARK-friendly hash function will be only about 100 times slower to prove than directly computing an efficient hash such as SHA-256.
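Putting the two paragraphs side by side with the figures from the text (a rough sketch; the constants are the article's own round numbers, not benchmarks):

```python
# Rough overhead comparison using the round figures quoted above.
TASK_COST = 1.0  # cost of running the original computation once

# OR: ~100 validators each re-execute the computation.
N_OR_VALIDATORS = 100
or_total = N_OR_VALIDATORS * TASK_COST

# STARK: one prover, slowdown depends on the operations being proven.
STARK_PROVER_OVERHEAD = {
    "pedersen_hash": 20,              # currently deployed, per the text
    "stark_friendly_hash_est": 100,   # estimated vs. e.g. native SHA-256
}
stark_total = STARK_PROVER_OVERHEAD["stark_friendly_hash_est"] * TASK_COST

print(f"OR total work: {or_total:.0f}x, STARK total work: {stark_total:.0f}x")
```

Both land at roughly 100x the original task, which is the article's point: the prover overhead is comparable to OR's replicated re-execution, not the oft-quoted 10,000x.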

Finally, many people favor OR because it targets general computation and will support EVM code, a feature VP supposedly lacks. We are optimistic about using STARKs for general-purpose computation as well.

Thanks to Dan Robinson, John Adler and Vitalik Buterin for their feedback on this article.

¹ BLS is often cited as an efficient aggregate-signature scheme. We believe BLS is not well suited to this use case, since it is at its best when aggregating many signatures over a single message. In OR, many distinct messages are signed by the transactions' senders, and BLS verification then costs about 10 ms per signature (one pairing operation per message). This burdens not only the (off-chain) verifiers; the main chain must also bear this cost when adjudicating fraud.
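The footnote's 10 ms/signature figure adds up quickly for a disputed batch; a small illustration (batch sizes are hypothetical):

```python
# Cost sketch for the footnote: BLS verification at ~10 ms per signature
# (one pairing per distinct message) when a whole OR batch is disputed.
MS_PER_SIG = 10
for batch_size in (1_000, 10_000):
    total_ms = batch_size * MS_PER_SIG
    print(f"{batch_size:>6} signed txs -> {total_ms / 1000:.0f} s of pairing work")
```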