Vitalik Buterin explains the Serenity design principles, showing the thinking behind this ambitious project!
(The design rationale behind the Eth 1.0 chain can be found at [1].)
Serenity design principles
▲ Simplicity : the protocol should strive to be as simple as possible, because simplicity (i) minimizes development costs, (ii) reduces the risk of unforeseen security problems, and
(iii) makes it easier for protocol designers to convince users that parameter choices are legitimate.
When implementing a given level of functionality, some complexity is unavoidable. In that case, the preferred place for complexity to live is, in order: Layer 2 protocols > client implementations > the protocol specification itself.
▲ Long-term stability : ideally, the lower levels of the protocol should be built so that they do not need to change for a decade or more, and any needed innovation can happen at the higher levels (client implementations or Layer 2 protocols).
▲ Sufficiency : it should be possible to build as many categories of applications as possible on top of the protocol.
▲ Defense in depth : the protocol should keep working under a variety of possible security assumptions, e.g. regarding network latency, the number of faults, and users' motivations.
▲ Light-client verifiability : given some security assumptions (e.g. network latency, a bound on the attacker's budget, 1-of-N or few-of-N honest validators), a client verifying only O(C) data (ideally just the beacon chain) should be able to obtain an indirect guarantee that all the data in the whole system is available and valid, even during a 51% attack (note: this is one aspect of "defense in depth").
The tradeoff between Layer 1 and Layer 2
Arguments in favor of Layer 2 :
- it reduces complexity at the consensus layer (see "Simplicity" above);
- it reduces the need to modify the protocol layer (see "Long-term stability" above);
- it reduces the risk of failing to reach consensus;
- it offers more flexibility and the ability to roll out new ideas over time.
Arguments in favor of Layer 1 :
- it reduces the risk of stagnation due to the lack of a mechanism that forces everyone to upgrade to a new protocol (i.e. a hard fork);
- it may reduce the complexity of the system as a whole;
- if Layer 1 is not powerful enough, it is impossible to build Layer 2 systems with the required performance on top of it (see "Sufficiency" above).
Ethereum 2.0's design largely strikes a balance between Layer 1 and Layer 2. It includes (i) quasi-Turing-complete code execution with rich state, (ii) scalable data availability and computation, and (iii) fast block times, all of which are necessary for the protocol's sufficiency, because:
- without (i), there is no robust trust model on which to build Layer 2 applications;
- without (ii), scalability is limited to approaches such as state channels or Plasma-style techniques, which face challenges around generality, capital lock-up, and/or mass exits;
- without (iii), fast payments are impossible without channel techniques, which face the same challenges around generality, capital lock-up, and/or mass exits.
However, Ethereum 2.0 also deliberately leaves some other features for Layer 2 to provide: (i) privacy, (ii) high-level programming languages, (iii) scalable state storage, and (iv) signature schemes. These are left to Layer 2 because they are all areas of rapid innovation, many existing solutions have different properties, and trade-offs toward better, newer solutions are inevitable. For example:
- Privacy : ring signatures + confidential values vs. ZK-SNARKs and ZK-STARKs, Rollup vs. ZEXE, and more.
- High-level programming languages : declarative vs. imperative programming, syntax, formal-verification features, type systems, guard features (such as forbidding non-pure functions in arithmetic expressions), natively supported privacy features, etc.
- Scalable state storage : account model vs. UTXO (unspent transaction output) model, different rent schemes, raw Merkle branch witnesses vs. SNARK/STARK compression vs. RSA accumulators, sparse Merkle trees vs. AVL trees vs. usage-based rebalancing, and so on (in addition to the different schemes for verifying state transitions).
- Signature schemes : Schnorr signatures, BLS signatures, Lamport signatures, etc.
Why proof of stake?
For detailed arguments, see:
- https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ
- https://medium.com/@VitalikButerin/a-proof-of-stake-design-philosophy-506585978d51
Why use Casper?
There are currently three major schools of proof-of-stake consensus algorithm:
- Nakamoto-inspired probabilistic algorithms (e.g. Peercoin, NXT, Ouroboros (Cardano's consensus algorithm), and other practical algorithms)
- PBFT-inspired (Practical Byzantine Fault Tolerance) proof-of-stake algorithms (e.g. Tendermint, Casper FFG, HotStuff, etc.)
- CBC Casper (see [4] and [5] for explanations)
Within the latter two camps, there is also the question of whether and how to use security deposits and slashing (penalties taken out of those deposits). All three schools of proof of stake are superior to proof of work (PoW), but here we want to defend the specific approach taken by Ethereum 2.0.
Security deposit penalties (slashing)
- Increasing the cost of attack : we want to ensure that any 51% attack on a PoS chain costs the attacker a very large sum (e.g. tens of millions of dollars' worth of cryptocurrency), and that the system can quickly recover from any attack. This makes the attack/defense calculus very unfavorable for attackers, and may even make attacks counterproductive.
- Overcoming the validator's dilemma : the most realistic way for nodes to deviate from "honest" behavior is laziness (e.g. a validator not validating what it should be validating, or signing things without checking them). The validator's dilemma is described in detail in [6], and Bitcoin SPV mining [7] is an example of it occurring in practice, with potentially very serious consequences. Penalties for validators caught deviating from honest behavior help alleviate this dilemma.
A more subtle example of the second point: in July 2019, a validator on the Cosmos chain was slashed for signing two conflicting blocks [8]. The investigation of this incident showed that the validator was running both a primary node and a backup node (to ensure it would keep earning rewards if one of them went offline), and the two happened to be active at the same time, so they signed two conflicting blocks.
If running both a primary and a backup node became commonplace, an attacker could partition the network and get all the validators' primary and backup nodes to commit to different blocks, causing two conflicting blocks to be finalized. Slashing penalties go a long way toward suppressing this practice and reducing the risk of this happening.
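To make the slashing condition concrete, here is a minimal sketch of the double-sign check that the Cosmos incident triggered. The structures and names are hypothetical simplifications for illustration, not the actual Eth 2.0 spec types:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedBlock:
    slot: int          # chain position the block claims
    block_root: bytes  # hash identifying the block's contents
    validator: int     # index of the validator that signed it

def is_slashable_double_sign(a: SignedBlock, b: SignedBlock) -> bool:
    """Signing two different blocks for the same slot is a slashable
    equivocation; signing the same block twice is harmless."""
    return (a.validator == b.validator
            and a.slot == b.slot
            and a.block_root != b.block_root)

# The Cosmos incident described above: primary and backup nodes both
# live at once, each signing its own block for the same slot.
primary = SignedBlock(slot=100, block_root=b"block-A", validator=7)
backup = SignedBlock(slot=100, block_root=b"block-B", validator=7)
assert is_slashable_double_sign(primary, backup)
```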
Consensus algorithm selection
It is important to note that finality requires a majority of validators to be online, and this is already a requirement of the sharding mechanism: each committee of randomly sampled validators needs 2/3 of its members to sign a crosslink before the crosslink is accepted by the beacon chain.
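As a sketch (the function and parameter names are illustrative, not from the spec), the acceptance condition is a simple 2/3 count over the sampled committee:

```python
def crosslink_accepted(committee_size: int, signers: int) -> bool:
    """A crosslink is accepted by the beacon chain only if at least
    2/3 of the randomly sampled committee signed it; integer math
    avoids floating-point edge cases."""
    return 3 * signers >= 2 * committee_size

assert crosslink_accepted(128, 86)      # 86/128 >= 2/3: accepted
assert not crosslink_accepted(128, 85)  # 85/128 < 2/3: rejected
```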
We chose Casper FFG simply because it was the simplest algorithm available for the protocol to achieve finality. We are also actively exploring a switch to CBC Casper in Phase 3 of Ethereum 2.0.
Sharding, or why do we hate supernodes?
We chose not to take the supernode route (i.e. scaling by requiring every consensus node to be powerful enough to process every transaction), for the following main reasons:
- Validator pool centralization risk : in a supernode-based system, running a node carries a high fixed cost, which limits the number of participants. Some will argue that "in most PoW and PoS cryptocurrencies, consensus is already controlled by 5-20 pools (mining pools or staking pools), and those pools can run good nodes." The problem with this view is that it ignores the centralization pressure that exists even among well-funded pools: if the fixed cost of running a validator is high relative to the returns, then larger pools can charge lower fees than small pools, which can squeeze out the small pools and pressure them to merge. In a sharded system, validators who stake more ETH must verify more transactions, so their costs are not fixed.
- Cloud centralization risk : in a supernode-based system, staking at home is not feasible, so most staking is likely to happen in cloud computing environments. This creates a single point of failure.
- Reduced censorship resistance : participating in consensus without high computation and bandwidth requirements becomes impossible, which makes participants easy to detect and thus easier to censor.
- Scalability : in a supernode-based system, the risks above all grow as transaction throughput grows, whereas a sharded system handles rising transaction volume more easily.
These centralization risks are also why we are not trying to achieve ultra-low latency (less than 1 second) for Ethereum, instead choosing (relatively) conservative latency. In the Ethereum 2.0 system, you can participate with as little or as much ETH, and as little or as much computing power, as you like (although you need to keep ETH and computing power in proportion; you cannot stake a large amount of ETH with only a small amount of computing power, and vice versa), and fixed costs are minimized, although costs do grow with the amount of ETH you stake (once you stake more than 32,768 ETH, you will be validating all the shard chains most of the time; note: there are 1,024 shard chains in total, each validator slot requires a 32 ETH deposit, and 1024 * 32 = 32,768).
Security model
A common, more stringent security model is the uncoordinated rational majority, in which participants act in their own interests, but no more than a certain fraction of them coordinate with each other (in a simple PoW chain, that fraction is 23.2% [11]).
Another, even more rigorous security model deals with the worst case, in which a single actor controls more than 50% of the hash power or stake. The questions then become:
(1) even in this case, can we guarantee that trying to destroy the chain costs the attacker a very high price?
(2) what guarantees can we maintain unconditionally?
In a PoS chain, slashing (penalizing the security deposit) answers the first question: an attacker trying to destroy the chain incurs a very high cost. In a blockchain without sharding (such as the current Ethereum 1.0 chain), every node verifying every block provides the second kind of guarantee in two respects: (i) the longest chain is valid, and (ii) the longest chain is available (see [12] for the rationale behind the importance of "data availability").
In Ethereum 2.0 we achieve defense in depth through sharding: committees of randomly sampled validators provide validity and availability guarantees under an honest-majority security model; proof of custody deters lazy validators (a validator that "lazily" skips verification faces penalties); and fraud proofs together with data availability proofs [9] let clients detect invalid and unavailable chains without downloading and verifying all the data. Taken together, this allows a client to reject invalid or unavailable chains even when they are backed by a majority of PoS validators.
Clients can detect transaction censorship in a consensus-preserving way (see [13]), but this line of research has not yet been incorporated into the Ethereum roadmap.
The expected security properties are summarized in a table in the original post.
Casper incentives
Validators receive rewards for the following:
- a reward for getting an "attestation" included on chain;
- a reward for the attestation specifying the correct epoch checkpoint (note: the last slot of each epoch is called the checkpoint, and a slot is the time the protocol allots for producing one block, i.e. 6 seconds);
- a reward for the attestation specifying the correct chain head (the most recent block);
- a reward for the attestation being included on chain quickly (if the attestation is included after 1 slot, the validator receives the full reward; if it is included after n slots, the reward is 1/n of the full reward);
- a reward for the attestation specifying the correct shard block.
In each case, the actual reward is computed as follows: if B is the base reward and P is the fraction of validators that performed the desired action, then any validator that performs the action receives a reward of B*P, while any validator that should have performed it but did not is penalized -B. The goal of this "collective reward" mechanism ("if anyone performs better, everyone performs better") is to bound the griefing factor (see [13] for a description of griefing factors and why bounding them is important).
It should be noted that the fourth reward above is an exception: it depends on how quickly the attestation is included, which is not entirely under the validator's control, and it carries no corresponding penalty.
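A minimal sketch of the two reward rules just described, with illustrative names and B and P treated as plain numbers:

```python
def attestation_reward(base_reward: float, participation: float,
                       performed_duty: bool) -> float:
    """Collective reward: a validator that performs a duty earns B*P,
    where P is the fraction of validators that performed it; one that
    should have performed it but did not is penalized -B."""
    return base_reward * participation if performed_duty else -base_reward

def inclusion_reward(base_reward: float, delay_slots: int) -> float:
    """Inclusion-delay reward: the full reward after 1 slot, 1/n of it
    after n slots, and no penalty (delay is not fully under the
    validator's control)."""
    return base_reward / delay_slots

B = 1.0
assert attestation_reward(B, 0.9, True) == 0.9    # 90% participation
assert attestation_reward(B, 0.9, False) == -1.0  # missed the duty
assert inclusion_reward(B, 2) == 0.5              # included after 2 slots
```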
The base reward B itself is calculated as B = k*Di / sqrt(D1 + … + Dn), where D1…Dn are the sizes of the validators' deposits and k is a constant. This is a compromise between two common models: (i) a fixed reward rate, i.e. k*Di, and (ii) a fixed total reward, i.e. k*Di / (D1 + … + Dn).
The main argument against (i) is that it imposes two kinds of uncertainty on the network: the total issuance is uncertain, and the total amount staked is uncertain (if the fixed reward rate is too low, almost nobody will participate, which threatens the network; and if it is too high, too many people will participate, making issuance unexpectedly high).
The main argument against (ii) is that this model makes the network more vulnerable to "discouragement attacks", as described in [13].
The square root in the base reward formula compromises between the two approaches and avoids the worst outcomes of each.
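A sketch comparing the three schemes per validator (k is an arbitrary illustrative constant, and Di is a single validator's deposit):

```python
import math

def fixed_rate(k: float, d_i: float) -> float:
    """Scheme (i): fixed reward rate k*Di; total issuance grows
    linearly and unpredictably with total stake."""
    return k * d_i

def fixed_total(k: float, d_i: float, total: float) -> float:
    """Scheme (ii): fixed total reward split pro rata, k*Di/sum(D);
    more vulnerable to discouragement attacks."""
    return k * d_i / total

def base_reward(k: float, d_i: float, total: float) -> float:
    """Eth 2.0's compromise: B = k*Di/sqrt(sum(D))."""
    return k * d_i / math.sqrt(total)

# Doubling total stake leaves the per-validator reward unchanged under
# (i), halves it under (ii), and shrinks it by sqrt(2) under the
# compromise, damping both extremes.
k, d_i = 0.01, 32.0
for total in (1_000_000.0, 2_000_000.0):
    print(fixed_rate(k, d_i),
          fixed_total(k, d_i, total),
          base_reward(k, d_i, total))
```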
A block proposer receives 1/8 of the base reward for each attestation it includes in its block; this encourages proposers to listen as widely as possible and include as much information as they can.
However, if P is less than 2/3 (i.e. fewer than 2/3 of validators are online), offline validators are additionally penalized through an "inactivity leak". This:
- penalizes offline validators more severely, since their being offline is actively preventing blocks from being finalized;
- serves the goal of "anti-correlation penalties" (explained further below);
- ensures that if more than 1/3 of validators go offline at the same time, the fraction of validators online eventually recovers to 2/3 of the total, because offline validators' shrinking deposits cause them to be ejected from the validator set.
Under the current parameterization, if blocks stop being finalized, validators lose 1% of their deposits after 2.6 days, 10% after 8.4 days, and 50% after 21 days.
This means that if 50% of validators go offline, blocks start being finalized again after 21 days: by then all the offline validators will have lost 50% of their deposits (16 ETH), and validators whose deposits fall below 16 ETH are ejected from the validator set.
Slashing penalties are also anti-correlated: a validator's penalty grows with the number of other validators slashed around the same time (a sketch follows this list). The reasons are:
- a validator's misbehavior only does real damage to the network if many other validators misbehave at the same time, so it makes sense to punish that case more severely;
- this penalizes actual attacks severely while imposing only very light penalties on single, isolated mistakes that are probably not malicious;
- it ensures that small validators take on less risk than large validators (since, under normal circumstances, only a large validator's nodes all fail at the same time);
- it discourages everyone from joining the largest validator pool.
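A minimal sketch of such a penalty, assuming (purely for illustration) that the penalty scales with three times the fraction of total stake slashed in the same window, capped at the full deposit:

```python
def correlated_penalty(deposit: float, fraction_slashed: float,
                       multiplier: float = 3.0) -> float:
    """Anti-correlation penalty: penalty = deposit * min(1, m*f), where
    f is the fraction of all stake slashed around the same time. A lone
    mistake costs almost nothing; a coordinated attack costs everything."""
    return deposit * min(1.0, multiplier * fraction_slashed)

print(correlated_penalty(32.0, 0.001))  # isolated mistake: ~0.1 ETH
print(correlated_penalty(32.0, 0.5))    # mass equivocation: all 32 ETH
```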
BLS signatures
BLS signatures allow many signatures to be aggregated into one, but aggregation needs a defense against rogue-key attacks. For that, we use each validator's signature over its own deposit message as a proof of possession; the message specifies the signing key as well as other important information, such as the withdrawal key.
Random sampling of validators
In the future we plan to use VDFs (verifiable delay functions) to make the random seed even more robust against manipulation.
Since the shuffle is a permutation, each validator is assigned to exactly one long-term committee during each period.
Since there will be 1,024 shard chains in the Ethereum 2.0 system, crosslinking every shard chain during every epoch requires 131,072 validators (note: 1024 * 128 = 131,072), or roughly 4.2 million ETH staked (in practice, if less ETH is staked, each shard chain simply gets crosslinked less often). And if the minimum stake were raised (say, to 1,024 ETH instead of the current 32 ETH), we could not get enough validators to crosslink every shard chain during every epoch unless nearly all ETH were staked.
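The arithmetic behind these numbers, as a sketch:

```python
SHARD_COUNT = 1024    # number of shard chains
COMMITTEE_SIZE = 128  # validators sampled per crosslink committee
MIN_DEPOSIT = 32      # ETH per validator slot

validators_needed = SHARD_COUNT * COMMITTEE_SIZE  # 131,072 validators
eth_needed = validators_needed * MIN_DEPOSIT      # 4,194,304 ETH (~4.2M)

print(validators_needed, eth_needed)
```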
After each epoch (64 blocks, about 6.4 minutes), the beacon chain reshuffles the committee assigned to each shard chain. This rapid shuffling of validators ensures that an attacker who wants to attack a single shard chain would need to corrupt (take control of) its committee very quickly.
To further preserve network stability, not all validators rotate from the long-term committee of period n to that of period n+1 at the same time; instead, each validator's rotation is delayed until a random point in the following period.
LMD GHOST fork choice rule
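LMD GHOST ("Latest Message Driven, Greedy Heaviest Observed Sub-Tree") treats each validator's latest attestation as a vote, and finds the head by walking down from the last finalized block, always descending into the child whose subtree carries the most votes. A minimal sketch with simplified structures (not the actual spec types):

```python
from collections import defaultdict
from typing import Dict, List, Optional

def lmd_ghost_head(parent: Dict[str, Optional[str]],
                   latest_votes: Dict[int, str],
                   finalized: str) -> str:
    """Greedily walk down from the finalized block, choosing at each
    step the child whose subtree holds the most latest-message votes.
    (A real implementation also needs a deterministic tie-breaker.)"""
    children: Dict[str, List[str]] = defaultdict(list)
    for block, par in parent.items():
        if par is not None:
            children[par].append(block)

    def supports(block: Optional[str], ancestor: str) -> bool:
        # a vote for a block counts for all of that block's ancestors
        while block is not None:
            if block == ancestor:
                return True
            block = parent[block]
        return False

    def subtree_votes(block: str) -> int:
        return sum(1 for voted in latest_votes.values()
                   if supports(voted, block))

    head = finalized
    while children[head]:
        head = max(children[head], key=subtree_votes)
    return head

# B and D compete as children of finalized block A; B's subtree holds
# two latest votes (both on C) vs. one for D, so the head is C.
parent = {"A": None, "B": "A", "C": "B", "D": "A"}
latest_votes = {0: "C", 1: "C", 2: "D"}
assert lmd_ghost_head(parent, latest_votes, "A") == "C"
```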
We considered two alternative structures:
(1) have committees sign shard blocks and place all shard blocks directly into the beacon chain;
(2) have no beacon chain at all, and connect all the shard chains together through some other structure.
Structure (1) was rejected because it is preferable to give shard chain blocks a 6-second slot time, but 1,024 crosslinks landing on the beacon chain every 6 seconds would put the beacon chain under very heavy load.
Structure (2) was rejected because a hub-and-spoke structure centered on the beacon chain is much easier to implement and reason about than any more complex construction.
Each shard block contains a pointer to its parent shard block and a pointer to a beacon block (the one at the start of its epoch). This semi-tight coupling between the beacon chain and the shard chains (i) ensures that each shard chain knows its long-term committee (that information is generated by the beacon chain) and (ii) gives validators of shard blocks a feasible way of determining which beacon chain is canonical.
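A sketch of this semi-tight coupling, with hypothetical simplified fields:

```python
from dataclasses import dataclass

@dataclass
class ShardBlock:
    shard: int                # which of the 1,024 shard chains this extends
    slot: int                 # 6-second slot the block belongs to
    parent_root: bytes        # pointer to the parent shard block
    beacon_block_root: bytes  # pointer to a beacon block: tells the shard
                              # chain its committee, and pins down which
                              # beacon chain the block treats as canonical
    body: bytes               # transactions / data payload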
The shard chain state (rewards, penalties, the history accumulator) is deliberately kept smaller than the block size, to ensure that when a fraud proof is needed, the entire shard state can be placed inside a beacon block (though this limit may be relaxed in Phase 2: there, each individual execution environment's state will be bounded by this size, but the combined state of all environments may be very large, so fraud proofs will require Merkle proofs).
Crosslinks serve several purposes:
- they let the beacon chain know which shard chain blocks are canonical;
- they create a simple byte array that different verification methods (proof of custody, data availability proofs) can be run against, while ensuring that shard blocks can be fully recovered from multiple crosslinks;
- they create a simple byte array over which fraud proofs can be evaluated.
Validator life cycle
To become a validator, a user submits a deposit that specifies:
- a public key corresponding to the private key that will be used to sign messages;
- withdrawal credentials (i.e. a hash of the withdrawal public key, which will be used to withdraw funds once the validator is done validating);
- the deposit amount.
All of these values are signed by the signing key. Separating the signing key from the withdrawal key allows the more security-critical withdrawal key to be kept safer (held offline, not shared with any staking pool, etc.), while the signing key is used to sign messages during every epoch.
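A sketch of the deposit message and the key separation (a hypothetical simplified structure; the real spec types differ):

```python
from dataclasses import dataclass

@dataclass
class DepositData:
    pubkey: bytes                  # BLS signing key: used every epoch,
                                   # so it has to stay online
    withdrawal_credentials: bytes  # hash of the withdrawal public key;
                                   # the withdrawal key itself can stay
                                   # offline until funds are withdrawn
    amount_gwei: int               # size of the deposit
    signature: bytes               # signature by the signing key over
                                   # the fields above: the proof of
                                   # possession mentioned earlier
```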
The Merkle root of all deposits is kept in the deposit contract. Once a deposit's Merkle root has been included in the Eth 2.0 chain (via the Eth 1.0 data voting mechanism), an Eth 2.0 block proposer can submit a Merkle proof of the deposit and start the validator activation process.
After a validator exits, there is a waiting period before it can withdraw. This:
- ensures that if the validator misbehaved, there is time to catch the misbehavior and slash the validator;
- gives the system time to issue the validator's shard rewards for the final period;
- provides time for challenges against the proof of custody.
If a validator is slashed, its withdrawal is delayed by a further ~36 days. This is an additional penalty in itself (and it forces the validator to keep holding ETH, which is a mild penalty for those who support Ethereum and simply made a mistake, but hits validators who want to destroy the Ethereum blockchain harder), and it leaves the system time to count how many other validators were also slashed during the same period.
During Phase 0, a validator that "withdraws" cannot actually move the funds anywhere; in later phases, withdrawn funds will be transferable into an execution environment.
Fork mechanism
Any user who does not want to join a fork simply stays on the chain that does not change the fork ID at the fork slot. Both chains can then continue to exist, and validators are free to validate both without being slashed.
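A sketch of how a fork ID can keep the two chains from interfering, assuming (for illustration) that validators mix the fork ID into everything they sign:

```python
import hashlib

def signing_root(message: bytes, fork_id: bytes) -> bytes:
    """Mixing the fork ID into the signed data makes a signature
    produced for one fork invalid on the other, so validating both
    chains never produces a slashable equivocation."""
    return hashlib.sha256(fork_id + message).digest()

msg = b"attestation for block X"
assert signing_root(msg, b"fork-0") != signing_root(msg, b"fork-1")
```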
Note: some passages of the original have been omitted from this translation.
Original link:
https://notes.ethereum.org/s/rkhCgQteN
Source: Unitimes
Author | Vitalik Buterin
Compile | Jhonny