Vitalik explains the Serenity design principles, showing you what makes this magnificent project unique!

The design principles of the Serenity chain are as follows:


Serenity design principles

▲ Simplicity : Especially since cryptoeconomic proof of stake and quadratic sharding are inherently complex, the protocol should pursue maximum simplicity in its design decisions. This is important because it will: (i) minimize development costs,

(ii) reduce the risk of unforeseen security problems, and

(iii) make it easier for protocol designers to convince users that parameter choices are legitimate.

See this link [1] for background information. When we implement a given level of functionality, some complexity is unavoidable. The order of preference for where complexity should live is: complexity in the Layer 2 protocols > complexity in client implementations > complexity in the protocol specification.

▲ Long-term stability : Ideally, the lower levels of the protocol should be built so that they need not be changed for ten years or more, with any needed innovation happening at the higher levels (client implementations or Layer 2 protocols).

▲ Adequacy : It should be possible to build as many classes of applications as possible on top of the protocol.

▲ Defense in depth : The protocol should continue to operate under a wide range of possible security assumptions, such as assumptions about network latency, the number of faults, and users' motivations.

▲ Light client verifiability : Given some security assumptions (such as network latency, a bound on the attacker's budget, and the honesty of only 1-of-n or a small number of validators), a client verifying a small, fixed amount of data (ideally only the beacon chain) should be able to obtain an indirect guarantee that all the data of the entire system is available and valid, even in the case of a 51% attack (note: this is one aspect of "defense in depth").

The tradeoff between Layer 1 and Layer 2

Readers may wish to read my two previous articles: Layer 1 Should Be Innovative in the Short Term but Less in the Long Term [2] and Sidechains vs Plasma vs Sharding [3]. In any blockchain protocol design there is a tradeoff between building more features into Layer 1 (i.e. the consensus layer) and building a simpler Layer 1 protocol and letting those features be built on Layer 2 (i.e. the application layer):

Arguments in favor of Layer 2 :

  • Reduces the complexity of the consensus layer (see "Simplicity" above);
  • Reduces the need to modify the protocol layer (see "Long-term stability" above):

— reduces the risk of failing to reach consensus;

— reduces the workload of protocol governance and the attendant political risk;
  • Offers more flexibility and the ability to implement new ideas over time.

Arguments in favor of Layer 1 :

  • Reduces the risk of stagnation due to the lack of a mechanism that forces everyone to upgrade to a new protocol (i.e. a hard fork);
  • May reduce the complexity of the system as a whole;
  • If Layer 1 is not powerful enough, it is impossible to build a Layer 2 system with the required properties on top of it (see "Adequacy" above).

Ethereum 2.0's design is largely committed to striking a balance between Layer 1 and Layer 2. It includes (i) quasi-Turing-complete, richly stateful code execution, (ii) scalable data availability and computation, and (iii) fast block times, all of which are necessary for the protocol's adequacy, because:

  • Without (i), there is no robust trust model under which to build Layer 2 applications;
  • Without (ii), scalability would be limited to techniques such as state channels or Plasma, which face challenges around generality, capital lockup, and/or mass exits;
  • Without (iii), fast transactions would be impossible without channel techniques, which face the same challenges around generality, capital lockup, and/or mass exits.

However, Ethereum 2.0 also deliberately leaves some other features to Layer 2: (i) privacy , (ii) high-level programming languages , (iii) scalable state storage , and (iv) signature schemes . These are left to Layer 2 because they are areas of rapid innovation, many of the existing solutions have different properties, and trade-offs toward better, newer solutions are inevitable. For example:

  • Privacy : ring signatures + confidential values vs. ZK-SNARKs and ZK-STARKs, Rollup vs. ZEXE, and more.
  • High-level programming languages : declarative vs. imperative programming, syntax, formal verification features, type systems, protective features (such as prohibiting non-pure functions inside arithmetic expressions), natively supported privacy features, etc.
  • Scalable state storage : account model vs. UTXO (unspent transaction output) model, different rent schemes, raw Merkle branch witnesses vs. SNARK/STARK compression vs. RSA accumulators, sparse Merkle trees vs. AVL trees vs. usage-based rebalancing, etc. (in addition to the different schemes for verifying state transitions).
  • Signature schemes : Schnorr signatures, BLS signatures, Lamport signatures, etc.

Why proof of stake?


  • https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ
  • https://medium.com/@VitalikButerin/a-proof-of-stake-design-philosophy-506585978d51

Why Casper?

Current proof-of-stake (PoS) consensus algorithms fall into three main camps:

  • Probabilistic algorithms inspired by Satoshi Nakamoto (practical examples include Peercoin, NXT, and Ouroboros (Cardano's consensus algorithm))
  • Proof-of-stake algorithms inspired by PBFT (Practical Byzantine Fault Tolerance) (e.g. Tendermint, Casper FFG, HotStuff)
  • CBC Casper (for explanations see [4] and [5])

Within the last two camps there is also the question of whether and how to use security deposits and slashing (deposit penalties). All three proof-of-stake mechanisms are superior to proof of work (PoW), but in this article we want to defend the approach taken by Ethereum 2.0.

Deposit penalties (Slashing)

Ethereum 2.0 uses a slashing mechanism: a validator detected misbehaving is penalized. The mildest penalty destroys ~1% of the deposit; the most severe destroys the validator's entire deposit . We defend the use of slashing on several grounds:

  1. Raising the cost of attack : We want to ensure that any 51% attack on the PoS chain costs the attacker a very large sum (such as tens of millions of dollars' worth of cryptocurrency), and that the system can quickly recover from any attack. This makes the attack/defense calculus very unfavorable for the attacker, and may in fact make attacks counterproductive.
  2. Overcoming the validator's dilemma : The most realistic way for nodes to deviate from "honest" behavior is to neglect their duties (for example, failing to participate in validation when they should, or signing when they should not). The validator's dilemma is described in detail in [6]; Bitcoin SPV mining [7] is an example of it occurring in practice, with very serious consequences. Penalizing dishonest validators helps alleviate these problems.

A subtler example of point 2 above: in July 2019 a validator on the Cosmos chain was slashed for signing two conflicting blocks [8]. The investigation showed that the validator was running a primary node and a backup node simultaneously (to ensure that rewards would not be missed if one of the nodes went offline), and the two nodes happened to be switched on at the same time, causing them to sign two conflicting blocks.

If running both a primary node and a backup node became commonplace, an attacker could partition the network and get all the validators' primary and backup nodes to commit to different blocks, leading to two conflicting blocks being finalized. Slashing penalties strongly discourage this practice, reducing the risk of this happening.

Choice of consensus algorithm

Of the three camps of PoS consensus algorithms mentioned above, only the latter two (the PBFT-inspired algorithms and CBC Casper) have a notion of finality, whereby a block is confirmed in such a way that it can only be reverted if a large fraction of validators (at least 1/3 for the PBFT-inspired algorithms, at least 1/4 for CBC Casper) misbehave and are slashed for it; the first camp, the (longest-chain-rule) consensus algorithms inspired by Nakamoto, cannot achieve finality in this sense.

It is important to note that finality requires that most validators be online , and this is already a requirement of the sharding mechanism , which needs 2/3 of each randomly sampled validator committee to sign a crosslink in order for that crosslink to be accepted by the beacon chain .

We chose Casper FFG simply because it is the simplest algorithm with which the protocol can achieve finality. We are currently actively exploring a switch to CBC Casper in Phase 3 of Ethereum 2.0 .

Sharding, or: why do we hate supernodes?

For scaling Layer 1, the main alternative to sharding is supernodes: requiring every consensus node to run a powerful server so that it can individually process every transaction. Supernode-based scaling is convenient because it is easy to implement: it works just like many blockchains today, except with more software engineering to process things in a more parallelized way.

We chose not to go the supernode route, for the following main reasons:

  • Validator pool centralization risk : In a supernode-based system, running a node carries a high fixed cost, which limits the number of users who can participate. Some would argue that "in most PoW and PoS cryptocurrencies, consensus is already controlled by 5-20 pools (mining or staking pools), and those pools can run nodes just fine." The problem with this view is that it ignores the centralization pressure that exists even among well-resourced pools. If the fixed cost of running a validator is high relative to the returns, larger pools can offer lower fees than smaller ones, which may squeeze out the small pools and pressure them to merge . In a sharded system, by contrast, validators staking more ETH must verify more transactions, so the cost is not fixed .
  • Cloud centralization risk : In a supernode-based system, staking from home is not feasible, so most staking would likely happen in cloud computing environments. This creates a single point of failure .
  • Reduced censorship resistance : High computation + bandwidth requirements shrink the set of users able to participate in consensus, making validators easier to detect and censor.
  • Scalability : In a supernode-based system, the risks above grow as transaction throughput grows, whereas a sharded system can absorb growing transaction volume far more easily .

These centralization risks are also why we do not attempt to achieve ultra-low latency (under 1 second) for Ethereum, instead choosing (relatively) conservative block times. In Ethereum 2.0, one can stake as little or as much ETH, and devote as little or as much computing power, as one likes (though the two must scale together: you cannot stake a large amount of ETH with only a small amount of computing power, or vice versa), and fixed costs are minimized, although costs do grow with the amount of ETH you stake (once you stake more than 32,768 ETH, most of the time you will be validating all the shard chains; note: there are 1,024 shard chains in total and each validator slot requires a 32 ETH stake, so 1,024 * 32 = 32,768).

Security model

It is generally believed that the security of a blockchain relies on an "honest majority" assumption, i.e. that ≥50% of participants will honestly follow the protocol and forgo opportunities to defect for personal gain. In reality, (i) the honest-majority assumption is unrealistic, since participants may be "lazy" and sign blocks without verifying them (see the validator's dilemma [9] and the forks caused by Bitcoin SPV mining [10]), a very common form of defection; but fortunately, (ii) blockchains retain many of their security properties under weaker security models (rather than assuming that most participants remain honest) .

One common, stricter security model is the uncoordinated rational majority , in which participants act in their own interest, but no more than a certain fraction of them colludes (in a simple PoW chain, that fraction is 23.2% [11]).

Another, still more rigorous security model handles the worst case: a single actor controlling more than 50% of the hashpower or stake. In that case the questions become :

(1) Can we guarantee that an attacker trying to destroy the entire chain pays a very high cost?

(2) What guarantees can we maintain unconditionally?

In a PoS chain, slashing (the deposit penalty) answers the first question above: an attacker trying to destroy the entire chain incurs a very high cost. In a blockchain without sharding (such as the current Ethereum 1.0 chain), every node verifying every block answers the second question by providing the following two guarantees: (i) the longest chain is valid, and (ii) the longest chain is available (see this link [12] for the rationale behind the importance of "data availability").

In Ethereum 2.0, sharding achieves defense in depth : randomly sampled validator committees provide validity and availability guarantees under an honest-majority-of-validators model; proofs of custody deter lazy validators (i.e. a validator who "lazily" skips verification faces punishment); and fraud proofs and data availability proofs [9] allow invalid and unavailable chains to be detected without downloading and verifying all the data. This allows clients to reject invalid or unavailable chains even when those chains are backed by a majority of the PoS validators.

Clients can also detect censorship of transactions in a consensus-preserving way (see [13]), but this line of research has not yet been incorporated into the Ethereum roadmap.

The expected security features are shown in the following table:

Incentives set by Casper

Base rewards
During each epoch ( in Ethereum 2.0, each run of 64 blocks, approximately 6.4 minutes, is called an epoch ), every validator must make an "attestation", i.e. a signed vote on the chain head (the most recent block). If the validator's attestation is included on chain, the validator receives a reward, which consists of five components:

  1. A reward for the attestation being included on chain at all;
  2. A reward for the attestation specifying the correct epoch checkpoint (note: the last slot of each epoch is called the checkpoint, a slot being the time the protocol allots to produce one block, i.e. 6 seconds);
  3. A reward for the attestation specifying the correct chain head (most recent block);
  4. A reward for the attestation being included on chain quickly (if the attestation is included after 1 slot, the validator receives this component in full; if it is included after n slots, it receives 1/n of it);
  5. A reward for the attestation specifying the correct shard block.

In each case, the actual reward is computed as follows. If B is the base reward and P is the fraction of validators that performed the required attestation duty, then any validator performing the duty receives B*P, and any validator who should have performed it but did not is penalized -B. The goal of this "collective reward" scheme, "if anyone performs better, everyone performs better", is to bound griefing factors. (See this article [13] for what griefing factors are and why bounding them is important.)
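As a minimal sketch (the function name and unit scale are mine, not from the spec), the collective reward rule can be written as:

```python
def duty_reward(B: float, performed: bool, P: float) -> float:
    """Collective reward rule: a validator that performs a duty earns B*P,
    where P is the fraction of validators that performed it; a validator
    that should have performed it but did not is penalized B."""
    return B * P if performed else -B

# Everyone does better when more validators perform: the same honest
# validator earns more at P = 0.99 than at P = 0.60.
print(duty_reward(1.0, True, 0.99))   # 0.99
print(duty_reward(1.0, True, 0.60))   # 0.6
print(duty_reward(1.0, False, 0.99))  # -1.0
```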

It should be noted that component 4 above is an exception: that reward depends on how quickly the attestation is included, which is not entirely within the validator's control, and it carries no corresponding penalty.

The base reward B is itself calculated as B = k * D_i / sqrt(D_1 + … + D_n), where D_1 … D_n are the sizes of the validators' deposits and k is a constant. This is a compromise between two common models: (i) a fixed reward rate, i.e. B = k * D_i, and (ii) a fixed total reward, i.e. B = k * D_i / (D_1 + … + D_n).

The main argument against (i) is that this model imposes two levels of uncertainty on the network: the total issuance is uncertain, and the total amount staked is uncertain (because if the fixed reward rate is too low, almost no one will participate, which threatens the network; and if it is too high, too many people will participate, making issuance unexpectedly high).

The main argument against (ii) is that this model makes the network more vulnerable to "discouragement attacks", as described in [13].

The inverse-square-root base reward compromises between the two approaches and avoids the worst outcomes of each.
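The three issuance models can be compared numerically. In this sketch (the constant k and the deposit figures are arbitrary illustrations), quadrupling the total stake leaves a validator's reward unchanged under model (i), quarters it under model (ii), and halves it under the inverse-square-root compromise:

```python
import math

k = 1.0  # protocol constant, arbitrary here

def fixed_rate(D_i, total):     # model (i): fixed reward rate
    return k * D_i

def fixed_total(D_i, total):    # model (ii): fixed total issuance
    return k * D_i / total

def base_reward(D_i, total):    # Eth2 compromise: inverse square root
    return k * D_i / math.sqrt(total)

small, large = 3_200.0, 12_800.0   # total stake quadruples
for f in (fixed_rate, fixed_total, base_reward):
    print(f.__name__, f(32.0, large) / f(32.0, small))
```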

The block proposer receives 1/8 of the base reward for each attestation included in the block, to encourage proposers to listen as widely as possible and include as much information as possible.

Break-even uptime
Suppose there are two kinds of validators: (i) online validators doing their job, and (ii) offline validators. If the online validators make up a fraction P of the total and the base reward is B, an online validator's expected reward is: B*4P for components 1, 2, 3, and 5 above, plus 7/8*B*(P + (P*(1-P))/2 + (P*(1-P)^2)/3 + …) for component 4 (since the attestation may be included late due to absent proposers), plus the proposer reward 1/8*B*P. An absent validator (one that should have attested but did not) is penalized B*4. Hence, if all other validators are online, a validator earns B*5 when online and loses B*4 when offline, so it breaks even at an uptime of 4/(4+5) ≈ 44.44%. If P = 2/3 (i.e. online validators are 2/3 of the total), the expected online reward is ≈ B*(2/3*4.125 + 7/8*0.81) ≈ B*3.46, giving a break-even uptime of ≈ 53.6%.
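These figures can be reproduced numerically; the following sketch (truncating the component-4 series at 1,000 slots, plenty for convergence) recovers the 44.44% and 53.6% break-even uptimes:

```python
B = 1.0  # base reward, in arbitrary units

def online_reward(P: float) -> float:
    """Expected per-epoch reward for an online validator when a fraction P
    of all validators is online (in units of B)."""
    base = B * P * 4          # components 1, 2, 3 and 5
    proposer = B * P / 8      # proposer reward for included attestations
    # Component 4: inclusion after n slots pays (7/8)*B/n, and happens with
    # probability P*(1-P)**(n-1) (each absent proposer delays one slot).
    delay = (7 / 8) * B * sum(P * (1 - P) ** (n - 1) / n for n in range(1, 1001))
    return base + proposer + delay

def breakeven_uptime(P: float) -> float:
    """Uptime u solving u * reward = (1 - u) * 4B (the absence penalty)."""
    return 4 * B / (4 * B + online_reward(P))

print(round(breakeven_uptime(1.0), 4))    # 0.4444
print(round(online_reward(2 / 3), 2))     # 3.46
print(round(breakeven_uptime(2 / 3), 3))  # 0.536
```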

However, if P drops below 2/3 (i.e. fewer than 2/3 of validators are online), offline validators are additionally penalized by the "inactivity leak" .

Inactivity leak
If the Ethereum 2.0 chain fails to finalize for more than 4 epochs, an extra penalty is added so that the maximum possible reward becomes zero (validators performing incorrectly are penalized), plus a second penalty component that grows in proportion to the number of epochs without finality. This ensures that if more than 1/3 of validators drop offline, those offline validators are penalized far more heavily, and the penalty compounds over time. This has three effects:

  • It penalizes offline validators more heavily precisely when their being offline is actually preventing blocks from being finalized;
  • It serves the goal of "anti-correlation penalties" (explained further below);
  • It ensures that if more than 1/3 of validators go offline simultaneously, the online share eventually recovers to 2/3 of the total, because the offline validators' shrinking deposits cause them to be ejected from the validator set.

Under the current parameterization, if blocks stop being finalized, validators lose 1% of their deposit after 2.6 days, 10% after 8.4 days, and 50% after 21 days.

This means that if 50% of validators go offline, blocks will start being finalized again after at most 21 days , because by then all the offline validators will have lost 50% of their deposits (16 ETH), and validators whose deposits fall below 16 ETH are ejected from the validator set.
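A compounding per-epoch penalty proportional to the time since finality reproduces these numbers approximately. In this sketch the penalty quotient 2**24 is my illustrative choice, fitted to the figures above rather than taken from the spec:

```python
EPOCH_MINUTES = 6.4

def leak(days: float, quotient: int = 2**24) -> float:
    """Fraction of a deposit lost after `days` without finality, assuming
    each epoch t (counted since finality stopped) burns balance * t / quotient."""
    balance = 1.0
    epochs = int(days * 24 * 60 / EPOCH_MINUTES)
    for t in range(1, epochs + 1):
        balance -= balance * t / quotient
    return 1.0 - balance

for d in (2.6, 8.4, 21):
    print(d, round(leak(d), 3))
```

Because the cumulative penalty grows roughly quadratically in the time since finality, short outages cost little while long ones escalate quickly.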

Slashing & anti-correlation penalties
If a validator is found to have violated a Casper FFG slashing condition, it is penalized (part of its deposit is destroyed); and if other validators are slashed at around the same time (specifically, between 18 days before the validator is slashed and the time it exits the validator set), its penalty is scaled up, to roughly three times the fraction of all validators slashed in that window. The purposes of this are several:

  • A misbehaving validator does real damage to the network only when many other validators misbehave at the same time, so the penalty in that case should be more severe;
  • It penalizes actual attacks severely, but imposes only a very minor penalty on a single, isolated, probably non-malicious mistake;
  • It ensures that small validators take on less risk than large validators (because under normal circumstances, only a large validator would fail on many nodes at once);
  • It discourages everyone from joining the largest staking pool.
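A sketch of the anti-correlation rule (the ~1% floor and the factor of 3 come from the text above; the exact function shape is my illustration, not the spec formula):

```python
def slashing_penalty(deposit: float, fraction_slashed_in_window: float) -> float:
    """Penalty for a slashed validator: at least ~1% of the deposit, scaled
    up to 3x the fraction of all validators slashed in the surrounding
    ~18-day window, capped at the whole deposit."""
    return deposit * max(0.01, min(1.0, 3 * fraction_slashed_in_window))

print(slashing_penalty(32.0, 1e-6))  # isolated mistake: 0.32 ETH (~1%)
print(slashing_penalty(32.0, 1/3))   # coordinated attack: 32.0 ETH (everything)
```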

BLS signature

We use BLS signatures because they are aggregation-friendly: any two signatures S1 and S2 produced by keys k1 and k2 (with corresponding public keys K1 = G*k1 and K2 = G*k2, where G is the base point of the elliptic curve) can be aggregated by simple elliptic curve point addition: S1 + S2. This makes it possible to aggregate thousands of signatures, with a marginal cost per signature of one bit of data (to indicate that a particular public key is present in the aggregate) and one elliptic curve addition of computation. Note that BLS signatures in this form are vulnerable to rogue-key attacks: if you have seen the other validators publish public keys K1 … Kn, you can generate a private key r and publish the public key G*r - K1 - … - Kn. The aggregate public key is then G*r, so you can single-handedly produce signatures that verify against the aggregate. The standard defense is to require a proof of possession: essentially, a signature over the public key K made with its own private key k, which proves that you control the private key corresponding to the public key you published.

As the proof of possession, we use the validator's signature over its deposit message, which specifies the signing key and other important data, such as the withdrawal key.
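The aggregation arithmetic and the rogue-key attack can be illustrated with a toy model in which scalars modulo a prime stand in for curve points. Real BLS verification needs pairings; only the linearity that makes aggregation work is faithful here, and all constants are arbitrary:

```python
import hashlib

P = 2**61 - 1   # an arbitrary prime "group order"
G = 7           # stand-in for the curve's base point

def H(msg: bytes) -> int:
    """Toy hash-to-group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % P

def pubkey(k: int) -> int:
    return (G * k) % P            # K = G * k

def sign(k: int, msg: bytes) -> int:
    return (H(msg) * k) % P       # S = H(msg) * k

def verify(K: int, msg: bytes, S: int) -> bool:
    # Pairing stand-in: S * G == K * H(msg) exactly when S uses K's key.
    return (S * G) % P == (K * H(msg)) % P

k1, k2 = 12345, 67890
K1, K2 = pubkey(k1), pubkey(k2)
msg = b"crosslink data"

# Aggregation is just "point" addition of signatures and of public keys.
S_agg = (sign(k1, msg) + sign(k2, msg)) % P
assert verify((K1 + K2) % P, msg, S_agg)

# Rogue-key attack: publish K_rogue = G*r - K1 - K2; the aggregate public
# key collapses to G*r, so the attacker can sign alone for the "group".
r = 424242
K_rogue = (G * r - K1 - K2) % P
forged = sign(r, msg)
assert verify((K1 + K2 + K_rogue) % P, msg, forged)
```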

Random validator selection

The seed used as the source of randomness is updated in every block by "mixing in" a value that the block proposer must reveal (i.e. seed <- hash(seed, new_data)). Like the proof-of-custody subkeys, the values a proposer mixes in are fixed as soon as the validator deposits: third parties cannot compute them in advance, but they can be verified once revealed (this mechanism is sometimes called RANDAO). This gives each block proposer only "one bit" of influence over the random seed: propose the block, or withhold it and forfeit substantial rewards. In addition, because the persistent committees and crosslink committees are large, manipulating the randomness is almost certain not to let a small attacker control 2/3 of the validators in any committee .

In the future we plan to use VDFs (verifiable delay functions) to further harden the random seed against manipulation.
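A sketch of the RANDAO mixing step described above (the hash choice and byte layout are illustrative):

```python
import hashlib

def mix(seed: bytes, reveal: bytes) -> bytes:
    """seed <- hash(seed, new_data): each proposer folds its pre-committed
    reveal into the running seed."""
    return hashlib.sha256(seed + reveal).digest()

seed = b"\x00" * 32
# The proposer's whole influence is one bit: publish the block (mixing in
# the reveal, which was fixed at deposit time) or skip it and lose rewards.
seed_if_proposed = mix(seed, b"pre-committed reveal")
seed_if_skipped = seed
assert seed_if_proposed != seed_if_skipped
```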

Validator shuffling (swap-or-not shuffle)
During each epoch, we use the swap-or-not shuffle [13] to shuffle the validators and assign duties. Because the shuffle is a permutation, each validator is assigned to exactly one crosslink committee per epoch (which keeps validator workloads stable and reduces the scope for profiting by manipulating the randomness), and, likewise, each validator is assigned to exactly one persistent committee per period.
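The swap-or-not shuffle can be sketched as follows. This simplifies the spec's compute_shuffled_index: the hash and bit-extraction details here are mine, but the round structure, which makes each round a self-inverse pairing and hence the whole map a permutation, is the real algorithm's:

```python
import hashlib

ROUNDS = 90  # the spec uses 90 rounds; the permutation property holds for any count

def shuffled_index(index: int, count: int, seed: bytes) -> int:
    """Each round derives a pivot from the seed, pairs position `index`
    with its mirror `flip` around the pivot, and a hash-derived bit decides
    whether the pair swaps. The bit is computed from max(index, flip), so
    both members of a pair agree on it, making every round a permutation."""
    assert 0 <= index < count
    for r in range(ROUNDS):
        h = hashlib.sha256(seed + bytes([r])).digest()
        pivot = int.from_bytes(h[:8], "little") % count
        flip = (pivot + count - index) % count
        position = max(index, flip)
        src = hashlib.sha256(seed + bytes([r]) + position.to_bytes(4, "little")).digest()
        if (src[0] >> (position % 8)) & 1:
            index = flip
    return index
```

A nice property of this construction is that any single validator can evaluate its own shuffled position in O(ROUNDS) time, so committee membership can be computed without materializing the whole permutation.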

Crosslink committees
During each epoch, every shard undergoes a crosslink: a randomly sampled committee for that shard (of roughly 128 validators) signs, by a 2/3 majority, the hash of all the data that has appeared in the shard since the previous crosslink (since crosslinks can fail, a single hash may represent up to 64 epochs of data; if many consecutive crosslinks have failed, it may take several successful ones to catch up). The committee size of 128 is the minimum at which an attacker controlling less than 1/3 of all validators has a negligible chance of controlling 2/3 of a committee: by the binomial theorem, the probability of such an attacker controlling 2/3 of the committee members is 5.55*10^(-15).
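The stated probability can be checked directly with the binomial tail (treating seats as independent draws with replacement, which closely approximates sampling from a large validator set):

```python
import math

def capture_probability(committee: int = 128, attacker: float = 1/3,
                        threshold: float = 2/3) -> float:
    """P[attacker controls >= threshold of a committee] when each of the
    `committee` seats independently belongs to the attacker with
    probability `attacker`."""
    k_min = math.ceil(threshold * committee)   # 86 seats out of 128
    return sum(math.comb(committee, k)
               * attacker ** k * (1 - attacker) ** (committee - k)
               for k in range(k_min, committee + 1))

print(capture_probability())  # on the order of 5.5e-15
```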

Since there will be 1,024 shard chains in the Ethereum 2.0 system, crosslinking every shard in every epoch requires 131,072 validators (note: 1,024 * 128 = 131,072), or about 4.4 million ETH staked (in practice, if less ETH is staked, shards are simply crosslinked less often). And if the minimum stake were raised (say, to 1,024 ETH instead of the current 32 ETH), we could not get enough validators to crosslink every shard chain every epoch unless nearly all ETH were staked.

After each epoch (64 blocks, approximately 6.4 minutes), the beacon chain reshuffles the committee (i.e. re-randomizes the validators) for every shard chain . This rapid reshuffling ensures that an attacker who wants to attack a shard chain must corrupt (gain control of) its committee very quickly.

Persistent committees
Every ~27-hour period, the system selects a persistent committee for each shard chain . At any given time, every validator in the Ethereum 2.0 system is a member of exactly one persistent committee. The persistent committee is responsible for proposing shard blocks and for giving users some degree of assurance about a shard block before it is included in a crosslink , and it can be used by light clients. To keep the P2P network stable and light clients efficient, persistent committees change relatively rarely (whereas crosslink committees change every ~6.4 minutes). Each persistent committee has at most 128 validators, so if the number of validators in the system exceeds 131,072, at any time some validators will not be in any persistent committee; this avoids unnecessary duplicated verification work.

To further preserve network stability, validators do not all rotate from the persistent committees of period n to those of period n+1 at once; instead, each validator's rotation is delayed until a random point within the next period.

The LMD GHOST fork choice rule

The beacon chain uses the LMD GHOST fork choice rule, described in [15]. LMD GHOST incorporates information from all validators, ensuring that under normal conditions no block can be reverted . Because the fork choice depends on all validators, it also ensures that blocks cannot be reverted unless the attacker controls more than 50% of the validators, since an attacker cannot gain any great advantage here by manipulating the randomness.
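A minimal sketch of the rule (the block names, tie-break, and dict representation are mine): start from genesis and repeatedly descend into the child whose subtree contains the most latest votes.

```python
def lmd_ghost_head(parents: dict, latest_votes: dict, genesis: str = "G") -> str:
    """parents maps each non-genesis block to its parent; latest_votes maps
    each validator to the block its latest attestation points at."""
    children: dict = {}
    for block, parent in parents.items():
        children.setdefault(parent, []).append(block)

    # Weight of a block = number of latest votes at or below it in the tree.
    weight = {b: 0 for b in set(parents) | {genesis}}
    for block in latest_votes.values():
        while True:
            weight[block] += 1
            if block == genesis:
                break
            block = parents[block]

    head = genesis
    while head in children:
        # Highest-weight child wins; ties broken deterministically by name.
        head = max(children[head], key=lambda c: (weight[c], c))
    return head

# Two validators' latest votes sit under B, one under A, so B's subtree wins.
parents = {"A": "G", "B": "G", "C": "B"}
votes = {"v1": "A", "v2": "C", "v3": "C"}
print(lmd_ghost_head(parents, votes))  # C
```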

Beacon chain / shard chain structure
The Ethereum 2.0 sharding system consists of a central "beacon chain" that coordinates all activity, plus 1,024 shard chains. Each shard chain is periodically linked to the beacon chain by means of crosslinks. The alternatives to this shard structure were:

(1) Have committees sign shard blocks and put all shard blocks directly into the beacon chain;

(2) Have no beacon chain at all, and connect all the shard chains to each other in some structure.

Structure (1) was discarded because we want shard blocks to have a 6-second slot time, and processing 1,024 crosslinks on the beacon chain every 6 seconds would place far too high a load on the beacon chain.

Structure (2) was discarded because the hub-and-spoke beacon chain structure is much easier to implement and understand than any more complex construction.

Shard chain design
Each shard is a semi-independent chain that can process blocks faster than crosslinks aggregate them (the target is 3-6 seconds per shard block). This lets a transaction obtain a preliminary degree of confirmation from the shard's persistent committee before being confirmed by the beacon chain (by means of a crosslink). The shard chain structure has every block attested by every validator of the shard committee, which keeps verification simple and gives shard blocks a fairly high degree of confirmation ; most lower-value applications should be able to rely on a single confirmation.

Each shard block contains a pointer to its parent block and a pointer to a beacon block from the start of its epoch. This semi-tight coupling between the beacon chain and the shard chains is meant to (i) ensure that the shard chain knows its persistent committee (because that information is generated by the beacon chain) and (ii) give nodes verifying shard blocks a feasible way to determine which chain is the canonical beacon chain.

The shard chain state (rewards, penalties, history accumulator) is deliberately kept smaller than the block size, to ensure that when a fraud proof is needed, the shard chain state can be placed in the beacon chain in full (though this limit may be relaxed in Phase 2: there, the state of each individual execution environment will be bounded by this size, but the combined state of all of them will be very large, so fraud proofs will need Merkle proofs).

Crosslink data
A crosslink includes data_root, the Merkle root of a data structure containing all the shard blocks in a shard since the previous crosslink. This data structure and its root serve several goals:

  • Make the beacon chain know which is the normalized fragment chain block;
  • Create a simple byte array that can be verified for different methods (hosting proof, data availability proof) and ensure that the fragmentation block can be fully restored with multiple cross-links.
  • Create a simple byte array to evaluate the fraud proof.
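To make the data_root concrete, here is a minimal sketch of Merkleizing the shard blocks accumulated since the last crosslink. The padding and hashing scheme here is a generic binary Merkle tree, not the exact SSZ Merkleization the spec uses.

```python
from hashlib import sha256

def merkle_root(leaves: list) -> bytes:
    # Generic binary Merkle tree (illustrative; the real spec uses SSZ
    # Merkleization with fixed chunking rules).
    if not leaves:
        return b"\x00" * 32
    nodes = [sha256(leaf).digest() for leaf in leaves]
    # Pad the leaf layer to a power of two with zero-hashes.
    while len(nodes) & (len(nodes) - 1):
        nodes.append(b"\x00" * 32)
    # Hash pairwise up to the root.
    while len(nodes) > 1:
        nodes = [sha256(nodes[i] + nodes[i + 1]).digest()
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

# data_root commits to every shard block since the previous crosslink,
# so a single 32-byte value on the beacon chain covers all of them.
shard_blocks_since_last_crosslink = [b"block1", b"block2", b"block3"]
data_root = merkle_root(shard_blocks_since_last_crosslink)
```

Because the root is a single short byte string, the same commitment can be checked against custody proofs, data availability proofs, and fraud proofs without the beacon chain storing the shard data itself.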

Validator life cycle

A validator deposits ETH by sending a transaction that calls a function of the deposit contract deployed on the Eth1.0 chain, and the deposit is eventually included in the Eth2.0 chain. The deposit specifies:

  • a public key corresponding to the private key that will be used to sign messages;
  • withdrawal credentials (a public key hash; that public key will be used to withdraw funds once the validator has finished validating);
  • the deposit amount.

These values are all signed by the signing key. The signing key is kept separate from the withdrawal key so that the more security-critical withdrawal key can be kept safer (offline, not shared with any staking pool, etc.), while the signing key is used to sign messages every epoch.
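The deposit payload and the key separation can be sketched as below. The names loosely follow the spec's DepositData container, but the hash function and structure here are illustrative assumptions (the real spec uses BLS keys and SSZ hashing).

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class DepositData:
    # Illustrative deposit payload; field names loosely follow the spec.
    pubkey: bytes                  # signing public key (BLS in the real spec)
    withdrawal_credentials: bytes  # hash committing to the withdrawal key
    amount_gwei: int               # deposit amount
    signature: bytes               # signature by the signing key over the above

def make_withdrawal_credentials(withdrawal_pubkey: bytes) -> bytes:
    # Only a hash of the withdrawal key goes on chain; the key itself can
    # stay offline and never touches the hot signing path.
    return sha256(withdrawal_pubkey).digest()
```

The design choice is that compromising the always-online signing key can at worst get a validator slashed, but never lets an attacker steal the withdrawn funds.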

The deposit contract maintains the Merkle root of all deposits. Once a Merkle root containing a validator's deposit is included in the Eth2.0 chain (via the Eth1 data voting mechanism), an Eth2.0 block proposer can submit a Merkle proof of the deposit and start the deposit process.

When a validator sends a transaction to the deposit contract, it immediately joins the validator registry, but it is initially inactive. The validator becomes active only after at least 4 epochs; the minimum wait of 4 epochs (each epoch is about 6.4 minutes) ensures that RANDAO cannot be manipulated, and the wait may exceed 4 epochs if many validators join at the same time. If the total number of validators already in Eth2.0 is |V|, then at most max(4, |V|/65536) new validators can be activated per epoch; if more validators want to join, they must wait in a queue, which the system processes as quickly as it can.
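The per-epoch activation limit from the formula above is a one-liner; the constant names below follow the convention of the Eth2.0 spec, though the function here is a simplified sketch.

```python
def get_validator_churn_limit(active_validator_count: int) -> int:
    # At most max(4, |V| // 65536) validators can be activated
    # (or, as described later, exited) per epoch.
    MIN_PER_EPOCH_CHURN_LIMIT = 4
    CHURN_LIMIT_QUOTIENT = 65536
    return max(MIN_PER_EPOCH_CHURN_LIMIT,
               active_validator_count // CHURN_LIMIT_QUOTIENT)

# Below 262,144 validators the floor of 4 applies; above it,
# the limit grows proportionally to the validator set size.
assert get_validator_churn_limit(100_000) == 4
assert get_validator_churn_limit(1_000_000) == 15
```

Because the limit scales with |V|, the *fraction* of the validator set that can turn over per epoch stays roughly constant once the set is large.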

Exiting
When a validator exits the Eth2.0 system (either by publishing a voluntary exit message or by being forcibly exited as a penalty), it must likewise pass through an exit queue; the maximum number of validators that can exit per epoch is the same as the maximum that can join, as described above. The reason for these join/exit queues is to ensure that the total number of validators in the system cannot change too quickly between any two points in time. This ensures that a validator who logs on sufficiently often (if the total validator count is ≥ 262,144, logging on once every 1-2 months suffices) retains the finality guarantees between the old and new states of the chain. See [17] and [18] for the related rationale.

Once a validator successfully exits through the queue, it takes approximately 27 hours before the funds can be withdrawn. This waiting period serves several purposes:

  • It ensures that if the validator misbehaved, there is time to catch the misbehavior and slash the validator;
  • It gives the system time to issue the validator's shard rewards for the final period;
  • It provides time for proof-of-custody challenges to be raised.

If the validator is slashed, the withdrawal is delayed by a further ~36 days. This is an additional penalty (and forces the validator to keep holding ETH); it is a modest penalty for those who support Ethereum but simply made a mistake, while those trying to attack the Ethereum blockchain are penalized much more severely. It also gives the system time to count how many other validators were slashed during the same period.

During phase 0, a validator who has "withdrawn" cannot actually move the funds anywhere; in later phases, withdrawn funds will be transferable into an execution environment.

Effective balance
Most calculations based on a validator's balance use the validator's "effective balance" (EB); the only exception is calculations that increase or decrease the validator's balance itself. EB is adjusted to equal floor(B) only when the validator's balance B drops below EB or rises above EB + 1.5. This ensures that the effective balance does not change often, reducing the amount of re-hashing needed to recompute the state each epoch: on average all of the balances still need to be updated, but each validator's effective balance is updated only relatively rarely.
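The hysteresis rule above can be sketched directly. For readability this uses ETH-denominated floats; the actual spec works in gwei with integer increments, so treat this as an illustration of the rule, not the spec function.

```python
import math

def update_effective_balance(balance_eth: float, effective_balance_eth: int) -> int:
    # EB snaps to floor(B) only when B leaves the band [EB, EB + 1.5];
    # small per-epoch rewards and penalties inside the band leave EB
    # (and therefore the hashed state) untouched.
    if balance_eth < effective_balance_eth or balance_eth > effective_balance_eth + 1.5:
        return math.floor(balance_eth)
    return effective_balance_eth

assert update_effective_balance(32.4, 32) == 32   # inside the band: no change
assert update_effective_balance(33.6, 32) == 33   # drifted above EB + 1.5
assert update_effective_balance(31.9, 32) == 31   # dropped below EB
```

The asymmetric band (any drop triggers an update, but gains must exceed 1.5) means a validator's EB never overstates its balance, which matters because EB is what committee selection and rewards are computed from.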

Fork mechanism

The Fork data structure contains (i) the current "fork ID", (ii) the previous "fork ID", and (iii) the slot at which the fork takes effect. The fork ID at the current block height affects the validity of signatures on all messages: a message signed under one fork ID is invalid to a verification function using any other fork ID. A fork is executed by adding a state transition at some "fork slot" that changes the fork ID. The signature verification function verifies a message using the fork ID of the slot the message is in, which may be either the previous fork ID or the current one.
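A minimal sketch of the Fork structure and of choosing which fork ID a message verifies under, based on the message's slot. Field names are approximations of the spec's container; the selection function is a simplification of how the spec mixes the fork version into signature domains.

```python
from dataclasses import dataclass

@dataclass
class Fork:
    # Illustrative version of the spec's Fork container.
    previous_version: bytes  # previous "fork ID"
    current_version: bytes   # current "fork ID"
    fork_slot: int           # slot at which the state transition switches IDs

def fork_version_for_slot(fork: Fork, slot: int) -> bytes:
    # Messages from before the fork slot verify under the previous fork ID;
    # messages from the fork slot onward verify under the current one.
    # Signatures made under any other fork ID simply fail verification.
    return fork.previous_version if slot < fork.fork_slot else fork.current_version

fork = Fork(previous_version=b"\x00", current_version=b"\x01", fork_slot=1000)
assert fork_version_for_slot(fork, 999) == b"\x00"
assert fork_version_for_slot(fork, 1000) == b"\x01"
```

Mixing the fork ID into every signature is what cleanly separates the two chains after a contentious fork: a message signed for one side cannot be replayed on the other.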

If a user does not want to join a fork, they simply remain on the chain that does not change the fork ID at the fork slot. Both chains can continue to exist, and validators can freely validate on both without being slashed.

Note: this translation has been abridged.

Original link:


Source: Unitimes

Author | Vitalik Buterin

Translation | Jhonny