Ethereum 2.0 design principles

Many articles discuss the roadmap, research proposals, and development status of Ethereum 2.0. Far fewer discuss the design principles and invariants behind how Ethereum 2.0 actually operates.

For Ethereum 2.0, a major transition that takes years of coordinated effort, success hinges on a clear set of invariants and on implementers staying true to Ethereum's philosophy. This article explains some of the key design decisions, their context, and why they are critical to the future of the Ethereum 2.0 protocol.

A brief history

The motivation for moving Ethereum from PoW to PoS has been evolving since the birth of the Ethereum network. Early on, Vitalik Buterin was exploring a viable way to overcome the shortcomings of pure PoS protocols and provide greater security than PoW. Specifically, Vitalik and the Ethereum research team designed a mechanism called the slasher to punish malicious actors in a PoS protocol by confiscating their deposits (Buterin 2018).

Later, Vlad Zamfir joined the Ethereum project, and most of his work in 2014 focused on solving the known long-range attacks against PoS.

In a long-range attack, an attacker cheaply builds, from scratch, a chain longer than the current canonical chain and convinces other participants in the network to accept the new chain's state.

This is nearly impossible on PoW chains such as Bitcoin or Ethereum, because it would require enormous computing power. But since PoS does not rely on computing power, such an attack could crash the network (Zamfir 2014).

Vitalik Buterin and Vlad Zamfir agreed that there is no viable defense against long-range attacks (Buterin 2018) other than strictly preventing clients from syncing any chain that reorganizes past a finalized checkpoint.

This approach means that new nodes joining the network only need to start syncing from the latest "checkpoint" recognized as "finalized" by other nodes, rather than from the genesis block. In other words, a new node joining the network places an inherent trust in the existing nodes. This property is known as the weak subjectivity of PoS: a new node subjectively trusts that blocks already "finalized" among all participants in the network have not been tampered with.

During that time, Vitalik Buterin and the Ethereum Foundation's Virgil Griffith were working on the original version of the Casper Proof of Stake white paper on arXiv (https://arxiv.org/abs/1710.09437). For a long stretch between 2014 and 2017, Ethereum tried to overlay a PoS-based finality system on top of the Ethereum mainnet, which runs PoW. At the same time, Ethereum developers were also working on state sharding as a way to scale the Ethereum network.

By 2018, both efforts had advanced significantly, and after a landmark research meeting in Taipei in March 2018, the Ethereum research team proposed merging Casper Proof of Stake and sharding into a single roadmap: Ethereum Serenity, also known as Ethereum 2.0.

Why launch Ethereum 2.0?

This section explains the design philosophy behind the core question: "Why launch Ethereum 2.0?"

Of course, a comprehensive change to the consensus protocol and data integrity of the current Ethereum PoW chain cannot be achieved with a single hard fork. So wouldn't it be easier to start a new system from scratch and abandon the current Ethereum 1.0 chain entirely?

One of the challenges in building Ethereum 2.0 has been involving the Ethereum community in the effort and giving community members a clear understanding of the enormous benefits and needs behind the transition. Having absorbed the weight of responsibility that accompanies this paradigm shift, we believe now is a great time to build Ethereum 2.0.

Whether or not we are willing to admit it, blockchains and cryptocurrencies are still in an awkward early phase, and the decisions we make today will drive their growth and adoption in the years to come. We have waited long enough to move to PoS, and so have Ethereum's applications. The best time to build Ethereum 2.0 is now, and the relevant teams are ready.

The challenge ahead

Pure Layer 1 scaling comes at a cost: sharding a blockchain means transactions can no longer be verified globally. In Bitcoin and Ethereum today, every miner in the network (barring malicious attackers) works to confirm every transaction, so the combined power of all miners secures the entire chain. Under sharding, transactions on a shard chain are verified only by the subset of validators assigned to that shard, so a single shard chain is inherently less secure than a chain verified by the whole network.

The key question is: how do we get scalability without sacrificing decentralization and security? Many of Ethereum's competing blockchain platforms (such as EOS) have chosen a centralized approach to this problem. Ethereum chose a different path, dividing the state of the network into 1024 shard chains that run in parallel, all coordinated by a root chain called the beacon chain.

The beacon chain runs the full Casper PoS mechanism; nowhere in the system are there proxies or centralized voting rights as in EOS. Each node is responsible for processing only a portion of the network's transactions, and many shards can process transactions in parallel, linearly increasing the throughput of the network.

The Ethereum 2.0 specification attempts to answer the following questions:

If transactions are no longer verified globally, what impact does that have on the security of the network? How should validators be selected to prevent a verification monopoly? How should incentives be designed to maximize data availability and enthusiasm for participation?

After years of research, exploration, and weighing of the necessary trade-offs, Ethereum chose PoS as its consensus algorithm. For the reasons discussed below, validators are guaranteed rewards for honest work, while validating entities (individuals or businesses) are treated equally under the Casper PoS protocol: they have equal standing in joining the validator set and an equal probability of receiving rewards or penalties.

Global verification of transactions becomes indirect verification: each transaction in a shard chain is first verified by the validators assigned to that shard, and those validators then submit checkpoints to the beacon chain, which plays the role of "coordinator" for all of the shard chains in Ethereum 2.0.
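This indirect-verification flow can be sketched in a few lines of Python. This is a simplified model with hypothetical names, not actual client code: a shard's committee attests to a shard block, and the checkpoint is submitted to the beacon chain only when a supermajority of the committee agrees.

```python
def crosslink_accepted(committee_votes: list, threshold: float = 2 / 3) -> bool:
    """Simplified model: a shard checkpoint is submitted to the beacon chain
    only when at least 2/3 of the shard's committee attested to it.
    `committee_votes` holds one boolean per committee member."""
    return sum(committee_votes) >= threshold * len(committee_votes)

# Committee of 6 validators: 4 attestations clear the 2/3 threshold, 3 do not.
```

The 2/3 threshold mirrors the Byzantine fault tolerance assumption discussed later: as long as fewer than 1/3 of a committee is faulty, a false checkpoint cannot gather enough attestations.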

Design invariants of Ethereum 2.0

A key pillar of protocol design is understanding the invariants under which the protocol will operate.

For Ethereum and its developer community, there is a set of unchanging design decisions that is critical to the future of Ethereum 2.0. The core of Ethereum 2.0 can be summarized in the following points:

  • Participation in the network should be permissionless;
  • The scope of Layer 1 should be concise and compact;
  • The protocol should be maximally expressive, without prescribing its future uses;
  • The network should remain live and recover effectively from any catastrophic scenario;
  • The complexity of the protocol should be separated from the complexity of application development.

01. Permissionless participation

The significant difference between Ethereum 2.0 and other "next generation" blockchain platforms is how participation in consensus works.

The only requirement to become a validator in Ethereum 2.0 is to hold 32 ETH. In the Ethereum 2.0 system there are no proxies, no votes to elect validating nodes, and no centralized mechanism deciding who may participate in validation. More importantly, the hard requirement of staking 32 ETH per participating entity means that all validators in Ethereum 2.0 are treated equally.

However, any individual may control more than one validator identity (that is, a user holding a large amount of ETH can stake across multiple clients and hold multiple validator identities). This is simply a decision made for the security and simplicity of the consensus protocol. From the perspective of incentive design, it is important to treat all participants equally when voting on blocks, and this also matters for formal modeling. 1 validator = 32 ETH staked, no more and no less.
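The fixed-stake rule can be illustrated with a short sketch (a hypothetical helper, not spec code): an entity's balance buys whole validator identities of exactly 32 ETH each, and any remainder simply cannot be staked.

```python
DEPOSIT_SIZE_ETH = 32  # fixed stake per validator identity

def validator_identities(balance_eth: int) -> int:
    """One validator identity per full 32 ETH -- no more and no less.
    A large holder runs many identical validators rather than one big one."""
    return balance_eth // DEPOSIT_SIZE_ETH

# 100 ETH funds exactly 3 validators; the remaining 4 ETH stays unstaked.
```

Because every identity carries identical weight, the protocol's voting and reward math never has to special-case large stakeholders.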

Other blockchain platforms often address scalability by taking a more centralized approach to transaction verification; for Ethereum, this was never an option.

02. Keep it simple, but with the greatest degree of expressiveness

Ethereum 2.0 strives for simplicity and compactness in its core definitions and goals. At the most basic level, Ethereum 2.0 is a scalable, permissionless platform for building decentralized applications.

There is a reason Ethereum 2.0 introduces no application logic of its own. We can compare the Ethereum 2.0 system to the Linux kernel: the operating system itself should not decide which features to include or envision its own use cases; that should be determined by the developers who build applications on top of the kernel.

03. Guaranteed safety

Ethereum 2.0's PoS model, Casper the Friendly Finality Gadget, operates under a range of incentives designed to maintain high liveness and network participation. Ethereum 2.0 extends Casper and leverages its properties to secure the entire sharded network. That is, using the concept of a chain finality threshold, Ethereum 2.0 guarantees shared security across the system's 1024 shard chains and the beacon chain.

The premise of PoS is that a validator is rewarded for performing its assigned validation work as expected, loses a small portion of its deposit for being offline, and is severely punished (its staked deposit slashed) for maliciously violating the protocol.

Although this premise is succinct, the details are the key to success. Casper's economics quickly became more complex once we realized we must consider not only the behavior of each individual validator but also the behavior of entire validator committees.

An open question for PoS chains is when malicious acts should be punished and how the penalty should scale with the severity of the act (different severities should incur different penalties). In other words, we need a comprehensive penalty scheme that covers all the extremes while staying simple.

Since the protocol relies on validator activity, its operation also depends on a persistent time scale, and there will be cases where an honest validator cannot participate. An honest validator may be offline due to a power outage, a network failure, or other causes, so we need to explicitly distinguish offline penalties from penalties for malicious behavior.

Part of the design principle of Ethereum 2.0 is to make any attempt to break the protocol enormously expensive. In other words, what a 51% attack costs on other blockchain platforms will cost far more on the Ethereum 2.0 chain, and may even backfire: reverting finality makes the attacker highly visible to honest validators, allowing the community to coordinate a soft fork that removes the malicious attacker and renders the attack ineffective.

Another limitation of PoS-based systems is the verifier's dilemma: validators in the system can become lazy (or go offline), blindly trusting that other validators will verify blocks correctly, and thereby neglect their duties. Such lazy validators save bandwidth or computation by skipping their verification duties unless they face significant penalties. This problem can be mitigated by imposing severe penalties, together with a challenge mechanism, for missing or erroneous signatures on the network.

Validator incentives in Ethereum 2.0

The validator incentives in Ethereum 2.0 are as follows:

01. Validator offline: quadratic leak

Ethereum 2.0 relies on a Byzantine fault tolerance threshold, which requires that at least 2/3 of the validators in the network are honest.

The penalty for validators who fail to participate in validation is called the "inactivity leak". If the chain has not finalized for more than 4 epochs, validator rewards are squeezed hard: the maximum expected reward drops to 0, so validators must attest correctly or face mounting penalties. The penalty grows in proportion to the time since the chain last finalized (the longer since finality, the larger the penalty), to discourage validators from going offline.

[Note: In Ethereum 2.0, every 64 blocks (roughly 6.4 minutes, a period called an epoch), the beacon chain reshuffles the validators and reassigns them across all the shards.]

The longer a validator stays offline, the faster its losses grow. This penalty is called the "quadratic leak". The mechanism is designed so that short offline periods go unpunished, while being offline for a long time has a large negative impact on the validator, which matches realistic expectations well. Funds lost to this penalty are burned, not redistributed to other honest validators.
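The shape of the quadratic leak can be sketched as follows (the constant and scaling here are illustrative, not the exact spec values): the per-epoch penalty grows linearly with the number of epochs since finality, so the cumulative loss grows quadratically.

```python
INACTIVITY_PENALTY_QUOTIENT = 2 ** 25  # illustrative constant, not the spec value

def inactivity_penalty(balance_gwei: int, epochs_since_finality: int) -> int:
    """Per-epoch penalty: zero for short gaps, then linear in the time
    since the chain last finalized."""
    if epochs_since_finality <= 4:
        return 0  # short offline periods go unpunished
    return balance_gwei * epochs_since_finality // INACTIVITY_PENALTY_QUOTIENT

def cumulative_leak(balance_gwei: int, offline_epochs: int) -> int:
    """Total balance burned by an offline validator while finality stalls.
    Summing a linearly growing per-epoch penalty yields a quadratic total."""
    lost = 0
    for epoch in range(1, offline_epochs + 1):
        lost += inactivity_penalty(balance_gwei - lost, epoch)
    return lost
```

Doubling the offline period more than doubles the total loss, which is what eventually pushes long-offline validators out of the validator set while leaving briefly-offline ones nearly untouched.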

02. Intentional malicious behavior: slashing

In early proposals for Ethereum PoS, malicious validators were to be subject to a huge penalty called slashing, but the discussion usually covered only penalties for individual malicious validators, not the severity of validator collusion. Yet the network is only truly threatened when a large fraction of validators collude to attack it.

Following the Byzantine fault tolerance analysis, the penalty for a malicious actor is proportional to three times the fraction of validators slashed within the same time interval. This punishes large coordinated attacks heavily and deters malicious validators from banding together: for validators, jointly attacking the network is extremely disadvantageous.

Slashing is enforced through a whistleblowing mechanism, in which a validator can earn rewards by exposing another validator's slashable offence, receiving a portion of the offender's slashed funds as compensation.
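A minimal sketch of this proportionality (simplified; the quotients are illustrative, not the exact spec values):

```python
def slashing_penalty(balance_gwei: int, fraction_slashed_in_window: float) -> int:
    """Simplified proportional slashing: the offender loses 3x the fraction
    of stake slashed in the same time window, capped at the full balance.
    A lone offender loses little; a coordinated 1/3 attack loses everything."""
    loss = int(balance_gwei * 3 * fraction_slashed_in_window)
    return min(balance_gwei, loss)

def whistleblower_reward(slashed_gwei: int, reward_quotient: int = 512) -> int:
    """The validator who exposes a slashable offence earns a cut of the
    offender's stake (the quotient here is illustrative)."""
    return slashed_gwei // reward_quotient
```

The key property: if only 0.1% of stake misbehaves, offenders lose a fraction of a percent of their deposit, but if a third of the stake misbehaves at once, the cap kicks in and the whole deposit is gone.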

03. Validator rewards

In his Ethereum Serenity design rationale (https://notes.ethereum.org/s/rkhCgQteN), Vitalik Buterin outlines the four components of the base reward each validator earns within each epoch:

  1. 1/4 of the reward for attesting to the correct epoch checkpoint;
  2. 1/4 of the reward for attesting to the correct chain head;
  3. 1/4 of the reward for having the attestation included on chain quickly;
  4. 1/4 of the reward for attesting to the correct shard block.

In addition, there is a bonus on top of this base reward based on the number of validators participating in validation. This additional reward motivates validators to operate correctly and collectively drives honesty in the network. The reward schedule should be consistent and straightforward: adding more complexity makes the whole system more error-prone and harder to reason about from a macroeconomic perspective.
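The reward structure above can be sketched as follows (a simplification of the actual spec accounting; the linear participation scaling is for illustration only):

```python
REWARD_COMPONENTS = (
    "correct epoch checkpoint",
    "correct chain head",
    "fast inclusion on chain",
    "correct shard block",
)

def epoch_reward(base_reward_gwei: int, participation_rate: float) -> int:
    """Each of the four duties earns 1/4 of the base reward, and every
    component scales with overall participation, so rewards peak when
    all validators attest."""
    per_component = base_reward_gwei // 4
    return int(len(REWARD_COMPONENTS) * per_component * participation_rate)
```

Tying each validator's payout to overall participation gives every validator a stake in the liveness of the whole network, not just in its own duties.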

Separating protocol complexity from application complexity

The Ethereum 2.0 roadmap is arguably one of the most ambitious multi-year plans in the field. It draws lessons from across the blockchain space to create a protocol that solves the scalability dilemma, a protocol designed to run indefinitely.

There has been much discussion about how sharding will affect the developer experience. The basic argument is that it is very difficult to ignore application developers' needs when designing the Ethereum 2.0 protocol, because the highly complex sharded system requires shards to interact with one another (cross-shard communication).

At first glance this view makes sense: Ethereum 2.0 looks daunting from the outside, and there is still much uncertainty about how smart contracts will be executed in Ethereum 2.0. But the reality is more subtle.

Application developers will only need to understand a small portion of the Ethereum 2.0 protocol. An ordinary smart contract developer does not need to know the internals of the validator registry or the beacon chain's finality mechanism.

Accordingly, Phase 0 (the beacon chain phase) is entirely decoupled from the application layer. Recent Phase 1 and Phase 2 proposals also suggest a higher degree of abstraction for the execution environment, which makes Ethereum 2.0 both more powerful and simpler.

In the worst case, wallet and app developers will need to know some details of cross-shard transactions in order to display real-time settlement with appropriate hints. Today's computer operating systems are far more complex internally than they were 10 years ago, yet most application developers do not need to understand the hidden internals that make the architecture powerful. Separating the two is the core of good architectural design, and we can treat it as a design invariant to keep in mind while building Ethereum 2.0.

Build a real world computer

In summary, Ethereum is Turing-complete, meaning it can run any type of code just as a conventional computer can, even though it remains a slow, single-threaded computer.

Today's Ethereum resembles an early, weak processor. Running applications on Ethereum is currently expensive because the protocol has built-in mechanisms to prevent the tragedy of the commons that plagues many public utilities.

Ethereum's vibrant developer community has never stopped innovating on the current network, whether at its core layer (Layer 1) or at Layer 2. From a governance perspective, upgrading Ethereum according to its future plans will not be smooth or free of setbacks. If, a few years after Ethereum 2.0 goes live, we feel constrained and want to build an Ethereum 3.0, that would mean we failed in the core design of its predecessor.

Upgradeability should be built into the protocol in a way that does not require risky hard forks. That is, once the system has been running for a long time, Layer 1 innovation should be kept to a minimum, or even to zero.

We still have a long way to go, but by carefully reminding ourselves why we are building this software and where we want Ethereum to be in the next 10 years, we can write great code that is more versatile and stands the test of time.

Reference link:

https://www.tokendaily.co/blog/design-principles-of-ethereum-2-0

Author | Raul Jordan

Compile | Jhonny