To understand Ethereum 2.0, first understand its core design principles

The Ethereum Technology and Application Conference was held in Beijing today, bringing together Vitalik Buterin, members of the Ethereum Foundation, and DApp developers. The conference focused on Ethereum 2.0.

The general public is also following the progress of Ethereum 2.0. At the conference, Vitalik gave a speech entitled "Ethereum 2.0 Cross-Shard Transactions".

There have been many articles discussing the Ethereum 2.0 roadmap, its research proposals and their development status. However, few articles cover the design principles and invariants behind how Ethereum 2.0 works internally. This article aims to explore the design principles behind Ethereum 2.0.

About the author: Raul Jordan is a co-founder of Prysmatic Labs and a partner at zk Capital. A Harvard graduate, he is a blockchain engineer and Ethereum developer, specializing in sharding development at Prysmatic Labs.

Many articles discuss the roadmap, research plans and current status of Ethereum 2.0. However, there is little public writing about the design principles and invariants behind its internal workings. For a coordinated, multi-year effort of this kind, having a clear set of invariants is critical to its success, and it also lets practitioners reason about the philosophy behind Ethereum.

This article explains some of these design decisions, their background, and their importance to the future of the protocol.

First, history

Since the birth of the Ethereum network, moving Ethereum from PoW to PoS has been regarded as a major step forward. Vitalik Buterin had been exploring a viable scheme that would avoid the flaws of immature PoS designs while providing greater security than PoW.

In particular, he and the Ethereum research team designed a mechanism called Slasher to punish malicious actors in PoS by slashing their entire stake (Buterin 2014).

Mathematician Vlad Zamfir subsequently joined the project, and most of the work in 2014 focused on solving the so-called long-range attack problem in PoS.

A long-range attack occurs when an attacker can construct an entire chain, starting from the genesis of the current canonical blockchain, and convince the rest of the network of a new canonical state.

This is nearly impossible to pull off in PoW because it requires enormous cumulative computing power. PoS, however, does not depend on computing power and therefore collapses under such attacks (Zamfir 2014).

Both Vitalik and Vlad agreed that there is no viable defense against long-range attacks (Buterin 2015) other than "strictly preventing clients from syncing chains from earlier than a checkpoint".

This means that new nodes in the network no longer need to sync from the genesis block; instead, they sync from the most recent "checkpoint".

That is, when a new node joins the network, it places some inherent trust in existing nodes. This phenomenon was later called the weak subjectivity of PoS: a joining node subjectively trusts the network's participants about which blocks are "finalized" and "irreversible" (Buterin 2018).

During this time, Vitalik and Virgil Griffith of the Ethereum Foundation were working on the initial release of the Casper PoS white paper on arXiv (Buterin and Griffith 2015).

The long stretch from 2014 to 2017 marked Ethereum's attempt to overlay a PoS-based finality system on top of the currently running PoW chain. In parallel, work was under way to implement state sharding as a partitioning scheme to scale the Ethereum blockchain.

In 2018, however, the two initiatives were combined: after the landmark research gathering in Taipei in March, the Ethereum research team proposed merging Casper PoS and sharding into a single project called Ethereum Serenity, also known as Ethereum 2.0.

Second, why choose ETH 2.0?

This section explains the design rationale behind the core question: why Ethereum 2.0?

Of course, a thorough overhaul of an existing system's consensus protocol and data layout is not something easily done through hard forks. Would it not be easier to simply build a new system from scratch and abandon Ethereum 1.0 entirely?

One of the challenges we faced when building ETH 2.0 was the need to educate the community about this trade-off and to communicate clearly the huge benefits of, and the need for, the transition to ETH 2.0.

While we understand the enormous responsibility this paradigm shift brings, there is no better time to build ETH 2.0 than now. Whether you like it or not, the crypto industry is still in its infancy, and the decisions we make today will compound over years of growth and adoption.

Ethereum applications have waited long enough for the migration to PoS and for scalability. There is no better time to build ETH 2.0 than now, and the teams are ready.

Third, the challenges ahead

Naive Layer 1 scaling comes at a significant security cost: splitting the chain into shards means transactions can no longer be verified globally, as they currently are on the Bitcoin and Ethereum chains.

The key question is: how do we achieve scalability without sacrificing decentralization or security? Many competing chains take a more centralized route as a way of solving this problem.

Ethereum chose a different approach: dividing the network state into 1024 shards, a set of homogeneous blockchains, each coordinated by a single root chain called the beacon chain. The beacon chain runs full Casper PoS, with no delegation and no centralized voting rights.

In this approach, each node is responsible only for a portion of the transactions occurring across the network, and many blocks can be produced in parallel, increasing overall network throughput roughly linearly with the number of shards.

This solution is designed to answer the following questions:

If transactions are not globally verified, how does the network's security profile change? How do we select validating participants while preventing the formation of cartels? How do we design incentives that maximize data availability and encourage active participation?

After years of research, exploration and weighing of trade-offs, Ethereum settled on PoS as its consensus algorithm. As mentioned earlier, rewards are deterministic, and every validating entity is treated identically by the protocol: the same probability of serving on a committee and the same rewards and penalties.

Global transaction verification becomes indirect verification. Each transaction on each shard chain is first verified by the validators of that shard, while the beacon chain plays the role of "coordinator" in ETH 2.0.
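
To make this "indirect verification" concrete, here is a toy sketch of how validators might be split into per-shard committees each epoch. It is purely illustrative: the names are invented for this example, and the seeded shuffle below stands in for the protocol's actual shuffling algorithm.

```python
import hashlib
import random

SHARD_COUNT = 1024  # shard chains coordinated by the beacon chain

def assign_committees(validator_indices, seed: bytes):
    """Toy committee assignment: shuffle the validator set with a shared seed
    and split it across shards, so each shard's transactions are verified only
    by that shard's committee while the beacon chain aggregates the results."""
    rng = random.Random(hashlib.sha256(seed).digest())
    shuffled = list(validator_indices)
    rng.shuffle(shuffled)
    committees = {shard: [] for shard in range(SHARD_COUNT)}
    for position, validator in enumerate(shuffled):
        committees[position % SHARD_COUNT].append(validator)
    return committees
```

With, say, 300,000 validators, each committee in this sketch ends up with roughly 300 members, and each member only verifies the transactions of its own shard.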


Fourth, design invariants

A key pillar of the protocol design is to understand which invariants the protocol will operate under. For Ethereum and its developer community, having a list of design decisions that are not negotiable is critical to the future of the project.

We can break down the core of ETH 2.0 into the following points:

1. Participation in the network should be permissionless;

2. Layer 1 should be simple, abstract and compact in scope;

3. The protocol should be maximally expressive, without assuming its future uses;

4. The network should favor liveness, able to recover effectively from any catastrophic scenario;

5. The complexity of the protocol should be kept separate from the complexity of application development.

1. Permissionless

A significant difference between Ethereum 2.0 and other "next generation" blockchains is how participation in consensus is determined. The only requirement Ethereum 2.0 places on a validator is holding 32 ETH.

There is no delegation, no election to select validating nodes, and no centralized council deciding who participates. More importantly, validators in Ethereum 2.0 are treated equally: the hard requirement for each participating entity is 32 ETH.

However, any individual can run multiple validators. This is simply a decision that keeps the consensus protocol secure and compact. From the perspective of incentive design, and for formal modeling, it is very important that all participants are treated equally when voting on blocks.

One validator = a stake of 32 ETH, nothing more. Other chains address scalability by adopting a more centralized approach to validation. For Ethereum, that option was never on the table.
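
As a toy illustration of this invariant (not protocol code; the names below are made up for the sketch), an entity's influence can only grow by running more validators of identical weight:

```python
DEPOSIT_SIZE_ETH = 32  # the fixed stake per validator

def validators_for_balance(balance_eth: int) -> int:
    """A larger stake simply means more validators of equal weight; there is
    no way to create a single validator with outsized voting power."""
    return balance_eth // DEPOSIT_SIZE_ETH

# e.g. an entity with 320 ETH runs 10 validators, each treated identically
```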

2. Simple, yet maximally expressive

Ethereum 2.0 strives for simplicity and compactness in its core definition and implementation goals. Fundamentally, it is a scalable, permissionless platform for building decentralized applications.

Application logic is deliberately kept out of Ethereum 2.0, and for good reason. One can compare the Ethereum 2.0 system to a lean Linux kernel: it is not the operating system that determines which features or assumptions get built on top of it, but the developers who build applications against the kernel.

Baking assumptions about intended use into the protocol is restrictive. The old Ethereum maxim says "we have no features", and the same idea applies to Ethereum 2.0.

3. Ensuring security

Ethereum 2.0's PoS model, known as Casper the Friendly Finality Gadget, operates under a set of incentives designed to maintain a high level of liveness and network participation.

Ethereum 2.0 extends Casper, leveraging its properties to secure the sharded blockchain network. That is, Ethereum 2.0 uses the concept of a chain finality threshold to ensure that the 1024 shards in the system share the same security pool as the beacon chain.

The core premise of PoS is that a validator is rewarded for completing its assigned work as expected, loses part of its deposit for being offline, and is severely penalized, with its deposit forfeited, for maliciously violating the protocol.

Although the premise is simple, the details determine success or failure. Once we realized that we had to reason not only about the behavior of each validator but also about the behavior of entire validator committees, Casper's economics quickly became more complicated.

In general, an open question for PoS chains is when to punish behavior and how to impose different penalties according to the severity of particular validator actions. In other words, we need penalty rules comprehensive enough to cover all the edge cases while remaining simple.

Since the protocol relies on validator liveness and on continuous observation at runtime, there will be situations in which an honest validator cannot participate. An honest validator may be offline due to power outages, network instability or other factors, so we need to clearly distinguish penalties for being offline from penalties for malicious behavior.

Part of the design rationale of Ethereum 2.0 is that an attacker pays a huge price for any attempt to compromise the protocol. In other words, 51% attacks of the kind seen on other chains should be extremely costly on Ethereum 2.0, and may even backfire on the attacker.

That is, in a protocol with explicit finality, "reverting finality" makes the attacker obvious to honest validators, which allows the community to coordinate a soft fork that removes the malicious actors and nullifies the attack.

Of course, even if the attack succeeds and community coordination fails, if the attacker's sole purpose is to damage the system regardless of the enormous cost, the integrity of the system is still degraded.

Another limitation of PoS-based systems is the verifier's dilemma: validators in the system may become lazy and simply trust that others in the protocol are doing their work correctly, without actually verifying the messages they are responsible for.

Unless they face significant penalties, such validators can save bandwidth and general computing resources by not fulfilling their responsibilities. This problem can be mitigated by attaching strong penalties and challenge mechanisms to missing or incorrectly signed information in the network.

Fifth, validator incentives in Ethereum 2.0

The validator incentives in Ethereum 2.0 are as follows:

1. Validator offline: quadratic leak

Ethereum 2.0 relies on a Byzantine fault tolerance threshold and must ensure that two thirds of the validators in the network are honest, active participants. The penalty applied to validators who fail to participate in attestation is called the "inactivity leak".

If the chain has not finalized for more than 4 epochs, the protocol becomes as strict as possible toward validators. In other words, the maximum expected reward drops to 0, so a validator must behave perfectly or face ever larger penalties.

The size of the penalty is proportional to the time since the chain last finalized, which discourages validators from going offline.

The longer validators stay offline, the faster this penalty grows, quadratically in the time since finality, which is why it is called the "quadratic leak". The rationale is that short periods of downtime are barely penalized, while, given expected real-world behavior, long periods offline become very costly.

The funds lost to these penalties are burned and are not redistributed to honest validators.
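
The sketch below shows the shape of such a penalty. The quotient and grace period here are illustrative stand-ins rather than actual spec parameters:

```python
def inactivity_penalty(effective_balance_gwei: int,
                       epochs_since_finality: int,
                       inactivity_quotient: int = 2**25) -> int:
    """Illustrative quadratic leak: negligible for short outages, but growing
    quadratically with the time since the chain last finalized."""
    grace_epochs = 4  # no leak while finality lags by at most 4 epochs
    if epochs_since_finality <= grace_epochs:
        return 0
    # balance * delay^2 / quotient  ->  quadratic in the finality delay
    return effective_balance_gwei * epochs_since_finality ** 2 // inactivity_quotient
```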

2. Intentional malicious activity: slashing

In early proposals for Ethereum PoS, malicious validators would suffer large penalties, called slashing. These mechanisms usually considered only the penalty for an individual malicious validator, not the severity of validator collusion. Yet if a large share of validators colludes maliciously, the whole network is affected.

In keeping with Byzantine fault tolerance, the penalty for a malicious actor scales with three times the fraction of validators that acted maliciously within a specific time interval. This punishes large coordinated attacks heavily and also discourages the formation of malicious validator pools.

That is, mounting a coordinated attack on the network is strongly disadvantageous to malicious validators, while ordinary validators are unaffected. Penalties are enforced through a reporting (whistleblower) mechanism, which incentivizes validators to discover and report other validators' violations.
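
A minimal sketch of this proportional penalty, assuming the "three times the slashed fraction" rule described above (the names and cap are illustrative, not spec identifiers):

```python
def slashing_penalty(effective_balance_gwei: int,
                     total_slashed_gwei: int,
                     total_staked_gwei: int) -> int:
    """Illustrative proportional slashing: a lone offender loses only a small
    part of its stake, but a coordinated group slashing one third of the total
    stake loses everything."""
    fraction = min(3 * total_slashed_gwei, total_staked_gwei) / total_staked_gwei
    return int(effective_balance_gwei * fraction)
```

Under this rule, an isolated misbehaving validator loses only a small slice of its 32 ETH, while a cartel controlling a third of the stake is wiped out entirely.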

3. Validator rewards

In his Ethereum Serenity design rationale, Vitalik outlines four components of a validator's base reward in each epoch:

1. 1/4 of the reward for attesting to the correct epoch checkpoint;

2. 1/4 of the reward for attesting to the correct chain head;

3. 1/4 of the reward for attestations that are included on chain quickly;

4. 1/4 of the reward for attesting to the correct shard block.

On top of the base reward there is an additional reward that depends on the number of validators participating. This extra reward motivates each validator to do the right thing and creates a collective pull toward honest behavior. The reward issuance schedule should be consistent and straightforward; adding more complexity only makes the system more error-prone and harder to reason about from a macroeconomic perspective.
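
To illustrate the split, here is a sketch that divides a base reward into the four equal components listed above and scales each by overall participation. The function and its parameters are invented for this example and simplify how the protocol actually computes each component:

```python
def epoch_reward(base_reward_gwei: int,
                 correct_checkpoint: bool,
                 correct_head: bool,
                 included_quickly: bool,
                 correct_shard_block: bool,
                 participation_rate: float) -> int:
    """Each fulfilled duty earns 1/4 of the base reward, scaled here by the
    fraction of validators participating in the same epoch."""
    quarter = base_reward_gwei / 4
    duties = (correct_checkpoint, correct_head, included_quickly, correct_shard_block)
    return int(sum(quarter * participation_rate for duty in duties if duty))
```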

Sixth, separating application complexity from protocol complexity

The claim that "the Ethereum 2.0 roadmap is daunting" does not hold up: Ethereum 2.0 is probably one of the most ambitious multi-year projects in the industry, one that can absorb its best lessons and produce an elegant protocol that resolves the scalability trilemma and can keep running for a long time.

There has been much discussion about how sharding could significantly degrade the developer experience. The argument is that it is very difficult to separate the needs of application developers from the internals of the Ethereum 2.0 protocol, because highly complex shards need to interact with one another (cross-shard transactions).

At first glance, Ethereum 2.0 does look daunting from the outside, and it is still not entirely clear how smart contracts will work in Ethereum 2.0. The reality, however, is more nuanced.

Application developers only need to know a small portion of the Ethereum 2.0 protocol. An ordinary smart contract developer does not need to understand the internals of the validator registry or the beacon chain finality gadget.

Phase 0 is therefore entirely decoupled from the application layer, and recent proposals for Phases 1 and 2 push toward an even higher degree of abstraction of the execution environments, making Ethereum 2.0 both more powerful and more concise.

In the worst case, a wallet or application developer needs to know some details of cross-shard transactions in order to display real-time transaction settlement with appropriate hints. Today's operating systems and computer internals are far more complex than they were 10 years ago, yet most application developers never need to understand those hidden internals, even though they are what make a powerful computer system.

This "separation of concerns" is at the heart of good architectural design, and one can use it as a design invariant that should be kept in mind when building Ethereum 2.0.

Seventh, building a real world computer

All in all, Ethereum is Turing-complete, which means it can run any imaginable kind of code, just like today's computers, even though it is a slow, single-threaded computer.

Today's Ethereum resembles an early, underpowered processor. Running applications on Ethereum today is expensive because the protocol has mechanisms in place to prevent the tragedy of the commons that plagues public resources.

Ethereum's vibrant developer community has never stopped improving the current network, both at the core protocol and at layer 2. From a governance perspective, however, planned future upgrades can be problematic and painful.

If, a few years after Ethereum 2.0 goes live, we feel constrained and want to build an Ethereum 3.0, it will mean we failed in the core design of its predecessor.

Upgradability should be built into the protocol in a way that does not require risky hard forks. That is, once the system has been running for a long time, innovation at layer 1 should be minimal or close to zero.

We still have a long way to go, but we keep carefully reminding ourselves why we are building this software and where we want it to be in 10 years, so that we write more robust code that can stand the test of time.

-END-

Translator's profile: Aile Bull, a special author of the Blockchain Learning Society.

Disclaimer: This article represents the author's independent views, does not represent the position of the Blockchain Institute (public account), and does not constitute any investment advice.

Source: https://www.tokendaily.co/blog/design-principles-of-ethereum-2-0
