Understanding the design and implementation of Ethereum 2.0

According to a May 22 report by Trustnodes, Ethereum co-founder Vitalik Buterin said that developers have made all the research breakthroughs required for Ethereum 2.0, and that only the implementation remains. He said:

“Actually, we have already made all of the research breakthroughs needed to fully implement Ethereum 2.0, and that has been the case for about a year.”


Image source: visualhunt

This bold statement comes with the Ethereum 2.0 Phase 0 specification in place and three testnets running. The next step is a cross-client testnet, expected to launch this summer.

Phase 0 covers staking only; it is a kind of virtual mainnet that sits somewhere between a testnet and a fully functional blockchain.

Some features, such as shard data storage, will be added in Phase 1 next year. Phase 2 is the practical, fully functional launch, expected within two years.

The complete design is a somewhat complex system that connects different groups of nodes, each individually running the shard or shards they are interested in.

A shard is basically like the current Ethereum network. Suppose that network has 1,000 nodes; then there is network B, or shard B, with its own 1,000 nodes, all running the same base code and therefore belonging to the same blockchain. There can be hundreds of such shards.

Since two shards are in effect different systems, linking them together is a breakthrough that goes beyond sharding itself: the same mechanism could connect private and public blockchains, sidechains, and everything in between.

Ethereum 2.0 design

How it works is best explained first at a technical level, in Buterin's own words, and then with a simpler interpretation. Buterin put it this way:

“The general flow of a cross-shard transaction (take a transfer of 5 ETH as an example) is this:

Burn 5 ETH on shard A, creating a receipt (e.g., a Merkle branch committed to the state root of the block) containing: (1) the target shard, (2) the destination address, (3) the value (5 ETH), (4) a unique ID.

Then, once shard B sees shard A's state root, a Merkle branch proving the receipt is submitted to shard B. If the Merkle branch verifies and the receipt has not already been used, 5 ETH is minted and credited to the recipient.

To prevent double-spending, we need to keep track in storage of which receipts have been claimed. For efficiency, receipts are given sequential IDs. Specifically, each target shard stores, for every source shard, the next sequence number it expects. When source shard A creates a new receipt for target shard B, the receipt's sequence number is shard A's next number for shard B (which is then incremented, so it is never reused). This means each target shard only needs to track a small bitfield per source shard, SHARD_COUNT of them in total, to prevent double-spending, i.e., roughly one storage bit per cross-shard transaction. In other words, you basically lock ETH in a smart contract on shard A, show proof of that on shard B, and then receive ETH on shard B.”

To prevent double-spending, developers basically use something like a nonce: each receipt is given a number, and the counter increments indefinitely.
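The burn-and-receipt flow above can be sketched in a few lines. This is an illustrative model only, not eth2 client code: the `Shard` and `Receipt` names are made up for the example, the Merkle-branch proof is elided, and instead of the per-source-shard bitfield Buterin describes, a simple per-source-shard counter enforces that each receipt is claimed exactly once.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Receipt:
    target_shard: int
    destination: str
    value: int          # ETH amount, kept as an integer for simplicity
    seq: int            # sequence number, unique per (source, target) pair
    source_shard: int

class Shard:
    def __init__(self, shard_id, shard_count):
        self.id = shard_id
        self.balances = {}
        # next sequence number to assign, per target shard (acting as source)
        self.next_seq_out = [0] * shard_count
        # next sequence number expected, per source shard (acting as target)
        self.next_seq_in = [0] * shard_count

    def burn_and_issue_receipt(self, sender, target_shard, destination, value):
        """Destroy `value` ETH on this shard and emit a receipt for the target."""
        assert self.balances.get(sender, 0) >= value, "insufficient balance"
        self.balances[sender] -= value
        seq = self.next_seq_out[target_shard]
        self.next_seq_out[target_shard] += 1   # incremented, so never reused
        return Receipt(target_shard, destination, value, seq, self.id)

    def claim_receipt(self, receipt):
        """Mint ETH on this shard if the receipt is valid and unclaimed.

        In the real design the claimer also supplies a Merkle branch proving
        the receipt against shard A's state root; that proof is elided here.
        """
        assert receipt.target_shard == self.id, "wrong target shard"
        expected = self.next_seq_in[receipt.source_shard]
        assert receipt.seq == expected, "receipt already claimed"
        self.next_seq_in[receipt.source_shard] += 1
        self.balances[receipt.destination] = (
            self.balances.get(receipt.destination, 0) + receipt.value)

SHARD_COUNT = 4
shard_a, shard_b = Shard(0, SHARD_COUNT), Shard(1, SHARD_COUNT)
shard_a.balances["alice"] = 10
r = shard_a.burn_and_issue_receipt("alice", target_shard=1,
                                   destination="bob", value=5)
shard_b.claim_receipt(r)
print(shard_a.balances["alice"], shard_b.balances["bob"])  # 5 5
```

Presenting the same receipt a second time fails the sequence-number check, which is the nonce-like double-spend protection described above.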

This happens at the protocol level, so you know no one is cheating, because you are running both a shard A node and a shard B node. The nodes' validation rules check the block headers, or the receipts, as Buterin calls them, and if anything is wrong, your node will tell you.

We said nodes in the plural here, but for you the two may well run inside a single client, although that is an implementation detail for Phase 2 that is still far too early to pin down.

As you can imagine, it is harder to make a smart contract on shard A "interact" with a smart contract on another shard B, for example when shard B holds the CryptoKitties DNA.

It is not clear how you would transfer DNA that was generated on shard B over to shard A and let CryptoKitties run on shard A.

One way is through a central coordinator, but that has its own problems, whereas moving ETH as described above is peer-to-peer.

Breakthrough or marginal improvement?

Although some say the coding is the most difficult part, actually implementing this design may well prove to be the hardest stage.

In addition, running nodes for several different shards at the same time is arguably equivalent to increasing the block size.

The difference is that you do not have to run different shards at the same time; you could, say, run a full node for one shard and a lighter node for another.

At this point, however, you may find yourself in an endless debate with Bitcoiners, who might say that a full node is one that runs all the shards, and that few people can afford to do that.

Then there is Ethereum 1.0, which folds into this already complex design, with plans to trim data by removing outdated smart contracts and by pruning history outright.

The first may be easier than the second. Bitcoiners do not know much about Ethereum smart contracts, so we do not know what they would say about it.

For pruning, you would probably need to set a checkpoint, which acts a bit like a new genesis block. Your Ethereum node would not have to start from 2015; it could start from, say, 2017, with the earlier data discarded or uploaded somewhere as an archive.

The difficulty is who sets the checkpoint. If it is one person or a small group, that raises many problems, but with staking it may be possible to do it in a decentralized way.
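One way to picture the checkpoint idea is as a split of the block history: everything before the checkpoint is archived, and the node treats the checkpoint like a new genesis. A minimal sketch, illustrative only and not how any client actually stores blocks:

```python
def prune_to_checkpoint(chain, checkpoint_height):
    """Split a block list into an archive (pre-checkpoint) and the
    working chain (checkpoint onward, treated like a new genesis)."""
    archive = chain[:checkpoint_height]     # may be uploaded somewhere as history
    working = chain[checkpoint_height:]     # what the node keeps running from
    return archive, working

blocks = [f"block-{h}" for h in range(100)]
archive, working = prune_to_checkpoint(blocks, 70)
print(len(archive), working[0])  # 70 block-70
```

The open question the article raises is not this mechanics but who gets to pick `checkpoint_height` in a decentralized way.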

The scaling race

One Bitcoin developer has commented that Ethereum is pursuing all the ideas that Bitcoin Core rejected.

Whether that is a good or a bad thing depends on your view and on how it actually works out, because Bitcoin Core never followed these ideas through far enough to draw a firm conclusion.

In addition, Bitcoin arguably cannot keep running in its current state forever: 1 MB every 10 minutes is not much, but extrapolated over ten years it adds up considerably.

This process is slow, and in some ways everything is fine for now. The blockchain currently stands at about 220 GB; at 1 MB every ten minutes it would grow by roughly another 520 GB over ten years, and since Bitcoin blocks actually run slightly above 1 MB, the chain should be around 1 TB by then. 1 TB may not be a problem in itself, but the size never shrinks, it only grows, so eventually it becomes 10 TB, or 100 TB.
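As a back-of-the-envelope check of those figures, here is the arithmetic, assuming a steady 1 MB block every ten minutes (real Bitcoin block sizes vary and now run somewhat higher):

```python
# Rough projection of Bitcoin's blockchain size under a constant block rate.
MB_PER_BLOCK = 1
BLOCKS_PER_DAY = 24 * 6                                    # one block per 10 minutes
GB_PER_YEAR = MB_PER_BLOCK * BLOCKS_PER_DAY * 365 / 1024   # ~51 GB per year

current_size_gb = 220    # approximate chain size cited in the article
for years in (10, 20):
    projected = current_size_gb + years * GB_PER_YEAR
    print(f"after {years} years: ~{projected:.0f} GB")
# after 10 years: ~733 GB
# after 20 years: ~1247 GB
```

So ten years of 1 MB blocks add roughly 520 GB, and with blocks slightly above 1 MB the 1 TB ballpark follows.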

Obviously, you might say that in the long run we are all dead, but solving this growth problem is what solving scalability really means.

The Lightning Network may still take some time. Ethereum has Plasma, state channels, and the rest, but none of these fully address the ever-growing history.

For Bitcoin, developers are currently trying to compress data as much as possible. Time is, to some extent, on everyone's side, and the competition is healthy, because gaining more capacity while maintaining decentralization is obviously a very useful thing.

Whether Ethereum can solve the scaling problem remains to be seen. Now that the plan has been worked out, at least the problems appear to have been identified. The basic framework should be ready later this year; next year is for preparing the bricks and other materials, with the windows and roof installed in 2021, followed by the finishing touches. That is how a fine Ethereum house gets built by roughly 100 protocol developers.


