Perspectives | Paradigm shift: How can Eth2.0's shard chains serve the contracts on Eth1.0?

Editor's Note: This article is a long Twitter thread posted by Casey Detrio on June 1st about the Eth1.0 migration, and an idea he finds ingenious: using a bridge to let contracts on Eth1.0 coordinate the shard chains, so that the latter's throughput can serve the former. The Eth1.0 migration is, in fact, a concern for every Ethereum community member, and this thought-provoking proposal clearly deserves further discussion.

Casey Detrio:

Yesterday I talked with @josephch for a long time. He asked a big but terrifying question: as everyone's attention turns to the release of Eth2.0, how, in the long run, will the Eth1.0 chain, and all the contracts and users on it, adapt to the Eth2.0 (Serenity) roadmap?

We used to be afraid of this question because we couldn't give a good answer. But this time we asked it in a new way, with a new narrative, and what I took away was more inspiration than fear. It contains a shift in mindset that will happen only gradually at first, and (I hope) will erupt collectively later on.

The inspiration for me: instead of asking how Eth1.0 contracts will migrate to the Eth2.0 shards, we should ask how the Eth2.0 shards can serve the contracts on Eth1.0!

Although he knows this is a tricky issue, I think Joseph must have felt the same inspiration, because he is aware of this shift. This shift toward a new understanding begins with an innovation around Eth2: the Eth1-to-Eth2 "availability bridge."

The idea of a bridge was proposed a few months ago, but it was not regarded as an important innovation at the time (at least I did not see it that way). Looking back, we can see why it was underestimated. The story begins with the turning point of June 2018.

In the roadmap transition of June 2018, the plan to deploy PoS on the mainnet was abandoned in favor of releasing Eth2.0 (PoS + sharding) as a separate new chain (beacon chain + shard chains), which left us dizzy. Pushed into a new universe, we had to reorient ourselves around Eth2.0, and our vision became entirely Eth2.0-centric.

The 2018 roadmap transition introduced a phased Eth2.0 plan; within that plan the availability bridge is actually the most important piece, yet the idea went completely unnoticed. An odd quirk of the phased Eth2.0 vision shows why.

The quirk is that the Phase 1 shard chains are completely useless on their own, because validators do not care whether the blocks packed by the shard chains (i.e., the data blobs) contain nothing but zero bytes. Launching 100 shard chains full of zero bytes is obviously silly.

The problem is that, in order to reliably get users' transactions (non-zero-byte data) into shard blocks, we must first settle the basic working principles of Phase 2, and that is something we have not yet worked out.

The genius of the availability bridge is that it uses an Eth1.0 contract to pay shard block proposers, asking them to include useful, non-zero data in the data blobs.
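The thread does not specify the mechanism, but a minimal sketch can make the incentive concrete. Below is a hypothetical Python model (all names invented, not from any spec): users escrow fees in an Eth1.0 bridge contract keyed by the hash of the data they want included; once the bridge accepts a shard data root (in reality an Eth2 light client inside the contract would verify it), the proposer who included the data claims the fee. The inclusion proof is reduced to a toy check where the data root is simply the hash of the data.

```python
# Hypothetical sketch of the availability-bridge incentive: an Eth1.0
# contract escrows fees, and a shard block proposer who includes the
# user's data in a blob can later claim the fee by proving inclusion
# against a shard data root the bridge has verified.
# All names here are illustrative; none come from a real specification.

import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

class AvailabilityBridgeContract:
    def __init__(self):
        self.orders = {}          # data hash -> total fee escrowed by users
        self.known_roots = set()  # shard data roots the bridge accepts

    def post_order(self, data: bytes, fee: int) -> None:
        """A user escrows `fee`, asking proposers to include `data` in a blob."""
        key = h(data)
        self.orders[key] = self.orders.get(key, 0) + fee

    def accept_data_root(self, root: bytes) -> None:
        """In reality this root would be verified by an Eth2 light client
        running inside the Eth1 contract; here we simply record it."""
        self.known_roots.add(root)

    def claim(self, proposer: str, data: bytes, root: bytes) -> int:
        """The proposer shows the data sits under an accepted root and is paid.
        A real bridge would check a Merkle inclusion proof; in this toy the
        'root' is just the hash of the data itself."""
        if root not in self.known_roots or h(data) not in self.orders:
            return 0
        fee = self.orders.pop(h(data))
        return fee  # a real contract would transfer `fee` to `proposer`
```

Usage: after `post_order(b"tx-data", 10)` and `accept_data_root(h(b"tx-data"))`, a proposer's first `claim` returns the 10-unit fee, and a repeated claim returns 0 because the order has been consumed.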

What I am saying is that if we stare at Phase 2 as the ultimate goal, we cannot see the importance of the eth2-phase1 bridge to the Eth1.0 chain: while we wait for the Phase 2 release, the bridge looks like little more than a toy.

And I am realizing that this bridge is a new idea with the potential to create a new roadmap, distinct from the traditional monolithic vision of Eth2.0 = Eth1.0 + PoS + shard chains.

This new vision no longer treats Eth2.0/shards as an independent new system, and no longer schedules a drawn-out, disruptive, frustrating migration for Eth1.0's dApps; the bridge lets us treat the shard chains as an extension of Eth1.0, built on the momentum of the Eth1.0 ecosystem and carrying that momentum forward.

Maybe we can widen the bridge into an 8-lane superhighway, connecting the Eth1.0 chain to a massive data-availability and execution engine and gaining the parallel throughput of 1,000 or even 10,000 shard chains.

The Eth1.0 chain can be the "imperial capital", the hottest shard chain, where contracts can make instant synchronous calls to one another, unlike calls to contracts deployed in other "cities" (and, of course, at higher fees).

dApp features that do not strictly depend on interactions with other dApps can be parallelized by migrating them to the shards (that is, executed on shards instead of on the main chain).

I suspect other developers and researchers are coming up with similar ideas by iterating on the eth1-eth2 bridge. I can't wait to see their proposals.


This… doesn't quite make sense to me. Here's why:

In any case, we need to migrate from PoW to PoS, and the beacon chain will be the PoS hub chain. So, if "Eth1.0" is going to remain the center that everyone tracks, it has to become a state root and a data space on the beacon chain. We have to pay this migration cost anyway.

Then, we have @realLedgerwatch working on an Eth1.0 stateless client. To keep the cost of running beacon chain nodes from getting too high, eth1.0 verification must be based on stateless clients. What we get from that is an execution environment. In your proposal, for some reason, every beacon chain node must verify the data. This is not required for security; we can downgrade to a committee (plus whichever nodes care) validating the data.
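The stateless-client idea mentioned above can be illustrated with a toy example (this is not Ethereum's actual state format): a block carries a Merkle witness for the state it touches, so a verifier holding only the state root can check the claimed pre-state, apply the change, and compute the post-state root, without storing any state itself.

```python
# Toy illustration of stateless verification (not Ethereum's real format):
# the verifier holds only `root`; the block supplies the touched leaf and
# its Merkle siblings, which suffice to verify and apply the update.

import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over a power-of-two list of leaves."""
    nodes = [h(x) for x in leaves]
    while len(nodes) > 1:
        nodes = [h(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def verify_and_apply(root, index, old_leaf, new_leaf, siblings):
    """Check that `old_leaf` sits at `index` under `root` using the sibling
    hashes in `siblings`, then return the post-state root with the leaf
    replaced by `new_leaf`. Raises ValueError if the witness is bad."""
    old_node, new_node = h(old_leaf), h(new_leaf)
    idx = index
    for sib in siblings:
        if idx % 2 == 0:
            old_node, new_node = h(old_node + sib), h(new_node + sib)
        else:
            old_node, new_node = h(sib + old_node), h(sib + new_node)
        idx //= 2
    if old_node != root:
        raise ValueError("invalid witness")
    return new_node

# Usage: four toy accounts; a stateless verifier checks a balance update.
leaves = [b"alice:100", b"bob:50", b"carol:7", b"dan:0"]
root = merkle_root(leaves)
siblings = [h(leaves[1]), h(h(leaves[2]) + h(leaves[3]))]  # witness for index 0
new_root = verify_and_apply(root, 0, b"alice:100", b"alice:90", siblings)
assert new_root == merkle_root([b"alice:90", b"bob:50", b"carol:7", b"dan:0"])
```

The point of the sketch is the asymmetry: the prover (block builder) needs the full state to build witnesses, but the verifier needs only one 32-byte root, which is what keeps beacon-chain nodes cheap to run.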

We have now justified every step of turning the Eth1.0 engine into an execution environment. Maybe it will be the first execution environment people actually use. If needed, I can also make it more prominent by adding basic cross-shard support.

So we can reasonably say that the eth1.0 system has to be transformed from a chain into an execution environment anyway. Done right, this can happen seamlessly, without breaking applications, so the migration cost for dApps can be reduced to zero.

OK, I take back the phrase "doesn't make sense". In fact, this is not an "A vs. B" debate. The model of Eth1.0 as an execution environment is a synthesis of the two approaches (eth2 replaces eth1 vs. eth2 built around eth1), and the synthesis gives us the advantages of both. Specifically:

  • Migration cost close to zero
  • Developing dApps on eth1 becomes a long-term, reliable strategy, and once eth2 is fully live they can start using the other shard chains.
  • Does not double the consensus complexity in the long run
  • Moves off PoW quickly
  • Lower design costs, because the eth1.0 stateless client can serve as the first execution environment we focus on
  • Meanwhile, interested teams can concentrate on building competing, better-designed execution environments, and perhaps most activity will eventually migrate to a better one.
  • And, most importantly, eth1.0 and the contracts on it "look like" first-class citizens of the eth2.0 system (translator's note: in software design, a first-class object), not second-class citizens.

Casey Detrio:

When you say "'this' doesn't make sense", I don't know what your "this" refers to. The question itself, "how can eth2.0's shard chains help existing eth1.0 contracts?", certainly makes sense. I mean, that is the question the availability bridge itself is trying to answer, isn't it?

So maybe what you mean is that iterating on the availability bridge doesn't make sense? You seem to be arguing that turning the eth1.0 chain into a shard (or an execution environment) is the better approach. Some eth2.0 researchers doubt this is really feasible; I remain optimistic.

Also, your point (4) says "for some reason, every beacon chain node must verify the data", and I am not sure what you are referring to. What I made was a vague, open-ended proposal: the idea of iterating on the availability bridge. I never proposed that every beacon chain node should verify the data (whatever that would mean).

Anyway, you have to admit this is a change of narrative. The old story was: "We may never be able to integrate eth1.0 into eth2.0. We should still be prepared to migrate all contracts, and we need to completely rewrite/redesign cross-contract communication."

The new story is: "We can build an availability bridge for eth1.0 before Phase 2 is released. After Phase 2 is released, one feasible path is to turn eth1.0 into a shard chain/execution environment inside eth2.0." The final migration remains on the table as a distant possibility, but it is no longer the mainstream narrative.

I think the availability bridge (and its iterations) is another path that can bear fruit (and it is not mutually exclusive with migrating eth1.0). Whether bridge 1.0 and its iterated versions are useful, or whether they force major sacrifices, is the question I want to discuss.

To be clear, my so-called "availability bridge" refers to the "eth2.0 light client implemented inside eth1.0" proposed by @VitalikButerin, not the "development toward Phase 1" I once wrote about. The latter mentioned the availability bridge but did not iterate on it, and it was just a proposal for eth2.0 itself.

Original link: | Author: Casey Detrio | Translation: A Jian

(This article is from EthFans, the Ethereum fan community; reprinting without the author's permission is strictly forbidden.)
