Eth2 Research Team AMA: Sharding, Scalability, Composability

Editor's Note: The Ethereum 2.0 research team held a 12-hour AMA (Ask Me Anything) on Reddit on July 15, 2019. This is an excerpt from the Q&A.

Q: In the current design, how does the Ether token move from Eth1 to Eth2?

Carl: It depends on what you mean by "Ether token".

Validators can send 32 ETH to the deposit contract. That ETH is transferred to the beacon chain, where they become eligible as validators. As for users who simply want to move ETH from Eth1 to Eth2, that isn't settled yet; there may, however, be a dedicated bridge (otherwise it would still be done through the deposit contract).

As for ERC20/721 tokens, it is too early to discuss. If Eth2 ends up with an Eth1 execution engine, that would be perfect and there would be no friction; but even if it doesn't, ERC20 tokens could be transferred by copying their state root.

Q: What is the current plan for migrating from Eth1 to Eth2?

Vitalik: The current plan is to fold Eth1 into Eth2 as an execution environment. In practice, this means a hard fork on Eth1 to re-adjust the gas costs of some operations (for example, the gas cost of opcodes that read storage or read accounts should be increased to 2000–10000). Then, from a certain point in time (the "flag block height"), the Eth1 state root will be moved into the Eth2 system (perhaps with some one-time processing of the Eth1 state as an optimization, such as replacing the hexary Patricia tree with a binary tree). Eth1 then becomes part of Eth2, and every application can keep running as before.

I do think the gas increase for storage-read/account-read opcodes is something contract developers should be wary of and plan for (the affected opcodes are basically those that were already repriced in Tangerine Whistle; their gas costs will have to rise by at least an order of magnitude).

The reason for this change is that these opcodes greatly increase the size of the Merkle proofs required for stateless block verification. In the worst case, proofs under the current Merkle scheme could be larger than 100 MB. Gas repricing, plus trie optimization, plus charging contracts for the amount of data they read, lets us shrink the Merkle proofs to an acceptable size.
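To see why switching from a hexary Patricia tree to a binary tree shrinks witnesses, here is a rough back-of-the-envelope sketch. The leaf count and per-node sizes below are illustrative assumptions, not figures from the AMA: a hexary proof needs up to 15 sibling hashes per level, while a binary proof needs only one sibling per (deeper) level.

```python
import math

HASH_SIZE = 32  # bytes per node hash (assumption: 32-byte hashes)

def hexary_proof_bytes(n_leaves: int) -> int:
    # Hexary Patricia tree: ~log16(n) levels, and the witness carries
    # up to 15 sibling hashes at each level.
    depth = math.ceil(math.log2(n_leaves) / 4)
    return depth * 15 * HASH_SIZE

def binary_proof_bytes(n_leaves: int) -> int:
    # Binary tree: deeper (log2(n) levels) but only 1 sibling per level.
    depth = math.ceil(math.log2(n_leaves))
    return depth * HASH_SIZE

n = 2 ** 28  # illustrative state size (~268M leaves)
print(hexary_proof_bytes(n))  # 3360 bytes per accessed leaf
print(binary_proof_bytes(n))  # 896 bytes per accessed leaf
```

Per leaf accessed, the binary layout cuts the witness by roughly 3–4x in this toy model, which compounds across the thousands of reads a worst-case block can make.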

Q: Does Phase 0 bring scalability improvements? How many transactions per second will Ethereum be able to process with sharding?

Vitalik: Phase 0 lets a light client verify a hash of the Eth1 chain in a very lightweight way (though you pay about 200 KB every 6 minutes to track the committees; by Phase 1 that can be reduced to about 200 KB per day). This could be used, for example, to make light clients built into browsers work more efficiently. That is itself a kind of scalability gain which, in my opinion, has not been fully appreciated.
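The bandwidth difference between the two figures Vitalik quotes can be made concrete with a quick calculation (treating "every 6 minutes" as exact, purely for illustration):

```python
MINUTES_PER_DAY = 24 * 60

phase0_kb_per_day = 200 * (MINUTES_PER_DAY // 6)  # 200 KB every 6 minutes
phase1_kb_per_day = 200                           # ~200 KB per day

print(phase0_kb_per_day)                          # 48000 KB (~47 MB) per day
print(phase0_kb_per_day // phase1_kb_per_day)     # a ~240x reduction
```

So the Phase 1 improvement turns roughly 47 MB of daily light-client overhead into a fifth of a megabyte.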

Carl: The goal of Phase 0 is to track validator state and generate randomness, so asking about its scalability doesn't mean much. As for TPS, the answer is not yet clear. As a rough calculation, if each shard has the same throughput as Eth1.x, TPS could be pushed to 16 × 1024 = 16,384 (assuming no cross-shard transactions).

Moreover, the number above is still somewhat unreliable, because Eth2 is designed to work with Layer-2 solutions such as Rollup and OVM, which provide even higher throughput.
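Carl's back-of-the-envelope figure can be written out directly. The per-shard throughput of 16 TPS is his stated Eth1.x-level assumption, and the result is an upper bound that ignores cross-shard traffic:

```python
SHARD_COUNT = 1024   # shard count in the 2019 Eth2 design
PER_SHARD_TPS = 16   # assumed Eth1.x-level throughput per shard

# Upper bound: every shard fully utilized, zero cross-shard transactions.
max_tps = SHARD_COUNT * PER_SHARD_TPS
print(max_tps)  # 16384
```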

Q: Once sharding is implemented, will all 1024 shards appear at once, or will there be only a few shards at first, with the count growing as usage increases? Releasing that many shards from the start seems like an obvious waste of space.

A: All 1024 shards will be released at once. Growing the shard count gradually would (probably) bring unnecessary complexity. With a large amount of unused capacity, gas prices will be lower, which will attract more users.

Q: 1024 shards, more than 130,000 validators… So what happens if there aren't enough validators before the shard chains start?

Carl: There are 1024 shards and 128 validators per committee, so at least 131,072 validators are required to provide crosslinks every slot. If the validator count falls below that number, some shards are skipped, so that every committee keeps 128 validators.

Danny: The system naturally handles as few as 64 validators; security is clearly reduced, but the protocol remains technically functional.

Vitalik: Technically, the system still "works" even with a single validator. But below 131,072 validators, the system's properties gradually degrade as the validator count falls.
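The shard-skipping behavior Carl describes can be sketched as a simplified model. This deliberately ignores the actual committee shuffling in the spec; it only illustrates the arithmetic of keeping committees at 128 members:

```python
SHARD_COUNT = 1024
TARGET_COMMITTEE_SIZE = 128

def shards_crosslinked(n_validators: int) -> int:
    # Simplified illustration: committees are never allowed to shrink
    # below 128 members, so with too few validators the remaining
    # shards are simply skipped in that slot.
    return min(SHARD_COUNT, n_validators // TARGET_COMMITTEE_SIZE)

print(shards_crosslinked(131_072))  # 1024 -- every shard covered
print(shards_crosslinked(65_536))   # 512  -- half the shards skipped
```

Note that 1024 × 128 = 131,072 is exactly the threshold the question refers to; below it, coverage degrades shard by shard rather than the system halting.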

Q: My biggest concern is that ETH 2.0 will break composability. Won't most dApps end up being developed on the same shard (e.g. the shard where the MakerDAO contracts live, so they can use Dai)?

Justin: Cross-shard composability is admittedly an open question, but we have reasons to remain optimistic:

Shards are designed to be homogeneous (unlike, for example, Polkadot or Cosmos), which makes cross-shard communication easier.

There are design patterns that abstract away the boundaries between shards. For example, we can treat shard 0 and shard 1 as an underlying data availability layer providing data to a single execution engine that needs more bandwidth. These design patterns are easier to exploit with a programmable execution engine.

Sharding is designed to be friendly to fast optimistic finality, since shard attestations are somewhat analogous to block confirmations in Eth1. That is, thanks to the fast probabilistic finality of each individual shard, the sharded system can logically be presented as a single blockchain.

There are also opportunities at the UI layer to abstract away shard boundaries.

Original link:


Author: Eth2 Research Team

Translation: Ajian

(This article is from EthFans; reprinting without the author's permission is strictly forbidden.)