Ethereum 2.0 Coordinator: Block size will increase eightfold, enabling roughly 500 million transactions per day.
Ethereum is trying to sidestep some of the complications of sharding by simply increasing the block size, from the equivalent of roughly 1 MB every ten minutes to roughly 8 MB. Ethereum 2.0 coordinator Danny Ryan publicly stated:
"Based on the latest research on secure block size and broadcast time, we are expanding the block, so the system's data availability is still > 1MB / s, so you can still get something similar when doing things like ZKrollup and OVM. Increased scalability."
ZK rollups are a hybrid scaling method that combines on-chain security with a second-layer network through smart contracts and zero-knowledge proofs.
OVM is Plasma's Optimistic Virtual Machine; both run on top of the Ethereum public blockchain.
Quite simply, the blockchain itself will be split into shards, each of which works much like the current Ethereum network.
Under the original design, Ethereum 2.0 was to have 1,024 shards, meaning its capacity would be that of the current network multiplied by 1,024. In theory, Ethereum 2.0 could therefore handle about 1 billion transactions per day, compared with roughly 1 million today.
However, the design has since changed significantly: only 64 shards are now proposed, and these shards are more closely integrated with one another.
On the other hand, the per-shard data ceiling will increase by a factor of eight. In theory, that works out to 512 times the current roughly 1 million transactions, or about 500 million transactions per day. Ryan said:
"To achieve scalability on par with the previous proposal, the target shard block size is being increased eightfold, from 16 kB to 128 kB. This gives the system data availability of more than 1 MB/s, which pairs well with layer 2 (L2) solutions… The network safety of these larger shard blocks is supported by recent experimental research on the existing Ethereum network."
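For readers who want to check the arithmetic behind these numbers, here is a minimal back-of-the-envelope sketch in Python. The shard count (64), the 16 kB to 128 kB shard block target, and the roughly 1 million transactions per day baseline come from the article itself; the slot times tried at the end are assumptions, since the article does not state one.

```python
# Back-of-the-envelope check of the figures quoted above.

SHARDS = 64                       # shards in the revised Ethereum 2.0 proposal
OLD_SHARD_BLOCK_KB = 16           # previous target shard block size
NEW_SHARD_BLOCK_KB = 128          # new target shard block size (8x larger)
BASELINE_TX_PER_DAY = 1_000_000   # rough current Ethereum throughput, per the article

# Throughput: capacity scales with the shard count times the per-shard size increase.
size_multiplier = NEW_SHARD_BLOCK_KB // OLD_SHARD_BLOCK_KB   # 8
scaling_factor = SHARDS * size_multiplier                    # 512
tx_per_day = BASELINE_TX_PER_DAY * scaling_factor            # ~512 million, i.e. "about 500 million"
print(f"scaling factor: {scaling_factor}x, estimated tx/day: {tx_per_day:,}")

# Data availability: 64 shards, each targeting one 128 kB block per slot.
# The slot times below are assumptions, tried only to see which clear the >1 MB/s claim.
for slot_seconds in (6, 8, 12):
    mb_per_s = SHARDS * NEW_SHARD_BLOCK_KB / slot_seconds / 1024
    print(f"slot = {slot_seconds:2d}s -> ~{mb_per_s:.2f} MB/s aggregate shard data")
```

Only the shorter slot times clear the 1 MB/s figure Ryan cites, so the exact data-availability rate depends on the slot length assumed.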
The research he refers to appears to focus only on uncle rates for larger blocks, without considering issues such as synchronization time, or whether the design could bottleneck elsewhere because the 64 shards have to share a large amount of data.
It is also unclear why, based on this research, Ethereum developers do not simply raise the current network's defaults eightfold and expect miners to follow those defaults out of inertia or game-theoretic incentives.
What is clear, however, is that it is hard to argue against the way Nakamoto recommended scaling: if the focus turns to actually expanding the network, with pruning of network data and other techniques delivering plenty of efficiency and data compression, then things will have come full circle.