What is Ethereum 2.0? How will the Ethereum 2.0 ecosystem change? What is new in the technology? Will BCH or ETC become the data layer of Ethereum? Where should Ethereum go from here?
To address these questions, from Tuesday to Thursday night, Babbitt's first community interview column, SheKnows, joined hands with ETC to comprehensively analyze the "post-Ethereum era" (Ethereum 2.0).
Following Tuesday night's sharing on the "ETH 2.0 Ecosystem" theme, Wednesday night's theme was "From 1.0 to 2.0: a look at the technology trends of the Ethereum 2.0 era". The guests were Yang Zhen, translator of the Ethereum Yellow Paper; Zhang Weijia, head of the Enterprise Ethereum Alliance in China; and A Jian, head of the EthFans community.
- Ethereum 2.0 design principles
- Ethereum 2.0 terminology: why a beacon chain is needed
- Beacon Chain Contract: A New Way to Deploy Dapps on Ethereum 2.0
- Beacon chain: a new starting point for Ethereum 2.0
- Ethereum 2.0 audit report announced next week, giving green light to multi-client testnet
- Technical Guide | Ethereum 2.0 Phase 0 V0.8.0 Technical Specifications (1)
The key points are as follows:
1) The beacon chain never holds complex state, yet it is the center of the entire subsequent system. ——A Jian
2) Ethereum 2.0 nodes only process transactions for specific shards, which enables parallel processing and greatly improves transaction throughput and scalability. ——Zhang Weijia
3) Each validator needs to stake 32 ETH, so the beacon chain deposit contract will lock at least 2 million ETH. ——Yang Zhen
4) The biggest advantage of Casper FFG over other PoS consensus mechanisms is its liveness. ——A Jian
5) Punishment should be established within a DAO (a community self-governing system); it only makes sense once there is consensus on the penalties. ——Zhang Weijia
6) The VDF is an open-source hardware (ASIC) design intended to recompute the random number generated by RANDAO into the final result. It is used in conjunction with RANDAO, not as an alternative to it. ——Yang Zhen
7) If none of the transactions processed by Eth 2.0's sharding system are cross-shard transactions, i.e., no transaction depends on data from other shards, overall throughput can in theory increase 1000-fold; but overall performance drops significantly as the proportion of cross-shard transactions rises. ——Yang Zhen
8) The eWASM stage is difficult and risky, and is better suited to large enterprises. ——Zhang Weijia
9) Filter out the loud noise and you will find that the consensus around Ethereum is stronger than many people think. ——A Jian
10) Whether Eth 2.0 can become the first successful PoS public chain remains my biggest concern. ——Yang Zhen
The full version is as follows:
Ethereum 2.0 in brief: from phase 0 to phase 2
Moderator: A Jian, please share some background on the phase 0 beacon chain.
The beacon chain is the main chain in Eth2. It is responsible for maintaining the validator set, assigning validators to the shards (requiring them to propose blocks or submit attestations), and storing the attestations of the shard chains.
Unlike Eth1's main chain, the beacon chain uses a PoS mechanism to reach consensus, specifically Casper FFG + Latest Message Driven (LMD) GHOST.
The beacon chain never holds complex state, but it is the center of the subsequent system.
In the upcoming phase 0, a user on Eth1 only needs to deposit 32 ETH into the deposit contract to qualify as a validator and participate in the beacon chain's consensus process. In addition, during this phase of two parallel chains, developers are planning a two-way coupling of the PoW main chain and the beacon chain, using the beacon chain to finalize blocks on the PoW chain.
Moderator: Zhang Weijia, please explain the phase 1 shard chains.
Ethereum, like other blockchains, faces the blockchain trilemma: security, decentralization, and scalability cannot all be maximized at once.
Ethereum 2.0 uses sharding to divide the entire state of the network into a series of partitions called shards (1024 of them), each with its own independent state and transaction history. A sharding scheme on Ethereum could, for example, put all addresses starting with 0x00 into one shard, all addresses starting with 0x01 into another, and so on.
In the sharding system, validator nodes are randomly assigned the right to create shard blocks. During each slot (e.g., a 6-second period), for each shard k a random validator is selected and given the right to create a block on shard k. For each shard k, another set of validators is selected as attesters. The block header and the attesters' signatures are published on the "main chain" (also known as the beacon chain).
Ethereum 2.0 nodes only process transactions for specific shards, which enables parallel processing and greatly improves transaction throughput and scalability.
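The address-prefix assignment described above can be sketched as follows (a toy illustration, assuming 1024 shards; this is not the actual Eth 2.0 assignment logic):

```python
NUM_SHARDS = 1024  # 2**10 shards, per the figure quoted above

def shard_of(address_hex: str) -> int:
    """Map a 20-byte hex address to a shard id via its leading bits."""
    addr = address_hex.lower()
    if addr.startswith("0x"):
        addr = addr[2:]
    # NUM_SHARDS == 2**10, so the top 10 bits of the address pick the shard.
    first_two_bytes = int(addr[:4], 16)   # top 16 bits of the address
    return first_two_bytes >> 6           # keep only the top 10 bits

# Addresses sharing a prefix land in the same neighborhood of shards:
print(shard_of("0x00ab" + "00" * 18))   # 2
print(shard_of("0x00cd" + "00" * 18))   # 3
print(shard_of("0xffff" + "00" * 18))   # 1023, the highest shard
```

This is just the "partition by address prefix" idea from the paragraph above; each shard then keeps its own independent state and history for the addresses it owns.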
Moderator: Yang Zhen, please tell us about the phase 2 virtual machine.
Eth 2.0's phase 2 is still in the R&D stage, so there is not much that can be said definitively; a lot of the content is still under discussion rather than settled. You can track the latest news at this address: https://hackmd.io/UzysWse1Th240HELswKqVA?view
As far as the VM goes, it is certain that Eth 2.0 will use Ewasm as the execution code standard. That is, smart contracts in Eth 2.0, whether written in Solidity or in Vyper, will ultimately be compiled into Ewasm bytecode and executed in the Ewasm virtual machine.
Concretely, the VM is connected to the Ethereum client through an ABI (Application Binary Interface) called EVMC (see the ethereum/evmone project). This is a decoupled design that separates smart contract execution from system data processing and consensus. The Ewasm virtual machine is maintained in a project called ewasm/hera.
Beacon chain validators: at least 2 million ETH locked
Moderator: According to Ethereum 2.0 researcher Justin Drake, the genesis block of the beacon chain (MIN_GENESIS_TIME) could be created as early as January 3, 2020 (though this is likely to be postponed). Before then, developers will release a deposit contract, and participants who want to become beacon chain validators can transfer 32 ETH to the contract address in advance. At this stage, from a technical point of view, what is certain to happen, and what might happen?
From what I have seen, this deposit contract may be deployed to the current mainnet in September or October this year, after which validator registrations can be accepted. This has nothing to do with a fork; it is just a contract on the mainnet. The contract code is already available at https://github.com/ethereum/eth2.0-specs/blob/dev/deposit_contract/contracts/validator_registration.v.py. Under the current design, the contract needs to receive deposits from at least 65,536 different addresses acting as validators.
Why that number? Because Eth 2.0 plans to support 1024 shard chains, 65,536 registered validators guarantee at least 64 validators per shard. Since each validator must stake 32 ETH, this contract will lock at least 2 million ETH.
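The arithmetic behind these figures is easy to check:

```python
# Checking the figures quoted above: 1024 shard chains, at least 64
# validators per shard, and 32 ETH staked per validator.

SHARDS = 1024
MIN_VALIDATORS_PER_SHARD = 64
DEPOSIT_ETH = 32

min_validators = SHARDS * MIN_VALIDATORS_PER_SHARD
min_locked_eth = min_validators * DEPOSIT_ETH

print(min_validators)   # 65536
print(min_locked_eth)   # 2097152 -> "at least 2 million ETH"
```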
So this will certainly be a relatively long process that should last for months. No one knows what will happen during this period, but the whole process will be very interesting.
Moderator: You just mentioned the 32 ETH deposit, which is a fixed value. Why was this value chosen? Moreover, in phase 0, coins transferred from Ethereum 1.0 to Ethereum 2.0 move one way only: they can be transferred over but not back. What is the rationale behind that?
The 32 ETH deposit is effectively the threshold for participating as a proof-of-stake validator in Ethereum 2.0. From the perspective of decentralization, the lower the threshold, the more nodes participate in proof of stake, which favors decentralization and increases the security that comes with greater randomness. On the other hand, if the threshold is too low, too many nodes will inflate the message-passing burden and lengthen the time to finality. According to an article published by the Ethereum team, 32 ETH is a deposit value acceptable both technically and economically.
Ethereum 2.0 is not a one-shot replacement of 1.0; it will be rolled out in several stages, including the beacon chain, shard chains, the eWASM virtual machine, and so on. Until the eWASM (execution layer) update, the Ethereum 1.0 and 2.0 chains will coexist. Locking ETH on the 1.0 chain and letting it flow one way into 2.0 provides a viable coin-transfer mechanism suited to the coexistence of the old and new chains.
Advantages and disadvantages of the Casper consensus mechanism
Moderator: We know the beacon chain uses the Casper consensus mechanism, and Casper comes in two flavors, Casper FFG and CBC Casper. In phase 0, Ethereum uses Casper FFG. What are its advantages or disadvantages compared with other PoS or other types of consensus mechanisms?
As far as I know, the biggest advantage of Casper FFG over other PoS consensus mechanisms is its liveness. In a consensus algorithm like Tendermint, if malicious validators control one third of the stake, the entire chain can be halted.
In Casper FFG, however, since the operation of RANDAO and the VDF is not affected by what happens on the chain, it is guaranteed that even if malicious validators form a majority on a given committee, validator shuffling and the assignment of proposal rights proceed as usual, so the chain will not get stuck.
Its shortcoming is that more details need to be worked out, for example the bias-resistance of the random number generation.
On penalties: with punishment, consensus makes sense
Moderator: OK, let's discuss the beacon chain's penalties. At present Ethereum 2.0 has two kinds of punishment: one for validators who fail to participate, and one for malicious actors. What are they, specifically?
As far as I know, the "validators who do not participate in validation" in the question usually means validators who drop offline or suffer network failures, i.e., addresses that briefly cannot do their work. This penalty is relatively light. Only when many validators (more than 1/3) are offline for a long time at once, so that the beacon chain cannot finalize, does the penalty become heavier; it grows over time and is not very high at first. Malicious actors (e.g., double voters) receive larger penalties directly, such as several ETH deducted outright from their stake.
There are two kinds of punishment. The first is slashing, the "penalty", aimed at double-signing or wrongly attesting to competing blocks. The minimum slashing penalty is 1 ETH, but it rises linearly with the number of validators slashed in the recent past; if nearly one third of validators have recently been slashed, your entire deposit is forfeited.
The second is the inactivity leak, the "laziness penalty": when a large number of validators are offline so that the beacon chain cannot correctly finalize blocks, the offline validators are penalized, and the penalty keeps growing with offline time, up to 60.8% of the stake.
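The two penalty shapes just described can be sketched roughly as follows; the constants below are illustrative stand-ins, not the exact spec formulas.

```python
# Simplified sketch of the two penalty shapes described above.
# The scaling constants are illustrative, not the spec's exact values.

def slashing_penalty(balance_eth: float, recently_slashed_fraction: float) -> float:
    """Penalty for provable misbehaviour (e.g. double signing).

    Minimum 1 ETH, rising linearly with the fraction of validators slashed
    recently; once roughly 1/3 have been slashed, the whole stake is lost."""
    proportional = balance_eth * min(1.0, 3 * recently_slashed_fraction)
    return max(1.0, proportional)

def inactivity_leak(balance_eth: float, epochs_offline: int) -> float:
    """Penalty for being offline while finality stalls: grows with time,
    capped here at 60.8% of the balance, as quoted above."""
    return balance_eth * min(0.608, 0.001 * epochs_offline)

print(slashing_penalty(32, 0.0))   # 1.0  (minimum penalty, isolated offence)
print(slashing_penalty(32, 0.5))   # 32.0 (entire stake forfeited)
print(inactivity_leak(32, 1000))   # capped at 60.8% of the 32 ETH stake
```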
My understanding is that one is a punishment for validator inaction and the other a punishment for malicious behavior. I think such punishment should be established within a DAO (a community self-governing system); it only makes sense once there is consensus on the penalties.
On randomness: RANDAO and VDF are complements, not alternatives
Moderator: For a PoS protocol, randomness is very important. At present the beacon chain uses the RANDAO construction for randomness, and in the future developers plan to add a verifiable delay function (VDF). Yang Zhen, please tell us about these two randomness techniques, RANDAO and the VDF.
RANDAO means "random DAO"; DAO is short for Decentralized Autonomous Organization. The approach is a smart contract that lets multiple people jointly generate a random number. It uses the so-called commit-reveal design pattern from smart contract development. The contract should be reusable, i.e., it should be a cyclic state machine; each round of random number generation goes through three phases: commit, reveal, and the actual calculation.
In the first phase, every participant submits the hash of some arbitrary input data of their choosing. Once everyone has submitted, or the deadline is reached, the process enters the second phase, in which every participant submits the raw data behind the hash they committed earlier.
If the two submissions match, the data counts as a valid calculation input. After everyone has revealed, or the deadline is reached, the contract automatically computes the actual random number from the valid inputs (of which there should be at least some fixed minimum number). The contract then returns to the first-phase state, and random number generation can start again.
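The commit-reveal cycle described above can be sketched as a minimal off-chain simulation (Python rather than a smart contract; the names and the XOR combination step are illustrative):

```python
import hashlib
import secrets

# Minimal commit-reveal sketch in the spirit of RANDAO: each participant
# commits to the hash of a secret, later reveals the secret, and the valid
# reveals are combined (here by XOR) into the final random number.
# The on-chain contract adds deadlines, deposits, minimum-input checks, etc.

def commit(secret: bytes) -> bytes:
    """Phase 1: publish only the hash of your secret."""
    return hashlib.sha256(secret).digest()

def reveal_is_valid(commitment: bytes, secret: bytes) -> bool:
    """Phase 2: a reveal is valid iff it matches the earlier commitment."""
    return hashlib.sha256(secret).digest() == commitment

def combine(revealed):
    """Phase 3: fold all valid reveals into one number."""
    out = 0
    for s in revealed:
        out ^= int.from_bytes(s, "big")
    return out

# Three participants choose secrets and commit...
players = [secrets.token_bytes(32) for _ in range(3)]
commitments = [commit(s) for s in players]
# ...then reveal; every reveal checks out, so all are combined.
assert all(reveal_is_valid(c, s) for c, s in zip(commitments, players))
print(hex(combine(players)))
```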
The risk of the RANDAO model is that in the reveal phase, the last person to submit their raw data can see everyone else's raw data first (because transaction payloads are public), and can therefore predict the final result.
The VDF is an open-source hardware (ASIC) design whose purpose is to recompute the random number produced by RANDAO into the final result. It is used together with RANDAO; the two are complements, not alternatives.
That is to say, in Eth 2.0, the members of each committee first generate a random number via RANDAO, and the VDF then takes this number as input to compute the final random number.
The VDF's computation is inherently sequential and cannot be sped up by parallel computing, which removes the possibility of profiting by computing the result early in parallel. The current delay (i.e., the difficulty of the computation) is set to 102 minutes, meaning it takes 102 minutes to obtain the final random number.
Mako: The VDF ASIC research is currently a joint effort between the Ethereum Foundation and Filecoin, and there seems to be considerable progress.
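As a toy illustration of the "cannot be accelerated in parallel" property, here is a sequential-squaring delay function in the spirit of a VDF. The modulus and delay parameter are made up, and the succinct correctness proof that real VDF constructions (e.g. Wesolowski's or Pietrzak's) provide is omitted entirely:

```python
# Toy delay function: T repeated squarings modulo N. Each step needs the
# previous one, so the loop cannot be parallelised -- this sequentiality is
# the essence of a VDF. Real VDFs also emit a short proof of correctness.

N = 2**61 - 1     # toy modulus; real VDFs use an RSA modulus or class group
T = 100_000       # "delay" parameter: number of sequential squarings

def vdf_eval(x: int, t: int = T, n: int = N) -> int:
    y = x % n
    for _ in range(t):          # inherently sequential loop
        y = (y * y) % n
    return y

seed = 123456789  # e.g. the RANDAO output fed into the VDF
print(vdf_eval(seed))
```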
1024 shards: security and performance
Moderator: Zhang Weijia just mentioned that the number of shards in Ethereum has been set to 1024. Why exactly 1024, no more and no less? What is the trade-off? This also implies 131,072 (1024 × 128) validators in the best case; what happens if there are not enough validators at that time?
Hahaha. I don't know how the number was determined; it probably needs to be a power of 2. Also, the reward rate will be dynamically adjusted to incentivize validators to join.
Programmers like 1024.
In binary it is naturally a power of two. And 1024 is not fixed; if there are not enough validator nodes, Vitalik can also have the Foundation run some.
It is indeed likely a trade-off between security and performance. There should not be too few validators per shard, or a shard would be easy to break.
Experts in the community have explained this problem: for safety reasons, the figure of 1024 shards is based on a design requiring at least 128 validators per shard. I am not an algorithm expert and cannot simply explain why 128 validators are safe; if you are interested, see Vitalik's write-up: https://vitalik.ca/files/Ithaca201807_Sharding.pdf
In the Eth 2.0 sharding system, each shard is a separate address space. If none of the transactions processed by the system are cross-shard, i.e., nothing depends on data in other shards, then overall throughput can indeed, in theory, increase 1000-fold; but overall performance drops significantly as the proportion of cross-shard transactions rises. Handling cross-shard transactions is the most complex part of a sharding system, and there is currently no settled technical solution.
Finally, on the problem of not having enough validators: the simplest solution is to have the shards take turns producing blocks, which preserves each shard's security but correspondingly reduces the network's overall performance. Alternatively, all shards could produce blocks at the same time with fewer validators each, which would certainly reduce per-shard security. This trade-off is still being discussed and has not been finalized.
Moderator: According to the materials, the beacon chain and the shards communicate periodically through a design called a crosslink. Can you explain this design in plain terms?
Truly plain terms are probably impossible, but I'll try to explain it as simply as I can. The crosslink data contains the following fields: the shard number, the parent block's state root on the beacon chain, the start epoch, the end epoch, and the data state root on the shard chain. The state root here is the root hash of the state tree (if you don't know what that means, you probably need to brush up on Eth 1.0 first).
An epoch is a voting period of about 6.4 minutes, i.e., 64 slots. Understood this way it is simple: a crosslink is essentially a snapshot of a shard's data state at the end of an epoch.
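The crosslink fields just listed can be sketched as a plain data structure (field names are illustrative; the exact names and SSZ types vary across versions of the eth2.0-specs repository):

```python
from dataclasses import dataclass

# Sketch of the crosslink fields described above, as a plain data structure.

@dataclass
class Crosslink:
    shard: int           # which shard this crosslink refers to
    parent_root: bytes   # parent block state root on the beacon chain
    start_epoch: int
    end_epoch: int
    data_root: bytes     # data state root on the shard chain

# One epoch = 64 slots of 6 seconds = 6.4 minutes, as stated above:
SECONDS_PER_SLOT = 6
SLOTS_PER_EPOCH = 64
print(SECONDS_PER_SLOT * SLOTS_PER_EPOCH / 60)  # 6.4

link = Crosslink(shard=5, parent_root=b"\x00" * 32,
                 start_epoch=10, end_epoch=11, data_root=b"\x11" * 32)
print(link.shard, link.end_epoch - link.start_epoch)  # 5 1
```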
A bit more needs to be said about the beacon chain design, though. A crosslink is part of the attestation data (perhaps best translated as "witness data": the data validators submit to attest to the state of a shard chain), and attestation data is an important component of the block data on the beacon chain. So a crosslink is in fact part of beacon chain block data; it represents the overall state of the system and is the key data supporting cross-shard transactions.
A Jian: Put simply, each shard periodically deposits a certificate of its own state on the beacon chain, serving as the cryptographic proof material needed for cross-shard communication.
Could a plain way to put it be: from time to time each shard has to report its status to the beacon chain, and the crosslink is the customs pass between shards?
Asynchronous vs. synchronous transactions
Moderator: Vitalik explains asynchronous transactions with the "train and hotel" problem and introduces a solution called "yanking". Can you talk about the train-and-hotel problem and how this solution works?
The scenario of the train-and-hotel problem is this: a user wants to buy a train ticket and book a hotel, and wants the operation to be atomic: either both reservations succeed or neither does. If the train ticket contract and the hotel booking contract are on the same shard, this is easy: create a transaction that attempts both reservations, so that either both succeed or an exception is thrown and everything is rolled back. If the two are on different shards, however, it is not so easy.
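A minimal sketch of the single-shard case, where atomicity is easy because one transaction can roll back all of its effects (all names here are hypothetical, and the snapshot-based rollback is just an illustration):

```python
# Toy illustration of the atomicity requirement in the train-and-hotel
# problem: within one shard, both bookings happen in a single transaction
# that rolls back completely on failure.

class Shard:
    def __init__(self):
        self.state = {"train_seats": 1, "hotel_rooms": 0}  # hotel is full

    def transact(self, fn):
        snapshot = dict(self.state)   # snapshot state before executing
        try:
            fn(self.state)
        except Exception:
            self.state = snapshot     # atomic rollback: all or nothing
            return False
        return True

def book_both(state):
    if state["train_seats"] < 1:
        raise RuntimeError("no train seats")
    state["train_seats"] -= 1
    if state["hotel_rooms"] < 1:
        raise RuntimeError("no hotel rooms")
    state["hotel_rooms"] -= 1

shard = Shard()
ok = shard.transact(book_both)
print(ok)           # False: the hotel was full...
print(shard.state)  # ...and the train seat was NOT consumed
```

When the two contracts live on different shards, no single transaction spans both states, which is exactly the difficulty the yanking proposal addresses.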
The solution Vitalik proposed is to let contracts themselves move across shards: a contract can be "yanked" from one shard to another, so that two contracts that normally live on different shards can be temporarily moved onto the same shard and then executed synchronously.
Moderator: Speaking of asynchronous transactions naturally raises the concept of synchronous transactions. Vitalik mentioned they could be implemented by a method similar to Plasma. A Jian, can you tell us about this?
On synchronous cross-shard transactions there is relatively little material at present. I don't know the details of the Plasma-like method Vitalik mentions.
However, I have heard of another synchronous cross-shard transaction model proposed by Vitalik, called merged blocks. The idea is for a block on shard A to carry state witnesses, so that validators on shard B can receive and validate the block (technically, shard B's validators become stateless clients of shard A); a block on shard A can then be executed simultaneously with a block on shard B.
Another mode is to make one shard the "main shard" for a given period, during which that shard's validators can read and write the other shards arbitrarily; that is, the other shards' validators also give priority to executing that shard's blocks, which likewise yields synchronous execution.
In addition, it can apparently be done with a sparse Merkle tree, which is why it resembles Plasma (if you don't follow this part, feel free to ignore it…).
State rent: perhaps not easy to implement
Moderator: Phase 2 is an extremely important stage for the Ethereum 2.0 platform. It reintroduces smart contracts, and each shard will run an eWASM-based virtual machine. In addition, a state rent scheme may be introduced. What impact will this have?
I have not followed the eWASM and state rent work closely, so I know little about it. But state rent will obviously face pricing problems; it is not a simple matter. There is an excellent developer named Alexey Akhunov who is working on state rent research and EIPs; you can follow his work.
I don't know much about eWASM either. I am watching whether Polkadot's WASM has advantages over the EVM. There are also Microsoft's TTI and Digital Asset's DAML.
In fact, public smart contract platforms like Ethereum harbor a hidden danger known as "state explosion": when network usage (mainly of smart contracts) grows rapidly, the system's storage requirements grow explosively.
Because every full node must maintain the state of all accounts in the system and store the state of all contracts, once sharding is implemented and the address space grows more than 1000-fold, the state explosion problem reaches a level that has to be solved.
One solution is to gradually and randomly discard historical data: the older the data, the higher the chance a full node discards it, but from the network-wide perspective even very old historical data still has some chance of being retrievable.
This creates a need: I want my historical transaction data or contract data to be kept, and I am willing to pay the full nodes that store it for me. That is the general idea behind so-called "state rent".
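A back-of-the-envelope sketch of such a pay-to-store fee, scaling with data size and storage duration; the rate constant is entirely made up for illustration:

```python
# Hypothetical state-rent style fee: cost scales with the size of the data
# and with how long it is kept. The rate is invented for illustration only.

RENT_PER_BYTE_EPOCH = 1e-9   # hypothetical fee in ETH per byte per epoch

def rent(num_bytes: int, epochs: int) -> float:
    return num_bytes * epochs * RENT_PER_BYTE_EPOCH

# Keeping 1 KiB of contract state for a year, at 6.4 minutes per epoch
# (365 days * 24 h * 60 min, divided by 6.4 min, using integer arithmetic):
epochs_per_year = 365 * 24 * 60 * 10 // 64   # 82125 epochs
print(rent(1024, epochs_per_year))           # ~0.084 ETH with these made-up numbers
```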
This kind of rental arrangement can be implemented relatively simply through smart contracts. But the scheme is still very early; the latest update was at the end of last year, so its concrete design and actual impact are hard to assess.
A Jian: My subjective feeling is that it may not be easy to implement, and it introduces a great deal of economic complexity.
The future of Ethereum 2.0: hard to be fully optimistic, but its strengths are rare
Moderator: OK, let's move on to today's last question. Some say the design of Ethereum 2.0 is too complicated. Vitalik's reply was: "Over the past year it has become quite simple, and the spec has fewer words than the Yellow Paper; many things in Ethereum 2.0 are much simpler than in 1.0…" And Justin Drake has said that phase 0 needs about 1024 lines of code, and phase 1 + phase 2 another 1024 lines. What is your view? Are you optimistic about Ethereum 2.0?
OK, I'll go first and pour some cold water, haha.
At present the code size of any Ethereum client is measured in tens of thousands of lines, so I think the 1024 lines mentioned in the question must refer to the core algorithm or key processing code.
Also, measuring a system's complexity simply by lines of code is unscientific, especially from a software engineer's professional point of view: a project with hundreds of thousands of lines can be very simple, while a project with hundreds of lines can be very complicated.
Moreover, to say that a system that must handle more than 1000 chains is simpler than a system that handles only one chain; sorry, I really can't agree. Although we have spent so much time talking about Eth 2.0 in this session, I still don't feel the whole operating process is clear or easy to understand, which is enough to show that the design of Eth 2.0 is not simple.
I think the biggest problem is that this is yet another large experimental project that has not been sufficiently verified and tested. For example: can the penalty mechanism and cross-shard transactions work correctly? How does the network perform when there are too few validators? There are no satisfactory answers yet. Whether Eth 2.0 can become the first successful PoS public chain remains my biggest concern. Judging from the Ethereum community's engineering record over the past year or two, it is hard for me to be optimistic.
Zhang Weijia: Let me lighten things up a little. Einstein's mass-energy equation is just one line, E = mc², but it took a long time to arrive at it.
The line count of Ethereum 2.0 on GitHub may not look like much, but given the scale of Ethereum and the fact that its nodes do not trust one another, every line of code matters and is hard to change. I personally think the eWASM stage is more difficult and better suited to large companies.
A Jian: I personally think eth2 is indeed more complicated than eth1. Objectively, this is because the introduction of PoS and sharding brings into the system many factors we never had to consider before. But it may also be that we don't yet understand it well enough; once actually realized, it may become commonplace.
Still, I also believe Ethereum's R&D strength is rare, its ecosystem the richest, and its community the most active, so there is no reason not to be optimistic about Ethereum. Filter out the loud noise and you will find that the consensus around Ethereum is stronger than many people think.
Moderator: OK, time is up, so our technical tour of Ethereum 2.0 stops here. There are in fact many important technologies we haven't covered, such as BLS signatures, zk-SNARKs, Plasma, and state channels. As the guests said, one session cannot really explain the entire design of Ethereum 2.0. Finally, we recommend https://ethresear.ch/, where you will find the latest results from the Ethereum research community.
The theme of the third SheKnows session, "ETC or BCH: which can become the data layer of ETH?", was prompted by Vitalik's recent proposal. I am very much looking forward to the conversation between @Xiaobie, Jiang Zhuoer, and Terry.
Thanks again to our guests for their wonderful answers and to the ETC Asia-Pacific community for supporting this event.