Wuzhen·Conflux CTO Wu Ming: Making the “impossible triangle” of the public chain possible

On November 8th, the "2019 World Blockchain Conference · Wuzhen," hosted by Babbitt, officially opened. The conference gathered more than 100 experts, scholars, technical geeks, opinion leaders, and founders of popular projects from the fields of blockchain, digital assets, AI, and 5G. Under the theme "Application Unbounded," attendees explored blockchain applications, technology frontiers, industry trends, and hot issues, promoting blockchain technology and industrial innovation.

Conflux co-founder and CTO Wu Ming delivered a keynote speech entitled "Let the decentralized public chain system approach optimal performance."

Highlights:

  1. An ideal public chain system should have three characteristics: A. robustness; B. high performance; C. decentralization.
  2. Bitcoin, Ethereum, and other public chains adopt the Nakamoto Consensus, which has a slow block rate and low throughput.
  3. The GHOST protocol uses the heaviest-subtree rule to overcome the security problems caused by forks.
  4. The GHOST protocol can raise the block rate without fear of double-spend attacks.
  5. The structured GHOST method does not let all blocks affect main-chain selection; only a small fraction of blocks influence the choice of the main chain.
  6. PAST sets and the Epoch concept implement automatic mode switching and determine block ordering, respectively, so that all blocks can contribute to the system's throughput.


The following is the full text of the speech, transcribed by Babbitt:

Hello everyone, I am very glad to have the opportunity to share the technical progress of Conflux here.

Conflux is a high-performance public chain project: we are building a high-performance public chain system. By "high performance" we mean a throughput of several thousand TPS, roughly 3,000 to 6,000, and the ability to confirm a transaction within about half a minute. Conflux achieves this performance without sacrificing any decentralization or security.

Let's first look at the background of the problem. With the emergence and development of Bitcoin, the blockchain has become known to more and more people. The most representative feature of the blockchain is its distributed ledger, which is very powerful: it can provide Internet-scale transaction records, enabling technological innovation in applications such as financial systems, supply chains, and healthcare. However, existing public chain systems are still largely hampered by performance problems, which hinders their adoption in real-world scenarios. For example, Bitcoin processes 7 transactions per second and takes about an hour to confirm a transaction; Ethereum handles about 30 transactions per second and takes about 10 minutes to confirm. By contrast, a centralized system like VISA can easily provide several thousand TPS of throughput and confirm transactions in seconds.

We know that a public chain system has many components, and different components have different performance characteristics. For example, storage: we need to store the blockchain ledger. Network: blocks and transactions must be exchanged over the network. Computation: we need computing resources to execute transactions. But there is another important link in the public chain: consensus, which is currently one of the most important bottlenecks. We recognized this problem and found some solutions, and we built a team around it. Our team has Academician Yao Qizhi (Andrew Chi-Chih Yao), the only Chinese scientist to have received the Turing Award, as chief scientist. The other two founders, Long Fan and Zhou Dong, returned from studying overseas to work on this project; they are also gold medalists of the International Olympiad in Informatics.

We believe that an ideal public chain system should have the following three characteristics:

1. Sufficient robustness. The system should resist both double-spend attacks and liveness attacks; in other words, it should keep making progress at all times.

2. High performance. The system should have high throughput and a short transaction confirmation delay.

3. Decentralization. The system should be able to host thousands of nodes that can join and leave the network without permission. The benefit of decentralization is that no one needs to trust any central entity.


As far as we know, no existing public chain system has achieved good results in all three aspects. For example, systems such as Bitcoin and Ethereum have good robustness and good decentralization, but their performance is very poor. Another class of systems is based on Byzantine fault-tolerant protocols. Such systems are robust and their performance is acceptable, but they sacrifice decentralization, because their consensus is reached by electing a small committee. Conflux is the only system that does very well in all three areas.

Why can the Conflux system do better than other existing systems? Let's first look at how Bitcoin and Ethereum work. Bitcoin, Ethereum, and other decentralized public chains are deployed on a P2P gossip network. Blocks are linked into a chain, and the chain is the ledger that stores the transaction records; this ledger is replicated to all nodes. Although decentralization brings the benefit of trustlessness, it can also be attacked: any node can join the network, which means a bad actor can cheaply spin up a large number of nodes. Therefore Bitcoin and Ethereum use the proof-of-work mechanism: if you want to influence the ledger, you have to pay with computing power.

Bitcoin and Ethereum adopt the "Nakamoto Consensus," whose most important principle is the longest chain rule. The "longest chain" rule says that all honest nodes regard only the longest chain as the valid transaction record. There is a security assumption here: as long as honest nodes hold more than 50% of the computing power, the longest chain should be the one generated by honest nodes.
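The longest chain rule can be sketched in a few lines of Python (the data structures here are illustrative, not any real client's implementation):

```python
# Sketch of the Nakamoto longest-chain rule: honest nodes treat the chain
# with the most blocks from genesis as the valid ledger.

def longest_chain(blocks, parent):
    """blocks: iterable of block ids; parent: dict child -> parent (genesis -> None).
    Returns the longest path from genesis as a list of block ids."""
    def height(b):
        h = 0
        while parent[b] is not None:
            b = parent[b]
            h += 1
        return h

    tip = max(blocks, key=height)  # tip of the longest chain
    chain = []
    while tip is not None:
        chain.append(tip)
        tip = parent[tip]
    return list(reversed(chain))

# G -> A -> C, with a fork G -> B that loses out:
parent = {"G": None, "A": "G", "B": "G", "C": "A"}
print(longest_chain(parent.keys(), parent))  # ['G', 'A', 'C']
```

Note that ties between equal-length chains fall to iteration order here; real clients use their own tie-breaking rules.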


Such systems usually use a very slow block rate. Bitcoin produces a 1MB block every 10 minutes; Ethereum produces a block every 15 seconds. The throughput is very low. Why can't we simply increase the block size or the block rate so that throughput goes up? Naively doing so will not work, because the ledger structure would then look like the picture, with many forks. The reason is that all nodes generate blocks concurrently, and when a block is produced it takes time to propagate across the network. That is, while a block is being broadcast, other nodes cannot see it immediately, so they keep mining on top of the old block, which results in forks. The larger the block, the longer the propagation delay, the more concurrent blocks are generated, and the more forks appear. Raising the block rate has the same effect.

A heavily forked ledger causes problems. First, according to the "longest chain" rule, only blocks on the longest chain are considered valid; blocks on the other branches are discarded, which wastes network and processing resources. Another important point is that it also sacrifices security. Specifically, for a fixed number of blocks, the more forks there are, the smaller the fraction of blocks that end up on the longest chain. If the longest chain contains only 10% of all blocks, that means a bad actor can tamper with the ledger with just over 10% of the computing power.

Later, researchers invented the GHOST protocol to overcome the security problems caused by forks. In GHOST, all nodes still choose a main chain, but the rule they use is not the longest chain rule; it is the heaviest-subtree rule. We select the main chain starting from the genesis block: having chosen a block onto the main chain, we iterate over its child blocks. For example, suppose the genesis block has two children A and B; A's subtree has 6 blocks and B's has 5. Because A's subtree is heavier than B's, we add A to the main chain. Repeating the same rule in order, blocks C, E, and H are selected onto the main chain. When a new block is created, it should follow the last block on the main chain.
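The heaviest-subtree selection just described can be sketched as follows (illustrative data, every block counted with weight 1; not the actual GHOST implementation):

```python
# Sketch of the GHOST heaviest-subtree rule: starting at genesis, repeatedly
# descend into the child whose subtree contains the most blocks.

from collections import defaultdict

def subtree_size(children, b):
    """Number of blocks in the subtree rooted at b (each block has weight 1)."""
    return 1 + sum(subtree_size(children, c) for c in children[b])

def ghost_chain(parent, genesis="G"):
    """parent: dict block -> parent block (genesis -> None)."""
    children = defaultdict(list)
    for blk, par in parent.items():
        if par is not None:
            children[par].append(blk)
    chain, node = [genesis], genesis
    while children[node]:
        # Pick the child with the heaviest subtree.
        node = max(children[node], key=lambda c: subtree_size(children, c))
        chain.append(node)
    return chain

# Genesis with child A (subtree of 3 blocks) and child B (subtree of 2 blocks):
parent = {"G": None, "A": "G", "B": "G", "C": "A", "D": "A", "E": "B"}
print(ghost_chain(parent))  # ['G', 'A', 'C']
```

Because B's whole subtree counts against A's, an attacker extending B must out-mine all the honest blocks under A, not just the blocks on one chain.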

The difference between the heaviest-subtree rule and the longest chain rule is that when we weigh a subtree, not only the blocks on the longest chain contribute; the blocks on the forks also contribute to the choice of the main chain, so a "bad actor" must hold 50% of the computing power to influence the main chain. Suppose A is a block on the main chain. If a bad actor wants to use a sibling block A′ to replace A's position on the main chain, he must make A′'s subtree heavier than A's, which requires more than 50% of the computing power. So to confirm a transaction, we wait until A's subtree is much heavier than any sibling's; the probability that A′ replaces A's position on the main chain then decreases over time. The higher the block rate, the shorter the confirmation time.

With the GHOST protocol, we can produce blocks at a high block rate without worrying about double-spend attacks. Is the problem solved? No. GHOST is still vulnerable to liveness attacks. Suppose the honest nodes are split into Group A and Group B; assume there is no communication delay within each group and no delay between honest nodes and the attacker, but there is a delay between the two groups. Then at some moment the ledger may contain two forks, A and B. The attacker can secretly observe the ledger structure and secretly create new blocks on both forks without telling the honest nodes. When Group A generates some new blocks, they are passed to Group B, but the transfer takes a while. Before they arrive, the attacker releases the blocks he secretly mined on fork B, so Group B concludes that fork B is heavier than fork A and keeps mining on fork B. Conversely, the attacker releases his pre-mined blocks on fork A at the right moment, so Group A thinks fork A is heavier. This process can continue indefinitely, keeping the two forks balanced, which means transactions can never be confirmed.

The solution to this problem is a method called structured GHOST. We should not let all blocks affect the choice of the main chain; only a small fraction of blocks should influence it. Because only a few blocks can influence the main chain, the probability that such blocks appear is relatively low, so the probability that two of them appear concurrently is also low; forks rarely occur among the blocks that affect main-chain selection.

Structured GHOST

For example, the dotted blocks in the figure above carry no weight and do not affect the main chain. At some moment a weighted block appears on, say, fork A. Group B may not see it immediately, so B still generates blocks on fork B. But after the delay, B will see the weighted block sooner or later. At that point B can judge that fork A is heavier than fork B, because fork A has a weighted block. B then creates new blocks on fork A, breaking the balance.
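A minimal sketch of the weight-sampling idea, assuming the decision is derived deterministically from the block hash so every node computes the same answer (the threshold scheme and the constant `H` below are illustrative, not Conflux's actual parameters):

```python
# Sketch of structured-GHOST weights: roughly one block in H is "heavy"
# (weight H); all other blocks carry weight 0 and cannot move the main chain.

import hashlib

H = 100  # illustrative: 1-in-100 blocks carry weight

def block_weight(block_id: str, h: int = H) -> int:
    """Deterministic weight derived from the block hash."""
    digest = int.from_bytes(hashlib.sha256(block_id.encode()).digest(), "big")
    # The hash is (approximately) uniform, so this holds with probability ~1/h.
    return h if digest % h == 0 else 0

weights = [block_weight(f"block-{i}") for i in range(1000)]
print(sum(1 for w in weights if w > 0))  # roughly 1000 / H heavy blocks
```

The expected total weight per unit time is unchanged (each block contributes H with probability 1/H), but heavy blocks are rare enough that two rarely race each other across the network.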

What about the blocks that carry no weight? We still want the transactions in those blocks to contribute to the system's throughput, so we need a deterministic sorting algorithm that lets all nodes agree on the order of transactions across all blocks. This raises the system's throughput. However, such a system still has a confirmation-delay problem, because we still have to wait for enough weighted blocks before confirming a block, which takes a long time.

Looking back at the two cases just mentioned: the original GHOST method confirms transactions quickly when there is no attack, but makes no progress under attack; structured GHOST guarantees progress under attack, but confirms slowly. Is there a way to combine the two, confirming quickly when there is no attack while still guaranteeing progress under attack? Yes. Our method is GHAST. It lets all nodes choose a main chain using the heaviest-subtree rule; we call this main chain the pivot chain. We also invented a deterministic sorting algorithm that lets all nodes derive a consistent ordering of all blocks from the pivot chain, so the transactions in all blocks contribute to the system's throughput, improving the system's efficiency. The system operates in the original GHOST mode to achieve the highest efficiency; it detects attacks, and if an attack occurs, it gives weight to only a small fraction of the blocks, thereby guaranteeing the system's progress.


How is the mode switching done automatically? The blocks form a Tree-Graph structure, and an important concept here is the PAST set. The PAST set of a block is the set of blocks that can be reached from it by following all the edges; these are exactly the blocks generated before it. For example, the figure shows the PAST set of block E (the shaded area). How is a block's weight adapted? We look at what the Tree-Graph looks like inside the block's PAST set. If the PAST set shows that the structure is not converging well, we assign the block a weight of h with a small probability, 1/h, and a weight of 0 otherwise. Each block has a fixed PAST set, so all nodes can consistently determine a block's weight. In the normal situation, for a block such as A, after the system runs for a while the blocks will gather under one of its children, say A′; that is, A′'s subtree should contain the majority of all blocks generated after A. For each block, we check inside its PAST set whether this condition is violated; if it is, we switch to the structured GHOST mode.
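The PAST set just described can be computed with a simple graph traversal. A sketch (illustrative data structures; `edges` maps each block to the blocks it directly references, i.e. its parent and reference edges):

```python
# Sketch of the PAST set: all blocks reachable from a block by following its
# edges, i.e. everything generated before it. Because a block's edges are
# fixed when it is mined, its PAST set is fixed too, so every node computes
# the same adaptive weight from it.

def past_set(block, edges):
    """edges: dict block -> list of blocks it directly references."""
    seen = set()
    stack = list(edges.get(block, []))
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(edges.get(b, []))
    return seen

# E references A (parent) and B (reference edge); both reference genesis G:
edges = {"G": [], "A": ["G"], "B": ["G"], "E": ["A", "B"]}
print(past_set("E", edges))  # {'A', 'B', 'G'}
```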


How do we determine the order of blocks so that all blocks contribute to the system's throughput? Our idea is to introduce the concept of an Epoch. Each block on the pivot chain defines an Epoch, and the blocks on the forks can be assigned to the corresponding Epochs according to fixed rules. Blocks are then sorted by Epoch, and within each Epoch they are sorted in the topological order of the graph. This method defends against double-spend attacks. The main principle is this: the ordering of blocks is determined by the pivot chain, so if the pivot chain is unchanged, the block ordering does not change. Moreover, because the pivot chain is selected by the heaviest-subtree rule, only an attacker with more than 50% of the computing power can change the choice of the pivot chain. From these two facts, the system defends against double-spend attacks under the same assumptions as Bitcoin.
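A hedged sketch of this epoch-based total order (illustrative data; each non-pivot block falls into the epoch of the first pivot block whose PAST set contains it, and a plain `sorted` stands in for a real topological sort with deterministic tie-breaking):

```python
# Sketch of epoch-based block ordering: every pivot-chain block defines an
# epoch; a forked block belongs to the epoch of the first pivot block that
# "sees" it; blocks are ordered epoch by epoch.

def epoch_order(pivot_chain, past):
    """pivot_chain: pivot blocks from genesis onward; past: block -> PAST set."""
    ordered, placed = [], set()
    for p in pivot_chain:
        # Blocks seen by p (plus p itself) but not yet placed form p's epoch.
        epoch = (past[p] | {p}) - placed
        ordered.extend(sorted(epoch))  # stand-in for a topological sort
        placed |= epoch
    return ordered

# Pivot chain G -> P1 -> P2; A is a fork block seen by P1, B seen by P2:
past = {"G": set(), "A": {"G"}, "P1": {"G", "A"},
        "B": {"G"}, "P2": {"G", "A", "P1", "B"}}
print(epoch_order(["G", "P1", "P2"], past))  # ['G', 'A', 'P1', 'B', 'P2']
```

Since the output depends only on the pivot chain and the fixed PAST sets, the ordering cannot change unless the pivot chain itself is reorganized.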

The confirmation rule is: for any transaction, we first find which Epoch the transaction is in and find the pivot chain block corresponding to that Epoch. We can theoretically estimate whether the probability that this block will be reverted is less than the risk the user can bear; if it is, we can confirm the transaction. Under normal circumstances, the confirmation time is very short. Our upper-layer execution environment is compatible with Ethereum smart contracts. The test network has been released, and the main network is scheduled to go online in Q1 next year, that is, around March.

Finally, let's talk about application scenarios. We think the most important application scenarios for Conflux are cross-border payment and cross-border remittance. Another is how to support more efficient decentralized exchanges. We also pay attention to one more scenario: how to support consumer-facing data-attestation applications, so that users can benefit from the value of their own credit data on the Internet. We believe that with a high-performance public chain like Conflux, these scenarios become achievable.

In addition, with the support of the Shanghai Municipal Government, we recently established the Tree-Graph blockchain research center, in order to continuously promote the technological progress of the blockchain and keep China's blockchain technology in a leading position in the world.