Prism: how MIT and Stanford's new protocol scales Bitcoin 10,000-fold
Note: This article is a transcript of a Stanford Blockchain Week talk by Lei Yang of MIT, entitled "Prism: Scaling Bitcoin by 10,000x".
Hello everyone. I am very happy to talk about our implementation and evaluation of the Prism consensus protocol, which performs roughly 10,000 times better than Bitcoin. This work is a collaboration between MIT, Stanford, and several other universities, and I want to thank my collaborators, including my advisors and everyone else involved.
We all know that Bitcoin has good security, but its performance is poor: it processes only about 7 transactions per second, and each transaction takes hours to confirm. The root cause is that Bitcoin couples performance and security, tying both to a single design parameter: the mining rate.
If we raise the mining rate to improve performance, we suffer a lot in security, and vice versa. What Prism does is break that coupling.
Prism is a natural extension of the Nakamoto consensus protocol: we deconstruct it and scale each part independently. The result is a simple, elegant protocol; we implemented it in about 10,000 lines of Rust in just six months.
Just before this talk, I launched 100 virtual machine instances on AWS, and I will run a Prism node on each of them. The code is open source on GitHub.
In this talk, I will first describe the Prism consensus protocol, then our system implementation, and finally our evaluation.
Let's quickly review the Bitcoin consensus protocol. It uses the longest-chain rule: each miner tries to extend the longest chain. When a fork occurs, that is, when two blocks are accidentally mined on the same parent block, miners choose the longer branch. They always mine on the longest chain.
This involves a parameter called the mining rate, which is the average speed at which the network mines new blocks. In Bitcoin, this rate is one block every 10 minutes.
An important part of the Bitcoin protocol is transaction confirmation. When a transaction appears in a block, we could confirm it immediately, but there is a problem: an attacker can mine a private chain. Because mining is a random (Poisson) process, the attacker may get lucky and mine the first few blocks faster than the honest network. When that happens, the attacker releases the private chain, and suddenly our transaction is no longer on the true longest chain.
Satoshi Nakamoto's solution was to wait: instead of confirming a transaction immediately, wait until it is buried deep in the longest chain. Even if the attacker is lucky over the first few blocks, the attacker is unlikely to stay ahead in the long run; Nakamoto calculated these probabilities in the white paper. So you wait a certain number of blocks to make reversal sufficiently unlikely.
Throughput is the average rate at which Bitcoin confirms transactions: how many transactions are confirmed per unit of time. It equals the block size (transactions per block) multiplied by the mining rate (blocks per unit of time). Latency is how long it takes to confirm a transaction: the confirmation depth divided by the mining rate.
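These two formulas can be checked with a quick back-of-the-envelope calculation. The numbers below are illustrative assumptions (a 6-block confirmation depth and 4,200 transactions per block, which reproduces Bitcoin's ~7 tx/s), not protocol constants:

```rust
// throughput = block size x mining rate
fn throughput(txs_per_block: f64, blocks_per_sec: f64) -> f64 {
    txs_per_block * blocks_per_sec
}

// latency = confirmation depth / mining rate
fn latency_secs(confirm_depth: f64, blocks_per_sec: f64) -> f64 {
    confirm_depth / blocks_per_sec
}

fn main() {
    let blocks_per_sec = 1.0 / 600.0; // one block every 10 minutes
    let txs_per_block = 4200.0;       // assumed, to match ~7 tx/s
    let tps = throughput(txs_per_block, blocks_per_sec);
    let delay = latency_secs(6.0, blocks_per_sec); // 6-block rule of thumb
    println!("{tps:.0} tx/s, {delay:.0} s to confirm"); // 7 tx/s, 3600 s
}
```

With these inputs, the model gives 7 transactions per second and a one-hour confirmation delay, matching the figures quoted at the start of the talk.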
To improve performance, we could increase the mining rate (higher throughput and lower latency) or increase the block size (higher throughput). However, because of forking, this does not work: either change produces more forks. A fork occurs when two nodes cannot propagate blocks to each other fast enough and accidentally mine on the same parent block. If we increase the mining rate, two nodes are more likely to mine new blocks within that propagation window; if we increase the block size, blocks take longer to propagate across the network. Either way, the fork rate rises. Forking is bad because it wastes honest mining power and therefore weakens security.
Deconstructing Prism
Now let's talk about Prism. We decouple performance from security so that performance can be improved safely. In Bitcoin, a block actually serves two roles: first, it adds transactions to the ledger; second, more implicitly, it votes for all the blocks it directly or indirectly references, effectively asserting that a particular chain is the longest.
The first role determines throughput: the faster blocks carry transactions into the ledger, the more transactions are included. The second role determines confirmation latency: the faster we vote, the sooner we can be confident in a transaction.
Throughput
In Bitcoin, we have seen that throughput is limited by forking. What if there were a structure with no concept of forking at all? That is why we introduce "transaction blocks".
"Transaction blocks" are mined at a much higher rate, but there is no chain structure among them. So we can mine them very quickly without causing forks, because forking simply does not exist in this design.
We then reference these transaction blocks from "proposer blocks": lightweight blocks containing only hash pointers to transaction blocks. Proposer blocks are mined at Bitcoin's rate and linked by Bitcoin's longest-chain rule. Because the proposer mining rate is low and these blocks are small, increasing throughput does not increase forking.
This is why Prism can approach the physical limits of the underlying network without sacrificing security. Prism still rests on the longest-chain rule and proof of work.
Miners mine both types of blocks simultaneously, and the hash of a mined block decides which type it becomes: a transaction block or a proposer block.
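A minimal sketch of this idea, known as cryptographic sortition, is shown below. The two-way split and the 99% threshold are illustrative assumptions; the real Prism implementation divides the hash range among more block types, in proportion to each type's target mining rate:

```rust
// Simplified two-way sortition: the mined block's hash decides its type.
#[derive(Debug, PartialEq)]
enum BlockType {
    Transaction,
    Proposer,
}

/// Interpret the first 8 bytes of the hash as an integer and compare it
/// against a threshold. Transaction blocks are mined far more often, so
/// they get the larger share of the hash range (assumed 99% here).
fn sortition(hash: [u8; 32]) -> BlockType {
    let prefix = u64::from_be_bytes(hash[..8].try_into().unwrap());
    let threshold = (u64::MAX / 100) * 99;
    if prefix < threshold {
        BlockType::Transaction
    } else {
        BlockType::Proposer
    }
}

fn main() {
    let low = [0u8; 32];     // small hash -> transaction block
    let high = [0xffu8; 32]; // large hash -> proposer block
    assert_eq!(sortition(low), BlockType::Transaction);
    assert_eq!(sortition(high), BlockType::Proposer);
}
```

The key property is that the miner cannot choose the type in advance: the type is only revealed once a valid proof-of-work hash is found, so one mining process securely drives all block types at once.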
Delay
To address latency, first note that introducing the proposer chain does not increase the voting rate: there is still only one chain doing the voting. So we perform another deconstruction, separating voting from proposing.
We split the proposer block into two parts. One keeps the hash pointers to transaction blocks; the other is the voter block, which contains only votes on proposer blocks. The proposer blocks no longer form a longest chain; instead, the voter blocks form a Bitcoin-style longest chain.
So far we have not improved confirmation latency, because there is still only one voter chain, and it is critical that this chain be secure. We still need to wait for, say, 25 confirmations; otherwise an attacker could attack the voter chain just as one attacks Bitcoin.
Now that we have separated proposing from voting, we can introduce many voter chains, say 1,000 of them. An interesting observation: with 1,000 voter chains, we no longer need each individual chain to be secure. Even if an attacker can compromise any single voter chain with probability 30%, the probability of compromising 500 chains simultaneously is still astronomically low. This is why Prism can reduce confirmation latency without sacrificing security.
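The claim about 500 of 1,000 chains is a binomial tail probability, and it is small enough to be worth computing. The sketch below treats each chain as independently compromised with probability 0.3 (the talk's setup) and works in log space to avoid floating-point underflow:

```rust
// ln(n!) computed as a sum of logs, to keep everything in f64 range.
fn ln_factorial(n: u64) -> f64 {
    (2..=n).map(|i| (i as f64).ln()).sum()
}

/// ln of the binomial pmf: C(n,k) * p^k * (1-p)^(n-k)
fn ln_binom_pmf(n: u64, k: u64, p: f64) -> f64 {
    ln_factorial(n) - ln_factorial(k) - ln_factorial(n - k)
        + (k as f64) * p.ln()
        + ((n - k) as f64) * (1.0 - p).ln()
}

/// P(X >= k_min) for X ~ Binomial(n, p)
fn binom_tail(n: u64, k_min: u64, p: f64) -> f64 {
    (k_min..=n).map(|k| ln_binom_pmf(n, k, p).exp()).sum()
}

fn main() {
    // Even with a 30% chance per chain, compromising a majority of
    // 1,000 voter chains at once is astronomically unlikely.
    let p_attack = binom_tail(1000, 500, 0.3);
    println!("P(compromise >= 500 of 1000 chains) = {p_attack:e}");
    assert!(p_attack < 1e-30);
}
```

The result is on the order of 10^-39, which is why each individual voter chain can confirm after very few blocks without endangering the system as a whole.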
Performance
In a paper published in 2019, we proved mathematically that as long as the adversary controls less than 50% of the computing power, Prism guarantees security, liveness, and consistency, just like Bitcoin. For throughput, Prism achieves (1 − β)·c, where β is the adversary's fraction of mining power and c is the underlying network's throughput. For confirmation latency, Prism is proportional to the underlying network delay d, grows with the required confirmation reliability, and is inversely proportional to the number of voter chains.
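These guarantees, as stated verbally above, can be written compactly. This is my transcription of the speaker's statement with all constants suppressed; the exact expressions are in the 2019 paper:

```latex
\text{throughput} \;=\; (1-\beta)\,c,
\qquad
\text{latency} \;=\; O\!\left(\frac{d\,\log(1/\varepsilon)}{m}\right)
```

where $\beta$ is the adversary's fraction of mining power, $c$ the throughput of the underlying network, $d$ its propagation delay, $\varepsilon$ the tolerated confirmation error (higher reliability means smaller $\varepsilon$), and $m$ the number of voter chains.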
This looks promising, but let's see how it performs in the real world; theory alone cannot tell us. First, we must acknowledge that the protocol is somewhat more complicated than Bitcoin. Second, the theory assumes a simplified network model. Remember that we stated Prism's confirmation latency in big-O notation, so it is only proportional and the constants are unknown. Finally, there must be other bottlenecks in a real system, so what are they?
This is why we do system deployment.
Deploy Prism
We implemented Prism in about 10,000 lines of Rust, based on the UTXO model with pay-to-public-key transactions. The code is open source.
Prism can achieve 70,000 transactions per second with a delay of 10 seconds.
A blockchain client has two roles: 1) it participates in consensus, receiving new blocks, assembling them into chains, and mining new blocks; 2) equally important, it takes the agreed order of blocks and generates the ledger of transactions.
The core job of the Prism consensus protocol is to sequence transactions. Unlike Bitcoin, we allow double-spend transactions into blocks: as long as everyone agrees on the order, each node can compute the UTXO set and remove invalid or double-spend transactions on its own. This is the job of the ledger module, which executes transactions sequentially in the agreed order to produce the final, updated UTXO set.
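The sanitization step just described can be sketched as follows. The types and names are illustrative, not the actual Prism codebase; the point is that every node running the same deterministic rule over the same order computes the same UTXO set:

```rust
use std::collections::HashSet;

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct Coin(u64); // stand-in for an outpoint (txid, output index)

struct Tx {
    inputs: Vec<Coin>,
    outputs: Vec<Coin>,
}

/// Execute transactions in consensus order; a tx is applied only if all
/// of its inputs are currently unspent. Invalid and double-spend
/// transactions are dropped deterministically. Returns how many applied.
fn apply_in_order(ordered: &[Tx], utxo: &mut HashSet<Coin>) -> usize {
    let mut applied = 0;
    for tx in ordered {
        if tx.inputs.iter().all(|c| utxo.contains(c)) {
            for c in &tx.inputs {
                utxo.remove(c);
            }
            for c in &tx.outputs {
                utxo.insert(c.clone());
            }
            applied += 1;
        } // otherwise: invalid or double spend, skipped
    }
    applied
}

fn main() {
    let mut utxo: HashSet<Coin> = [Coin(1)].into_iter().collect();
    let txs = vec![
        Tx { inputs: vec![Coin(1)], outputs: vec![Coin(2)] },
        Tx { inputs: vec![Coin(1)], outputs: vec![Coin(3)] }, // double spend
    ];
    let applied = apply_in_order(&txs, &mut utxo);
    assert_eq!(applied, 1); // the second spend of Coin(1) is dropped
    assert!(utxo.contains(&Coin(2)));
}
```

Because validity is resolved after ordering, consensus itself never needs to check transactions, which is what lets Prism confirm blocks so quickly.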
After deploying, we learned that Prism shifts the blockchain's bottleneck from consensus to ledger maintenance; the bottleneck lies in storing and updating the UTXO set. To achieve high throughput, consensus and ledger updates must be processed in parallel.
We made several optimizations. We used scoreboarding to parallelize ledger updates; we made ledger updates asynchronous from consensus so the two can run in parallel; and we adopted a functional design pattern in the client and disabled transaction broadcasting for higher network efficiency.
Scoreboarding is used in modern CPU designs to execute instructions in parallel. Suppose we have two cores and want to execute two transactions at a time. If the two transactions conflict and the second one happens to start first, it may succeed while the first fails: they executed out of order, and the result is incorrect.
The scoreboard technique introduces a very simple table. Before executing a transaction, we record which coins it will touch: which coins it spends and which it creates. Before executing another transaction, we check the scoreboard to see whether any coin it spends or creates is already there. For example, before starting the first transaction we mark its coins a and c on the scoreboard; if a later transaction touches a or c, that is a conflict and it must wait. Transactions that share no coins have no conflicts, so they can execute together and still produce the correct result.
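A minimal scoreboard along these lines might look like the following sketch (the structure and coin-id representation are my assumptions, not the Prism source):

```rust
use std::collections::HashSet;

/// Tracks every coin touched by in-flight transactions. A transaction may
/// be dispatched to a worker core only if none of its coins are on the
/// board; it releases its coins when it finishes.
#[derive(Default)]
struct Scoreboard {
    busy: HashSet<u64>, // coin ids currently claimed by in-flight txs
}

impl Scoreboard {
    /// Try to claim all coins a transaction touches. Returns false (and
    /// claims nothing) if any coin conflicts with an in-flight tx.
    fn try_claim(&mut self, coins: &[u64]) -> bool {
        if coins.iter().any(|c| self.busy.contains(c)) {
            return false; // conflict: must wait for the earlier tx
        }
        self.busy.extend(coins.iter().copied());
        true
    }

    /// Release a transaction's coins once it has finished executing.
    fn release(&mut self, coins: &[u64]) {
        for c in coins {
            self.busy.remove(c);
        }
    }
}

fn main() {
    let mut board = Scoreboard::default();
    assert!(board.try_claim(&[1, 3]));  // tx1 touches coins 1 and 3
    assert!(!board.try_claim(&[3, 4])); // tx2 also touches coin 3: waits
    assert!(board.try_claim(&[5, 6]));  // tx3 is independent: runs now
    board.release(&[1, 3]);
    assert!(board.try_claim(&[3, 4]));  // tx2 proceeds after tx1 finishes
}
```

Conflicting transactions serialize, while independent ones fan out across cores, which is what produces the near-linear CPU scaling reported below.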
After these optimizations, the client scaled almost linearly up to 8 CPU cores; beyond that, the SSD and the database set the upper limit on performance.
Another important optimization is performing ledger updates asynchronously from consensus. We noticed that in Prism, ledger updates are infrequent relative to block mining: with 1,000 voter chains, any individual voter block is very unlikely to trigger new transaction confirmations, unlike Bitcoin, where every new block confirms transactions. So instead of updating the ledger every time a block arrives, we moved ledger updates out of the consensus logic into their own asynchronous loop. This let us remove many locks from the consensus logic; it is now lock-free, so it can use multiple threads and CPU cores to process incoming blocks in parallel and sustain a higher block rate.
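The shape of this decoupling can be sketched with a channel between two threads. The `Block` struct and the tallying are placeholders for the real data structures; the point is that the consensus side hands blocks off without blocking and holds no locks shared with the ledger:

```rust
use std::sync::mpsc;
use std::thread;

struct Block {
    txs: usize, // number of transactions confirmed by this block
}

fn main() {
    let (tx, rx) = mpsc::channel::<Block>();

    // Ledger thread: drains confirmed blocks at its own pace,
    // independently of how fast consensus is running.
    let ledger = thread::spawn(move || {
        let mut total_txs = 0;
        for block in rx {
            total_txs += block.txs; // stand-in for the real UTXO update
        }
        total_txs
    });

    // Consensus side (here: the main thread) sends blocks and
    // immediately returns to processing the next one.
    for _ in 0..100 {
        tx.send(Block { txs: 70 }).unwrap();
    }
    drop(tx); // closing the channel lets the ledger thread finish

    let total = ledger.join().unwrap();
    assert_eq!(total, 7000);
}
```

This mirrors the design choice described above: consensus stays lock-free and parallel, while the slower, storage-bound ledger work runs in its own loop.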
Deployment assessment
Finally, the evaluation results. We used 100 to 1,000 AWS EC2 instances, all lightweight instances, less powerful than the laptop I am using now. We connected the nodes in a random topology in which each node connects to 4 random peers, introduced a 120 ms propagation delay on each link, and capped each node at 400 Mbps of ingress and egress bandwidth to simulate the real Internet.
We compared our deployment with Bitcoin and Bitcoin-NG, tested whether Prism scales to more nodes (up to 1,000 here), and measured whether our implementation is efficient in terms of CPU and bandwidth utilization. Finally, we evaluated Prism's performance under different kinds of attack.
Prism achieves 70,000 transactions per second. The longest-chain protocol, by contrast, couples performance and security: if you increase the block size for higher throughput, you must reduce the mining rate, which increases latency. It is a trade-off curve: accept higher latency and you gain throughput; demand low latency and you pay in throughput.
We tuned the longest-chain protocol's parameters, so our baseline is much faster than today's Bitcoin. Even so, its confirmation latency cannot drop below 100 seconds, and its throughput cannot exceed 10,000 transactions per second.
We also deployed Bitcoin-NG by modifying our code. Bitcoin-NG achieves better throughput, but its latency is still that of Bitcoin and the longest-chain protocol, because it essentially uses a longest chain to vote and confirm.
Finally, we compared with Algorand. We ran the public Algorand code in the same AWS topology and found its throughput capped at around 1,000 transactions per second.
We then evaluated the protocols under different security settings, including stricter ones. We found that tightening security costs Prism only a small amount of confirmation latency and no throughput at all, because Prism decouples throughput from security.
Our deployment is very efficient. More than 75% of CPU time goes to essential work such as checking signatures and updating the UTXO set. As for bandwidth, 1,000 voter chains might seem wasteful, but in fact more than 99.5% of the transmitted data is useful transaction data, and less than 0.4% is voter blocks.
To summarize: Prism deconstructs the Bitcoin block into proposer and voter blocks (plus transaction blocks) and scales each part to approach the limits of performance. We found that achieving high throughput requires parallelizing everything: both consensus and the ledger client.
We found that Prism shifts the bottleneck from consensus to ledger updates, the SSD, and the database. And Prism remains a natural extension of Nakamoto consensus.