Security and Scalability in Committee-Based Blockchain Sharding

The purpose of this article is to analyze, in plain terms, the tension between security and scalability in a sharded blockchain. It discusses why committee-based sharding is necessary, and walks through the problems and trade-offs that arise in sharding design.

The Blockchain Scalability Dilemma

The " scalability trilemma " theory holds that we cannot create a blockchain that can simultaneously have the following three points: 1) low overhead for running a full node, 2) high transaction throughput, and 3) maintaining security. It is obvious that simply increasing the block size (although it can increase the throughput) will make it more expensive to run a full node. The reverse is also true: we only need to increase the target requirements for deploying a full node to maintain security while supporting higher transaction throughput.

Many blockchains have introduced, or are introducing, new consensus protocols (many of these projects are initiated by academics, earning them the nickname "professor coins," though these professors do not quite grasp the scalability trilemma). These protocols claim to support more transactions per second (TPS) than Nakamoto consensus, but I have never seen one of them benchmarked against Nakamoto consensus under the same hardware, network, and attack conditions. Without exception, they benchmark the consensus protocol on powerful cloud servers with enormous network capacity, and then "discover" that it delivers throughput higher than Bitcoin's 3-7 transactions per second. In other words, these chains scale vertically, not horizontally.

My point above is this: consensus and scalability (that is, transaction throughput) are almost orthogonal (translator's note: independent of each other, neither affecting the other). The design thinking behind professor coins may stem from a terminology confusion: in distributed consensus protocols, "scalability" refers to latency or the number of participants, while in the blockchain context it refers to throughput. As long as every full node in the network must verify every transaction, the bottleneck is transaction execution. Different consensus protocols may provide other useful properties, such as lower first-confirmation latency or faster block finality, but these do not affect scalability.
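
To make this orthogonality concrete, here is a back-of-the-envelope sketch in Python. The numbers are purely hypothetical, chosen only to illustrate that per-node execution capacity, not consensus speed, caps throughput when every node re-executes everything:

```python
# A minimal sketch (hypothetical numbers) of why execution, not consensus,
# bounds throughput when every full node re-executes every transaction.

def max_tps(exec_time_per_tx_ms: float, node_cpu_budget_fraction: float) -> float:
    """Transactions per second a full node can validate, given the average
    execution time per transaction and the fraction of one CPU core the
    node operator is willing to dedicate to validation."""
    txs_per_second_per_core = 1000.0 / exec_time_per_tx_ms
    return txs_per_second_per_core * node_cpu_budget_fraction

# If a transaction takes ~2 ms to execute and a hobbyist node budgets 50%
# of one core, throughput caps at ~250 TPS -- no matter how quickly the
# consensus protocol orders or finalizes blocks.
print(max_tps(exec_time_per_tx_ms=2.0, node_cpu_budget_fraction=0.5))  # 250.0
```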

So, if we can't solve the scalability trilemma with a brand-new consensus protocol… what can we do?

It should also be noted that the number of consensus nodes (often called the "decentralization" property) does not itself affect scalability. The first point of the scalability trilemma is often mislabeled "decentralization," but it is really about the cost of running a full node, not the number of consensus nodes. For example, people often say that EOS achieves high throughput because it is a centralized system of around 21 validators, but this is wrong: EOS is "scalable" because its full nodes have extremely high hardware and network requirements. A consensus protocol that supports a large number of active participants is certainly worth pursuing, but it does not directly affect the chain's scalability (if we ignore the overhead of the consensus protocol; we will discuss this in detail later).

Sharding: Solving the Scalability Dilemma

To escape the scalability dilemma, blockchain sharding looks like the most promising approach. In a sharded system, transaction execution is not fully replicated across all nodes. In theory, this provides a scalability increase proportional to the number of shards: for example, 64 shards each processing 50 TPS could, in principle, process roughly 3,200 TPS in aggregate. Of course, this is only in theory, because there are many caveats! The analysis below focuses on Eth 2.0, since that is what I am most familiar with, but it should apply to all sharded blockchains.

How sharding provides scalability

In this section, I will give an intuitive overview of the properties required for a secure and scalable sharded blockchain; these are formally demonstrated in the paper "Divide and Scale: Formalization of Distributed Ledger Sharding Protocols." I also recommend reading the Twitter discussion between Buterin and the paper's authors.

I mentioned above that sharding means nodes do not fully replicate the execution of all transactions. But how exactly does this work? If every node still has to validate every shard chain, that is not sharding at all: it is just a blockchain whose blocks are as large as all the shard blocks combined, with no scalability gain. If instead we let each node freely choose a shard and validate only that shard's blocks, then even a weak adversary can concentrate its power and attack a single shard. In that case, a violation of state validity on one shard (such as minting coins out of thin air) can spread to all other shards.

The solution is that validators must be shuffled (or "rotated") into committees, each committee being a subset of the full validator set. The system must know the outcome of the shuffle and the duties assigned to each validator, so that provably malicious behavior can be attributed and punished (unfortunately, secret assignment via VRFs does not solve this problem, precisely because accountability requires the assignment to be public). Setting other details aside, block production roughly works like this: a committee produces blocks for a shard over some period and provides attestations for them, then its validators are shuffled into other committees responsible for other shards (possibly the original one). A toy sketch of such an assignment follows below.
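
As an illustration, here is a minimal sketch of publicly verifiable committee assignment. This is a toy, not Eth 2.0's actual swap-or-not shuffle, and the `randao_seed` name is illustrative; the point is that a shared deterministic seed lets anyone recompute who was responsible for which shard:

```python
import hashlib
import random

# Toy sketch: shuffle validators into one committee per shard each epoch,
# using a shared seed so the assignment is deterministic and auditable.

def assign_committees(validators: list[int], epoch: int, num_shards: int,
                      randao_seed: bytes) -> dict[int, list[int]]:
    """Deterministically shuffle validators into committees, one per shard."""
    seed = hashlib.sha256(randao_seed + epoch.to_bytes(8, "big")).digest()
    rng = random.Random(seed)            # same seed -> same shuffle everywhere
    shuffled = validators[:]
    rng.shuffle(shuffled)
    size = len(shuffled) // num_shards
    return {s: shuffled[s * size:(s + 1) * size] for s in range(num_shards)}

committees = assign_committees(list(range(1024)), epoch=7,
                               num_shards=8, randao_seed=b"beacon-randomness")
print(committees[0][:5])  # the committee responsible for shard 0 this epoch
```

Because the shuffle is a public function of the seed and the epoch, any observer can attribute a shard block to the committee that signed it, which is what makes slashing possible.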

Potential problem 1: if a validator assigned to a new shard must download and execute every block produced on that shard since they last left it in order to sync to its head, then sharding provides no scalability; we are essentially back to big blocks. This can be addressed in two ways. First, using the concept of a stateless client, only a state root is needed to execute transactions, with each transaction providing the necessary witnesses against the state database. This eliminates the need to store large amounts of state, but to be sure the latest state root committed to the shard chain is correct, all blocks would still need to be processed. So, second, assume that at least one honest participant exists in every shard a validator moves to: as long as there is at least one honest participant, a fraud proof can be generated for any invalid state root that gets committed. A minimal sketch of the stateless idea follows below.
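
Here is a minimal sketch of the stateless-client idea, using a toy Merkle witness check. The account encoding and tree shape are illustrative inventions, not Eth 2.0's actual state layout:

```python
import hashlib

# Toy sketch: a stateless validator holds only a 32-byte state root, and each
# transaction carries a Merkle witness proving the state it touches.

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def verify_witness(state_root: bytes, leaf: bytes,
                   proof: list[tuple[bytes, str]]) -> bool:
    """Recompute the root from a leaf and its sibling path; a match proves
    the leaf (e.g., an account balance) is part of the committed state."""
    node = h(leaf)
    for sibling, side in proof:          # side: is the sibling "left" or "right"?
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == state_root

# Toy 2-leaf tree: root = H(H(a) || H(b)); the witness for leaf b is (H(a), "left").
a, b = b"account-a:100", b"account-b:42"
root = h(h(a) + h(b))
print(verify_witness(root, b, [(h(a), "left")]))  # True
```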

Potential problem 2: if a shard block is withheld, validators cannot generate a fraud proof for it. Since a majority of a committee attests to a block's validity, a colluding majority can create an invalid block and then withhold its data. Honest validators would have to request shard blocks from other validators around the globe, one by one. This is why data availability proofs are the most fundamental building block here: without them, a sharded blockchain cannot be secure and scalable at the same time.
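
A quick calculation shows why random sampling makes withholding detectable under the standard construction of data availability proofs, where block data is erasure-coded so that an attacker must withhold at least half of the extended data to prevent reconstruction:

```python
# Toy sketch: if an attacker must withhold >= 50% of the erasure-coded
# chunks, a light node sampling k random chunks misses the attack only
# if every one of its k samples happens to land on an available chunk.

def miss_probability(k: int, withheld_fraction: float = 0.5) -> float:
    """Probability that k independent random samples all avoid withheld chunks."""
    return (1.0 - withheld_fraction) ** k

for k in (5, 10, 20, 30):
    print(k, miss_probability(k))
# Twenty samples already push the miss rate below one in a million.
```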

These potential problems have solutions. It would seem, then, that "all the research breakthroughs needed to fully implement eth2" are in place, and that to build a scalable, secure sharded chain whose full nodes can run on a Raspberry Pi, all that remains is to perfect the implementation details.

This is not the case.

Network communication in shards: a dilemma

One important issue is missing from the list above. Existing schemes generally specify how shard blocks are created and verified at the consensus layer, but ignore a question: what exactly goes into a block? The answer seems obvious: transactions. But users need a way to get transaction data to shard block producers, and this is exactly the problem. If every node in the network must download every transaction (as in an ordinary blockchain network, where transactions are broadcast to all nodes), then sharding provides no scalability in data throughput at all.

Is there a way to route transactions through a distributed network without requiring every node to download all of them? There are many! Use a gossip protocol such as gossipsub, where each node maintains, per topic (a shard ID can serve as a topic), a local list of the peers subscribed to that topic. With these lists, transactions can be delivered (somewhat) reliably to exactly those nodes that are interested in them and must download and re-share them; a toy model follows below. Problem solved? No, because this makes the system vulnerable to attack.
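
Here is a toy model of topic-based propagation. It is not gossipsub itself (which adds meshes, scoring, and message deduplication), just the routing idea that makes sharded transaction dissemination scalable:

```python
from collections import defaultdict

# Toy sketch: nodes subscribe only to the shard topics they care about, so a
# transaction reaches the nodes that need it without every node downloading it.

class GossipNetwork:
    def __init__(self):
        self.subscribers = defaultdict(set)   # topic -> set of node IDs

    def subscribe(self, node_id: str, topic: str):
        self.subscribers[topic].add(node_id)

    def publish(self, topic: str, tx: str) -> set[str]:
        """Deliver tx only to peers subscribed to the topic."""
        return self.subscribers[topic]

net = GossipNetwork()
net.subscribe("validator-1", "shard-3")
net.subscribe("validator-2", "shard-3")
net.subscribe("validator-9", "shard-7")
print(net.publish("shard-3", "tx: alice->bob"))  # only shard-3 subscribers

# The catch: the per-topic subscriber lists reveal exactly which peers serve
# which shard -- the leak the deanonymization attack below exploits.
```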

The attack works as follows: if an attacker can map each validator's ID to its node's IP address, then a dishonest committee majority need only DoS the validators who refuse to collude or accept bribes in order to attack the shard, easily breaking the "at least one honest validator" assumption. Although the Eth 2.0 protocol itself never binds validator IDs to IP addresses, the heterogeneous topology of a gossip network has no privacy at all: an attacker can crawl the network, observe which peers subscribe to which shard topics, and intersect those observations to map validator IDs to IPs with little effort. A homogeneous topology would resist this attack, but, as noted above, it means every node downloads every transaction, and all scalability is lost.

Work on network-layer privacy for scalable blockchains is, to put it kindly, in its infancy. Given that the Ethereum Foundation has not previously made network-layer privacy a priority for Eth 2.0 development, I do not expect researchers to seriously address this issue in the short term. It is just one of many open research questions; without a positive answer, a secure and scalable sharded blockchain cannot be achieved.

The root of the problem is that Eth 2.0 (like other committee-based sharded blockchains) moves complexity, and with it security, from the consensus layer down to the p2p network layer. Therefore, if the underlying assumptions about p2p network capacity are unrealistic, any security proof for the system is meaningless (as of this writing, no such proof has appeared anyway). Malicious behavior at the consensus layer can be punished (e.g., by slashing, that is, burning the offender's stake), but malicious behavior at the network layer cannot. Shifting the blockchain's security onto the latter leaves it fragile even against weak adaptive adversaries; it certainly would not survive a third world war.

Consensus overhead

The previous section covered the broadcasting of transactions, but another kind of data must propagate through the network: attestations. Eth 2.0 aims to support a very large number of validators, far more than PBFT-based protocols (such as Tendermint, which is limited to a few hundred nodes), because in the worst case the message count of a PBFT-based consensus protocol grows quadratically with the number of nodes. So how does Eth 2.0 achieve this miracle?
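
A rough worst-case calculation makes the quadratic blow-up concrete:

```python
# Toy sketch: in worst-case all-to-all PBFT-style voting, each of n
# validators sends its vote to every other validator, so messages per
# round grow as n * (n - 1) -- roughly n^2.

def pbft_messages(n: int) -> int:
    return n * (n - 1)          # all-to-all, one voting round

for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: {pbft_messages(n):>12,} messages/round")
# n=   100:        9,900 messages/round
# n= 1,000:      999,000 messages/round
# n=10,000:   99,990,000 messages/round -- infeasible at Eth 2.0's
# target validator counts without some form of aggregation.
```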

It is achieved by transferring complexity from the consensus layer to the network layer. The overhead of aggregating signatures is inherently large, and practical schemes such as Handel rely on explicitly linking validator IDs to IP addresses.

In short, the consensus set can be made larger than traditional PBFT protocols support because attestations (validator signatures) are aggregated at the p2p network layer. This aggregation has high communication complexity, and no protocol has yet emerged that can guarantee aggregation of Eth 2.0's target number of validator signatures within a short block time in a hostile network environment.
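
To show the communication pattern without any cryptography, here is a toy sketch of tree-style aggregation, inspired by (but far simpler than) protocols like Handel. A "signature" here is just the set of validator indices it covers, and aggregation is set union:

```python
# Toy sketch: pairwise tree aggregation combines n attestations in
# O(log n) rounds, with each level halving the number of messages,
# instead of n separate broadcasts.

def aggregate_tree(signatures: list[set[int]]) -> set[int]:
    level = signatures
    rounds = 0
    while len(level) > 1:
        # Pair up neighbors; each pair exchanges one message and merges.
        level = [level[i] | level[i + 1] if i + 1 < len(level) else level[i]
                 for i in range(0, len(level), 2)]
        rounds += 1
    print(f"aggregated {len(signatures)} attestations in {rounds} rounds")
    return level[0]

sigs = [{i} for i in range(16)]     # 16 validators, one attestation each
full = aggregate_tree(sigs)         # "aggregated 16 attestations in 4 rounds"
assert full == set(range(16))
```

The hard part, which this toy ignores entirely, is doing the pairing in a real network where peers may be offline, malicious, or slow, which is exactly where the unsolved communication-complexity problem lives.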

To sum up

Committee-based sharding requires stateless execution (which has not yet been shown practical), plus fraud proofs and data availability proofs. But once we account for transaction propagation, we find that scalability can only be achieved by shifting the scheme's security onto the p2p network layer, which is very fragile against malicious actors. Whether we can overcome this obstacle remains an open research question.

Thanks to Mikerah Quintyne-Collins and James Prestwich for the comments.

Original link: https://medium.com/@adlerjohn/security-and-scalability-in-committee-based-blockchain-sharding-58fab3901193 Author: John Adler Translation & proofreading: haiki & A sword
