Technical Primer | GHAST Rule Upgrade (Part 1)

Editor's Note: The original title was "Conflux Research Institute | Self-evolution: Upgrade of the GHAST Rules (Part 1)"

There have been several previous articles on the GHAST consensus in Conflux.

To put it simply, the GHAST consensus builds on the GHOST rule but allows each block's weight to take one of several values. When the network is calm, every block has weight 1, so the protocol behaves like GHOST with a fast block generation rate. When an attack on the network is detected, a block's weight is 360 with probability 1/360 and 0 with probability 359/360. The expected weight is still 1, but this is equivalent to slowing the production of weighty blocks by a factor of 360.
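The two weight modes above can be sketched as follows. This is a minimal illustration, not Conflux's implementation; the parameter 360 and the function names are taken from the text for explanation only, and `pow_quality` is a stand-in for the randomness that in practice comes from the block's proof-of-work hash.

```python
import random

H = 360  # illustrative attack-mode parameter; real values follow the Conflux spec

def block_weight(under_attack: bool, pow_quality: float) -> int:
    """Weight of a newly mined block.

    `pow_quality` is a uniform sample in [0, 1) standing in for how the
    block's proof-of-work hash compares against the difficulty target.
    """
    if not under_attack:
        return 1  # calm network: every block weighs 1, like fast GHOST
    # under attack: weight H with probability 1/H, otherwise 0,
    # so the expected weight stays 1 but heavy blocks arrive H times slower
    return H if pow_quality < 1.0 / H else 0

# Sanity check: the average weight stays close to 1 in attack mode.
random.seed(0)
samples = [block_weight(True, random.random()) for _ in range(200_000)]
avg = sum(samples) / len(samples)
```

Note that the expected weight is 1 in both modes, so switching modes does not change the total weight produced per unit time, only how concentrated it is.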

To judge the weight of a block, we must first read that block's parent block, its referenced blocks, the referenced blocks of the parent, the parents of the referenced blocks, and so on. In short, following the parent edges and reference edges, we collect all reachable blocks into one tree-graph, called the block's "historical tree-graph".
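The traversal just described is an ordinary reachability search over two kinds of edges. The sketch below is hypothetical (the `Block` class and field names are invented for illustration); it only shows the idea of collecting everything reachable along parent and reference edges.

```python
from collections import deque

class Block:
    """Toy block: one parent edge plus a list of reference edges."""
    def __init__(self, name, parent=None, refs=()):
        self.name, self.parent, self.refs = name, parent, list(refs)

def historical_tree_graph(block):
    """Names of all blocks reachable from `block` via parent/reference edges."""
    seen, queue = set(), deque([block])
    while queue:
        b = queue.popleft()
        if b.name in seen:
            continue
        seen.add(b.name)
        # follow both the parent edge and every reference edge backwards
        queue.extend(p for p in [b.parent] + b.refs if p is not None)
    return seen

# Usage: genesis <- a <- b, where b also references a's sibling c.
genesis = Block("genesis")
a = Block("a", parent=genesis)
c = Block("c", parent=genesis)
b = Block("b", parent=a, refs=[c])
```

Because the set of edges recorded in a block never changes, this search always returns the same result for the same block, no matter who runs it.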

For a block generated by an honest node, this historical tree-graph is exactly the tree-graph structure that the generating node saw at the moment it created the block. The GHAST rule then judges, based on this historical tree-graph, whether there are signs of an attack. Since a block's parent and reference edges are immutable, the historical tree-graph computed from them is also immutable, which guarantees that everyone's view of each block's weight is consistent. A more detailed description is provided in "Detailed Adaptive Weight 'GHAST' (1) - Overview".

The purpose of the GHAST rule is to defend against liveness attacks. In this upgrade of the GHAST rules, we have replaced the judgment of "whether the tree-graph is under attack" with a very simple one. (There is no need to study the obscure earlier version.)

"In this tree-graph structure, for every sufficiently old main-chain block, its largest child must have a clear advantage in subtree weight over its second-largest child. Otherwise, this tree-graph structure is considered under attack."

Why do we want the largest child to have a clear advantage over the second-largest child?

Recall the "balance attack" we discussed before (see the earlier article). The attacker carefully maintains two subtrees of similar weight, so that honest nodes keep disagreeing about which subtree is larger. If instead the largest child holds an overwhelming advantage over the second-largest, then the largest child will undoubtedly become the next main-chain block: the succession is decided conclusively, with no drawn-out struggle among rival heirs to the throne.

Why do we impose this requirement only on sufficiently old main-chain blocks?

Because for a young main-chain block, it is perfectly normal for its children to be evenly matched in subtree weight. Demanding that "the largest child has an obvious advantage" for these blocks would only keep the whole system in a jittery, false-alarm state without improving security.

So how do we quantify "clear advantage" and "sufficiently old"?

"Clear advantage" is relatively simple: we require the largest child's subtree weight to exceed the second-largest child's subtree weight by at least 1000.
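The clear-advantage test can be written as a one-line comparison of the two heaviest children. This is a sketch under the article's illustrative threshold of 1000 (the actual parameter is set by the Conflux specification); the function name is invented.

```python
ADVANTAGE = 1000  # illustrative threshold from the text, not the real parameter

def has_clear_advantage(child_subtree_weights):
    """True iff the heaviest child leads the runner-up by at least ADVANTAGE."""
    if len(child_subtree_weights) < 2:
        return True  # a single (or only) child faces no rival
    top, second = sorted(child_subtree_weights, reverse=True)[:2]
    return top - second >= ADVANTAGE
```

Applied to a balance attack: two subtrees of weights 5000 and 4500 fail the test, signalling that the tree-graph may be under attack, while 5000 versus 3000 passes.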

Judging whether a block is sufficiently old is trickier. Theoretical analysis shows that a block's age should be measured from its generation time, not from the time you first saw it. In other words, if a block has been withheld by the attacker for a long time and is then released, it should count as sufficiently old even though everyone has only just seen it. We previously described such a situation in "Segmentation Attack" (see the earlier article): a block hidden by the attacker for a long time can enter the main chain as soon as it is released. Therefore, whether a main-chain block is sufficiently old cannot be judged simply by block height or subtree size.

So how do we judge whether a given block in a tree-graph is sufficiently old?

Let's deal with the clear-cut cases first:

1. An honest block generated a long time ago (more than threshold d1 ago) should be considered sufficiently old.

2. An honest block generated only a short time ago (less than threshold d2 ago) should not be considered sufficiently old.

3. An attacker's block generated a long time ago (more than threshold d1 ago) should be considered sufficiently old.

Of the two thresholds above, d1 > d2, so for honest blocks generated in the middle interval we cannot always judge accurately whether they are sufficiently old. For our theoretical analysis, however, it suffices to distinguish whether a block is "obviously sufficiently old".
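The three judgeable cases and the grey zone between d2 and d1 can be summarized in a small classifier. This is purely an analysis aid with invented names and illustrative threshold values: real nodes cannot observe `honest` directly, and it appears here only because the theoretical classification above distinguishes honest from attacker blocks.

```python
from enum import Enum

D1, D2 = 600.0, 60.0  # illustrative thresholds in seconds, with D1 > D2

class Age(Enum):
    OLD_ENOUGH = 1
    NOT_OLD_ENOUGH = 2
    UNDECIDED = 3

def classify(age_seconds: float, honest: bool) -> Age:
    """Map a block's true age (and honesty, known only in the analysis)
    to the three-way classification from the text."""
    if age_seconds > D1:
        return Age.OLD_ENOUGH          # cases 1 and 3: old blocks, honest or not
    if honest and age_seconds < D2:
        return Age.NOT_OLD_ENOUGH      # case 2: freshly generated honest block
    return Age.UNDECIDED               # the grey zone between d2 and d1
```

Note that a fresh attacker block falls into `UNDECIDED` here, which matches the discussion below: such blocks cannot be classified by age at all.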

Attentive readers may have noticed that the classification above does not cover "attacker blocks generated a short time ago", because in some situations such blocks simply cannot be judged. Suppose that after the blockchain network has run for a year, someone releases a block whose parent is the genesis block and which references no other blocks. Other nodes have no way to tell whether this block was just generated or was mined a year ago and kept hidden ever since. The two cases are indistinguishable in the information-theoretic sense; even a quantum supercomputer would not help.

One thing is certain, though: such a block is definitely abnormal, so it must have been generated by an attacker, since honest nodes never suffer such a large broadcast delay. Following common practice in cryptography, when two cases cannot be distinguished we assume the worse one. So we simply treat this kind of block as sufficiently old, folding it into the third case classified above.

So the question becomes: how do we tell whether a block is "obviously" sufficiently old? This is another problem that admits many different solutions, each with its own merits. As for how Conflux solves it, stay tuned for the next installment.

Note: All parameters mentioned in this article serve only to explain the principles. The actual system parameters used on the Conflux test network and mainnet are given in the technical specification documents.