View | Stateless Ethereum: status of a binary tree experiment

Author: Igor Mandrigin

Translation: A Sword

Source: Ethereum enthusiasts

What is "Stateless Ethereum"?

If you already know what "stateless Ethereum" and "block witness" are, you can skip this section.

To execute transactions and verify blocks, an Ethereum node needs to know the current state of the entire blockchain, that is, the balances and storage data of all accounts and contracts. This data is generally stored in a database (DB) and loaded into a Merkle tree when it is needed for validation.

The idea of a stateless Ethereum client is slightly different. As the name suggests, a stateless client does not use a state DB on disk to execute blocks (although the client may still maintain the full state). Instead, stateless clients rely on "block witnesses": special pieces of data propagated alongside the corresponding blocks. With a witness, the client can reconstruct a Merkle subtree representing a part of the state, and this subtree is sufficient to execute all the transactions in the block.

You can read about stateless clients in more depth in this article: https://blog.ethereum.org/2019/12/30/eth1x-files-state-of-stateless-ethereum/

Of course, the need to propagate block witnesses means that the network requirements of stateless clients are higher than those of ordinary nodes.

-Witness data line chart (block height 5 million to 8.5 million)-

Many ideas have been proposed to reduce the size of witnesses: using validity / computational-integrity proofs (including but not limited to zero-knowledge proofs), adding more compression, and so on. One approach is to convert the Ethereum Merkle tree (that is, the Merkle tree used to represent Ethereum state) from hexadecimal (hexary) to binary form.

This is the problem that this article wants to explore.

Why use a binary tree

A great feature of Merkle trees is that verifying the root value of the tree (a hash) does not require having all the data in the tree. You can simply replace every omitted non-empty subtree with its corresponding hash value.
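To make this concrete, here is a minimal runnable sketch of that idea (illustrative only: real Ethereum tries use Keccak-256 and RLP-encoded nodes, not this toy scheme). Omitted subtrees appear only as their 32-byte hashes, yet the root still verifies:

import hashlib

def root_hash(node) -> bytes:
    """Compute the root of a partial binary Merkle tree.

    A node is either a (left, right) tuple (an internal node),
    a 32-byte value (the bare hash of an omitted subtree), or
    raw leaf bytes. In a real implementation hashes would be
    tagged explicitly rather than inferred from their length.
    """
    if isinstance(node, tuple):                    # internal node
        left, right = node
        return hashlib.sha256(root_hash(left) + root_hash(right)).digest()
    if isinstance(node, bytes) and len(node) == 32:
        return node                                # omitted subtree: bare hash
    return hashlib.sha256(node).digest()           # leaf data

# The leaf we care about is present; its sibling subtree is replaced by a hash.
leaf = b"account: Acc1"
omitted = hashlib.sha256(b"the rest of the state").digest()
print(root_hash((leaf, omitted)).hex())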

So what's so bad about using hexadecimal Merkel trees?

Imagine the entire tree is densely filled with data (that is, nodes have almost no empty children). To verify a block, we only need the data of a small subset of the Merkle tree's nodes; the other branches we simply replace with hash values (so that the subtree remains verifiable).

However, every hash value we add makes the block witness larger.

If we switch the Merkle tree to binary form, this problem is alleviated: since each node then has only two children, at most one sibling node needs to be replaced with a hash value at each level. (In other words, we can start "cutting off" paths of the Merkle tree earlier, because our granularity is 1 bit rather than the 4 bits of a hexary tree.)

Doing so may significantly reduce the size of witness data.

Let's take another example.

Assume that the execution of a certain block affects only one account: Acc1, at path 0x3B (binary 0011 1011). And assume the entire tree is full (no node has empty children).

-Comparison of binary state tree and hex state tree-

If the binary state tree looks a bit scary, that is only because I drew the binary tree in full, while in the hex tree I did not draw all the nodes that were replaced by hash values.

To count:

  • To create the binary state subtree, the witness needs to contain 8 hash values, 7 branch nodes, and 1 account node. That is, the witness has 16 elements.
  • To create the hexadecimal state subtree, we only need 1 branch node and 1 account node, but 30 hash values. That is, 32 elements.

Now assume for simplicity that a hash value occupies the same space in the witness as a branch node (in fact, a hash occupies a bit more). Then, in our example, the binary-tree witness is only half the size of the hexary one. Looks good.
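As a quick sanity check on this arithmetic, here is a rough model of my own (not code from the experiment) that counts witness elements for one account in a full tree of a given radix. It counts one branch node per level, so its totals come out one higher than the figures above, but the roughly 2x ratio is the same:

import math

def witness_elements(key_bits: int, radix: int) -> int:
    """Rough element count for a one-account witness in a full tree."""
    bits_per_digit = int(math.log2(radix))
    depth = key_bits // bits_per_digit     # levels from the root to the account
    hashes = depth * (radix - 1)           # sibling subtrees replaced by hashes
    branches = depth                       # branch nodes along the path
    return hashes + branches + 1           # plus the account node itself

# Path 0x3B is 8 bits long (two hex nibbles):
print(witness_elements(8, 2))    # binary: 8 hashes + 8 branches + 1 = 17
print(witness_elements(8, 16))   # hexary: 30 hashes + 2 branches + 1 = 33
# ratio ~= 0.52, matching the roughly-half witness size described above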

Well, that's the theory.

Let's see how it works in practice, directly on Ethereum mainnet data.

Starting the experiment

First of all, the most important question: how do we know that the block witness we built is actually sufficient?

The test method is as follows: we use the block witness to build a Merkle subtree, run all the transactions of the corresponding block on this subtree, and check that the results match what we already know. As long as the transactions execute successfully (sufficient gas, identical traces, etc.), we can conclude that the witness is sufficient.
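As a toy illustration of this loop (a runnable sketch with invented names and a plain dict standing in for the state tree; the real experiment drives turbo-geth's Merkle tree internals):

# Toy, runnable sketch of the four-step check. All names and the dict-based
# "state" are invented for illustration; this is not turbo-geth code.

def execute_block(block, state):
    # toy "execution": sum the balances the block's transactions touch
    return sum(state[addr] for addr in block["touched"])

def extract_witness(state, block):
    # the witness is just the subset of state the block actually needs
    return {addr: state[addr] for addr in block["touched"]}

state = {"Acc1": 100, "Acc2": 250, "Acc3": 7}
block = {"touched": ["Acc1", "Acc3"]}

expected = execute_block(block, state)      # 1. execute on the full state
witness = extract_witness(state, block)     # 2. extract the witness
subtree = dict(witness)                     # 3. rebuild a partial state
actual = execute_block(block, subtree)      # 4. execute with no full DB
print("witness sufficient:", actual == expected)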

-Test method: 1. execute the block; 2. extract the witness from the state tree; 3. use the witness to construct a subtree; 4. disable DB access and execute the block on the subtree (see GitHub for details)-

Second, we need some baseline data for comparison. So we also generated witnesses for blocks 5,000,000 to 8,500,000 using the hexary Merkle tree, and stored statistics on their sizes in one huge CSV file.

The first approach we tried: assemble the hex tree after executing a block, convert it into a binary tree, and then extract the witness from this binary tree.

This method has several advantages: it is easy to implement, and the hex-to-binary conversion is simple to verify.

However, we encountered two problems, and one of them was not small.

The first one: witnesses obtained by converting a hex tree are not identical to witnesses generated natively from a binary tree.

Why?

Because keys in a hex tree always grow in half-byte (nibble) steps, while keys in a binary tree grow one bit at a time, so binary-tree key lengths can be an odd number of bits.
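A tiny illustration of this difference (my own example, not taken from the experiment): two keys can share a prefix whose length is not a multiple of 4; a binary tree splits exactly at the diverging bit, while a nibble-granular hex tree has to round down to a 4-bit boundary:

def common_prefix_bits(a: int, b: int, width: int = 8) -> int:
    """Length of the shared most-significant-bit prefix of two keys."""
    for i in range(width):
        shift = width - 1 - i
        if (a >> shift) & 1 != (b >> shift) & 1:
            return i
    return width

k1, k2 = 0b00111011, 0b00111001      # two 8-bit key prefixes
bits = common_prefix_bits(k1, k2)
print(bits)                          # 6: a binary tree splits at bit 6
print(bits // 4 * 4)                 # 4: a hex tree can only split on nibbles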

In fact, converted witnesses also contain some extra extension nodes, which make them slightly larger. But even for very full blocks (around 5,000 transactions), the difference in witness size is small (less than 5%).

The bigger problem is performance: as the tree grows, the conversion becomes slower and slower.

To give a concrete number: on our Google Compute Engine virtual machine, the processing speed was about 0.16 blocks per second, that is, fewer than 10 blocks per minute; processing 1 million blocks would take more than 3 months!

So we decided to take the more complicated path and develop an experimental branch that uses binary Merkle trees natively. In other words, we replaced the hex state tree with a binary tree everywhere in the turbo-geth codebase, and then executed blocks directly against the binary tree.

The disadvantage of this method is that some hash checks have to be skipped (the block state root, and sometimes the account storage roots), because the mainnet has no binary-tree snapshots to check them against.

But the main verification mechanism is the same: we must be able to execute blocks on the binary tree and to build Merkle subtrees from the witnesses.

Let's talk about keys again.

For the sake of simplicity, our key encoding is very inefficient: just as the hex code stores one nibble per byte, here each bit of a key occupies a full byte. This greatly simplifies the code changes, but it makes the "keys" part of the block witness 8 times larger than a bitset encoding would be.

Therefore, in the analysis below, I will assume that keys are encoded optimally (1 bit of information per bit of key).
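For illustration, a bitset encoding packs 8 key bits into each byte, which is exactly where the 8x factor comes from (a sketch of the idea, not the encoding turbo-geth actually uses):

def pack_bits(bits: list[int]) -> bytes:
    """Pack a list of 0/1 values into bytes, most significant bit first."""
    out = bytearray((len(bits) + 7) // 8)
    for i, bit in enumerate(bits):
        if bit:
            out[i // 8] |= 0x80 >> (i % 8)
    return bytes(out)

key = [0, 0, 1, 1, 1, 0, 1, 1]    # the 8-bit path 0x3B
naive = bytes(key)                 # 1 byte per bit: 8 bytes
packed = pack_bits(key)            # bitset: 1 byte
print(len(naive), len(packed))     # 8 1
print(packed.hex())                # 3b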

Hex vs. Bin: Results

My analysis is divided into two parts; in total, two million blocks of the Ethereum mainnet were analyzed.

Block heights 5 million to 6.5 million

I have provided command-line instructions in this GitHub repository so you can repeat the experiment using the Python scripts: https://github.com/mandrigin/ethereum-mainnet-bin-tries-data

First let's analyze the data set.

python percentile.py hex-witness-raw.csv bin-stats-5m-6.5m.csv 5000000 6500000 adjust

-A box plot: the boxes show the data between the lower and upper quartiles, and the whiskers show the data between the 1st and 99th percentiles (that is, excluding the extreme largest and smallest 1% of the data). Percentile analysis (in MB)-

Now we can generate some cool charts!

python xy-scatter-plot.py hex-witness-raw.csv bin-stats-5m-6.5m.csv 5000000 6500000 adjust

-XY scatter plot (assuming keys are coded optimally; horizontal axis: hex witness size; vertical axis: binary witness size)-

As you can see, the binary witness is consistently smaller than the corresponding hex-tree witness.

Let's add another metric: divide the binary witness size by the hex witness size to see how much we improved.

python size-improvements-plot.py hex-witness-raw.csv bin-stats-5m-6.5m.csv 5000000 6500000 adjust

-Binary witness size / hex witness size (smaller is better)-

To better understand this chart, let's also output the mean and some percentiles.

Average = 0.51 (49% space savings on average)

P95 = 0.58 (for 95% of the data, at least 42% of space is saved)

P99 = 0.61 (for 99% of the data, at least 39% of space is saved)

What does this mean in a real scenario?

For 99% of the blocks, the witness size can be reduced by at least 39%.

For 95% of the blocks, the witness size can be reduced by at least 42%.

On average, witness sizes shrink by 49%.
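For reference, here is a rough sketch of how such numbers could be reproduced from the raw CSVs. The column names below are assumptions; check the repo's scripts for the real schema:

import numpy as np
import pandas as pd

hex_df = pd.read_csv("hex-witness-raw.csv")    # per-block hex witness sizes
bin_df = pd.read_csv("bin-stats-5m-6.5m.csv")  # per-block binary witness sizes

# assumed columns: "block" and "size"; adjust to the repo's real schema
merged = hex_df.merge(bin_df, on="block", suffixes=("_hex", "_bin"))
ratio = merged["size_bin"] / merged["size_hex"]

print("mean:", ratio.mean())              # ~0.51 for blocks 5M to 6.5M
print("P95: ", np.percentile(ratio, 95))  # ~0.58
print("P99: ", np.percentile(ratio, 99))  # ~0.61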

We also need to consider the absolute witness sizes. To make the chart readable, we take a moving average over every 1,024 blocks.

python absolute-values-plot.py hex-witness-raw.csv bin-stats-5m-6.5m.csv 5000000 6500000 adjust

-Line chart of witness size over block height (vertical axis in MB)-
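The smoothing step itself is simple; in pandas it would look roughly like this (again with assumed column and unit conventions):

import pandas as pd

df = pd.read_csv("bin-stats-5m-6.5m.csv").sort_values("block")
df["size_mb"] = df["size"] / (1024 * 1024)         # bytes -> MB ("size" assumed)
df["smooth"] = df["size_mb"].rolling(1024).mean()  # 1024-block moving average
df.plot(x="block", y="smooth")                     # needs matplotlib installed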

Let's take a look at the latest block situation.

Block heights 8 million to 8.5 million

python percentile.py hex-witness-raw.csv bin-stats-8m-9m.csv 8000000 8500000 adjust

-Box plot: the boxes show the data between the lower and upper quartiles, and the whiskers the data between the 1st and 99th percentiles. Percentile analysis of blocks 8 million to 8.5 million-

Here is the XY scatter plot as well.

python xy-scatter-plot.py hex-witness-raw.csv bin-stats-8m-9m.csv 8000000 8500000 adjust

And here are the size improvements.

 python size-improvements-plot.py hex-witness-raw.csv bin-stats-8m-9m.csv 8000000 8500000 adjust 

-XY scatter plot (assuming keys are coded optimally; horizontal axis: hex witness size; vertical axis: binary witness size)-

Mean = 0.53 (47% space savings on average)

P95 = 0.61 (for 95% of the data, at least 39% of space is saved)

P99 = 0.66 (for 99% of the data, at least 34% of space is saved)

Finally, let's look at the absolute size of the witness data.

python absolute-values-plot.py hex-witness-raw.csv bin-stats-8m-9m.csv 8000000 8500000 adjust

-Line chart of witness size over block height (vertical axis in MB)-

Conclusion

Testing against Ethereum mainnet data shows that switching to a binary tree greatly improves witness generation: witness sizes shrink by 47-49% on average.

Another conclusion is that the improvement is smaller than the theory suggests. The reason probably lies in the actual shape of mainnet state, which is more complicated than our idealized example.

Perhaps by analyzing the outliers (the cases where the binary tree helps least), we can find further ways to optimize witness size.

Try running the scripts in the GitHub repo on the rest of the raw data: https://github.com/mandrigin/ethereum-mainnet-bin-tries-data

(End)
