Technical Primer | Anatomy of Network Components in Libra

On June 18, 2019, Facebook released the Libra white paper and source code, which attracted widespread attention and discussion in the industry.

Here we analyze Libra's source code and explore its various components to understand Libra's overall design and implementation.

Libra core components

Before getting into the subject, let's have a holistic understanding of Libra:


Readers of the Libra technical white paper should remember this diagram. Here is a brief introduction to these core components (a more detailed discussion follows):

A. AdmissionControl service: AC for short. It can be understood as Libra's gateway, exposing user-facing interfaces such as submitting transactions and querying account state.

B. Mempool service: stores transactions that have not yet been committed on-chain

C. Consensus component: the LibraBFT consensus component

D. VirtualMachine component: VM for short, the virtual machine that runs Move contracts

E. Execution component: the entry point to the VM; it has been replaced by the Executor component

F. Storage service: stores all on-chain data

G. Network component: implicit in the figure above. It is required both when a Node starts and when it communicates with other nodes. The first main line focuses on the Network component.

Note that when introducing the core components above, we distinguished between components and services. The difference is that a component has no additional listening port and shares the node's process, while a service listens on its own port, usually as a gRPC service.
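As a rough sketch of that distinction, the components and services above can be modeled as follows. The enum, port numbers, and grouping are illustrative only, not Libra's actual configuration types:

```rust
// Hypothetical sketch of the component/service distinction. The names
// come from the article; the types and port numbers are invented.

#[derive(Debug)]
enum Part {
    // A service listens on its own port, usually as a gRPC server.
    Service { name: &'static str, port: u16 },
    // A component runs inside the node process with no extra listener.
    Component { name: &'static str },
}

fn node_parts() -> Vec<Part> {
    vec![
        Part::Service { name: "AdmissionControl", port: 8000 },
        Part::Service { name: "Mempool", port: 8001 },
        Part::Service { name: "Storage", port: 8002 },
        Part::Component { name: "Consensus" },
        Part::Component { name: "VirtualMachine" },
        Part::Component { name: "Execution" },
        Part::Component { name: "Network" },
    ]
}

fn main() {
    let services = node_parts()
        .iter()
        .filter(|p| matches!(p, Part::Service { .. }))
        .count();
    println!("{} services listen on their own ports", services);
}
```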

Libra Design and Implementation

Libra covers a lot of ground. We introduce its design and implementation along three main lines:

  1. By analyzing how a Node starts up and joins the Libra network, introduce the design and implementation of the Network component;
  2. Around the life cycle of a transaction, analyze how transactions are received, packaged into blocks, and executed on-chain, introducing Libra's core components: Mempool, Executor, Storage, and VM;
  3. Focus on LibraBFT, introducing the Consensus component and the process of reaching consensus on blocks.

To understand or use Libra, we first need to start a node and join it to the network. Next, let's follow the first main line to understand Node startup and the design and implementation of Network.

Node startup process

Let's first look at the general startup process of Node, which mainly includes two parts:

  1. Generate config: Libra's Config module can construct three types of configuration files, for validator, faucet, and fullnode nodes respectively. The faucet configuration relates to the faucet service and is usually only required by the first validator node in a test network. (figure: libra-start-1)
  2. Start the node. (figure: libra-start-2)

In the figure above, libra-node is used to start a single node, and libra-swarm is used to start multiple nodes in batch. Next, let's look at some implementation details of these two steps and the preparation that precedes them.

Preparation

Before we continue, let's prepare the environment and dependencies we need:

1). Get Libra Code

git clone https://github.com/libra/libra.git

2). Compile and run environment

A. It is recommended to use the repository's built-in setup script (scripts/dev_setup.sh) to install environment dependencies

B. Or manually install tools such as Rust, Cargo, Git, protobuf, Go, and CMake

Generate config

From the Node startup process above, we learned that starting a node first requires generating a configuration. Libra contains quite a few configuration files. Let's look at the configuration as a whole:


However, if there are no special needs, not many configuration items actually require special attention (see the blue part in the figure above), mainly:

A. The Role of the Node: either Validator or FullNode

B. Three keys are generated: two with the ed25519 algorithm, used to sign blocks and network messages respectively (Libra provides a generate_keypair tool for this: cargo run -p generate_keypair -- -o mint.key), and one with the x25519 algorithm, used to identify the node

C. Data storage path; a temporary path is generated by default

D. Network_peers: stores the public keys and other information of nodes in the network, mainly the network-message signing public key and the node-identity public key

E. Seed_peers: the nodes that the current node actively connects to when joining the network

F. Consensus_peers: information about all Validator nodes; the Libra network is a permissioned network

G. Ports of other services and other configuration; if there are no special requirements, the defaults are fine
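The configuration items above can be sketched as a struct. This is illustrative only; the field names, paths, and peer names are placeholders, not Libra's real config types:

```rust
// Illustrative sketch of the config fields discussed above: node role,
// the three keys, and the peer lists. All values are placeholders.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Role {
    Validator,
    FullNode,
}

#[derive(Debug)]
struct NodeConfig {
    role: Role,
    consensus_key: &'static str, // ed25519: signs blocks
    network_key: &'static str,   // ed25519: signs network messages
    identity_key: &'static str,  // x25519: identifies the node
    network_peers: Vec<&'static str>,   // known peers' public keys
    seed_peers: Vec<&'static str>,      // peers dialed when joining
    consensus_peers: Vec<&'static str>, // all validators (permissioned set)
}

fn example_validator() -> NodeConfig {
    NodeConfig {
        role: Role::Validator,
        consensus_key: "/tmp/consensus.key",
        network_key: "/tmp/network.key",
        identity_key: "/tmp/identity.key",
        network_peers: vec!["peer_a", "peer_b"],
        seed_peers: vec!["peer_a"],
        consensus_peers: vec!["peer_a", "peer_b"],
    }
}

fn main() {
    let cfg = example_validator();
    println!("role: {:?}, seeds: {:?}", cfg.role, cfg.seed_peers);
}
```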

Start Node

  1. Connect to the Libra test network: sh scripts/cli/
  2. Run locally: cargo run -p libra-node, or cargo run -p libra-swarm -- -s for multiple nodes

After the current node starts, it connects to the peers listed in its seed_peers configuration and joins their network; if seed_peers is empty, it starts a separate network of its own. Next, let's take a closer look at the design and core implementation of the Node's Network.
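That join-or-bootstrap decision can be sketched as follows; the enum and function are invented for illustration:

```rust
// Sketch of the join-or-bootstrap behavior: with seed peers configured
// the node dials them; with none it starts a standalone network.

#[derive(Debug, PartialEq)]
enum Startup {
    JoinVia(Vec<String>), // dial each seed peer and join their network
    Bootstrap,            // no seeds: start a separate network
}

fn startup_plan(seed_peers: &[&str]) -> Startup {
    if seed_peers.is_empty() {
        Startup::Bootstrap
    } else {
        Startup::JoinVia(seed_peers.iter().map(|s| s.to_string()).collect())
    }
}

fn main() {
    // "validator-0:6180" is a placeholder address, not a real peer.
    println!("{:?}", startup_plan(&["validator-0:6180"]));
    println!("{:?}", startup_plan(&[]));
}
```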

Network components

Core network module

Let's take a look at which modules the Network component includes:


In the figure above, from bottom to top:

A. MemSocket: implements UNIX-domain-socket-like functionality in memory and is generally used for testing

B. TcpSocket: the TCP network connection

C. Transport: an abstraction layer over MemSocket and TcpSocket that encapsulates socket operations

D. Noise: an encryption protocol; the ed25519 private key used for network-message signing, mentioned above, is used here

E. Rpc: a remote procedure call protocol implemented by Libra itself; the caller waits for the callee to return a result

F. DirectSend: literally "direct send"; the caller returns immediately after sending, without waiting for the callee's result

G. Negotiate: can be understood as an abstraction over Rpc and DirectSend

H. MultiStream: used for multiplexing, via the yamux protocol. Put simply, each upper-layer protocol is logically encapsulated as its own SubStream over the same TCP connection, so that multiple upper-layer protocols share one TCP connection. We will return to this later.

The above is the overall structure of Libra's Network component. Next, we introduce Libra's protocols.
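The multiplexing idea can be illustrated with a toy demultiplexer: frames on one connection carry a protocol tag, and each protocol consumes only its own logical SubStream. The framing and function are invented for illustration; only the protocol names come from this article:

```rust
use std::collections::HashMap;

// Toy demultiplexer for the yamux-style idea: interleaved frames on one
// TCP connection are routed into per-protocol logical substreams.

fn demux<'a>(frames: &[(&'a str, &'a str)]) -> HashMap<&'a str, Vec<&'a str>> {
    let mut substreams: HashMap<&'a str, Vec<&'a str>> = HashMap::new();
    for &(protocol, payload) in frames {
        substreams.entry(protocol).or_insert_with(Vec::new).push(payload);
    }
    substreams
}

fn main() {
    // Interleaved frames from several protocols over one connection.
    let frames = [
        ("Consensus", "proposal"),
        ("Mempool", "tx-batch"),
        ("Consensus", "vote"),
        ("Health", "ping"),
    ];
    let substreams = demux(&frames);
    println!("{} logical substreams over one connection", substreams.len());
}
```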

Libra's main protocols

Above we gained a macro-level view of the Network component. Here we introduce the protocols Libra includes:


In the figure above, from bottom to top:

A. PeerManager: encapsulates network connection and multiplexing operations

B. Identity protocol: the x25519 key mentioned earlier is used by the Identity protocol to identify the current node. The protocol also isolates the Validator network from the FullNode network according to the node's Role.

C. Health protocol: periodically selects a random node and sends it probe messages

D. Discovery protocol: in each round, synchronizes node information from neighboring nodes to discover new nodes; it can be understood as a gossip protocol

E. AdmissionControl protocol: RPC implementation only. After receiving a transaction submitted by a user, a FullNode forwards the transaction to a Validator node through the AC protocol.

F. Mempool protocol: DirectSend implementation only; used to synchronize Transactions between different Mempools

G. Consensus protocol: contains both RPC and DirectSend; used to reach consensus among Validators

H. StateSynchronizer protocol: DirectSend implementation only; used to synchronize Blocks between different nodes

Earlier we mentioned multiplexing: each of the protocols above opens its own SubStream through MultiStream, which logically separates the message protocols. Among them, Identity, Health, and Discovery are basic protocols that all nodes include, while Consensus is included only by Validator nodes.
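The protocol list above can be summarized as a small lookup of delivery mode and role scope. The enum and functions are illustrative, not Libra's actual types:

```rust
// Summary of the protocol list as code: which delivery mode each wire
// protocol uses, and whether it runs only on Validator nodes.

#[derive(Debug, PartialEq)]
enum Mode {
    Rpc,        // caller waits for the callee's result
    DirectSend, // fire-and-forget
    Both,
}

fn delivery_mode(protocol: &str) -> Option<Mode> {
    match protocol {
        "AdmissionControl" => Some(Mode::Rpc),
        "Mempool" | "StateSynchronizer" => Some(Mode::DirectSend),
        "Consensus" => Some(Mode::Both),
        // Base protocols (Identity, Health, Discovery) are not
        // classified by delivery mode in the list above.
        _ => None,
    }
}

// Identity, Health, and Discovery run on every node; Consensus only on
// Validators.
fn validator_only(protocol: &str) -> bool {
    protocol == "Consensus"
}

fn main() {
    println!("{:?}", delivery_mode("Mempool"));
    println!("Consensus validator-only: {}", validator_only("Consensus"));
}
```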

Summary

We began with the Node startup process, explaining the config options that need attention and how a node is started. We then dove into the Network component, covering its constituent modules and the protocol capabilities it provides. Taking a single node as an example, the entire startup-and-join process can be summarized as follows:


The yellow parts indicate that a SubStream is opened on the Network port and the corresponding protocol and its handler are registered; the green parts indicate that a service or component is instantiated. Note that Storage and Executor do not depend on Network. When the Discovery protocol initializes, the node connects to the seed nodes, and the seed nodes verify its Identity. This is the general process of a node starting up and joining the network.
