Paxos, PoW, VDF: a beautiful golden line

Consensus is one of the fundamental problems of distributed systems.

Although consensus and consistency are treated as roughly equivalent and interchangeable in much of the literature and in many application scenarios, the two concepts are fundamentally different.

This difference is closely tied to the precise concepts of a distributed system and a cluster. Because "distributed system" is used loosely in different contexts, and because blockchain systems need a clear explanation, here is a summary from Turing Award winner Jim Gray:

What is the difference between a cluster and a distributed system? A cluster may seem to be just another distributed computing system, like the World Wide Web or a Solaris system, or… In a sense, a cluster is a simplified distributed system. However, the differences are sufficient to make cluster algorithms significantly different.
  1. Homogeneity: A cluster is a homogeneous system. Its nodes have the same security policy, the same audit policy, the same naming scheme, and may run the same brand of processor and operating system. The speeds and versions of hardware and software may differ between nodes, but they are all very similar. A distributed system is a computer zoo, made up of many different kinds of computers.
  2. Locality: All nodes of a cluster sit in a nearby area, connected by a high-speed local network. Because a cluster has modern hardware and software, it has high bandwidth, and that bandwidth is cheap because it is not leased from a telecommunications company. A cluster is reliable because it lives in a controlled environment, and it is efficient because it can use a protocol stack optimized for local communication. Communication in distributed systems, by contrast, is slow, unreliable, and expensive.
  3. Trust: Nodes in a cluster trust each other. They authenticate one another, share load, provide failover for each other, and typically act as a federation. A distributed system is usually a self-contained collection of mutually suspicious nodes.

Defining the difference between distributed systems and clusters helps us better understand the significance of consensus at different stages of its development.

The research of Yuan Yong et al. holds that consensus research focuses on the process and algorithms by which distributed system nodes reach agreement, while consistency research focuses on the stable state finally reached through that consensus process. In addition, most traditional distributed consistency research does not consider Byzantine fault tolerance; that is, it assumes no Byzantine nodes maliciously tamper with or forge data. For a long time, therefore, the application scenarios of traditional distributed consistency algorithms were mostly limited to closed, relatively reliable distributed database environments.

Byzantine generals: Byzantium, today's Istanbul in Turkey, was the capital of the Eastern Roman Empire. Because the territory of the Byzantine Empire was vast, its armies were stationed far apart to defend against enemies, and generals could communicate only by messenger. During a war, all the Byzantine generals must reach a consensus that there is a chance of victory before attacking the enemy camp. However, some generals may be traitors who deliberately send false messages to disrupt the others. How can the loyal generals reach agreement when they know traitors are among them? This is the Byzantine Generals Problem.

That is to say, consistency is better suited to clusters, while consensus is suited to true distributed systems.

Clusters and distributed systems have traditionally relied on a fundamental component: network communication, or message passing. The canonical textbook of the field, "Distributed Systems: Concepts and Design", defines a distributed system as follows:

A distributed system is one in which networked computers coordinate their behavior through message passing.

Before the PoW consensus mechanism was proposed, message passing over a network was always a defining attribute of distributed systems. This attribute first pays tribute to the development of computer networks, and second illustrates the characteristics and difficulties of distributed systems as asynchronous systems.

This article mainly describes:

  • Clusters and distributed systems
  • Consensus and consistency
  • Protocols and mechanisms in consensus
  • Machine consensus and human consensus
  • Theoretical and engineering solutions for distributed systems
  • The Byzantine Generals Problem and fault tolerance
  • Thirty years of Paxos: non-Byzantine fault-tolerant consensus
  • A decade of PoW: Byzantine fault-tolerant consensus
  • The concept and principles of VDF
  • VDF applications

I. Thirty Years of Paxos: Non-Byzantine Fault-Tolerant Consensus

The history of computer development is more or less directly linked to the history of computer networks.

In the 1970s, the development of computer networks drove a split away from supercomputing and parallel computing: the time-sharing mainframe model of the 1960s came to an end, giving rise to distributed systems. The deeper force behind this split lies in two well-known laws of computer architecture: Moore's Law, published in 1965, and Amdahl's Law, proposed in 1967.

The initial form of the distributed system was the cluster. The first commercial cluster product was developed by Datapoint in 1977, but clustering did not achieve commercial success until Digital Equipment Corporation (DEC) released its VAXcluster product for the VAX/VMS operating system in 1984, when clusters truly took off. Meanwhile, as with any technology wave, humble corners of the field were laying groundwork for the future: the first personal computer, the Altair 8800, launched in 1975; in 1976 Jobs and Wozniak designed the Apple I, and the Apple II was introduced the following year.

The basic theory of computer networks gave birth to the first seeds of distributed systems. In 1975, Akkoyunlu E. A., Ekanadham K., and Huber R. V. of the State University of New York at Stony Brook first posed, in the paper "Some Constraints and Trade-offs in the Design of Network Communications", the Two Generals Problem and the proof of its unsolvability [7]. The well-known database expert and Turing Award winner Jim Gray officially named it the "Two Generals Problem". The problem shows that consensus cannot be reached over an unreliable communication link, and it is considered the first problem in computer communication proven to be unsolvable. It has had an important influence on the field: the "three-way handshake" of TCP/IP, the most important communication protocol to date, is a simple, easy-to-implement, cost-controlled "engineering solution" to the Two Generals Problem.

The development of network technology gave computer scientists great confidence and research motivation, making the late 1970s and the 1980s the golden age of basic theoretical research on distributed systems.

During this period, four theoretical results that still shine today were born:

  1. In 1978, Leslie Lamport published "Time, Clocks, and the Ordering of Events in a Distributed System", which for the first time used causality to define the ordering of events in a distributed system, laying the foundation for message-passing distributed systems. The time problem of distributed systems has been studied for decades since, from Google's atomic clocks to today's VDF; time-related issues remain fundamental.
  2. In 1982, Leslie Lamport published "The Byzantine Generals Problem". In a message-passing distributed system, a node may fail and send erroneous messages, and the communication links between nodes may corrupt messages in transit, causing nodes to draw different conclusions about the overall cooperative strategy and destroying system consistency. The Byzantine Generals Problem is considered one of the most difficult classes of fault-tolerance problems, and since its publication distributed consensus algorithms have been divided into two categories: Byzantine fault-tolerant consensus and non-Byzantine fault-tolerant consensus. This article does not dwell on the paper because it only raised the question; the problem was not tackled until the Paxos algorithm (which solved non-Byzantine fault-tolerant consensus) and proof of work (PoW), which truly solved the Byzantine Generals Problem. Proof of work is the key to solving the Byzantine problem in the Bitcoin system and is of great significance to the operation of the whole system.
  3. In 1985, Fischer, Lynch, and Paterson published "Impossibility of Distributed Consensus with One Faulty Process". Known, after the initials of the authors' last names, as FLP Impossibility (the FLP Impossibility Theorem), it is one of the classic results in distributed systems and won the Dijkstra Prize. In short: in an asynchronous system of multiple deterministic processes, as long as even one process may fail, no protocol can guarantee that all processes agree within finite time. The FLP theorem thus rules out any finite-time theoretical solution to consensus in asynchronous systems with faulty processes, so a feasible "engineering solution" must be sought instead. Researchers can only circumvent FLP by adjusting the problem model, for example by sacrificing determinism or by adding time (hence the timeout values that must be chosen when designing a distributed system).
  4. In 1989, Leslie Lamport published the distributed consistency protocol paper "The Part-Time Parliament". For a distributed protocol, Paxos has an unusually rich and well-documented history, described in detail below. In summary, Paxos gives an algorithm for achieving distributed consistency through message passing in a system with non-Byzantine failures. As research on distributed consensus deepened, Paxos gained wide recognition in academia and industry and spawned variants such as Abstract Paxos, Classic Paxos, Byzantine Paxos, and Disk Paxos, becoming the most important family of algorithms for the consensus problem in asynchronous systems (a minimal sketch of the core protocol follows this list). Lamport also proposed the concept of a "lease" in this paper: a resource is allocated for a limited time, so that it can be reclaimed and reused even when part of the system stops working.
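To make the two-phase structure concrete, here is a minimal single-decree Paxos sketch in Python: in-memory and synchronous, with no networking, retries, or Multi-Paxos chaining. It is a sketch of the prepare/promise and accept/accepted exchange under simplifying assumptions, not a faithful implementation of the paper; all names are illustrative.

```python
# A minimal single-decree Paxos sketch: in-memory and synchronous,
# showing only the core two-phase message pattern.

class Acceptor:
    def __init__(self):
        self.promised = 0       # highest proposal number promised so far
        self.accepted_n = 0     # proposal number of the accepted value
        self.accepted_v = None  # accepted value, if any

    def prepare(self, n):
        # Phase 1b: promise to ignore proposals numbered below n,
        # and report any value already accepted.
        if n > self.promised:
            self.promised = n
            return ("promise", self.accepted_n, self.accepted_v)
        return ("reject", None, None)

    def accept(self, n, v):
        # Phase 2b: accept unless a higher-numbered promise was made.
        if n >= self.promised:
            self.promised = n
            self.accepted_n, self.accepted_v = n, v
            return "accepted"
        return "reject"

def propose(acceptors, n, value):
    majority = len(acceptors) // 2 + 1
    # Phase 1a: send prepare(n) to every acceptor.
    replies = [a.prepare(n) for a in acceptors]
    grants = [(an, av) for tag, an, av in replies if tag == "promise"]
    if len(grants) < majority:
        return None  # retry later with a higher proposal number n
    # Safety rule: if any acceptor already accepted a value, propose the
    # value with the highest proposal number instead of our own.
    highest_n, highest_v = max(grants, key=lambda g: g[0])
    v = highest_v if highest_v is not None else value
    # Phase 2a: send accept(n, v) to every acceptor.
    acks = sum(1 for a in acceptors if a.accept(n, v) == "accepted")
    return v if acks >= majority else None

acceptors = [Acceptor() for _ in range(5)]
print(propose(acceptors, n=1, value="X"))  # chooses "X" with a full majority
```

The safety of Paxos lives in the rule inside propose: once a majority has accepted some value, every later proposer that completes phase 1 learns it and is forced to re-propose it rather than its own value.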

Leslie Lamport discovered the Paxos algorithm in the late 1980s, wrote a paper, and submitted it to Transactions on Computer Systems (TOCS). Following the successful precedent of the Byzantine generals (using a fable or metaphor to describe a complex problem), Lamport cast this important algorithm as a parliament on an ancient Greek island. This is the metaphor of the part-time parliament:

In the early 10th century, the island of Paxos in the Aegean Sea was a thriving commercial center. Wealth brought political complexity, and the citizens of Paxos replaced the ancient theocracy with a parliamentary form of government. But commerce came before civic duty, and no one in Paxos was willing to devote his life to the parliament. The Paxos parliament had to keep functioning even as its members continually left and joined. The problems faced by the part-time parliament are very similar to those of today's fault-tolerant distributed systems: members correspond to processes, and members leaving the chamber correspond to failures. The Paxos solution may therefore be of some reference value to computer science.

Lamport tried to add some humor to the subject, but suffered a frustrating defeat. People who heard the Lamport lectures remembered stories like "Raiders of the Lost Ark" but not the algorithm. The reviewers of the paper were evidently annoyed by the Greek fable and could not understand the algorithm.

All three referees said the paper was mildly interesting, though not very important, and that all the Paxos material had to be removed. Lamport, annoyed that everyone working in the field seemed so humorless, declined to make any changes to the paper.

Lamport sent the article to Nancy Lynch, Vassos Hadzilacos, and Phil Bernstein, who claimed to have read it. A few months later, Lamport emailed them the following question:

Can you implement a distributed database that tolerates the failure of any number of processes (perhaps all of them) without losing consistency, and that resumes normal behavior when more than half of the processes are working again?

But none of them noticed any connection between this question and the Paxos algorithm.

However, the protocol began to spread by word of mouth and was popularized by a much more formal (and much more verbose) paper by De Prisco, Lynch, and Lampson. Lamport resubmitted his paper about eight years after the initial attempt, and TOCS published it in its original form on this second submission.

This was an interesting and welcome change from conventional theoretical computer science papers.

Despite this, people still found the paper difficult to understand. If even someone as distinguished as Nancy Lynch had difficulty reading it, ordinary readers stood little chance of grasping it. In the end, Lamport published "Paxos Made Simple", a short paper whose tone masked the author's disappointment that the Paxos fable had not worked:

"The Paxos algorithm, when presented in plain English, is very simple." "Paxos algorithm, expressed in plain English, very simple."

Since then, Paxos has become widely known, in part because Google popularized it as a core component of its Chubby system. As Google's "Paxos Made Live" explains, using the Paxos protocol correctly in engineering is difficult: engineering implementations make changes to the basic protocol, the most important of which is probably Multi-Paxos, which chains requests together to reduce message complexity.

Much of what follows will already be familiar to many readers: various Internet companies have customized and promoted the Paxos algorithm. Interested readers can get an overall view from "Research Progress on Paxos Consensus Algorithms" and "A Survey of Byzantine System Technology". The following is a brief review of the papers derived from Paxos:

  1. Paxos Made Simple: Presenting Paxos in a whole new way, short and very easy to read.
  2. Paxos Made Live: This paper from Google bridges the gap between the theoretical algorithm and working systems, covering many practical issues in implementing Paxos that the original papers do not anticipate. If you want to build a system using Paxos, read this article first.
  3. Cheap Paxos and Fast Paxos: Two papers that optimize the original protocol.
  4. Consensus on Transaction Commit: A short paper by Lamport and Jim Gray showing that two-phase commit (2PC) is a simplified version of Paxos that tolerates zero failures. It is a readable introduction to Paxos.
  5. How To Build a Highly Available System Using Consensus: Butler Lampson demonstrates how to use the Paxos consensus as part of a larger system.

Theory ran ahead of its time and had to wait for engineering to catch up. After large-scale application on the Internet, Lamport finally won the Turing Award in 2013. The development of distributed systems theory in the 1980s touched a future that was still unimaginable at the time, just as we now stand on the eve of another dawn.

II. A Decade of PoW: Byzantine Fault-Tolerant Consensus

The article "The Academic Pedigree of Bitcoin" evaluates Bitcoin in this way: almost all of Bitcoin's technical components originated in the academic literature of the 1980s and 1990s. This is not to weaken Nakamoto's achievements, but to point out that he is standing on the shoulders of giants.

Bitcoin has run stably for 10 years, and its mechanisms and principles have become well known along with its economic effects. If you ask blockchain practitioners about Bitcoin's technical aspects, nearly everyone agrees on the role and significance of economic incentives and proof of work.

By introducing economic incentives and proof of work, Bitcoin solved the Byzantine Generals Problem in an open system, a conclusion that even people well versed in distributed systems accept. Yet Nakamoto cited no Byzantine literature in the original white paper and used none of the related terminology, in stark contrast to the literature he explicitly cited on chained timestamping and proof of work. He described the consensus mechanism in his own concepts and protocols, framing the problem in terms of attackers and of how nodes join and leave the network. Only on the mailing list, in a thread about Bitcoin and the Byzantine generals (a thought experiment), did Nakamoto state that proof of work solves the Byzantine Generals Problem.

Studying Bitcoin's academic pedigree reveals the dilemma that researchers in each field faced: the researchers of the Paxos protocol did not consider node incentives, while the early digital currencies (including hashcash, b-money, and bit gold) did not consider a consensus algorithm to solve the double-spending problem. In other words, there was a circular dilemma between node incentives and double-spending: nodes need digital-currency incentives to deter misbehavior, while the digital currency needs the nodes to reach consensus to prevent double-spending. Proof of work (PoW) could provide a bridge across this dilemma, but a second loop remained: digital currency already existed as an economic mechanism but had not been turned into an incentive mechanism, and proof of work as an anti-spam measure lacked incentives and therefore saw little use.

This is a paradoxical embarrassment: no one supports you unless you have succeeded, but how can you succeed if no one supports you?

Nakamoto's genius lies in the logic that makes proof of work itself the money. Bitcoin's consensus and incentive logic runs as follows: proof of work is money, which motivates miners to work hard producing proofs of work and thereby earn coins; at the same time, rules set according to economic principles make a malicious node's investment greater than its gain, leaving malicious nodes no motivation to destroy the consensus. This solves the Byzantine Generals Problem of agreement failing because of traitorous generals, and thereby protects the digital currency from double-spending.

Bitcoin's PoW consensus algorithm is the earliest and remains the most secure and reliable public-chain consensus algorithm. The idea of PoW is that system nodes (miners) use their respective computing power to race to solve a complex but easily verified SHA256 puzzle (mining); whoever solves it first wins the right to produce the next block along with the bitcoin reward automatically generated by the system (a mining sketch follows the list below). From a distributed-systems perspective, there are two major differences between the PoW consensus algorithm and Paxos-like consensus algorithms:

  1. Competing for the right to produce a block is a leader-election process. Leader election in Paxos-like consensus algorithms, including PBFT, requires multiple rounds of communication between nodes, whereas leader election in PoW is a bootstrap process: nodes compete for the block right on their own and need not communicate with each other (the disadvantages, of course, are time delay and wasted resources).
  2. The computing-power competition among PoW nodes runs in parallel without any coordinating node, whereas the Paxos consensus process is synchronous and generally requires a coordinator.
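As a concrete illustration of the "complex but easily verified" SHA256 puzzle, here is a hedged Python sketch of the hash-below-target search that mining performs. The header encoding and difficulty handling are simplified stand-ins, not Bitcoin's actual serialization or difficulty rules.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin hashes block headers with double SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int):
    """Find a nonce with sha256d(header || nonce) below the target.
    Success takes about 2**difficulty_bits attempts on average, while
    checking a claimed solution takes a single hash: the PoW asymmetry."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = sha256d(header + nonce.to_bytes(8, "big"))
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(b"toy-block-header", difficulty_bits=20)  # ~1M hashes
print(nonce, digest)
```

Note that the search loop is embarrassingly parallel: any number of machines can scan disjoint nonce ranges at once, which is exactly the property that separates PoW from the VDF discussed later.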

In short, in terms of machine consensus, the PBFT- and Paxos-style consensus algorithms used in some blockchains, as well as the various election-based, proof-of-stake, randomized, federated, and hybrid consensus algorithms, may work in smaller consortium-chain scenarios, but compared with the central idea of Bitcoin they can only be called a step backwards.

(Figure: the historical evolution of blockchain consensus algorithms)

On the relationship between the PoW consensus mechanism and the Byzantine generals, Yuan Yong et al. point out that the consensus problems studied in the existing literature fall into two branches: algorithmic consensus and decision consensus. The former studies how, under specific network models and fault models, a distributed network lacking central control and coordination can guarantee consistency; it is essentially "machine consensus". The latter studies more broadly how optimal decisions are reached in decentralized group decision-making, for example the community discussions and route choices around Bitcoin's scaling and forking controversies; it is essentially "human consensus". The difference is that the former is a deterministic consensus among machines, dominated by engineering complexity, while the latter is characterized by a "human-in-the-loop" complex system, dominated by social complexity. Blockchain consensus-algorithm research belongs to a subset of the algorithmic-consensus branch, while decision consensus appears mostly in distributed artificial intelligence, multi-agent systems, and related fields.

The Byzantine Generals Problem is the foundation of distributed consensus and the root of both research branches. It imposes two conditions of interactive consistency: consistency and correctness. Since correctness usually involves subjective human value judgments, it is difficult to apply to the nodes of a distributed system. Algorithmic consensus therefore adopts a "degraded correctness", downgrading "the content expressed is correct" to "the content is correctly expressed". As a result, the Byzantine consensus of a blockchain is really a machine consensus, equivalent to distributed consistency plus correct expression (messages are not tampered with). Decision consensus, by contrast, can be regarded as human consensus: it requires not only consistency but also that all nodes believe "the content expressed is correct", so it demands both objective consistency of content and subjective correctness among the consensus nodes. In other words, algorithmic consensus deals with an objective binary consensus, right (the unique correct ledger) versus wrong (all incorrect ledgers), while decision consensus deals with a subjective multi-valued consensus: opinion 1 (and its group), opinion 2 (and its group), ..., opinion N (and its group), with all nodes eventually converging, through coordination and cooperation among the groups, to a single opinion (consensus), a process that may also fail (not converge).

III. VDF: Leading the Future

According to a BBC Chinese report of August 1, 2015, the Serbian national lottery's drawing process produced an astonishing error, prompting a police investigation.

In the usual live televised draw, whenever the machine releases a numbered ball, the number is shown on the TV screen for confirmation. In the draw on Tuesday (July 28), when the machine produced ball number 27, the screen incorrectly displayed 21, and the very next ball the machine produced was number 21. The TV screen had predicted the winning number before the lottery machine did, which triggered speculation and prompted the director of the Serbian national lottery to resign. Serbian police investigated the incident and seized the lottery machine and the related computer software.

The draw fueled speculation of fraud. Fairness in gambling and games depends on human craft; randomness in computers depends on a secure random source. These are everyday examples: from public drawings and national lotteries to contest draws, the fundamental requirement is to convince others that you are not cheating, in other words, to generate secure random numbers.

Today's high-reliability random-number schemes are the National Institute of Standards and Technology's NIST Randomness Beacon and random.org (both centralized random sources). NIST generates random numbers using photons and the laws of quantum mechanics (two-photon entanglement), while random.org uses atmospheric noise as its entropy source. Such random numbers are also referred to as "true" randomness.

In the digital world, blockchains, with Bitcoin as the representative, are also used as random sources (decentralized random sources). These sources have sufficient randomness (70+ bits, or more strictly, of min-entropy), but a blockchain is an open network: once the stakes tied to a random number are large enough, the blockchain becomes vulnerable to block-withholding attacks and bribery attacks.

To address attacks on blockchain random sources, a delay function is introduced: a function that takes, say, one hour to compute, so that with the on-chain randomness as its input, the result becomes available only one hour later. This makes the blockchain a secure decentralized random source.

The concept of the verifiable delay function (VDF) was first proposed by Dan Boneh, professor of computer science and electrical engineering at Stanford University and co-director of its Computer Security Laboratory, and his co-authors in the paper "Verifiable Delay Functions" published at Crypto 2018. Although there had been much earlier work, including proofs of sequential work and Sloth (slow-timed hash functions), the VDF attracted great interest from academia and industry. Within 10 days of the VDF paper's publication, two VDF construction papers were released, and the following month Boneh and his co-authors published a survey of the two constructions. The main research on VDF constructions to date, in timeline order, is as follows.

2019—Döttling, Garg, Malavolta, Vasudevan, "Tight Verifiable Delay Functions"

2019—Ephraim, Freitag, Komargodski, Pass, "Continuous Verifiable Delay Functions"

2019—De Feo, Masson, Petit, Sanso, "Verifiable Delay Functions from Supersingular Isogenies and Pairings"

2018 (30 July)—Boneh, Bünz, Fisch, "A Survey of Two Verifiable Delay Functions"

2018 (22 June)—Pietrzak, "Simple Verifiable Delay Functions"

2018 (20 June)—Wesolowski, "Efficient Verifiable Delay Functions"

2018 (12 June)—Boneh, Bonneau, Bünz, Fisch, "Verifiable Delay Functions"

2015—Lenstra, Wesolowski, "A Random Zoo: Sloth, Unicorn, and Trx"

In industry, a number of blockchain projects have taken an interest in VDF.

Among them, Filecoin and Ethereum announced a joint laboratory to study VDF constructions together. They wrote in the announcement: perhaps more importantly, designing and developing a secure and usable VDF construction would be a major breakthrough in applied cryptography and distributed systems, with applicability reaching even beyond blockchains.

Protocol Labs and the Ethereum Foundation also jointly funded, on a 50/50 basis, two VDF-related projects: Supranational and Ligero.

The VDF model is very simple, yet highly significant.

VDF is an abbreviation of Verifiable Delay Function. "Function" means that a VDF is a true function: every input has a unique output. "Delay" is captured by a time parameter T (wall time): the output can be computed in time T, but cannot be computed in less than T even with parallel hardware. "Verifiable" requires that the output of the delay function be very easy to verify.

Wall time, also known as real-world time or wall-clock time, is the elapsed time measured by a timer such as a clock; the word "wall" originally referred to a clock hanging on a wall.

To deliver these two properties, delay that resists parallel speedup and verifiability of results, a VDF must contain an algorithm for computing the result and an algorithm for verifying it. Such cryptographic tools also usually include a setup phase to fix the parameters used later. A VDF is therefore described as a triple of three algorithms (Setup, Eval, Verify).

Each algorithm is defined as follows:

  1. Setup(λ, T) → pp = (ek, vk): takes a security parameter λ and a time parameter T, and generates public parameters visible to everyone. The public parameters contain an evaluation parameter ek and a verification parameter vk.
  2. Eval(ek, x) → (y, π): takes the evaluation parameter ek and an input x ∈ X, and computes the output y ∈ Y together with a proof π.
  3. Verify(vk, x, y, π) → {accept, reject}: takes vk, x, y, and π, and outputs accept (verification passed) or reject (verification failed).

(Figure: the flow of a VDF, from Setup through Eval to Verify.) A toy construction in code follows.
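To make the triple concrete, here is a toy Python sketch in the style of Wesolowski's construction ("Efficient Verifiable Delay Functions" above): Eval performs T sequential squarings in an RSA-type group, and Verify checks a short proof with two exponentiations. The modulus, challenge size, and hash-to-prime routine are toy assumptions; a real deployment needs a modulus of unknown factorization and much larger prime challenges.

```python
import hashlib

def _is_prime(k: int) -> bool:
    # Trial division: fine for 32-bit toy challenges, too slow for real ones.
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def _hash_to_prime(x: int, y: int) -> int:
    # Fiat-Shamir challenge: derive a small prime l from (x, y).
    h = int.from_bytes(hashlib.sha256(f"{x}|{y}".encode()).digest(), "big")
    candidate = (h % (1 << 32)) | 1
    while not _is_prime(candidate):
        candidate += 2
    return candidate

def setup(T: int):
    # Setup(lambda, T) -> pp = (ek, vk); here ek = vk = (N, T).
    # N must have unknown factorization in practice (RSA modulus or class group).
    N = 1000000007 * 998244353  # toy modulus, NOT secure
    return (N, T)

def eval_vdf(pp, x: int):
    # Eval(ek, x) -> (y, pi): T sequential squarings, inherently serial.
    N, T = pp
    y = x % N
    for _ in range(T):
        y = y * y % N
    # Wesolowski proof: pi = x^(2^T // l) mod N for the challenge prime l.
    l = _hash_to_prime(x, y)
    pi = pow(x, (1 << T) // l, N)
    return y, pi

def verify(pp, x: int, y: int, pi: int) -> bool:
    # Verify(vk, x, y, pi): since 2^T = q*l + r, y must equal pi^l * x^r.
    N, T = pp
    l = _hash_to_prime(x, y)
    r = pow(2, T, l)  # cheap: T enters only through this tiny exponent
    return (pow(pi, l, N) * pow(x, r, N)) % N == y

pp = setup(T=1 << 15)               # evaluation: 32768 sequential squarings
y, pi = eval_vdf(pp, 0xC0FFEE)
assert verify(pp, 0xC0FFEE, y, pi)  # verification: two exponentiations
```

The delay comes from the T sequential squarings in Eval, which parallel hardware cannot shortcut, while Verify touches T only through the small exponent r = 2^T mod l.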

The most important point about the VDF flow above is that evaluation requires a large amount of computation while verification requires only a relatively small amount. This asymmetry between computation and verification looks a bit like proof of work (PoW), and the criticism came quickly: "It sounds like we are back to proof of work." "We can't do this without burning a CPU." The document "VDF Is Not Proof of Work" responds: although VDF and traditional PoW algorithms share the properties of being "hard to compute" and "easy to verify", the core difference is that blockchain proof-of-work consensus is a parallelizable workload that succeeds only with some probability and is not a function, whereas a VDF is a sequential workload and a deterministic function.

Regarding the difference between VDF and PoW, Qiu Feijun points out in "Understanding the Verifiable Delay Function (VDF)" that PoW is directly defective as a source of random numbers and that VDF cannot directly replace PoW. But this does not mean that VDF cannot be used in consensus protocols. The reasons are as follows:

  1. PoW does not resist parallel-computing speedup, while VDF does. In fact, PoW's parallelizability is consistent with Nakamoto's "one CPU, one vote" idea, and resistance to parallel speedup would run counter to that purpose. With a VDF, a machine room full of processors has almost no advantage over a single-CPU machine.
  2. For a fixed difficulty setting d, PoW admits many valid solutions, which is precisely what gives the PoW consensus network stable throughput and motivates miners to compete. For a given input x, a VDF has a unique output (which is why it is called a function).

VDF has a wide range of applications, including strengthening publicly verifiable random numbers, solving the nothing-at-stake attack, and mitigating long-range attacks; these can be found in Qiu Feijun's article "Understanding the Verifiable Delay Function (VDF)". This article focuses on applications of the VDF idea in expected consensus, proof of replication, and proof of spacetime.

Proof of replication proves that a unique, dedicated copy of the data is stored on a server, which distinguishes it from a simple proof of storage. Ben Fisch pointed out at BPASE '18 that the strongest definition of replication cannot be achieved through cryptography alone; instead, by introducing the notion of a "rational adversary" and designing incentives so that the gain from cheating is very low and the cost of switching to an honest strategy at any time is also very low, no one has a motive to cheat. One idea for proof of replication is to use a VDF-style time-asymmetric encoding scheme (encoding is slow, but decoding is fast), as sketched below: if the server deletes the data, it cannot re-encode quickly enough to answer a challenge.
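The following toy Python sketch, loosely in the style of Sloth (Lenstra and Wesolowski, cited above), illustrates the time asymmetry: one encoding step costs a full modular exponentiation and the steps must run sequentially, while one decoding step is a single modular squaring, roughly log2(p) times cheaper. The prime, payload handling, and round count are illustrative assumptions, not Filecoin's actual replication encoding.

```python
P = 100003  # a prime with P % 4 == 3; real use needs a large prime

def _is_qr(x: int) -> bool:
    # Euler's criterion: x is a quadratic residue iff x^((P-1)/2) = 1.
    return pow(x, (P - 1) // 2, P) == 1

def slow_step(x: int) -> int:
    # Return the square root of x that is itself a quadratic residue.
    # For P = 3 (mod 4), exactly one of the two roots r, P-r is a QR,
    # so iterating this map stays inside the QR subgroup.
    r = pow(x, (P + 1) // 4, P)  # one full exponentiation: slow
    return r if _is_qr(r) else P - r

def fast_step(y: int) -> int:
    return y * y % P  # inverse of slow_step: one squaring, fast

def encode(data: int, rounds: int) -> int:
    x = data * data % P  # force the toy payload into the QR subgroup
    for _ in range(rounds):
        x = slow_step(x)
    return x

def check(c: int, rounds: int, data: int) -> bool:
    # Fast re-check that the stored encoding matches the data;
    # Sloth's real construction is an invertible permutation.
    for _ in range(rounds):
        c = fast_step(c)
    return c == data * data % P

c = encode(424242, rounds=5000)
assert check(c, rounds=5000, data=424242)
```

A server that kept the encoded replica can answer a challenge with cheap squaring steps; a server that discarded it would have to redo the slow sequential encoding and would miss the response deadline.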

The basic idea of proof of spacetime is to divide the proof into multiple epochs (`POST_EPOCHS` below); in each epoch the prover must answer a challenge from a random beacon that emits a challenge at regular intervals (in Filecoin, the blockchain acts as the random beacon). The generation of a proof of spacetime is shown in the pseudocode below.

 - *Step 1*: Generate `POST_EPOCHS` proofs:
   - `mix = challenge_seed`
   - `challenge_stream = NewChallengeStream(PublicParams)`
   - Repeat `POST_EPOCHS` times:
     - `(challenges, challenged_sectors) = challenge_stream(mix)`
     - Generate proof: `porep_proof = OnlinePoRep.prove(challenges, challenged_sectors, commR, replica)`
     - Add `porep_proof` to `porep_proofs`
     - Slow challenge generation from the previous proof `porep_proof`:
       - Run the VDF and generate a proof:
         - `x = ExtractVDFInput(porep_proof)`
         - `y, vdf_proof = VDF.eval(x)`
       - Add `vdf_proof` to `vdf_proofs`
       - Add `y` to `ys`
       - `mix = y`
 - *Step 3*: Output `porep_proofs`, `vdf_proofs`, `ys`

The verification of a proof of spacetime is shown in the pseudocode below.

 - *VDF Output Verification*:
   - For `i` in `0..POST_EPOCHS-1`:
     - assert: `VDF.verify(pp_vdf, ExtractVDFInput(porep_proofs[i]), ys[i], vdf_proofs[i])`
 - *Sequential Online PoRep Verification*:
   - assert: `OnlinePoRep.verify(commR, challenges_0, porep_proofs[0])`
   - For `i` in `1..POST_EPOCHS`:
     - Generate challenges: `for j in 0..CHALLENGE_COUNT: challenges[j] = H(H(ys[i-1])|j)`
     - assert: `OnlinePoRep.verify(commR, challenges, porep_proofs[i])`

VDF hardware can speed up VDF evaluation. Within each epoch, even a small acceleration gain produces a large difference in total proving time between the fastest and slowest provers. Filecoin calls the difference between the fastest proving time and the average proving time the "VDF acceleration gap" and defines it as a value in (0, 1). Since each epoch of the proof of spacetime must wait for a challenge from the random beacon, a faster prover can exploit the acceleration gap within a single epoch but not across the entire proof of spacetime. In other words, the fastest prover cannot accumulate an advantage across epochs, because it must wait for the beacon's new challenge each time.

The expected-consensus protocol is a probabilistic Byzantine fault-tolerant protocol. At a high level, it elects a leader in every round, with the expected number of blocks submitted by leaders per round equal to 1. Building on the proof of spacetime, a miner can prove that it holds X% of the system's total resources, so the expected share of blocks mined in any block cycle by a miner with X% of the resources is X%. The X% of resources is then divided into shares of 1% each, with distinct values x1, …, xn, and an independent random trial is performed on each xi:

  1. Ri = HASH(xi), a value uniformly distributed in (0, N)
  2. The miner takes R = Min(R1, …, Rn)

Then an unpredictable challenge random number is derived from x and the previous block, and the VDF is evaluated with this random number and a delay time T proportional to R as its parameters; a small simulation of the election step follows.
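Here is a hedged Python simulation of this election step, with illustrative names and parameters (not Filecoin's actual implementation): each 1% share draws one hash, the miner keeps the minimum, and over many rounds the win rate tracks the share count, matching the X% expectation above.

```python
import hashlib

def draw(seed: bytes, share_id: int) -> int:
    # One independent trial per 1% share: R_i = HASH(x_i), uniform in (0, N).
    return int.from_bytes(
        hashlib.sha256(seed + share_id.to_bytes(4, "big")).digest(), "big")

def miner_ticket(seed: bytes, shares: int) -> int:
    # The miner keeps R = min(R_1, ..., R_n) over its n shares.
    return min(draw(seed, i) for i in range(shares))

# Miner A holds 10 shares, miner B holds 30; over many rounds B should
# win roughly three times as often, matching the expectation argument.
wins = {"A": 0, "B": 0}
for round_no in range(10000):
    beacon = round_no.to_bytes(8, "big")  # stand-in for the random beacon
    a = miner_ticket(b"A" + beacon, 10)
    b = miner_ticket(b"B" + beacon, 30)
    wins["A" if a < b else "B"] += 1
print(wins)  # roughly {'A': 2500, 'B': 7500}
```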

Through these examples of proof of replication, proof of spacetime, and expected consensus, VDF points to a fundamental, viable future for resource-based proofs and delay-based consensus protocols, and it will lay the foundation for secure random numbers and more provable applications and algorithms, as Boneh and his co-authors concluded in the groundbreaking paper "Verifiable Delay Functions":

Given their many interesting applications, we hope this work will stimulate new practical uses of VDFs and continued study of their theoretical constructions. We still lack a theoretically optimal VDF: a simple, inherently sequential function that requires low parallelism to compute yet is very fast (e.g., logarithmic time) to verify. These requirements motivate the exploration of new problems that have not traditionally been used in cryptography. Ideally, we would like VDFs to be post-quantum secure as well.

IV. Summary

 

The Byzantine Generals Problem and the Paxos algorithm were proposed nearly 40 years ago, yet their real large-scale application came only in the past 10 years. Bitcoin has existed for 10 years since its introduction in 2008, and its PoW consensus mechanism has run stably for more than 10 years. The VDF was proposed on June 12, 2018, just one year ago. From Paxos to PoW to VDF, the theoretical development of distributed systems has traced a beautiful golden line. Behind this golden line lies the basic problem of computer systems: time and space. When Newton thought he understood it all, Einstein smiled; when Einstein thought he understood it all, Heisenberg and Schrödinger smiled; and when humans began to ponder time and space, God smiled.

The introduction of VDF is a major technological breakthrough in applied cryptography, distributed systems, and blockchains, laying the groundwork for the feasibility of blockchain resource consensus, proof of spacetime, and leader election. Beyond that, as a delay service and a random-beacon service, it can be applied in other fields as well. Compared with Paxos, VDF retains the open-system character of the blockchain and supports Byzantine fault tolerance. Compared with PoW, VDF inherits its "workload" effect while eliminating the waste of electricity. The past decade was the decade of Paxos and PoW; the next decade may well be the decade of VDF. I therefore propose establishing a VDF Chinese community to build a hardware and software ecosystem, pursue research together, and explore new application areas.

VDF Chinese Community: https://github.com/taoshengshi/research/blob/master/vdf.md

References in this article:

Research Progress on Paxos Consensus Algorithms

A Survey of Byzantine System Technology

CAP Theory and Distributed System Design

A Survey of Consensus Mechanisms in Cryptocurrency Systems

Analysis and Optimization of the Game Dilemma in the PoW Consensus Algorithm

Bitcoin's Academic Pedigree

VDF Is Not Proof of Work

Understanding the Verifiable Delay Function (VDF)

Development Status and Prospects of Blockchain Consensus Algorithms

The Part-Time Parliament

Proof of Replication Using Depth Robust Graphs (BPASE '18)

Verifiable Delay Functions (Crypto 2018)

Time is an illusion

  " For us who believe in physics, no matter how long the time is, the difference between the past, the present and the future is only a continual illusion. – Albert Einstein "

"The most valuable startup of the past 10 years raised no funds, has no employees, and has no cap table, so that anyone can invest." Both Bitcoin and Uber were launched around 2009. Today, Bitcoin's market capitalization is about $138 billion; Uber's valuation is about $75 billion. Where will they go from here?

Source: Bu Tian Stone (WeChat public account: butianys)

Author: tshi
