Vitalik: Understanding the new universal zero-knowledge proof scheme PLONK

Special thanks to Justin Drake, Karl Floersch, Hsiao-Wei Wang, Barry Whitehat, Dankrad Feist, and Zac Williamson for their review.

Note: the original author is Ethereum co-founder Vitalik Buterin.

The following is the translation:

Recently, Ariel Gabizon, Zac Williamson, and Oana Ciobotaru announced a new general-purpose zero-knowledge proof scheme called PLONK, whose awkward full name is "Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge." Although researchers have been working on improvements to general-purpose zero-knowledge proof protocols for many years, PLONK (along with the earlier but more complex SONIC and the more recent Marlin) brings a series of improvements that may greatly improve the overall usability and progress of such proofs.

The first improvement: although PLONK still requires a trusted setup procedure like the SNARKs used in Zcash, it is a "universal and updatable" trusted setup. This means two things. First, instead of a separate trusted setup for every program you want to prove things about, there is a single trusted setup for the whole scheme, after which it can be used with any program (up to some maximum size chosen when making the setup). Second, there is a way for multiple parties to participate in the trusted setup such that it is secure as long as any one of them is honest, and this multi-party procedure is fully sequential: first one person participates, then a second, then a third... Participants do not even all need to be known in advance; new participants can simply add themselves to the end. This makes it easy for the trusted setup to have a large number of participants, making it quite safe in practice.

The second improvement is that the "fancy cryptography" it relies on is a single standardized component called a "polynomial commitment." PLONK uses "Kate commitments," which are based on a trusted setup and elliptic curve pairings, but you can swap them out for other schemes, such as FRI (which would turn PLONK into a kind of STARK) or DARK (based on hidden-order groups). This means the scheme is theoretically compatible with any (achievable) tradeoff between proof size and security assumptions.

[Figure: the same arithmetization can be combined with different polynomial commitment schemes, each with its own proof-size and security tradeoffs]

This means that use cases that need different tradeoffs between proof size and security assumptions (or developers with different views on this question) can still share the bulk of the same tooling for "arithmetization," the process of converting a program into a set of polynomial equations that are then checked with a polynomial commitment. If this kind of scheme is widely adopted, we can expect rapid progress in improving shared arithmetization techniques.

How does PLONK work?

Let's start by explaining how PLONK works, focusing on the polynomial equations and leaving until later how those equations are verified. A key ingredient of PLONK, as in the QAPs used in SNARKs, is a procedure for converting a problem of the form "give me a value X such that a specific program P that I give you, when evaluated with X as input, gives some specific result Y" into the problem "give me a set of values that satisfies a set of math equations." The program P can represent many things; for example, the problem might be "give me a solution to this sudoku," which you would encode by setting P to be a sudoku verifier plus some encoded initial values and setting Y to 1 (i.e. "yes, this solution is correct"); a satisfying input X would then be a valid solution to the sudoku. This is done by representing P as a circuit of addition and multiplication gates and converting it into a system of equations, where the variables are the values on all the wires and there is one equation per gate (e.g. x6 = x4 * x7 for multiplication, x8 = x5 + x9 for addition).

Here is an example of the problem of finding x such that P(x) = x^3 + x + 5 = 35 (hint: x = 3):

[Figure: an arithmetic circuit computing x^3 + x + 5]

We label the gates and wires as follows:

[Figure: the same circuit with its gates and wires labeled]

On the gates and wires, we have two types of constraints: gate constraints (equations between the wires attached to the same gate, e.g. a1 * b1 = c1) and copy constraints (claims about the equality of different wires anywhere in the circuit, e.g. a0 = a1 = b1 = b2 = a3 or c0 = a1). We need to create a structured system of equations, which will ultimately reduce to a very small number of polynomial equations, to represent both kinds of constraints.

In PLONK, these equations are set up in the following form (where L = left, R = right, O = output, M = multiplication, C = constant):

QL_i * a_i + QR_i * b_i + QO_i * c_i + QM_i * a_i * b_i + QC_i = 0

Each Q value is a constant; the constants in each equation (and the number of equations) are different for each program. Each lowercase value is a variable provided by the user: a_i is the left input wire of the i-th gate, b_i is the right input wire, and c_i is the output wire of the i-th gate. For an addition gate, we set:

QL_i = 1, QR_i = 1, QO_i = -1, QM_i = 0, QC_i = 0

Plugging these constants into the equation and simplifying gives us a_i + b_i - c_i = 0, which is exactly the constraint we want. For a multiplication gate, we set:

QL_i = 0, QR_i = 0, QO_i = -1, QM_i = 1, QC_i = 0

For a constant gate that sets a_i to some constant x, we set:

QL_i = 1, QR_i = 0, QO_i = 0, QM_i = 0, QC_i = -x

You may have noticed that each end of a wire, and indeed each wire in a set of wires that obviously must have the same value (e.g. x), corresponds to a distinct variable; so far nothing forces the output of one gate to equal the input of another gate (this is what we call "copy constraints"). PLONK of course does have a way of enforcing copy constraints, which we will get to later. So now we have a problem where a prover wants to prove that they have a bunch of values X_ai, X_bi and X_ci that satisfy a bunch of equations of the same form. This is still a big problem, but unlike "find a satisfying input to this computer program," it is a very structured big problem, and we have mathematical tools to "compress" it.
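
To make the gate equation concrete, here is a small Python sketch (my own illustrative gate layout, not taken from the original post) that checks QL_i * a_i + QR_i * b_i + QO_i * c_i + QM_i * a_i * b_i + QC_i = 0 for one possible encoding of the x^3 + x + 5 = 35 example with the witness x = 3:

```python
# Each gate is a tuple of selectors (QL, QR, QO, QM, QC);
# each witness row is the wire values (a_i, b_i, c_i) for that gate.
x = 3
gates = [
    (0, 0, -1, 1, 0),    # multiplication gate: x * x = x^2
    (0, 0, -1, 1, 0),    # multiplication gate: x^2 * x = x^3
    (1, 1, -1, 0, 0),    # addition gate: x^3 + x
    (1, 1, -1, 0, 0),    # addition gate: (x^3 + x) + 5
    (1, 0, 0, 0, -5),    # constant gate pinning a wire to 5
    (1, 0, 0, 0, -35),   # constant gate pinning the claimed output to 35
]
witness = [
    (x, x, x * x),
    (x * x, x, x**3),
    (x**3, x, x**3 + x),
    (x**3 + x, 5, 35),
    (5, 0, 0),
    (35, 0, 0),
]

for (QL, QR, QO, QM, QC), (a, b, c) in zip(gates, witness):
    assert QL * a + QR * b + QO * c + QM * a * b + QC == 0
print("all gate constraints satisfied")
```

Note that without copy constraints nothing here forces, say, the output wire of the first gate to equal the left input of the second; that is exactly what the permutation argument described below adds.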

From linear systems to polynomials

If you know about STARKs or QAPs, the mechanism described in this next section will feel familiar, but if you don't, that's fine too. The main idea here is to understand a polynomial as a mathematical tool for encapsulating a large number of values into a single object. Usually, we think of polynomials in "coefficient form," that is, as an expression like:

y = x^3 - 5*x^2 + 7*x - 2

But we can also view polynomials in "evaluation form." For example, we can think of the above as the polynomial of degree < 4 that takes the values (-2, 1, 0, 1) at the coordinates (0, 1, 2, 3).

[Figure: the same polynomial plotted, with its evaluations at x = 0, 1, 2, 3 highlighted]
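
As a small illustration of moving between the two forms, here is a sketch (my own, done over the rationals for readability; PLONK itself works over a prime field) that recovers the coefficient form of the polynomial above from its evaluations (-2, 1, 0, 1) at (0, 1, 2, 3) via Lagrange interpolation:

```python
from fractions import Fraction

def lagrange_interp(xs, ys):
    # Returns coefficients [c0, c1, ...] (lowest degree first) of the unique
    # polynomial of degree < len(xs) passing through the points (xs[i], ys[i]).
    n = len(xs)
    coeffs = [Fraction(0)] * n
    for i in range(n):
        # Build the i-th Lagrange basis numerator prod_{j != i} (x - xs[j])
        # as a coefficient list, and its denominator prod_{j != i} (xs[i] - xs[j]).
        basis = [Fraction(1)]
        denom = Fraction(1)
        for j in range(n):
            if j == i:
                continue
            basis = [Fraction(0)] + basis          # multiply by x ...
            for k in range(len(basis) - 1):
                basis[k] -= basis[k + 1] * xs[j]   # ... then subtract xs[j] * (old basis)
            denom *= xs[i] - xs[j]
        for k in range(n):
            coeffs[k] += ys[i] * basis[k] / denom
    return coeffs

print(lagrange_interp([0, 1, 2, 3], [-2, 1, 0, 1]))
# [Fraction(-2, 1), Fraction(7, 1), Fraction(-5, 1), Fraction(1, 1)]
# i.e. -2 + 7x - 5x^2 + x^3, matching the coefficient form above.
```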

Now here is the next step: a system of many equations of the same form can be re-interpreted as a single equation over polynomials. For example, suppose we have the system:

2*x1 - x2 + 3*x3 = 8
x1 + 4*x2 - 5*x3 = 5
8*x1 - x2 - x3 = -2

We define four polynomials in evaluation form: L(x) is the polynomial of degree < 3 that takes the values (2, 1, 8) at the coordinates (0, 1, 2), and at those same coordinates M(x) takes the values (-1, 4, -1), R(x) takes (3, -5, -1) and O(x) takes (8, 5, -2). It is perfectly fine to define a polynomial this way; such a polynomial can be converted to coefficient form using Lagrange interpolation. Now consider the following equation:

L(x) * x1 + M(x) * x2 + R(x) * x3 - O(x) = Z(x) * H(x)

Here, Z(x) is shorthand for (x - 0) * (x - 1) * (x - 2), the minimal (nontrivial) polynomial that returns zero over the evaluation domain (0, 1, 2). A solution to this equation (x1 = 1, x2 = 6, x3 = 4, H(x) = 0) is also a solution to the original system of equations, except that the original system does not need H(x). Also note that in this case H(x) is conveniently zero, but in more complex cases H will generally need to be nonzero.
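
Here is a quick pointwise sanity check of this claim, as a sketch: the left-hand side L(x)*x1 + M(x)*x2 + R(x)*x3 - O(x) vanishes on the whole domain (0, 1, 2), which is exactly what it means for it to be divisible by Z(x).

```python
# Evaluation-form values of L, M, R, O at the domain points 0, 1, 2.
L = {0: 2, 1: 1, 2: 8}
M = {0: -1, 1: 4, 2: -1}
R = {0: 3, 1: -5, 2: -1}
O = {0: 8, 1: 5, 2: -2}
x1, x2, x3 = 1, 6, 4

for pt in (0, 1, 2):
    assert L[pt] * x1 + M[pt] * x2 + R[pt] * x3 - O[pt] == 0
print("the left-hand side vanishes on the domain, so it is divisible by Z(x)")
```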

So now we know that we can represent a large set of constraints with a small number of mathematical objects (the polynomials). But in the equations we built above to represent the gate constraints, the x1, x2, x3 variables are different in each equation. We can handle this with the same method, by making the variables themselves polynomials rather than constants. And so we get:

QL(x) * a(x) + QR(x) * b(x) + QO(x) * c(x) + QM(x) * a(x) * b(x) + QC(x) = 0

As before, each Q polynomial is a parameter that is generated from the program being verified, and the a, b, c polynomials are the user-provided inputs.

Copy constraints

Now let's get back to "connecting" the wires. So far, all we have is a set of disjoint equations about disjoint values, which are independently easy to satisfy: constant gates can be satisfied by setting the value to the constant, and addition and multiplication gates can be satisfied simply by setting all the wires to zero! To make the problem actually challenging (and actually represent the problem encoded in the original circuit), we need to add equations that verify the "copy constraints" (e.g. constraints such as a(5) = c(7) or c(10) = c(12)), and this requires some clever tricks.

Our strategy is to design a "coordinate pair accumulator," a polynomial p(x) that works as follows. First, let two polynomials X(x) and Y(x) represent the x and y coordinates of a set of points (e.g. to represent the set ((0, -2), (1, 1), (2, 0), (3, 1)) you can set X(x) = x and Y(x) = x^3 - 5*x^2 + 7*x - 2). Our goal is to have p(x) represent all the points up to (but not including) the given position, so p(0) starts at 1, p(1) represents only the first point, p(2) the first and second points, and so on. We construct p(x) by selecting two constants v1 and v2 at "random" and using the constraints p(0) = 1 and p(x+1) = p(x) * (v1 + X(x) + v2 * Y(x)), at least within the domain (0, 1, 2, 3). For example, letting v1 = 3 and v2 = 2, we get:

x                      0     1     2     3     4
X(x)                   0     1     2     3
Y(x)                  -2     1     0     1
v1 + X(x) + v2*Y(x)   -1     6     5     8
p(x)                   1    -1    -6   -30  -240

And the same computation with the X coordinates permuted to (0, 3, 2, 1):

x                      0     1     2     3     4
X(x)                   0     3     2     1
Y(x)                  -2     1     0     1
v1 + X(x) + v2*Y(x)   -1     8     5     6
p(x)                   1    -1    -8   -40  -240

Notice that (other than the first column) each p(x) value equals the value to its left multiplied by the factor v1 + X(x) + v2*Y(x) from the column to its left.

The result we care about is p(4) = -240. Now consider the case where, instead of X(x) = x, we set X(x) = 2/3 * x^3 - 4*x^2 + 19/3 * x (that is, the polynomial that takes the values (0, 3, 2, 1) at the coordinates (0, 1, 2, 3)).

If you run the same procedure, as in the second table above, you will find that you again get p(4) = -240.

This is no coincidence (in fact, if you choose v1 and v2 randomly from a sufficiently large field, it will almost never be a coincidence). Rather, it happens because Y(1) = Y(3), so if you "swap" the X coordinates of the points (1, 1) and (3, 1), you are not changing the set of points; and because the accumulator encodes a set (multiplication does not care about order), the final value stays the same.
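
Here is the accumulator from the two tables above as a few lines of Python, run once with the original X coordinates and once with the swapped ones; both orderings give -240:

```python
# Coordinate-pair accumulator: multiply together (v1 + X + v2*Y) over all points,
# which is the same as p(4) computed step by step in the tables above.
def accumulate(X_vals, Y_vals, v1=3, v2=2):
    p = 1
    for X, Y in zip(X_vals, Y_vals):
        p *= v1 + X + v2 * Y
    return p

Y = [-2, 1, 0, 1]
print(accumulate([0, 1, 2, 3], Y))   # -240
print(accumulate([0, 3, 2, 1], Y))   # -240: swapping the X coordinates of two
                                     # points with equal Y values changes nothing
```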

Now we can start to see the basic technique we will use to prove copy constraints. First, consider the simple case where we only want to prove copy constraints within a single set of wires (e.g. we want to prove a(1) = a(3)). We make two coordinate accumulators: one where X(x) = x and Y(x) = a(x), and another where Y(x) = a(x) but X'(x) is the polynomial that evaluates to the permutation that flips (or otherwise rearranges) the values in each copy constraint; in the a(1) = a(3) case, this means the permutation would start 0 3 2 1 4.... The first accumulator compresses ((0, a(0)), (1, a(1)), (2, a(2)), (3, a(3)), (4, a(4))..., and the second compresses ((0, a(0)), (3, a(1)), (2, a(2)), (1, a(3)), (4, a(4)).... The only way the two can give the same result is if a(1) = a(3).

To prove copy constraints between a, b and c, we use the same procedure, but we "accumulate" together points from all three polynomials. We assign each of a, b, c a range of X coordinates (e.g. a gets Xa(x) = x, i.e. 0...n-1, b gets Xb(x) = n+x, i.e. n...2n-1, and c gets Xc(x) = 2n+x, i.e. 2n...3n-1). To prove copy constraints that hop between different sets of wires, the "alternate" X coordinates are a permutation across all three sets. For example, if we want to prove a(2) = b(4) with n = 5, then X'a(x) would have the evaluations 0 1 9 3 4 and X'b(x) would have the evaluations 5 6 7 8 2 (notice that the 2 and the 9 are flipped, where 9 corresponds to the b4 wire). Then, instead of checking equality within one run of the procedure (i.e. checking p(4) = p'(4) as before), we check the product of the three different runs on each side:

Pa(n) * Pb(n) * Pc(n) = Pa'(n) * Pb'(n) * Pc'(n)

The product of the three p(n) evaluations on each side accumulates all the coordinate pairs in a, b and c together, so this allows us to do the same check as before, except that now we can verify copy constraints not only between positions within one of the three sets of wires a, b or c, but also between one set of wires and another (e.g. as in a(2) = b(4)).
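
Below is a toy sketch of this product check for the n = 5, a(2) = b(4) example (the wire values and the random v1, v2 are my own choices); the two products agree precisely because the permuted coordinates only exchange positions that carry equal values.

```python
import random

def accumulate(values, x_coords, v1, v2):
    # Multiply together (v1 + X + v2*Y) for every coordinate pair (X, Y).
    p = 1
    for X, Y in zip(x_coords, values):
        p *= v1 + X + v2 * Y
    return p

n = 5
a = [7, 3, 42, 1, 9]
b = [4, 8, 6, 2, 42]        # b[4] == a[2]: the copy constraint being checked
c = [5, 0, 1, 2, 3]

v1 = random.randrange(1, 10**9)
v2 = random.randrange(1, 10**9)

Xa, Xb, Xc = list(range(0, 5)), list(range(5, 10)), list(range(10, 15))
Xa_perm = [0, 1, 9, 3, 4]   # coordinate 2 replaced by 9 (the b4 wire)
Xb_perm = [5, 6, 7, 8, 2]   # coordinate 9 replaced by 2 (the a2 wire)
Xc_perm = Xc                # c is untouched by this copy constraint

lhs = accumulate(a, Xa, v1, v2) * accumulate(b, Xb, v1, v2) * accumulate(c, Xc, v1, v2)
rhs = accumulate(a, Xa_perm, v1, v2) * accumulate(b, Xb_perm, v1, v2) * accumulate(c, Xc_perm, v1, v2)
assert lhs == rhs           # holds because a[2] == b[4]; with unequal values it would
                            # fail for almost every choice of v1, v2
print("permutation product check passed")
```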

And that's all!

Putting it all together

In reality, all of this math is done not over integers but over a prime field; check out the "modular math interlude" section here for a description of what prime fields are. Also, for mathematical reasons best appreciated by reading up on fast Fourier transforms, instead of representing the wire indices with x = 0...n-1 we will use powers of ω: 1, ω, ω^2, ..., ω^(n-1), where ω is a high-order root of unity in the field. This changes nothing about the math, except that the coordinate pair accumulator constraint-checking equation becomes p(ω * x) = p(x) * (v1 + X(x) + v2 * Y(x)) instead of p(x + 1) = p(x) * (v1 + X(x) + v2 * Y(x)), and instead of using 0..n-1, n..2n-1 and 2n..3n-1 as coordinates we use ω^i, g * ω^i and g^2 * ω^i, where g can be some random high-order element in the field.
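
As a tiny sketch of what this evaluation domain looks like (with my own toy parameters, far smaller than a real proving field): pick a prime p with n dividing p - 1, derive an n-th root of unity ω, and check that the powers of ω are exactly the roots of x^n - 1.

```python
p, n = 337, 8      # a toy prime field with n dividing p - 1 (real systems use ~256-bit fields)

# Find an n-th root of unity omega of order exactly n. Since n is a power of two,
# it is enough to check that omega^(n/2) != 1.
omega = next(w for w in (pow(g, (p - 1) // n, p) for g in range(2, p))
             if pow(w, n // 2, p) != 1)

domain = [pow(omega, i, p) for i in range(n)]
assert len(set(domain)) == n                      # the powers 1, omega, ..., omega^(n-1) are distinct
assert all((x**n - 1) % p == 0 for x in domain)   # Z(x) = x^n - 1 vanishes on the whole domain
print("omega =", omega)
print("domain =", domain)
```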

Now let's write down all the equations we need to check. First, the main gate constraint satisfaction check:

QL(x) * a(x) + QR(x) * b(x) + QO(x) * c(x) + QM(x) * a(x) * b(x) + QC(x) = Z(x) * H(x)

Then the polynomial accumulator transition constraints:

Pa(ω * x) - Pa(x) * (v1 + x + v2 * a(x)) = Z(x) * H1(x)
Pb(ω * x) - Pb(x) * (v1 + g * x + v2 * b(x)) = Z(x) * H2(x)
Pc(ω * x) - Pc(x) * (v1 + g^2 * x + v2 * c(x)) = Z(x) * H3(x)
Pa'(ω * x) - Pa'(x) * (v1 + σa(x) + v2 * a(x)) = Z(x) * H4(x)
Pb'(ω * x) - Pb'(x) * (v1 + σb(x) + v2 * b(x)) = Z(x) * H5(x)
Pc'(ω * x) - Pc'(x) * (v1 + σc(x) + v2 * c(x)) = Z(x) * H6(x)

Then the polynomial accumulator starting and ending constraints:

Pa(1) = Pb(1) = Pc(1) = Pa'(1) = Pb'(1) = Pc'(1) = 1
Pa(ω^n) * Pb(ω^n) * Pc(ω^n) = Pa'(ω^n) * Pb'(ω^n) * Pc'(ω^n)

The polynomials provided by the user are:

  1. The wire assignments a(x), b(x), c(x);
  2. The coordinate accumulators Pa(x), Pb(x), Pc(x), Pa'(x), Pb'(x), Pc'(x);
  3. The quotients H(x) and H1(x)…H6(x);

The program-specific polynomials that the prover and the verifier need to compute ahead of time are:

  1. QL(x), QR(x), QO(x), QM(x), QC(x), which together represent the gates in the circuit (note that QC(x) encodes the public inputs, so it may need to be computed or modified at runtime);
  2. The "permutation polynomials" σa(x), σb(x) and σc(x), which encode the copy constraints between the a, b and c wires;

Note that the verifier only needs to store commitments to these polynomials. The only remaining polynomial in the above equations is Z(x) = (x - 1) * (x - ω) * … * (x - ω^(n-1)), which is designed to evaluate to zero at all of those points. Fortunately, ω can be chosen to make this polynomial very easy to evaluate: the usual technique is to choose ω so that ω^n = 1, in which case Z(x) = x^n - 1.

The only constraint on v1 and v2 is that the user must not be able to choose a(x), b(x) or c(x) after v1 and v2 become known, so we can satisfy this by computing v1 and v2 from a hash of the commitments to a(x), b(x) and c(x).

So now we have turned the program satisfaction problem into a simple problem of satisfying a handful of equations over polynomials. There are also some optimizations in PLONK that allow us to remove many of the polynomials from the equations above, but for simplicity's sake I will not discuss them here. The polynomials themselves, however, whether the program-specific parameters or the user inputs, are big.

So the next question is: how do we get around this so that we can make the proof short?

Polynomial commitment

A polynomial commitment is a short object that "represents" a polynomial and allows you to verify evaluations of that polynomial without actually containing all of the data in the polynomial. That is, if someone gives you a commitment c representing P(x), they can give you a proof that can convince you, for some specific z, what the value of P(z) is. There is a further mathematical result saying that, over a sufficiently large field, if certain kinds of equations about polynomials hold at a random z (with the equations fixed before z is known), then those same equations hold for the polynomials as a whole. For example, if P(z) * Q(z) + R(z) = S(z) + 5, then it is overwhelmingly likely that P(x) * Q(x) + R(x) = S(x) + 5 in general. Using such polynomial commitments, we can easily check all of the polynomial equations above: make the commitments, use them as input to generate z, prove what the evaluations of each polynomial at z are, and then run the equations on these evaluations instead of on the original polynomials. How do these commitments work?
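
As a quick numerical illustration of why evaluating at a single random point is convincing (my own toy check, not part of the protocol itself): two distinct polynomials of degree at most d can agree on at most d points, so over a large field a random z will essentially never satisfy a false equation.

```python
import random

p = 2**61 - 1          # a large prime (a Mersenne prime), standing in for the proving field

def ev(coeffs, z):
    # Evaluate a polynomial given lowest-degree-first coefficients, mod p.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * z + c) % p
    return acc

P = [5, 0, 2, 1]       # x^3 + 2x^2 + 5
S = [5, 3, 1, 1]       # a different polynomial; P - S = x^2 - 3x, zero only at x = 0 and x = 3
z = random.randrange(p)
print(ev(P, z) == ev(S, z))   # almost certainly False: the two agree at only 2 of ~2^61 points
```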

There are two parts: the commitment to the polynomial P(x) -> c, and the opening to a value P(z) at some z. To make a commitment, there are many techniques: one example is FRI, another is Kate commitments, which I will describe below. To prove an opening, it turns out that there is a simple generic "subtract and divide" trick: to prove that P(z) = a, you prove that

(P(x) - a) / (x - z)

is also a polynomial (using another polynomial commitment). This works because if the quotient is a polynomial (i.e. it is not a fraction), then x - z is a factor of P(x) - a, so (P(x) - a)(z) = 0, and therefore P(z) = a. Try it with some polynomial, e.g. P(x) = x^3 + 2*x^2 + 5 with (z = 6, a = 293), then try (z = 6, a = 292) and see how it fails (if you're lazy, see the results on WolframAlpha: (1), (2)).
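
Here is that example as a sketch: synthetic division of P(x) - a by (x - z) leaves the remainder P(z) - a, so the quotient is a genuine polynomial exactly when a = P(z).

```python
def divide_by_linear(coeffs, z):
    # Synthetic division of a polynomial (highest-degree coefficient first)
    # by (x - z); returns (quotient coefficients, remainder).
    out, acc = [], 0
    for c in coeffs:
        acc = acc * z + c
        out.append(acc)
    return out[:-1], out[-1]

P = [1, 2, 0, 5]                 # x^3 + 2x^2 + 5
z = 6

for a in (293, 292):
    quotient, remainder = divide_by_linear(P[:-1] + [P[-1] - a], z)   # divide P(x) - a by (x - z)
    print("a =", a, "quotient =", quotient, "remainder =", remainder)
# a = 293 -> remainder 0: the quotient is a real polynomial, so P(6) = 293
# a = 292 -> remainder 1: (P(x) - 292) / (x - 6) is not a polynomial
```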

Also note a generic optimization: to prove the openings of multiple polynomials at multiple points at the same time, after committing to the outputs, apply the subtract-and-divide trick to a random linear combination of the polynomials and the outputs.
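
Here is a sketch of that batching idea with two polynomials and a single point (the polynomials, the point and the modulus are my own toy choices): take random weights r1, r2, combine the polynomials and the claimed outputs with the same weights, and do the subtract-and-divide step once.

```python
import random

MOD = 2**61 - 1                        # toy prime field

def ev(coeffs, z):                     # evaluate, lowest-degree-first coefficients
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * z + c) % MOD
    return acc

def divide_by_linear(coeffs, z):       # synthetic division by (x - z), lowest-degree first
    out, acc = [], 0
    for c in reversed(coeffs):
        acc = (acc * z + c) % MOD
        out.append(acc)
    return list(reversed(out[:-1])), out[-1]

P1 = [5, 0, 2, 1]                      # x^3 + 2x^2 + 5
P2 = [7, 1, 0, 3]                      # 3x^3 + x + 7
z = 11
a1, a2 = ev(P1, z), ev(P2, z)          # the claimed openings

r1, r2 = random.randrange(MOD), random.randrange(MOD)
combined = [(r1 * u + r2 * w) % MOD for u, w in zip(P1, P2)]
combined[0] = (combined[0] - (r1 * a1 + r2 * a2)) % MOD     # subtract the combined claimed value

quotient, remainder = divide_by_linear(combined, z)
assert remainder == 0                  # a single quotient certifies both openings at once
print("batched opening check passed; quotient degree:", len(quotient) - 1)
```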

So how do the commitments themselves work? Kate commitments, fortunately, are much simpler than FRI. A trusted-setup procedure generates a set of elliptic curve points G, G * s, G * s^2, ..., G * s^n, as well as G2 * s, where G and G2 are the generators of two elliptic curve groups and s is a secret that is forgotten once the setup is finished (note that there is a multi-party version of this setup that is secure as long as at least one of the participants forgets their share of the secret). These points are published and considered to be the "proving key" of the scheme; anyone who needs to make a polynomial commitment must use these points. A commitment to a polynomial of degree d is made by multiplying each of the first d+1 points in the proving key by the corresponding coefficient of the polynomial and adding the results together.

Notice that this provides an "evaluation" of the polynomial at s, without knowing s. For example, x^3 + 2*x^2 + 5 would be represented by (G * s^3) + 2 * (G * s^2) + 5 * G. We can use the notation [P] to refer to P encoded in this way (i.e. G * P(s)). When doing the subtract-and-divide trick, you can prove that the two polynomials actually satisfy the relation by using elliptic curve pairings: check that e([P] - G * a, G2) = e([Q], [x] - G2 * z) as a proxy for checking that P(x) - a = Q(x) * (x - z).
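
Here is a toy end-to-end sketch of a Kate commitment and opening using the bn128 pairing from the py_ecc library (the library choice is an assumption on my part; the original post does not prescribe one, and a real deployment would use an optimized implementation and a proper multi-party setup rather than a hard-coded secret).

```python
from py_ecc import bn128

q = bn128.curve_order

def setup(s, max_degree):
    # Proving key: G1 * s^i for i = 0..max_degree, plus G2 * s for the opening check.
    pk_g1 = [bn128.multiply(bn128.G1, pow(s, i, q)) for i in range(max_degree + 1)]
    return pk_g1, bn128.multiply(bn128.G2, s % q)

def commit(coeffs, pk_g1):
    # [P] = sum_i coeffs[i] * (G1 * s^i) = G1 * P(s); coefficients are lowest-degree first.
    acc = None
    for c, pt in zip(coeffs, pk_g1):
        acc = bn128.add(acc, bn128.multiply(pt, c % q))
    return acc

def evaluate(coeffs, z):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * z + c) % q
    return acc

def quotient(coeffs, z, a):
    # Synthetic division of P(x) - a by (x - z); the remainder must be zero.
    rev, out, acc = list(reversed(coeffs)), [], 0
    rev[-1] = (rev[-1] - a) % q
    for c in rev:
        acc = (acc * z + c) % q
        out.append(acc)
    assert out[-1] == 0
    return list(reversed(out[:-1]))

# Example: commit to P(x) = x^3 + 2x^2 + 5 and open it at z = 6 (P(6) = 293).
# The "secret" s is hard-coded here only for illustration.
P = [5, 0, 2, 1]
pk_g1, s_g2 = setup(s=123456789, max_degree=3)
comm_P = commit(P, pk_g1)
z = 6
a = evaluate(P, z)
comm_Q = commit(quotient(P, z, a), pk_g1)

# Pairing check: e([P] - G*a, G2) == e([Q], [x] - G2*z), a proxy for P(x) - a = Q(x)*(x - z).
lhs = bn128.pairing(bn128.G2, bn128.add(comm_P, bn128.neg(bn128.multiply(bn128.G1, a))))
rhs = bn128.pairing(bn128.add(s_g2, bn128.neg(bn128.multiply(bn128.G2, z))), comm_Q)
assert lhs == rhs
print("Kate opening verified: P(6) =", a)
```

The pairing check at the end is exactly the e([P] - G * a, G2) = e([Q], [x] - G2 * z) test described above: both sides reduce to the pairing of G and G2 raised to P(s) - a and to Q(s) * (s - z) respectively, which match because P(x) - a = Q(x) * (x - z).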

But more recently, other types of polynomial commitments have been appearing. A new scheme called DARK ("Diophantine Arguments of Knowledge") uses "hidden order groups," such as class groups, to implement another kind of polynomial commitment. Hidden order groups are unique because they allow you to compress arbitrarily large numbers into group elements, even numbers much larger than the size of the group element, in a way that cannot be "spoofed." Constructions from VDFs to accumulators, range proofs and polynomial commitments can be built on top of this. Another option is to use bulletproofs, which use regular elliptic curve groups at the cost of a much longer verification time. Because polynomial commitments are much simpler than full zero-knowledge proof schemes, we can expect more such schemes to be created in the future.

Recap

Finally, let's go over the scheme again. Given a program P, you convert it into a circuit and generate a set of equations that look like this:

QL_i * a_i + QR_i * b_i + QO_i * c_i + QM_i * a_i * b_i + QC_i = 0

You then convert this set of equations into a single polynomial equation:

QL(x) * a(x) + QR(x) * b(x) + QO(x) * c(x) + QM(x) * a(x) * b(x) + QC(x) = Z(x) * H(x)

You also generate a list of copy constraints from the circuit. From these copy constraints you generate the three polynomials representing the permuted wire indices: σa(x), σb(x), σc(x). To generate a proof, you compute the values of all the wires and convert them into three polynomials: a(x), b(x), c(x). You also compute the six "coordinate pair accumulator" polynomials as part of the permutation-check argument. Finally, you compute the cofactors Hi(x).

There is a set of equations between the polynomials that needs to be checked; you can do this by making commitments to the polynomials, opening them at some random z (along with proofs that the openings are correct), and running the equations on these evaluations instead of on the original polynomials. The proof itself is just a few commitments and openings and can be checked with a few equations. And that's it!
