Exploring Zero-Knowledge Proofs | King Arthur's "Random" Challenge: From Interactive to Non-Interactive Zero-Knowledge Proofs

Author: Yu Guo

Source: Ambi Lab

 

"Challenges are at times an indication of Lord's trust in you." Challenges, sometimes a manifestation of God trusting you.

D. Todd Christofferson This article continues the long-term discussion of the mechanism behind the zero-knowledge proof, and hopes to help you understand the general outline of this type of "modern cryptography tool." This article is about 8000 words, a small amount of mathematical formula.

 

Interaction and challenge

The zero-knowledge proof systems introduced earlier are all "interactive": the verifier Bob must supply one or more "random numbers" as challenges during the interaction. In the "graph three-coloring problem" (see "Series 2"), for example, Bob must "repeatedly" pick a random edge to challenge Alice's answer until he is satisfied, while Alice's probability of cheating decays "exponentially." The "foundation" of Bob's conviction rests on whether the random numbers he picks are sufficiently random. If Alice could predict Bob's random numbers in advance, disaster would follow: the real world would degenerate into an "ideal world," and Alice could instantly promote herself to a "simulator," fooling Bob with superpowers.

In "Series 3", we analyzed the Schnorr protocol. Although the prover Bob only needs to pick a random number c to challenge Alice and let her calculate a value of z , Bob must not allow Alice to predict c . Any knowledge, otherwise, Alice will also become a simulator.

The importance of random numbers is self-evident:

The random-number challenge is the "foundation of trust" of an interactive zero-knowledge proof.

However, the "interaction process" limits the application scenario. If you can turn interactive zero-knowledge proof into "non-interactive"? This will be very, very exciting. The so-called non-interaction can be seen as a "one round" proof process, that is, Alice sends a certificate directly to Bob for verification.

Non-interactive zero-knowledge proof, in English Non-Interactive Zero Knowledge, or NIZK for short, means the entire proof is encoded as a "string." It can be written on a piece of paper and sent to any verifier by mail or chat tool; the string can even be posted on Github for anyone to download and verify.

In the blockchain world, "NIZK" can be part of a consensus protocol, because a transaction needs to be verified by many miners. Imagine if the sender of a transaction had to interact with every miner and be challenged by each of them: the consensus process would be unbearably slow. A non-interactive zero-knowledge proof, by contrast, can be broadcast directly to all miners, each of whom verifies it on their own.

Some friends may ask: wouldn't it be enough to have just one miner do the challenging? Encode the interaction transcript between that miner and the transaction sender into a proof, broadcast it to the other miners, and let them simply trust that the challenge process was credible. But clearly this requires trusting the first interacting miner as a trusted third party, and a third party… doesn't seem like a good idea.

From here on we will simply say "NIZK" for non-interactive zero-knowledge proof. An ideal NIZK would seem to involve no third party meddling at all.

Confusion caused by "non-interaction"

A non-interactive zero-knowledge proof, NIZK, if it exists, is far more powerful than an interactive proof:

  • An interactive proof can convince only one verifier; a NIZK can convince many verifiers, even everyone.
  • An interactive proof is valid only at the moment of interaction; a NIZK remains valid forever.

NIZK spans not only space but also time.

Sounds beautiful, doesn't it? But……

Let us repeat a conclusion from the previous section:

The random-number challenge is the "foundation of trust" of an interactive zero-knowledge proof.

But if NIZK loses the challenge process, what happens?

Recall the proof of the "zero-knowledge" property (see "Series 2"): it requires constructing a simulator (an algorithm) in the "ideal world" that also interacts with the verifier Bob, and Bob has no way to tell whether the party he faces is the real Alice or a simulator.

Now consider the "non-interactive" setting of NIZK. Suppose "I" hand "you" a piece of paper on which a "true" proof X is written, and after reading it "you" genuinely believe me. Since the protocol is "zero-knowledge," if we replace "I" with a simulator, the simulator can likewise "forge" a fake proof Y that also convinces "you."

OK, here comes the problem:

  • How do "you" distinguish X from Y? Of course you can't; the protocol is zero-knowledge, so you must not be able to distinguish them.
  • Then "I" could equally well hand Y to "you." Couldn't anyone who isn't "I" deceive you the same way?

Something is off here, isn't it? Please stop and think about it for two minutes.

(Two minutes later…)

Because NIZK has no interaction, there is no challenge process; the whole proof is computed by Alice alone. In theory Alice can write whatever she wants, and nobody can stop her. For example, Alice could write down exactly the fake proof Y from the "ideal world."

Friends who deeply understand the simulator will spot the key point here: the simulator can construct Y only in the "ideal world." That is, an evil thing like Y can exist only in the "ideal world" and must never cross over to wreak havoc in the "real world."

Keep thinking…

There is a deeper problem. Recall the "graph three-coloring problem": the core reason the simulator cannot operate in the "real world" is that its "time rewind" superpower exists only in the ideal world. This black magic does not exist in the "real world," and that "non-existence" is the key.

Moreover, NIZK has no interaction at all, which has a serious consequence: the simulator has no way to exploit the "time rewind" superpower, and so, it seems, nothing distinguishes the prover's behavior in the two worlds.

In other words, facing any NIZK system, the "simulator" seems unable to stand up; it falls back to earth and becomes an ordinary mortal. If, by this reasoning, the simulator no longer has any superpower, then Alice and the simulator are indistinguishable: Alice herself could act as a simulator and, continuing the inference, could arbitrarily fool Bob in the "real world." The proof system would then lose its "soundness" and be worthless. Conclusion: no NIZK is sound.

Something must be wrong with this argument…

In the analysis above we noted the absence of the interactive challenge. Indeed, if Bob does not participate in Alice's production of the proof, then every bit of the proof is supplied by Alice, and the "proof" itself seems to contain no "root" of trust for Bob. Intuitively, this cannot work.

Does that mean that without Bob's participation there is no way to establish a "foundation of trust"? Where else could the root of trust come from?

The answer is "third party"!

Wait…… isn't the protocol supposed to involve only two parties, Alice and Bob? Where does a third party come from?

The third party has to be introduced in a special way, and there is more than one way to do it. Let us study the first one.

(Sobbing: didn't we just agree not to introduce third parties?)

Revisiting the Schnorr protocol

Let us revisit our old friend, the Schnorr protocol. It is a three-step protocol: in step one Alice sends a commitment, in step two Bob sends a random-number challenge, and in step three Alice responds to the challenge.

Let's see how to turn a three-step Schnorr protocol into one step.

Look at step two of the Schnorr protocol: Bob must supply a random challenge number c. Here, instead, we can let Alice compute the challenge number herself using the following formula, eliminating step two of the protocol:

  c = Hash(PK, R) 

Here R is the elliptic-curve point Alice sends to Bob, and PK is the public key. Take a good look at this formula for computing c with a Hash algorithm; it achieves two goals:

  1. Alice has no way to predict c before the commitment R is generated, even though c is ultimately computed by Alice herself.
  2. Since c is computed by a Hash function, it is (roughly) uniformly distributed over an integer domain and can serve as a random number. (Note: please accept this for now; we will discuss it in depth later.)

Please note: Alice must never be able to predict c before R is generated; otherwise Alice would, in effect, possess the "time rewind" superpower in disguise and could fool Bob at will.

A cryptographically secure Hash function is "one-way": SHA256, SHA3, BLAKE2, and so on. So although c is computed by Alice, Alice has no ability to cheat by choosing c, because once Alice produces R, c is effectively fixed. We assume that the mortal Alice has no ability to invert the Hash function in the "real world."

As the figure above shows, we use the Hash function to merge the three-step Schnorr protocol into a single step. Alice can directly send (R, c, z); and since Bob holds PK and can compute c himself, Alice need only send (R, z).
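As a sketch, here is the merged one-step protocol in Python over a toy multiplicative group standing in for the elliptic curve, so g^z ?= R · PK^c plays the role of the curve check. The parameters, the fixed nonce, and all function names are illustrative assumptions, not from the original protocol specification:

```python
import hashlib

# Toy Schnorr NIZK (Fiat-Shamir) over the order-q subgroup of Z_p^*.
# Parameters are illustrative only -- far too small for real security.
p, q, g = 1019, 509, 4           # p = 2q + 1, g generates the order-q subgroup

def H(*items):
    """Random-oracle stand-in: hash all context into a challenge in Z_q."""
    data = b"|".join(str(i).encode() for i in items)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(sk):
    PK = pow(g, sk, p)
    r = 123 % q                  # demo nonce; must be fresh and random in practice
    R = pow(g, r, p)             # step 1: commitment
    c = H(PK, R)                 # step 2 replaced: Alice derives the challenge
    z = (r + c * sk) % q         # step 3: response
    return PK, (R, z)            # Bob recomputes c himself, so (R, z) suffices

def verify(PK, proof):
    R, z = proof
    c = H(PK, R)                 # Bob recomputes the same challenge
    return pow(g, z, p) == (R * pow(PK, c, p)) % p   # g^z ?= R * PK^c

PK, proof = prove(sk=42)
assert verify(PK, proof)
```

The check works because g^z = g^(r + c·sk) = g^r · (g^sk)^c = R · PK^c, the multiplicative mirror of the additive z*G = R + c*PK in the text.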

With a slight change to the scheme above, we obtain a "digital signature" scheme. A digital signature means: "I" present a string to "you," say the verse "The white sun sets behind the mountains; the Yellow River flows into the sea," and to prove that this verse was presented by me, I need to sign something that ties my identity to the verse.

Digital signature from NIZK perspective

Loosely speaking, a digital signature scheme amounts to proving that (1) I possess a private key, and (2) the private key is bound to the message.

Proving my identity first is easy: that is exactly what the Schnorr protocol does, proving to the other party that "I have a private key." And this proof is zero-knowledge: it reveals no knowledge of the "private key."

So how do we bind it to the Tang-dynasty verse? We modify the computation of c:

  m = "The day is full of mountains, the Yellow River flows into the sea"
 c = Hash(m, R) 

Here, the guarantee that an attacker cannot forge a signature at will rests on the discrete logarithm problem (DLP) and on the Hash function satisfying second-preimage resistance.

Note: strictly speaking, to guarantee the unforgeability of the digital signature, one must prove that the Schnorr protocol satisfies a stronger property called "Simulation Soundness." See reference [2].

The figure above shows the well-known digital signature scheme: the Schnorr signature scheme [1]. It includes one more optimization: what Alice sends to Bob is not (R, z) but (c, z), because R can be computed from c and z.
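The (c, z) variant can be sketched the same way: the verifier reconstructs R from (c, z) and recomputes the hash. A toy multiplicative group again stands in for the curve, so R = g^z · PK^(-c) mirrors R = z*G - c*PK; all parameters and the demo nonce are illustrative assumptions:

```python
import hashlib

# Toy Schnorr signature in (c, z) form. Parameters are illustrative only.
p, q, g = 1019, 509, 4

def H(*items):
    data = b"|".join(str(i).encode() for i in items)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def sign(sk, m, r):
    R = pow(g, r, p)             # commitment (r must be fresh in practice)
    c = H(m, R)                  # challenge binds the message to R
    z = (r + c * sk) % q
    return (c, z)                # send (c, z) instead of (R, z)

def verify(PK, m, sig):
    c, z = sig
    # reconstruct R = g^z * PK^(-c) mod p, then recompute the challenge
    R = (pow(g, z, p) * pow(pow(PK, c, p), -1, p)) % p
    return H(m, R) == c

sk = 7
PK = pow(g, sk, p)
m = "The white sun sets behind the mountains; the Yellow River flows into the sea"
sig = sign(sk, m, r=99)
assert verify(PK, m, sig)
```

Reconstruction works because g^z · PK^(-c) = g^(r + c·sk) · g^(-c·sk) = g^r = R, so an honest signature always recomputes the same challenge.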

Note: Why is this an "optimization"? At present, the attack methods for elliptic curves are Shanks algorithm, Lambda algorithm and Pollard's rho algorithm. Please remember that their algorithm complexity is about [3], and n is the number of bits in the finite field size. Suppose we use a finite field that is very close to 2^256, that is, z is 256bit, then the size of the elliptic curve group is almost close to 256bit, so that the square root of 2^256 is 2^128, so The security of a 256-bit elliptic curve group is only 128 bits. Then, the challenge number c only needs 128 bits. This way Alice sends c more space-saving than sending R, which requires at least 256 bits. The two values ​​c and z add up to a total of 384 bits. Saves a quarter of the valuable space compared to the popular ECDSA signature scheme. Now the Bitcoin development team is ready to change the ECDSA signature scheme to a Schnorr-like signature scheme, muSig[4], which allows for more flexible support for multi-sign and aggregation.

This trick of using a Hash function to turn an interactive proof system into a non-interactive one is called the Fiat-Shamir transform [5], proposed in 1986 by the cryptographers Amos Fiat and Adi Shamir.

Rebuilding Trust – the Random Oracle

As noted above, losing the challenge seems to mean losing the "foundation of trust" of the proof. In the Schnorr signature scheme, the Hash function takes on the role of the "challenger," and it has a very academic name: the "Random Oracle" [6].

But why use a Hash function here? When Alice wants to generate a public random number, what she really needs is something called a "random oracle." What is that?

Time to stretch our imagination!

We envision that in the "real world," there is a "elf" in the sky. He holds a two-column table with a string on the left and a number on the right. Anyone, including you and me, including Alice and Bob, can send a string to the "genie."

When the elf receives a string, he looks it up in the left column of his table. There are two cases:

  • Case 1: the string is not in the left column. The elf then generates a "true random number," writes the string and the number into the table, and returns the number to the mortal on the ground.
  • Case 2: the string already appears in the left column. The elf then simply returns the number in the right column of that row.

You will notice that this elf behaves much like a random number generator, with one crucial difference: when we send the same string twice, it returns the same number both times. This elf is the legendary "random oracle."
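The elf's two-case behavior can be sketched as a lazily sampled table, a standard way to model a random oracle. The class name and the toy challenge space q are illustrative assumptions:

```python
import secrets

q = 509                                  # toy challenge space

class RandomOracle:
    """The 'elf': same query -> same answer; new query -> fresh randomness."""
    def __init__(self):
        self.table = {}                  # the elf's two-column table

    def query(self, s):
        if s not in self.table:                     # case 1: unseen string
            self.table[s] = secrets.randbelow(q)    # true random number
        return self.table[s]                        # case 2: replay old answer

oracle = RandomOracle()
a = oracle.query("hello")
assert oracle.query("hello") == a        # consistency: same input, same output
```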

In merging the steps of the Schnorr protocol, what we actually need is a random-oracle elf, not a Hash function. What is the difference between the two?

  • The random oracle returns a "truly" random, uniformly distributed number for each new string.
  • The output of a Hash function is not a truly uniformly distributed random number.

Then why did we use a Hash function earlier? Because in the real world a true random oracle does not exist! Why? A Hash function cannot produce true randomness, because it is a "deterministic" algorithm: apart from its input parameters, no other source of randomness is involved.

A Hash function of cryptographic strength "appears" able to act as a "pseudo" random oracle. The merged protocol then requires an extra, strong security assumption:

Assumption: a cryptographically secure Hash function can approximate the legendary "random oracle"

Because this assumption cannot be proved, we can only trust it, or take it as an axiom. In one sentence: the collision-resistance-style properties of a Hash function suggest that its output can simulate random numbers, and in many cases (not all) attacking the Hash function is very hard, so many cryptographers boldly use it.

A security model that does not rely on this assumption is called the "standard model," and a model that does rely on it would naturally be a "non-standard model," except that it has a nicer proper name: the "random oracle model."

There are two kinds of people in the world: those who like sweet tofu pudding and those who don't. Likewise, there are two kinds of cryptographers: those who like the random oracle model and those who don't [6].

Building the foundation of trust – the kidnapped elf

After the Fiat-Shamir transform, the Schnorr protocol has the NIZK property. This differs from the SHVZK property we proved earlier: SHVZK requires the verifier to be honest, whereas NIZK makes no such unrealistic demand on the verifier, because the verifier no longer participates in any interaction, so the notion of an "honest verifier" is no longer an issue.

Note: what if the verifier Bob is dishonest? Then Bob might be able to extract some of Alice's knowledge. For the three-step Schnorr protocol, whether it satisfies full "zero-knowledge" remains unknown; in Series 3 we only proved that it satisfies the weaker property SHVZK.

However, once the Schnorr protocol is turned into a non-interactive proof system, it becomes genuinely "zero-knowledge."

But perhaps your question is coming: this argument sounds plausible, but can you prove it?

Time is up, "Cuihua, on the simulator"

How to construct an "ideal world" with the simulator Dafa? Let's think about it. We have used "time reversal" before, and we have modified the "random number conveyor" super power to let the "simulator" cheat. But there is no interaction, which means that the "time back" super power can't be used; Bob's random number conveyor does not exist, and the "tampering conveyor" super power can't be used!

But the simulator must have some kind of "superpower" with which to build the "root" of trust.

(If the simulator could cheat without any superpower, that would amount to proving the protocol unsound.)

Perhaps you have already guessed it: the simulator must get its hands on the "random oracle."

First consider constructing an "ideal world" to prove "zero knowledge." In the ideal world, the simulator "kidnapped" the "elf" responsible for prophecy. When Bob asks the sprite for a random number, the sprite does not give a true random number, but instead gives Zlice (the Alice that the simulator pretends). A number prepared in advance (also conforming to the consistency distribution, ensuring indistinguishability), the "elf" can't help but return Bob a number that looks random but actually has a back door. The so-called back door, that is, this number is Zlice's own choice in advance .

Step 1: Zlice picks z at random, picks c at random, and computes R' = z*G - c*PK.

Step 2: Zlice writes the entry (m, R') → c into the elf's table.

Step 3: Zlice sends the signature (c, z) to Bob.

Step 4: Bob computes R = z*G - c*PK, sends (m, R) to the elf, and the elf returns c'. Note that the R computed by Bob equals the R' computed by Zlice.

Step 5: Bob checks c ?= c', i.e., whether the number returned by the elf equals the number sent by the other party. If they are equal, the signature verifies; otherwise verification fails.

By abducting the "elf", Zlice can also predict the random number in advance, which can achieve the same effect as time backflow.

We have already demonstrated the "existence" of the simulator Zlice, so we have already demonstrated NIZK.

Next we prove the "reliability" of this agreement. Imagine that in another "ideal world", a thing called "extractor" also kidnapped the elf. When the innocent Alice asks for a random number to the "elf", the "elf" returns a c1, and the "decimator" peeks into the c1 from the sprite's table. After Alice calculates z1, then the "decimator" is still You can launch the "Time Backflow" super power, let Alice go back to the second step, and then ask the "spy" for a random number. The string sent by Alice is obviously the same as the string sent the first time, (R, m) . By the way, since (R, m) is already written in the "left column" of the sprite sheet, an honest "elf" should return c1. However, the "decimator" kidnapped the sprite, and he changed the "right column" of the corresponding (R, m) line in the table to a different number c2. After Alice calculates another z2, the decimator completes the task and calculates Alice's private key sk by the following equation:

  sk = (z1 - z2)/(c1 - c2)
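The extraction can be checked numerically. Under toy parameters (a stand-in prime-order group; all concrete values chosen purely for illustration), two transcripts sharing the same R but carrying different challenges reveal sk:

```python
# Extractor sketch: z1 = r + c1*sk and z2 = r + c2*sk share the same nonce r,
# so subtracting eliminates r and division mod q recovers sk.
p, q, g = 1019, 509, 4       # toy group parameters

sk, r = 42, 99               # Alice's secret key and nonce (hidden from Bob)
R = pow(g, r, p)             # the commitment, identical in both transcripts
c1, c2 = 11, 300             # the rewound oracle's two different answers
z1 = (r + c1 * sk) % q
z2 = (r + c2 * sk) % q

# sk = (z1 - z2) / (c1 - c2) mod q
recovered = ((z1 - z2) * pow(c1 - c2, -1, q)) % q
assert recovered == sk
```

This is exactly why a real Alice must never answer two different challenges for the same commitment R: doing so hands over her private key.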

The Fiat-Shamir transform – from Public-Coin protocols to NIZK

The Fiat-Shamir transform applies not only to the Schnorr protocol but to any "Public-Coin protocol," "compressing" the whole protocol into a single round and yielding a non-interactive proof system. Amos Fiat and Adi Shamir's paper "How to Prove Yourself: Practical Solutions to Identification and Signature Problems" was published at the Crypto conference in 1986 [5]. It is also said that the technique originates with Manuel Blum [6].

To repeat: in a Public-Coin protocol, the verifier Bob does only one kind of thing, namely generate a random number and challenge Alice with it. Under the Fiat-Shamir transform, each of Bob's "challenge acts" is replaced by a query to the "random oracle."

In a concrete implementation, the random oracle must be instantiated with a Hash function of cryptographic strength (it cannot be chosen casually; it must be cryptographically secure), and the input to the Hash function must be the entire preceding context. The diagram below shows at a glance how the Fiat-Shamir transform works.
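The rule "hash all previous context" is often packaged as a transcript object. Here is a minimal sketch; the Transcript class and its method names are illustrative assumptions, not any particular library's API:

```python
import hashlib

class Transcript:
    """Fiat-Shamir transcript: each challenge hashes ALL prior messages."""
    def __init__(self, statement):
        self.items = [statement]          # the statement is part of the context

    def append(self, msg):
        self.items.append(msg)            # prover messages enter the context

    def challenge(self):
        data = b"|".join(str(i).encode() for i in self.items)
        c = int.from_bytes(hashlib.sha256(data).digest(), "big")
        self.items.append(("challenge", c))   # challenges also enter the context
        return c

t = Transcript("statement x")
t.append("round-1 message a1")
c1 = t.challenge()                        # binds statement and a1
t.append("round-2 message a2")
c2 = t.challenge()                        # binds statement, a1, c1, and a2
assert c1 != c2
```

Hashing only the latest message, rather than the whole transcript, is precisely the mistake discussed in the "security risks" section below the diagram.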

As mentioned earlier, a non-interactive proof system needs a third party to provide the "root" of trust, so that Bob can fully trust the proof Alice constructs. Here the third party is the "elf," whose academic name is the "Random Oracle." This elf is not a real third party but a virtual one; it exists in both the "real world" and the "ideal world." In the "real world" the elf is a dutiful, quiet, handsome young man, while in the "ideal world" it gets kidnapped by the "simulator."

The Public-Coin protocol also has a lovely name: the "Arthur-Merlin game"……

In the picture above, the figure in the white robe on the left is the wizard Merlin, and the handsome man in the middle is King Arthur, two characters from the medieval European legend of King Arthur and the Knights of the Round Table.

Arthur is an impatient king who carries a coin with him; Merlin is a mighty wizard with unlimited computing power. The wizard wants to convince the king that some "assertion" is true, so the two hold a conversation. But the king is lazy: in each round he merely tosses a coin to "challenge" the wizard, who must respond on the spot, and after k rounds the king must be convinced of the wizard's claim. Because Merlin has magic, he can see every coin King Arthur tosses [7].

This is similar to, yet different from, the Interactive Proof System (IP) we mentioned in "Series 1." IP was formally proposed by Goldwasser, Micali, and Rackoff (GMR) in 1985, and its proving power covers a large class of computational complexity problems. The difference: in the definition of IP, both the prover and the verifier are Turing machines that can flip coins, and the Verifier may flip coins privately, hiding them from the Prover; in an Arthur-Merlin game the king can only flip coins publicly, and moreover every coin-toss result is known to Merlin.

However, the Fiat-Shamir transform can be proven secure only in the "random oracle model," and instantiating the random oracle with a Hash function lacks any security proof. Worse, a protocol secure in the "random oracle model" may be insecure when instantiated; counterexamples have been found [8]. Even more unfortunately, in 2003 S. Goldwasser and Y. Tauman exhibited a security counterexample for the Fiat-Shamir transform itself [9]. This does not mean the Fiat-Shamir transform cannot be used, but it must be used with great care, never applied blindly.

Even so, people cannot resist the temptation of the Fiat-Shamir transform; it is extremely versatile. Notably, it appears throughout the various schemes of zkSNARK, the hottest family of general-purpose non-interactive zero-knowledge proofs. You may be familiar with Bulletproofs, and there are also less famous general-purpose zero-knowledge proof schemes such as Hyrax, Ligero, Supersonic, and Libra (we will introduce them one by one later).

Caution: Security risks in Fiat-Shamir transformation

In the Fiat-Shamir transform, pay special attention to which parameters are fed to the Hash function. In real code, implementations sometimes omit parameters from the Hash:

For example, in the sequence A, Hash(A), B, Hash(B), the second hash misses the parameter A; the correct form is A, Hash(A), B, Hash(A, B). This kind of mistake introduces serious security vulnerabilities. In the Swiss electronic voting system SwissPost-Scytl, parameters that should have been included in the Fiat-Shamir hashes were omitted several times in the implementation code, allowing an attacker not only to invalidate ballots at will but also to forge ballots arbitrarily [10]. So please be very careful in project implementations.

A careful reader looking back at the Schnorr signature will notice that its Hash computation also seems to omit a parameter, PK; it is not a strict Fiat-Shamir transform and is called the Weak Fiat-Shamir transform [11]. This particular special case happens to be free of security problems [3], but please do not imitate it casually.

Recently some researchers have begun studying how to prove the security of the Fiat-Shamir transform rigorously in the standard model. So far they must either introduce additional strong security assumptions or argue for one specific protocol at a time, and progress seems limited.

The power of interaction

In 1985, after repeated rejections, the paper by the GMR trio was finally accepted by STOC '85. Another similar work was accepted at the same STOC '85: the paper "Arthur-Merlin Games: A Randomized Proof System, and a Hierarchy of Complexity Classes" [7] by László Babai of Eötvös Loránd University in Hungary and Shlomo Moran of the Technion in Israel, which introduced the Public-Coin interactive protocol (as the name suggests, the Verifier only flips coins publicly).

King Arthur's approach is simple: examine Merlin's assertion through repeated "random" challenges, which matches the intuition discussed earlier of using random challenges to build the "foundation" of trust. In the paper Babai proved an interesting conclusion: AM[k] = AM[2], where k is the number of rounds of interaction; multiple rounds are no more powerful than two. "Two rounds" here means Arthur sends one challenge number and Merlin responds once.

Note: there is another class of problems, MA, in which the order of interaction is reversed: Merlin gives the proof first, and then Arthur flips coins to check it. It has been proven that the problems MA can handle are a subset of AM.

Not only that, Babai also boldly conjectured that AM[poly] is equivalent to IP. This is a striking assertion: the lazy king need only toss polynomially many coins to successfully challenge the wizard, and this method is exactly as expressive as the interactive proof systems IP described by GMR. Sure enough, at STOC '86, a paper by S. Goldwasser and M. Sipser proved exactly this: AM[poly] == IP [12].

This means that the repeated "random challenge" power is endless, and it is equivalent to any interactive proof system. But the relationship between AM[poly] and other computational complexity classes is the next research hotspot.

Three years later, at the end of November 1989, exactly three decades ago, the young cryptographer Noam Nisan emailed his tentative academic conclusions to several fellow cryptographers and then went off on vacation to South America. He could never have imagined that this email would set off one of the fiercest academic competitions in history. Elites such as M. Blum, S. Kannan, D. Lipton, D. Beaver, J. Feigenbaum, H. Karloff, and C. Lund joined the fray, discussing day and night and racing to publish their results. Finally, on December 26, barely a month later, Adi Shamir proved the following conclusion:

AM[poly] == IP == PSPACE

It pins down the computation-theoretic meaning of "efficient proof" and characterizes exactly the computational power covered by the notion of an "interactive proof system."

Note: the class NP is a subset of PSPACE. The former is familiar to everyone; the latter is associated with games, such as winning strategies in chess [13].

L. Babai later wrote an article entitled "E-mail and the unexpected power of interaction" [14], recounting in detail that month of "email interaction": the splendid academic competition and the full story of "interactive proofs."

Common reference string – another "root of trust"

In addition to the "random oracle", the non-interactive zero-knowledge proof system uses the "Common Reference String" (CRS) to complete the random challenge. It is a random string generated by a trusted third party before the prover Alice constructs the NIZK certificate. The CRS must be completed by a trusted third party and shared with Alice and the verifier Bob.

Yes, you read that right: there is a "third party" here! Although the third party does not participate in the proof directly, the credibility of the string-generation process must be guaranteed. Generating the CRS is also called the "Trusted Setup," something everyone loves and hates: introducing a third party in a real-world deployment is clearly a headache. What is the CRS used for? Where does the trust in the Trusted Setup come from? We leave these questions to the next article in this series.

To be continued

The "foundation of trust" of a non-interactive zero-knowledge proof (NIZK) still requires some form of random "challenge." One form hands the "challenge" to the random-oracle elf; the other realizes it through a random string shared by both Alice and Bob. Both forms of challenge essentially introduce a third party, and both must offer a "back door" for the "simulator": the simulator gains some "advantage" in the "ideal world," and that advantage must be ineffective in the "real world."

NIZK radiates endless charm. It reminds me, time and again, of the subtle conclusions the pioneers have uncovered over the past thirty years, and of the many unexplored corners still awaiting a flash of inspiration.

This series of articles received its first Pull Request on the project's Github repository, from Jingyu Hu (colortigerhu). It changed only a single word, but in that moment I felt the vitality of it all. Exchanging knowledge and colliding ideas: quite charming, isn't it?

"Everyone we interact with becomes a part of us." — Jodi Aman

* Acknowledgements: special thanks to Ding Yuchao, Liu Yuran, and Chen Yu for their professional advice and corrections, and thanks to the friends at Ambi Lab for their suggestions (p0n1, even, aphasiayc, Vawheter, yghu, mr).

Acknowledgements: the account of the academic competition sparked by Nisan is based on an article by Professor Yi Deng [15].

 

References

  • [1] Schnorr, Claus-Peter. "Efficient signature generation by smart cards." Journal of cryptology 4.3 (1991): 161-174.
  • [2] Paillier, Pascal, and Damien Vergnaud. "Discrete-log-based signatures may not be equivalent to discrete log." International Conference on the Theory and Application of Cryptology and Information Security . Springer, Berlin, Heidelberg, 2005.
  • [3] Pointcheval, David, and Jacques Stern. "Security arguments for digital signatures and blind signatures." Journal of cryptology 13.3 (2000): 361-396.
  • [4] Maxwell, Gregory, Andrew Poelstra, Yannick Seurin, and Pieter Wuille. "Simple schnorr multi-signatures with applications to bitcoin." Designs, Codes and Cryptography 87, no. 9 (2019): 2139-2164.
  • [5] Fiat, Amos, and Adi Shamir. "How to prove yourself: Practical solutions to identification and signature problems." Conference on the Theory and Application of Cryptographic Techniques. Springer, Berlin, Heidelberg, 1986.
  • [6] Bellare, Mihir, and Phillip Rogaway. "Random Oracles Are Practical: a Paradigm for Designing Efficient Protocols." Proc. of the 1st CCS (1995): 62-73.
  • [7] László Babai, and Shlomo Moran. "Arthur-Merlin games: a randomized proof system, and a hierarchy of complexity classes." Journal of Computer and System Sciences 36.2 (1988): 254-276.
  • [8] Canetti, Ran, Oded Goldreich, and Shai Halevi. "The random oracle methodology, revisited." Journal of the ACM (JACM) 51.4 (2004): 557-594.
  • [9] Shafi Goldwasser, and Yael Tauman. "On the (in)security of the Fiat-Shamir paradigm." 44th Annual IEEE Symposium on Foundations of Computer Science, 2003. Proceedings. IEEE, 2003.
  • [10] Lewis, Sarah Jamie, Olivier Pereira, and Vanessa Teague. "Addendum to how not to prove your election outcome: The use of non-adaptive zero knowledge proofs in the Scytl-SwissPost Internet voting system, and its implications for cast-as-intended verification." Univ. Melbourne, Parkville, Australia (2019).
  • [11] Bernhard, David, Olivier Pereira, and Bogdan Warinschi. "How not to prove yourself: Pitfalls of the Fiat-Shamir heuristic and applications to Helios." International Conference on the Theory and Application of Cryptology and Information Security. Springer, Berlin, Heidelberg, 2012.
  • [12] Goldwasser, Shafi, and Michael Sipser. "Private coins versus public coins in interactive proof systems." Proceedings of the eighteenth annual ACM symposium on Theory of computing . ACM, 1986.
  • [13] Papadimitriou, Christos H. "Games against nature." Journal of Computer and System Sciences 31.2 (1985): 288-301.
  • [14] Babai, László. "E-mail and the unexpected power of interaction." Proceedings Fifth Annual Structure in Complexity Theory Conference . IEEE, 1990.
  • [15] Yi Deng. "Zero knowledge: a slightly serious science." https://zhuanlan.zhihu.com/p/29491567