Exploring Zero-Knowledge Proofs Series | Understanding Zero-Knowledge through "Simulation": Parallel Universes and Turning Back Time

I know that I know nothing – Socrates

I believe many people have heard of zero-knowledge proofs, but far fewer have heard of simulation — and yet simulation is the key to understanding zero-knowledge.

In the first article, "Getting to Know 'Zero Knowledge' and 'Proof'" [1], we introduced a simple zero-knowledge interactive system: the graph three-coloring problem. But is this system really zero-knowledge? Why should we believe that conclusion? Is there a proof? In the conversation between Alice and Bob, if the protocol is not zero-knowledge, it is Alice who gets cheated. "I", the designer of the interactive system, need to convince Alice that this conversation is indeed zero-knowledge.

If you want to show, from an intuitive point of view, that an interactive system leaks information, you only need to point to the few bits that cause the leak. But if you want to prove that there is no information leakage, you have to argue about the entire information flow — that every bit in the middle, bit 1, 2, 3, 4, 5, …, reveals nothing at all. Dear reader, doesn't that sound hard?

This article is about 8,000 words long and slightly brain-burning.

Security Definitions and Indistinguishability

First of all, the "zero-knowledge" property of an interactive system — a dialogue — needs to be proved. After all, modern cryptography is built on a rigorous formal system. Before proving anything, you need to know what the "security assumptions" are. What is a security assumption? For example, we might say that a system's permission isolation is airtight, that each user can only see the information they are authorized to see — but this rests on a security assumption: the administrator account has not been compromised. Or take mobile banking software: a transfer can only be completed with an SMS verification code, which rests on the assumption that your phone's SIM card has not been cloned. If we dig into any system we feel is secure, we find plenty of security assumptions that look less than solid. Is a Bitcoin private key secure? A Bitcoin account rests on many security assumptions: your mnemonic phrase must not be known to anyone else, the encryption algorithm protecting the private key in your mobile wallet must be strong enough, the key derivation algorithm must be sound, you must not forget your mnemonic, and so on.

Talking about security without stating the security assumptions is just hand-waving — by that standard, everything is "secure". Only with a mathematical proof can you be sure that the security of an algorithm or scheme rests on some very clear "security assumptions".

Before the proof, one more thing is still missing: a "security definition". In most people's mental model, security is a box into which anything can be thrown. Everyone should remind themselves: when you say the word security, have you thought about what exactly is secure, and secure in what sense?

"Security" needs a rigorous mathematical definition

The great scientist Claude Shannon gave a very solid definition of security from the perspective of information theory:

Perfect security: suppose you are an attacker. You cannot extract any valuable information from the ciphertext; the only way to break it is blind guessing.

Think about it — this definition is quite interesting. "You can't get any information from the ciphertext" means the ciphertext gives you no extra computational advantage, nothing that could help you recover the plaintext in less time.

But this definition is so perfect that practical encryption algorithms have a hard time satisfying it. Later, Goldwasser, Micali, and others wrote "Probabilistic Encryption" [2], another classic in the annals of cryptography.

This paper defines the concept of semantic security, which relaxes some of the requirements of perfect security.

Semantic security: suppose you are an attacker. You cannot compute any valuable information from the ciphertext in polynomial time.

OK, this looks more workable. The next question is how to understand "cannot compute any information". It seems we need a way to measure information — but what is the definition of information?

Instead, we introduce a concept — "indistinguishability" — to restate the security of an encryption algorithm. Suppose you are an attacker, and I hold an encryption algorithm:

  1. You randomly generate two plaintexts of equal length: m1 = "The white sun sets behind the mountains, the Yellow River flows into the sea" (a line of Tang poetry), m2 = "hot hot hot hot hot hot hot hot" (gibberish).
  2. You give me these two plaintexts, m1 and m2.
  3. I randomly pick one of them, without telling you which, and encrypt it to produce a ciphertext c.
  4. I show you the ciphertext c and let you guess whether c came from encrypting the Tang poem or the gibberish.
  5. If you attack c with a computer and cannot, in polynomial time, guess correctly with probability meaningfully better than one half — that is, you cannot distinguish the source of c — then the encryption algorithm is semantically secure.
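The guessing game above can be sketched in a few lines of code. Below is a minimal Python sketch of my own (the messages and function names are illustrative, not from the article), using a toy one-time pad as the encryption; against a one-time pad, every adversary's guess is no better than a coin flip:

```python
import secrets

def encrypt(key: bytes, m: bytes) -> bytes:
    """Toy one-time-pad encryption; assumes len(key) == len(m)."""
    return bytes(k ^ b for k, b in zip(key, m))

def ind_game(adversary) -> bool:
    """One round of the indistinguishability game; True if the adversary guesses right."""
    m1 = b"white sun sets behind the hills "   # stand-in for the Tang poem line
    m2 = b"hot hot hot hot hot hot hot hot "   # stand-in for the gibberish
    assert len(m1) == len(m2)
    b = secrets.randbits(1)                    # I secretly pick m1 or m2
    key = secrets.token_bytes(len(m1))         # fresh random key every round
    c = encrypt(key, m1 if b == 0 else m2)
    return adversary(m1, m2, c) == b

# against a one-time pad, no adversary beats random guessing:
wins = sum(ind_game(lambda m1, m2, c: secrets.randbits(1)) for _ in range(1000))
print(wins / 1000)   # close to 0.5
```

Real semantic security only demands that no polynomial-time adversary wins with probability noticeably above one half; the one-time pad here is just the simplest scheme for which that holds exactly.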

OK — having understood "indistinguishability", let's return to "zero knowledge". How do we prove that an interactive system is "zero-knowledge"? First, we need to define the concept of zero knowledge.

Note: indistinguishability here means indistinguishable in the sense of probability distributions. Academically, it is divided into "perfect indistinguishability", "statistical indistinguishability", and "computational indistinguishability". For this article, we don't yet need to understand the differences between these notions.

Meet the Simulator

First, let's stretch our imagination. Picture a parallel universe containing two parallel worlds, one called the "ideal world" and the other the "real world". Each of us can play happily in both worlds, but ordinary people in the two worlds cannot perceive or communicate with each other.

Suppose "you" are a formidable cryptography cracker — no ordinary person, but someone with the ability to shuttle between the parallel worlds. Alice holds the answer to a graph three-coloring puzzle, and your goal is to extract that answer by talking to her. The conversation follows the "graph three-coloring problem" protocol from the previous article.

Continuing the thought experiment: Alice exists only in the "real world". In the "ideal world", Alice is "replaced" by someone with exactly the same appearance and voice — call her Zlice. Next, we put "you" into both worlds at the same time, without letting you know which world you are in. Each of your two avatars faces someone who looks like "Alice".

To repeat: in the "real world", you are talking to the real, honest Alice; in the "ideal world", you are talking to Zlice (a fake Alice). Zlice looks and talks exactly like Alice, with one difference: Zlice has no "knowledge" — that is, she does not know the answer to the three-coloring puzzle.

Now, in both worlds, your two avatars converse with the true and the false Alice simultaneously. Something magical happens: in the end, in both worlds, your two avatars are convinced. After n rounds of challenges, neither catches the other party cheating, so both avatars of "you" conclude that the other party really knows the "answer". In other words, "you" cannot "distinguish" whether you are in the "real world" or the "ideal world", and therefore cannot "distinguish" whether you are facing Alice or Zlice. More than that: if "I", a melon-eating bystander, am dropped as an observer into either world, I too cannot "distinguish" whether the person who looks like "Alice" in front of me is real or fake.

Now comes the brain-burning conclusion:

Why is this interactive system "zero-knowledge"? Because Zlice has no knowledge at all, and yet she is indistinguishable from Alice.

Let me put it another way: because you and I have no way to tell which world we are in, the interaction transcripts of the two worlds are essentially indistinguishable, and in one of the worlds there is no knowledge whatsoever. Therefore, we say that this interactive protocol — the "graph three-coloring problem" — is "zero-knowledge".

There is a premise here: the ideal world must be constructible by an algorithm. That is, there is a "God" who uses an algorithm to "simulate" an "ideal world", inside which an algorithm called Zlice is constructed. Zlice does not take the "knowledge" as input — that is, zero knowledge; moreover, the "ideal world" appears exactly identical to the "real world".

Imagine that if Alice leaked any information during the conversation, you could immediately distinguish whether the person in front of you is the real Alice or Zlice, because Zlice has nothing to leak and cannot fake the leaked information. So you can conclude:

The real Alice did not disclose any information.

This "God" is called a "simulator". In the ideal world, the Zlice who talks to you is in fact an illusion produced by the "simulator"; everything perceivable in the ideal world is "simulated" by the simulator.

OK — at this point, we have defined "zero knowledge" in terms of the "simulator".

Next, let's walk through the process of proving zero knowledge.

Distinguishing the Two Worlds

(Save World State as Snapshot X)

Proving zero knowledge amounts to constructing (or finding) a "simulation" algorithm by which a simulator can simulate a "knowledge-free" ideal world. If such an algorithm exists, and the two worlds are indistinguishable, the proof is complete.

Wait — perhaps "you" feel that something is off here.

If such an algorithm exists and can fool me without any knowledge, then in the "real world" nothing rules out Alice using the very same algorithm to deceive me. In that case, I am deceived in both worlds, and the interactive protocol is meaningless.

In fact, there is a key point here. To borrow a still from the movie "Inception": something in the "ideal world" is different from the "real world". That something is the key to distinguishing the two worlds, yet it must remain "imperceptible" to those inside. It is not the spinning top of the dream, but a kind of "superpower" — the superpower of the simulator.

For example, a superpower like this: "turning back time".

(The picture above is a still from the film "Groundhog Day". The protagonist wakes up every morning back on February 2, living the same day over and over.)

Wait — dear reader, weren't we just discussing indistinguishability? Why do the two worlds now need to be distinguishable? "I'm confused." Don't panic: indistinguishability is from the viewpoint of individuals inside the ideal world, while "distinguishability" is from the viewpoint of a God standing outside both worlds.

Imagine that among us there is a person who occasionally has the ability to travel through time — say, to rewind time to a year ago. We ordinary people would be completely unaware of it. Now, if the "simulator" can achieve "time rewind" inside the "ideal world" it constructs, then it can pull off some magical feats, deceiving the verifier "you" and also fooling the observer "me". As "you", you understand that in the "ideal world" time can be rewound, whereas in the "real world" Alice clearly cannot have superpowers. Although you and I cannot tell which world we are in, at least we know this: in whichever of the two worlds is the "real world", the Alice facing us cannot deceive us. Of course, we still cannot say which world we are in.

At this point, the "zero-knowledge" property of the interactive protocol has been proved. Did you follow? Let me lay out the reasoning more clearly:

First of all, "zero knowledge" protects Alice's interests: during the interaction, Alice does not want to reveal any extra information to Bob. She does not want Bob to learn her secret w, nor even to let Bob extract the slightest bit of information by analyzing the interaction transcript. How is this guaranteed? This is where the "simulator" makes its entrance. It can simulate an "ideal world" that looks exactly like the real one, and in that world it can easily fool any adversary, so that the adversary cannot tell whether it is in the real world or the ideal world. Since the "simulator" does not possess the secret w, the "ideal world" contains zero knowledge. And since the two worlds are indistinguishable, we can conclude that Alice's interactive protocol is "zero-knowledge".

Let's look at a concrete example: the graph three-coloring problem mentioned in the previous article [1].

Zero-Knowledge Proof for the Graph Three-Coloring Problem

Recall the "graph three-coloring interactive system":

  • Step 1: Alice applies a random permutation to the colors of her solution, covers every vertex of the graph with a piece of paper, and hands the graph to Bob.
  • Step 2: Bob randomly picks an edge.
  • Step 3: Alice opens the papers on the two endpoints of the chosen edge; Bob checks whether the colors of the two vertices differ. If they differ, the round passes; if they are the same, verification fails.
  • Return to Step 1 and repeat n times.
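One round of this protocol can be sketched in a few lines of Python. This is an illustrative sketch of my own — the graph, its coloring, and the function names are assumptions, and the paper covers are modeled simply as data Bob does not look at until the reveal:

```python
import random

# illustrative graph (edge list) and a valid 3-coloring of it
EDGES = [(0, 1), (1, 2), (2, 0), (1, 3)]
SOLUTION = {0: "red", 1: "green", 2: "blue", 3: "red"}
COLORS = ["red", "green", "blue"]

def commit(solution):
    """Step 1: Alice permutes the colors, then covers every vertex with paper."""
    perm = dict(zip(COLORS, random.sample(COLORS, 3)))
    return {v: perm[c] for v, c in solution.items()}

def run_round(edges, solution):
    hidden = commit(solution)        # Bob receives the covered graph
    u, v = random.choice(edges)      # Step 2: Bob randomly picks an edge
    return hidden[u] != hidden[v]    # Step 3: uncover the endpoints; colors must differ

# an honest Alice with a valid coloring passes every round
print(all(run_round(EDGES, SOLUTION) for _ in range(100)))   # True
```

The fresh color permutation in each round is what keeps Bob from stitching together revealed edges across rounds into the full coloring.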

Next, we will prove that the interaction above is zero-knowledge. Let's first assume that the verifier Bob is honest; this makes the proof easier to follow. Afterwards, we will discuss how to carry out the proof when Bob is dishonest.

In the "ideal world", the one conversing with Bob is a "simulator", which simulates everything else in that world. Bob follows the interactive protocol of the three-coloring problem. The simulator has no three-coloring solution; it simply colors all the vertices gray.

First, the simulator, imitating Alice, covers each vertex with a piece of paper and sends the graph to Bob.

Bob randomly picked an edge and challenged the prover.

The simulator cannot open the papers at this point, because the colors at both ends of the edge are gray.

Now the simulator plays its "superpower": it uses its time-rewind skill to return to the first step of the dialogue.

Back at the first step, the simulator colors the two endpoints of the challenged edge with two arbitrarily chosen but different colors, re-covers the papers, and sends the graph to Bob.

Bob cannot perceive that time has been rewound to the first step. To him, everything is fresh; being "honest", he again chooses the same edge.

This time the simulator can safely open the papers and let Bob check. Bob is, of course, fooled. Bob then repeats the process round after round, and each time the simulator fools him with a time rewind.

So in the ideal world, the simulator has no "knowledge" of any three-coloring solution, yet it can still fool Bob, and the interaction it produces is consistent — as a probability distribution — with the interaction observed in the "real world". The existence of this simulator algorithm thus establishes the "zero-knowledge property" of the interactive system.

Dishonest Bob

In the proof above, there is a fairly strong assumption: that after each time rewind, Bob chooses the same edge. What if Bob picks a different edge each time? It doesn't matter. If Bob chooses a different edge after the simulator's first rewind, the simulator can scramble the colors and rewind time again. After enough rewinds, with overwhelming probability Bob will at some point pick exactly the edge the simulator has colored, and the simulator can then proceed to the third step and open the papers.
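This rewinding loop can be sketched in code. Below is a minimal Python sketch under my own modeling assumptions — an illustrative graph, and Bob as a callback — where each `while` iteration plays the role of "turning back time", restarting the round until Bob's challenge happens to land on the one edge the simulator colored:

```python
import random

EDGES = [(0, 1), (1, 2), (2, 0), (1, 3)]     # illustrative graph
COLORS = ["red", "green", "blue"]
VERTICES = sorted({v for e in EDGES for v in e})

def simulator_round(edges, bob_pick_edge):
    """One simulated round: no coloring is known; rewind until the guess matches."""
    rewinds = 0
    while True:                                # each loop iteration = one "time rewind"
        u, v = random.choice(edges)            # guess which edge Bob will challenge
        commitment = {w: "gray" for w in VERTICES}
        commitment[u], commitment[v] = random.sample(COLORS, 2)  # only this edge colored
        if bob_pick_edge(edges) == (u, v):     # Bob challenges; did we guess right?
            return True, rewinds               # yes: open the papers, Bob accepts
        rewinds += 1                           # no: rewind time and try again

# even a Bob who picks a fresh random edge after every rewind is eventually fooled
fooled, rewinds = simulator_round(EDGES, lambda es: random.choice(es))
print(fooled)   # True
```

The loop terminates quickly in expectation (each guess matches with probability 1/|E|), which is why the simulator still runs in (expected) polynomial time.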

Alibaba, the Cave, and "Open Sesame"

Among the many popular-science articles on the Chinese internet explaining "zero-knowledge proof", one example is spread especially widely: the story of Alibaba and the robbers. Unfortunately, those versions tell only half the story. So let me tell a different story of "Ali Baba and the Forty Thieves":

A long time ago, in the city of Baghdad, there lived a man named Alibaba. Every day, Alibaba went to the market to buy things.

One day, a thief snatched Alibaba's wallet. Alibaba chased the thief all the way to a cave entrance, where the thief vanished. Inside the cave, Alibaba found two passages, as shown below.

Not knowing which way the thief had run, Alibaba decided to check the "left" passage first. He soon found it was a dead end, with no trace of the thief. He then checked the "right" passage — also a dead end, and again no trace of the thief. Alibaba muttered to himself: "Where on earth did that damned thief go?"

The next day, Alibaba went to the bazaar again, and this time another thief snatched his basket. Again Alibaba chased the thief to the same cave entrance, and again the thief vanished. This time Alibaba decided to check the "right" passage first — no thief; then the "left" — no thief either. How strange.

On the third day, the fourth day, …, all the way to the fortieth day, the same story played out: Alibaba chased the fortieth thief to the mysterious cave, and the thief disappeared. Alibaba figured there must be some secret mechanism in the cave, so he hid at the end of the "right" passage and waited patiently for a long time. Eventually a thief ran in, reached the end of the passage, and recited a spell: "Open Sesame!" The wall actually opened, the thief ran through, and the wall closed again. Shortly afterward, another victim chased in, searched for a long time, and found nothing.

After Alibaba waited for them to leave, he tried the spell himself. It worked, and Alibaba discovered that the wall led through to the "left" passage. Later, Alibaba figured out how to change the spell, and wrote the new spell and the cave's location on a piece of parchment.

Note: the story is not over yet…. (on-screen caption) A long time later

Many years later, in the 1980s, Alibaba's parchment fell into the hands of several cryptographers. They traveled to Baghdad and found the cave. Even after centuries, the spell still worked. The cryptographers excitedly opened the wall and ran back and forth between the two passages.

A television station soon learned of this bizarre affair. A cryptographer named Mick Ali (a play on the name of the cryptographer Micali) decided to show TV viewers that he knew the spell. First, the TV host placed cameras inside the cave and had everyone wait at the entrance. Then Mick Ali entered the cave alone, and the host tossed a coin to decide which passage Mick Ali had to come out of. To commemorate Ali Baba and the forty thieves, Mick Ali repeated this forty times and succeeded every time.

The show was a great success. Soon, however, another TV station grew jealous and wanted to make a similar show, but Mick Ali's exclusivity agreement kept him from appearing. What to do? The host of the second station had an idea: he found an actor who closely resembled Mick Ali and who imitated his dress, posture, and accent. Then they started filming. Each time the host tossed the coin, the actor ran out — but of course the actor did not know the spell and could not open the wall, so sometimes he happened to succeed and sometimes he failed. The actor worked very hard, repeating the trial nearly a hundred times just to collect forty successes. Finally, the new show's host edited the footage, keeping only the successful takes and deleting the failed ones. The new show and Mick Ali's show then aired on the same channel at the same time, and the audience could not tell which video was real and which was fake. The host of the first station understood perfectly well that Mick Ali was the one who truly knew the wall's spell, but he had no way to convey this fact to the innocent audience.
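The second host's trick can be sketched in code. This is a toy model of my own (the names are illustrative): the actor commits to a passage before the coin toss, so each take succeeds only half the time, and the "editing" simply filters out the failed takes:

```python
import random

def record_take():
    """One filmed take: the actor commits to a side before the coin toss."""
    actor_side = random.choice(["left", "right"])  # where the actor actually waits
    coin = random.choice(["left", "right"])        # the side the host demands
    return coin == actor_side                      # success only if they happen to match

# keep filming until forty successful takes exist, then edit out the failures
takes = []
while sum(takes) < 40:
    takes.append(record_take())

edited_show = [t for t in takes if t]   # the broadcast keeps only the successes
print(len(edited_show))                 # 40
```

On average about eighty takes are needed for forty successes, which matches the story's "nearly a hundred" attempts.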

Seeing this, do you begin to sense the "simulation"? Here, the host of the second TV station used video editing in place of "time rewind". He intervened, from the outside, in the "ideal world" — the world broadcast on television — and achieved exactly the same effect. To the ideal world, such editing is essentially a superpower.

This story in fact comes from a paper, "How to Explain Zero-Knowledge Protocols to Your Children" [3], presented at the CRYPTO conference in 1989.

Simulation and Turing Machines

Speaking of superpowers — doesn't this feel unscientific? Indeed, if we mindlessly invoked "superpowers" to explain anything and everything, our logic could not stay self-consistent. In the ideal world, the simulator cannot cheat arbitrarily. For example, the simulator cannot directly modify Bob's internal state — say, Bob clearly rejects in the verification step, but the simulator forcibly flips the result to "accept". Allowing that would lead us to "prove" the erroneous conclusion that any interactive system whatsoever is zero-knowledge.

The simulator is not an almighty God in the ideal world

So what exactly can the simulator do? The simulator is really just a Turing machine. The so-called "time rewind" and "video editing" superpowers are nothing supernatural, but functions a Turing machine can implement. Friends in computer science have surely used VMware or other virtual machine software. The "simulator" in this article can be imagined as "virtual machine" software that virtualizes a complete computer environment — and that virtual environment is our "ideal world". How do we explain "time rewind"? Perhaps you have used the "snapshot" feature of virtual machine software: when taking a snapshot, the VM software saves the entire state of the virtual machine, and at any later moment it can roll back and resume running from exactly where the snapshot was saved.

Note: in fact, this kind of time reversal is a basic operation in computing. In programming language theory, there is a concept called the Continuation. Abstractly, a Continuation represents the computation from the current point into the future. A Continuation is an explicit abstraction of control flow: goto, call-with-current-continuation, and even thread scheduling can all be viewed as operators acting on Continuations. For example, call/cc — call-with-current-continuation — makes it easy to implement "backtracking". Saving a snapshot can be understood as saving the current Continuation, and going back in time is simply applying that Continuation.
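In Python, we can mimic a crude "snapshot" with `copy.deepcopy` — a toy sketch of my own (the class and field names are illustrative): the entire state of a "Bob process", including his random-number generator, is saved and later restored, after which Bob deterministically repeats the same choice:

```python
import copy
import random

class BobProcess:
    """A toy 'Bob' whose entire state the virtual machine can snapshot."""
    def __init__(self, seed):
        self.rng = random.Random(seed)   # Bob's randomness lives inside his state
        self.transcript = []

    def pick_edge(self, edges):
        e = self.rng.choice(edges)
        self.transcript.append(e)
        return e

edges = [(0, 1), (1, 2), (2, 0)]
bob = BobProcess(seed=7)

snapshot = copy.deepcopy(bob)   # save the snapshot: freeze Bob's whole state
first = bob.pick_edge(edges)    # Bob issues his challenge...

bob = copy.deepcopy(snapshot)   # ...and the simulator "turns back time"
replay = bob.pick_edge(edges)

print(first == replay)          # True: after the rewind, Bob repeats the same choice
```

This is exactly why the honest Bob in the proof above re-picks the same edge after a rewind: his "randomness" is part of the rewound state.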

Whether it is Zlice, Bob, or each of us observers — everyone is an executable program. These programs are copied into the virtual machine, and the conversation between Zlice and Bob is really communication between two processes. The observer is a program hooked onto the I/O of the Zlice and Bob processes. In the graph three-coloring example above, the "honest" Bob of the "ideal world" is really a Bob process calling the virtual machine's "random number generator" — a generator that Zlice can manipulate. The "real world" is the outer computer environment in which the virtual machine software runs.

Do you see it in a new light now? Let me emphasize it once more:

The process of proving zero knowledge is to find an algorithm — or put more plainly, to write a piece of code — that runs on an outer computer system and implements the functionality of a virtual machine; and inside that virtual machine, there must be a Zlice who, taking no "knowledge" as input, can fool the Bob inside the virtual machine.

If you still don't understand the sentence above, please go back to the section "Distinguishing the Two Worlds" and think through the simulation again. 😛 (Load World State from Snapshot X)

Plato's Allegory of the Cave

Simulation is everywhere. Gödel's incompleteness theorems use the notion of simulation: Gödel numbers let formal arithmetic be simulated within itself. Turing proposed the "Universal Turing Machine", a Turing machine that can simulate any Turing machine, including itself.

But the earliest notion of "simulation" goes back to Book VII of "The Republic" [4], where the ancient Greek philosopher Plato told the following fable — the Allegory of the Cave:

Imagine a dark cave in which a row of prisoners is bound by chains, able to see only the wall in front of them. Behind the prisoners is a low wall, and behind that, a fire. Between the fire and the wall, people walk back and forth carrying props and puppets, so that in the firelight the props and puppets cast shadows on the wall the prisoners face. The prisoners stare at these shadows all day long. Having been chained there since birth, everything they have ever seen is the shadows on the wall in front of them, so they believe the shadows they see are the real world.

One day, however, a prisoner happened to break free of his chains and looked back at the fire. At first he could only make out dim outlines; seeing bright light for the first time, he then saw the props and puppets. If someone told him that these props and puppets were the real objects, he would surely sneer and insist that the shadows are what is real.

Plato supposed that if the prisoner were forced out of the cave to see the real world, he would at first be pained and angered, unable to adapt to its light. But as he slowly grew used to it — seeing the sun, the trees, the rivers, and the starry sky — he would gradually realize that this world is superior to the world of the cave, and he would no longer wish to return to life in the darkness.

After a while, he took pity on the prisoners still in the cave and wanted to lead them all out. But when he returned to the cave, he could no longer see clearly, having adapted to the bright world outside. The chained prisoners concluded that his eyesight was damaged, that he was babbling, that he was a madman. In the end, when he tried his best to take the prisoners out of the cave, they killed him.

This is an allegory of the human condition: we are like that row of prisoners bound by chains. We believe our eyes see the truth of the world, when in fact it may be an illusion, like the shadows cast on the cave wall.

To be continued

This article introduced the key concept needed for understanding zero knowledge — simulation. The zero-knowledge property of any protocol can be understood through the construction of an "ideal world". Readers encountering this concept for the first time will need to ponder it carefully.

Two methodologies are crucial in computer science: the first is "abstraction", and the second is "simulation".

Looking back at the graph three-coloring problem: Bob converses in both the "ideal world" and the "real world". Although Bob cannot distinguish the two worlds, there is one thing he can be sure of: in the real world, Alice has no superpowers.

But the mere fact that Alice has no superpowers does not directly prove that Alice really possesses the answer. What if the interactive protocol cannot guarantee that Alice actually has the knowledge? "Zero knowledge" protects Alice's interests — but who protects Bob's? That question is left for the next article.

references

[1] Getting to Know "Zero Knowledge" and "Proof". Ambi Lab. 2019.

[2] Shafi Goldwasser and Silvio Micali, Probabilistic Encryption, Special issue of Journal of Computer and Systems Sciences, Vol. 28, No. 2, pages 270-299, April 1984.

[3] Quisquater, J.-J., Quisquater, M., Quisquater, M., Quisquater, M., Guillou, L., Guillou, M.A., Guillou, G., Guillou, A., Guillou, G., and Guillou, S., 1989, August. How to explain zero-knowledge protocols to your children. In Conference on the Theory and Application of Cryptology (pp. 628-631). Springer, New York, NY.

[4] Plato. The Republic (trans. Wu Xianshu). The Commercial Press, 1986.

[5] Goldwasser, Shafi, Silvio Micali, and Charles Rackoff. "The knowledge complexity of interactive proof systems." SIAM Journal on computing 18.1 (1989): 186-208.

[6] Goldreich, Oded. Foundations of Cryptography: Basic Tools. Cambridge University Press, 2001.

[7] Rackoff, Charles, and Daniel R. Simon. "Non-interactive zero-knowledge proof of knowledge and chosen ciphertext attack." Annual International Cryptology Conference. Springer, Berlin, Heidelberg, 1991.

[8] Goldreich, Oded, Silvio Micali, and Avi Wigderson. "Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proof systems." Journal of the ACM (JACM) 38.3 (1991): 690-728.

[9] zkPoD: Blockchain, zero-knowledge proof and formal verification, achieving fair trade without intermediation, zero trust. Ambi Lab. 2019.

Author: Yu Guo

Source: Ambi Lab