This article is a translation of "On Collusion", a blog post published by Ethereum founder Vitalik Buterin on April 3, 2019.
Special thanks to Glen Weyl, Phil Daian and Jinglan Wang for review.
Over the past few years, there has been growing interest in using deliberately engineered economic incentives and mechanism design to steer the behavior of participants in various contexts. In the blockchain space, mechanism design first and foremost provides security for the blockchain itself, encouraging miners or proof-of-stake validators to participate honestly, but more recently it has been applied to prediction markets, token-curated registries, and many other contexts.
At the same time, the radical markets movement has spawned experiments with Harberger taxes, quadratic voting, and quadratic financing. More recently, there has been growing interest in using token-based incentives to encourage high-quality posts on social media. However, as these systems move from theory to practice, there are a number of challenges that need to be addressed, and I would argue these challenges have not yet been adequately confronted.
The Chinese blockchain platform Bihu is a good recent example. It released a token-based mechanism to encourage people to write posts. The basic mechanism is that users of the platform who hold KEY tokens can "stake" those KEY tokens on articles; every user can make k "upvotes" per day (similar to likes or bumps), and the "weight" of each upvote is proportional to the stake of the user making it. Articles with more stake upvoting them become more prominent, and the author of an article receives a reward of KEY tokens roughly proportional to the quantity of KEY upvoting that article.
This is an oversimplification: the actual mechanism contains some nonlinearities, but they are not essential to its basic operation. KEY has value because it can be used in various ways inside the platform.
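To make the stake-weighted upvoting concrete, here is a minimal sketch in Python of the linear version of the mechanism described above. It is an illustration under simplifying assumptions (a single linear reward pool, no nonlinearities), not Bihu's actual code; the function name and all numbers are invented.

```python
# A minimal sketch (not Bihu's actual code) of stake-weighted upvoting:
# each upvote's weight is proportional to the voter's token holdings,
# and the author's reward is proportional to the total weight upvoting
# the article.

def article_reward(upvoter_stakes, reward_pool, total_staked):
    """upvoter_stakes: token balances of the users who upvoted the article.
    The article earns a share of the pool proportional to the total stake
    behind its upvotes."""
    weight = sum(upvoter_stakes)
    return reward_pool * weight / total_staked

# Example: an article upvoted by holders of 100, 50, and 850 tokens, out of
# 10,000 tokens staked platform-wide, with a 1,000-token reward pool:
print(article_reward([100, 50, 850], reward_pool=1000, total_staked=10000))  # 100.0
```

Note that under this linear rule, the reward depends only on the total stake behind the upvotes, not on who the upvoters are; that property is exactly what the self-voting attack below exploits.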
This design is far from unique. Incentivizing online content creation is something very many people care about, and there have been many designs of a similar character, as well as some quite different ones. And in this case, this particular platform is already seeing significant use:
A few months ago, the Ethereum trading subreddit /r/ethtrader launched a somewhat similar experimental feature, issuing a token called "donuts" to users whose comments get upvoted. A fixed quantity of donuts is issued each week, distributed in proportion to the upvotes each user's comments received. Donuts can be used to buy the right to set the banner content at the top of the subreddit, and to vote in community polls.
However, unlike the KEY system, the reward B receives when A upvotes B is not proportional to A's existing token supply; instead, every Reddit account has an equal ability to contribute to other Reddit accounts.
These kinds of experiments, which attempt to reward quality content creation in ways that go beyond the known limitations of donations and micro-payments, are very valuable. Under-compensation of user-generated internet content is a significant problem for society as a whole, and it is encouraging to see crypto communities trying to use the power of mechanism design to solve it. But unfortunately, these systems are also vulnerable to attack.
Self-voting, plutocracy and bribes
Here is how one might economically attack the design described above.
Suppose some wealthy user acquires a quantity n of tokens, so that each of the user's k daily upvotes gives the recipient a reward of n*q (where q is likely a very small number here, e.g. q = 0.000001). The user simply upvotes their own sockpuppet accounts, paying themselves a reward of n*k*q. The system collapses into each user collecting an "interest rate" of k*q per period, and the mechanism accomplishes nothing else.
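The arithmetic of this attack can be sketched as follows. The numbers are purely illustrative, chosen only to match the q = 0.000001 example in the text.

```python
# A toy model of the self-voting attack: a whale with n tokens spends all
# k daily upvotes on their own sockpuppet account. Each upvote pays the
# recipient n*q, so the whale collects n*k*q per day regardless of content
# quality, i.e. an effective "interest rate" of k*q per period.

def self_vote_yield(n, k, q):
    """Daily reward a holder of n tokens can pay themselves via k
    self-upvotes, each worth n*q to the recipient."""
    return n * k * q

n = 10_000_000   # attacker's token balance (illustrative)
k = 10           # upvotes allowed per day (illustrative)
q = 0.000001     # reward per staked token, per upvote (from the text)

print(self_vote_yield(n, k, q))   # ~100 tokens/day, content-free income
```

The point of the sketch is that the yield is independent of whether any real content exists: the mechanism has become a pure interest payment on holdings.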
In fact, Bihu's mechanism seems to have anticipated this, and it contains some superlinear logic in which articles with more KEY upvoting them receive a disproportionately larger reward, apparently to encourage upvoting popular posts rather than self-upvoting. Token-voting governance systems commonly add this kind of superlinearity to prevent self-voting from undermining the entire system; most DPoS schemes achieve a similar effect by having only a limited number of delegate slots, with zero reward for anyone who does not collect enough votes to claim one of the slots. But such schemes invariably introduce two new weaknesses:
- They subsidize plutocracy, since very wealthy individuals can still gather enough tokens to upvote themselves.
- They can be circumvented by users bribing other users to vote for them as a bloc.
Bribery attacks may sound far-fetched, but in a mature ecosystem they are much more realistic than they seem. In most cases where bribery has occurred in the blockchain space, the operators use a euphemistic new name to give the concept a friendly face: it is not a bribe, it is a "staking pool" that "shares dividends". Bribes can even be obfuscated: imagine a cryptocurrency exchange that charges no fees and puts great effort into building an unusually good user interface, without even trying to collect a profit; instead, it uses the tokens that users deposit with it to participate in various token-voting systems. Unsurprisingly, some people treat collusion of this kind as perfectly normal; see for example this recent scandal involving EOS DPoS:
Finally, there is the possibility of "negative bribes": blackmail or coercion, threatening participants with harm unless they act inside the mechanism in a certain way.
In the /r/ethtrader experiment, out of concern that people would buy donuts to sway governance polls, the community decided that only locked (staked) donuts would be eligible to vote. But there is an attack even cheaper than buying donuts: renting them.
An attacker who already holds ETH can post it as collateral on a platform such as Compound to borrow some quantity of a token, which gives them the full right to use that token for any purpose, including voting. When they are done, they simply return the tokens to the lending contract to reclaim their collateral. In every case, problems around bribery, and around accidentally over-empowering well-connected and wealthy participants, prove very hard to avoid.
Some systems attempt to mitigate the plutocratic aspects of token voting by using an identity system. In the case of the /r/ethtrader donut system, although governance polls are conducted via token vote, the mechanism that determines how many donuts (i.e. tokens) you receive in the first place is based on Reddit accounts: each Reddit account's upvotes count equally.
The ideal goal of an identity system is to make it easy for individuals to acquire one identity, but relatively hard to acquire many. In the /r/ethtrader donut system, that role is played by Reddit accounts; in the Gitcoin CLR matching gadget, it is Github accounts. But identity, at least as implemented so far, is a fragile thing…
A click farm: thousands of phones lined up to generate fake interactions.

Too lazy to set up a rack of phones yourself? Then maybe you are looking for this:

A sketchy-looking website that may or may not scam you; do your own research before using anything like it.
It stands to reason that it is even easier to attack these mechanisms by simply controlling thousands of fake identities, like a puppet master, than it is to go to the trouble of bribing real people.
And remember that there are specialized criminal organizations far ahead of you here. Even if every underground operation were taken down, hostile governments will certainly produce fake passports by the millions if we are foolish enough to build systems that make that kind of activity profitable. And this does not even touch on attacks in the opposite direction: identity-issuing institutions attempting to disempower marginalized communities by denying them identity documents…
Given that so many mechanisms seem to fail in such similar ways once multiple identities or even liquid markets get involved, one might ask: is there some deep common factor behind all of these problems?
I would argue the answer is yes, and the "common factor" is this: it is much harder, and quite possibly outright impossible, to design mechanisms that maintain desirable properties in a model where participants can collude than in a model where they cannot. Most people probably already have some intuition about this; specific instances of the principle underlie well-established norms, and often laws, promoting competitive markets and restricting price-fixing cartels, vote buying, and bribery. But the problem goes much deeper and is far more general.
In the version of game theory that focuses on individual choice, where each participant makes decisions independently and groups of agents are not allowed to work together for mutual benefit, there are mathematical proofs that at least one stable Nash equilibrium must exist in any game, and mechanism designers have very wide latitude to "engineer" games to achieve specific outcomes. But in cooperative game theory, which allows for coalitions working together, there are large classes of games that have no stable outcome from which some coalition cannot profitably deviate.
If there is some fixed pool of resources and some currently established mechanism for distributing those resources, and it is unavoidably possible for 51% of the participants to conspire to seize control of the resources, then no matter what the current configuration is, there always exists some conspiracy that would be profitable for its participants. That conspiracy would in turn be vulnerable to potential new conspiracies, possibly combining previous conspirators and victims… and so on.
This instability of majority games is arguably severely underrated as a simplified general mathematical model of why there may well be no "end of history" in politics, and why no system has ever proved fully satisfactory; I personally believe it is far more useful than the more famous Arrow's theorem, for example.
There are two ways to get around this problem:

1. Restrict ourselves to the class of games that are "identity-free" and "collusion-safe", so that we do not need to worry about either bribes or identities.
2. Attack the identity and collusion-resistance problems directly, and solve them well enough to implement non-collusion-safe games with richer properties.
Identity-free and collusion-safe game design
Even proof of work is collusion-safe up to the bound of a single actor controlling about 23.21% of total hashpower, and clever engineering can raise this bound to 50%. Competitive markets are reasonably collusion-safe up to a relatively high bound, which is easy to achieve in some cases but not in others.
In governance and content curation, a major class of mechanism that works well is futarchy, typically described as "governance by prediction market". Futarchy mechanisms work by making a "vote" not merely an expression of opinion but also a prediction, with rewards for predictions that turn out true and penalties for predictions that turn out false.
For example, in my proposal for "prediction markets for content curation DAOs", I suggest a semi-centralized design in which anyone can upvote or downvote submitted content, with upvoted content being more visible, and in which a "moderation panel" makes final decisions. For each post, there is a small probability, proportional to the total volume of upvotes plus downvotes on that post, that the moderation panel will be called upon to make a final decision on it.
If the moderation panel approves a post, everyone who upvoted it is rewarded and everyone who downvoted it is penalized; if the panel rejects the post, the reverse happens. This mechanism encourages participants to cast upvotes and downvotes that "predict" the panel's judgment.
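A toy sketch of this settlement rule follows. It makes assumptions of my own (an audit probability linear in total stake, winners splitting the losers' stakes pro rata) and all names are invented; it is an illustration of the idea, not the proposal's reference implementation.

```python
# A minimal sketch of prediction-market moderation: voters stake tokens on
# up/down; with probability proportional to total stake, a moderation panel
# rules on the post, and voters who agreed with the panel split the stakes
# of those who disagreed.

import random

def settle(upvotes, downvotes, panel_approves):
    """upvotes/downvotes: dicts of voter -> stake.
    Returns each voter's net payoff once the panel has ruled."""
    winners, losers = (upvotes, downvotes) if panel_approves else (downvotes, upvotes)
    pot = sum(losers.values())
    total_winning = sum(winners.values())
    payoffs = {v: -s for v, s in losers.items()}     # losers forfeit their stakes
    for v, s in winners.items():                     # winners split the pot pro rata
        payoffs[v] = pot * s / total_winning
    return payoffs

def maybe_audit(upvotes, downvotes, audit_rate=0.001, rng=random):
    """The panel is summoned with probability proportional to total stake."""
    return rng.random() < audit_rate * (sum(upvotes.values()) + sum(downvotes.values()))

up, down = {"alice": 30, "bob": 10}, {"carol": 20}
print(settle(up, down, panel_approves=True))
# {'carol': -20, 'alice': 15.0, 'bob': 5.0}
```

Because a vote is a bet on the panel's eventual ruling, honestly predicting that ruling is the profit-maximizing strategy, which is what makes the scheme a futarchy-style mechanism rather than a popularity contest.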
Another example of futarchy is a governance system for a project with a token, in which anyone who votes for a decision is obligated, if the vote wins, to buy some quantity of tokens at the price prevailing when the vote began. This ensures that voting for a bad decision is costly: in the limit, if a bad decision wins a vote, everyone who approved it must essentially buy out the rest of the project. The high cost greatly reduces, and may eliminate, the possibility of cheap bribery attacks.
Create two markets representing the two "possible future worlds", and choose the one with the more favorable price.
However, the range of things this type of mechanism can do is limited. In the content curation example above, we are not really solving governance; we are only scaling the functionality of a governance gadget that is already assumed to be trusted. One could try to replace the moderation panel with a prediction market on the price of a token representing the right to buy advertising space, but in practice price is far too noisy an indicator for this to be viable for anything but a very small number of very large decisions. And often the value we are trying to maximize is explicitly something other than the value of a token.
Let us look more closely at why, in the more general case where the value of a governance decision cannot easily be measured through its effect on a token's price, good mechanisms for identifying public goods and public bads unfortunately cannot be identity-free or collusion-safe. If one tries to preserve the identity-free property of a game, building a system in which identities do not matter and only tokens do, there is an impossible tradeoff between failing to incentivize legitimate public goods and over-subsidizing plutocracy.
The argument is as follows. Suppose some author is producing a public good (for example, a series of blog posts) that provides value to each member of a community of 10,000 people. Suppose there exists a mechanism by which members of the community can take an action that gives the author $1 of income. Unless the community members are extremely altruistic, the cost of taking this action must be far below $1; otherwise, the share of the benefit captured by the member supporting the author would be far smaller than the cost of supporting the author, and the system would collapse into a tragedy of the commons in which no one supports the author. So there must exist a way to give the author $1 at a cost far below $1.
But now suppose there is also a fake community, consisting of 10,000 sockpuppet accounts of the same wealthy attacker. This community takes all the same actions as the real community, except that instead of supporting the author, they support another fake account, which is also a sockpuppet of the attacker.
If it was possible for a member of the "real community" to give the author $1 at a personal cost far below $1, then the attacker can give themselves $1 at a cost far below $1, over and over again, draining the system's funds. Without the right safeguards, any mechanism that helps genuinely under-coordinated parties coordinate will also help already-coordinated parties over-coordinate and extract money from the system.
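The drain can be sketched numerically. All names and numbers here are made up for illustration; the assumption is that a subsidy pool tops up each $1 transfer so that the supporter's personal cost is only cost_per_dollar, exactly the "far below $1" property the argument requires.

```python
# A toy illustration of the sockpuppet drain: if a supporter can direct $1
# to an author at personal cost c << $1 (the difference covered by the
# mechanism's subsidy pool), an attacker with many sockpuppets can aim the
# same subsidy at their own receiving account and empty the pool.

def drain(pool, cost_per_dollar, sockpuppets):
    """Each sockpuppet action costs the attacker cost_per_dollar and moves
    $1 (of which 1 - cost_per_dollar comes from the pool) to the attacker's
    own account. Returns (attacker profit, pool remaining)."""
    profit, subsidy = 0.0, 1.0 - cost_per_dollar
    for _ in range(sockpuppets):
        if pool < subsidy:
            break
        pool -= subsidy
        profit += subsidy        # attacker nets the subsidy on every action
    return profit, pool

profit, remaining = drain(pool=100.0, cost_per_dollar=0.05, sockpuppets=10_000)
print(round(profit, 2), round(remaining, 2))
```

With a 5-cent personal cost per dollar transferred, a $100 pool yields the attacker roughly $99.75 of pure profit before the pool runs dry; the cheaper the mechanism makes legitimate support, the cheaper it makes the attack.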
A similar challenge arises when the goal is not funding but determining which content should be most visible. Which do you think would attract more dollars in support: a legitimately high-quality blog post that benefits thousands of people, though each individual only slightly, or content promoted by concentrated actors who profit directly from its visibility?
Those who have been following "real-world" politics recently might also point to a different kind of content that benefits highly centralized actors: social media manipulation by hostile governments. In the end, both centralized and decentralized systems face the same fundamental problem: the "marketplace of ideas" is very far from what economists normally call an "efficient market", which leads to under-production of public goods even in "peacetime", and also to vulnerability to active attacks. It is just a hard problem.
This is also why token-based voting systems (like Bihu's) have one major genuine advantage over identity-based systems (like the Gitcoin CLR or the /r/ethtrader donut experiment): at least there is no benefit to buying accounts en masse, because everything you do is proportional to how many tokens you hold, no matter how many accounts the tokens are split between.
However, mechanisms that rely on no identity model and only on tokens fundamentally cannot solve the problem of concentrated interests outcompeting dispersed communities trying to support public goods; an identity-free mechanism that empowers distributed communities cannot avoid over-empowering plutocrats who pretend to be distributed communities.
But identity is not the only thing public-goods games are vulnerable to; there are also bribes. To see why, consider the example above, but where the "fake community" is not 10,001 sockpuppets of the attacker: instead, the attacker has only one identity, the account that receives the funds, and the other 10,000 accounts are real users, each of whom accepts a bribe of $0.01 to take the action that gives the attacker an additional $1.
As mentioned above, such bribes can be heavily obfuscated, for instance through third-party custodial services that vote on a user's behalf in exchange for convenience; and with "token voting" designs, an obfuscated bribe is even easier: one can rent tokens on the market and use them to vote.
Hence, while certain kinds of games, particularly prediction-market-based or deposit-based games, can be made collusion-safe and identity-free, generalized public-goods funding appears to be a class of problem where collusion-safe, identity-free approaches cannot work.
Collusion resistance and identity
The other option is to attack the identity problem head-on. As mentioned above, simply adopting higher-security centralized identity systems, such as passports and other government IDs, does not work at scale; in a sufficiently incentivized context, such systems are very insecure and vulnerable to the issuing governments themselves! Rather, the "identity" we are discussing here is a robust multi-factor set of claims that an actor identified by some set of messages is in fact a unique individual. An early prototype of this kind of networked identity is arguably the social key recovery feature of the HTC blockchain phone:
The basic idea is that your private key is secret-shared among up to five trusted contacts, in a way that mathematically ensures that any three of them can recover the original key, but two or fewer cannot. This qualifies as an "identity system": it is your five friends determining whether someone trying to recover your account is really you.
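The mathematics behind such a 3-of-5 scheme is Shamir's secret sharing. Below is a minimal illustrative sketch (HTC's actual implementation surely differs): the key is encoded as the constant term of a random degree-2 polynomial over a prime field, so any three shares recover it by Lagrange interpolation, while two shares reveal nothing about it.

```python
# A minimal sketch of 3-of-5 Shamir secret sharing, the mathematical core
# of social key recovery as described above (illustrative only).

import random

P = 2**127 - 1  # a Mersenne prime, large enough for a toy key

def split(secret, n=5, k=3):
    """Return n shares (x, y); any k of them suffice to recover the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the shared polynomial at x = 0, mod P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = random.randrange(P)
shares = split(key)
assert recover(shares[:3]) == key                      # any three friends suffice
assert recover([shares[0], shares[2], shares[4]]) == key
```

The threshold structure is what makes this an identity judgment rather than a backup: no two friends can act alone, but any three of the five can jointly attest that the person recovering the account is you.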
However, it is a special-purpose identity system, trying to solve a problem, personal account security, that is different from the problem of identifying unique humans. That said, the general model of individuals making claims about one another can quite possibly be bootstrapped into a more robust identity model. If desired, such systems could be augmented with the futarchy mechanic described above: if someone claims that someone is a unique human, someone else disagrees, and both sides are willing to post a bond to litigate the issue, the system can convene a judgment panel to determine who is right.
But we also want another crucially important property: an identity that you cannot credibly rent or sell. Obviously we cannot stop people from striking a deal, "you send me $50, I'll send you my key", but we can try to make such deals non-credible, so that the seller can easily cheat the buyer by handing over a key that does not actually work. One way to do this is a mechanism by which the owner of a key can send a transaction that revokes the key and replaces it with another key of the owner's choosing, all in a way that cannot be proven to have happened.
Perhaps the easiest way to accomplish this is to use a trusted party that runs the computation and publishes only the results, or to decentralize the same functionality through multi-party computation. Such approaches do not solve collusion completely; a group of friends can still get together on the same couch and coordinate their votes, but they at least reduce it to a manageable level that does not cause these systems to fail outright.
There is a further problem: the initial distribution of the key. What happens if a user creates their identity inside a third-party custodial service, which then stores the private key and uses it to vote secretly? That would be an implicit bribe: the user's voting power in exchange for a convenient service. What's more, if the system is secure in the sense that it successfully prevents bribery by making votes unprovable, secret voting by third-party hosts would also be undetectable. The only approach that seems to get around this problem is in-person verification.
For example, there could be an ecosystem of "issuers", where each issuer issues smart cards containing private keys; the user can immediately download the key onto their smartphone and send a message replacing it with a different key that they reveal to no one. These issuers could be meetups and conferences, or individuals already deemed trustworthy by some voting mechanism.
Building the infrastructure that makes collusion-resistant mechanisms possible, including robust decentralized identity systems, is a formidable challenge, but if we want to unlock the potential of such mechanisms, we must do our best to try. If we want to expand the role of voting-like mechanisms, including more advanced forms such as quadratic voting and quadratic financing, we have no choice but to confront the challenge head-on, work very hard, and hopefully succeed at building something secure enough for at least some use cases.
Author | Vitalik Buterin