Opinion: If OpenAI Were a DAO, Could It Avoid This Governance Farce?
Author: Wang Chao
From the moment this weekend’s big drama began, some people have been suggesting that OpenAI should become a DAO. As the plot unfolds, more and more people have come around to this view. Could this whole governance debacle have been avoided if OpenAI really operated as a DAO? I believe it could. Not because DAO governance has any obvious advantages, but because OpenAI’s governance has major flaws. If they had learned anything from the DAO jungle, this situation would never have occurred.
OpenAI is a nonprofit organization dedicated to creating safe AGI (artificial general intelligence) that benefits all of humanity equally. You could say that OpenAI is an organization that creates public goods. Many DAOs are also organizations that create public goods, so in many ways, OpenAI and DAOs are already very similar.
DAOs come in many forms; here we compare OpenAI only with common nonprofit DAOs, which do not represent all types of DAOs.
The recent internal turmoil at OpenAI is not rooted in its organizational structure but in the absence of clear, reasonable governance rules, which left room for manipulation. For example, the board was originally composed of 9 members, but after several directors departed, only 6 remained. As the highest authority, the board failed to fill these vacancies in a timely manner. If the board were to shrink to just 3 members, the agreement of only two individuals could decide the fate of OpenAI. The handling of specific matters has also been arbitrary: the removal of CEO Sam Altman clearly did not go through discussion and deliberation by the full board, but was decided by a few directors behind closed doors, without fully considering the opinions of other stakeholders and without providing appropriate opportunities for communication and negotiation.
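The arithmetic above — a shrinking board letting two people decide everything — is exactly the kind of failure a quorum floor is meant to prevent. The following is a minimal, hypothetical sketch (the names `board_size`, `votes_for`, and `min_quorum` are illustrative, not taken from OpenAI’s actual bylaws) showing how a minimum-quorum rule changes who can pass a resolution:

```python
# Hypothetical governance sketch: a resolution passes only if a quorum
# of seats is filled AND a majority of the FULL board (not merely the
# attendees) votes in favor. Parameter names are illustrative, not
# OpenAI's actual bylaws.

def can_pass(board_size: int, votes_for: int, min_quorum: int) -> bool:
    if board_size < min_quorum:
        return False  # board too depleted to act at all
    return votes_for > board_size / 2  # strict majority of all seats

# With a 9-member board, 2 directors cannot decide anything:
print(can_pass(board_size=9, votes_for=2, min_quorum=6))  # False
# With the board shrunk to 3 and no quorum floor, 2 votes suffice:
print(can_pass(board_size=3, votes_for=2, min_quorum=0))  # True
# A quorum floor of 6 seats would block that outcome:
print(can_pass(board_size=3, votes_for=2, min_quorum=6))  # False
```

The design point is that the quorum is measured against filled seats, so a board cannot gain decisiveness simply by losing members — the failure mode the article describes.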
Even profit-oriented public companies introduce independent directors to increase transparency in corporate governance and better represent the interests of non-controlling shareholders and the general public. For an organization as consequential as OpenAI, whose work bears on foundational technology, societal safety, and even the fate of humanity, the external directors it did introduce clearly have not played their intended role. OpenAI’s board needs not only more checks and balances, such as employee representatives, but also more effective governance mechanisms. Drawing on the governance model of DAOs to design a more solid, transparent, and inclusive governance structure for OpenAI is, I believe, a proposal worth exploring.
It is worth remembering that when the DAO was first proposed, technolibertarians hoped to rely entirely on code to form a self-consistent, autonomously operating system that minimized human interference. Once political coordination dependent on humans appears inside a DAO, it is no longer a DAO but merely a DO, a decentralized organization that has lost its autonomy. At the current stage, however, the idealized DAO is simply not achievable. So, as a compromise, we have come to call any organization that relies on a blockchain network for collective governance a DAO. This means we have accepted the reality of human governance, with code constraints serving only as an aid. The defining characteristic of a DAO has shifted from being autonomous to being community-driven: representing broader interests and offering broader opportunities for participation.
Similarly, AGI’s goal is also to pursue autonomy. OpenAI explicitly mentions in its organizational structure that AGI refers to a highly autonomous system that outperforms humans in most economically valuable work.
“By AGI, we mean a highly autonomous system that outperforms humans at most economically valuable work.” – OpenAI
Although autonomy in AGI refers mainly to the level of behavioral capability, at a deeper level both AGI and DAOs aim to form a truly autonomous system that operates without external control; in this respect they are fundamentally alike. So how should we govern such an autonomous system? Should we rely more on aligning it with intrinsic human values through training, or impose more external constraints? From LLMs to AGI, these are pressing questions that demand thought.
The most recent twist in this great OpenAI drama is that up to 90% of its employees signed a letter threatening to resign and follow Sam Altman. This echoes a classic debate in the DAO field over the past few years: which matters more, rule constraints or community consensus?
Although rules and constraints can produce many kinds of consensus, truly great consensus is rarely forged by rules alone. Only with a shared sense of mission and shared cultural values can deep resonance and unity be achieved.
We know how to create this kind of resonance among humans. But what about AI?