Vitalik: My Techno-Optimism – Why Do People Call Me a d/acc-ist?

Vitalik on his belief in technological progress, and the misconception that he is a doomer or a cynic

Author: Vitalik, Co-founder of Ethereum; Translator: LianGuai0xjs

Last month, Marc Andreessen published his “Techno-Optimist Manifesto,” arguing for a renewed enthusiasm for technology, and for markets and capitalism as the means to build that technology and propel humanity toward a much brighter future. The manifesto explicitly rejects what it describes as an ideology of stagnation that fears progress and prioritizes preserving the status quo. It has received a lot of attention, with response articles from Noah Smith, Robin Hanson, Joshua Gans (mostly positive), and Dave Karpf, Luca Ropek, Ezra Klein (mostly negative), among many others. James Pethokoukis’ “The Conservative Futurist” and Palladium’s “It’s Time to Build for Good” share similar themes but are unrelated to the manifesto. This month, we saw a similar debate play out in the OpenAI dispute, which involved discussions about the dangers of superintelligent AI and the possibility that OpenAI is moving too fast.

Personally, my feelings towards techno-optimism are warm yet nuanced. I believe that due to technology’s transformative nature, the future will be brighter than the present, and I have faith in humanity and human nature. I reject the mentality that the best we can strive for is a world similar to today but with less greed and more public healthcare. However, I believe that not only the extent of technological advancements is important but also the direction. Certain types of technology have a more reliable ability to make the world better, and the development of some types of technology can mitigate the negative impacts of others. The world has an exaggerated focus on certain technological development directions while neglecting others. We need active human intent to choose the direction we desire because the formula of “maximizing profit” will not automatically lead us there.


In this article, I will talk about what techno-optimism means to me. This includes the broader worldview that motivates my work on certain types of blockchain and cryptography applications and social technology, as well as other areas of science that interest me. But perspectives on this broader question also have implications for artificial intelligence and many other fields. Our rapid advances in technology are likely to be the most important social issue of the twenty-first century, so it is crucial to think about them carefully.

Table of Contents

1. Technology is amazing, and the cost of delaying it is extremely high

2. Artificial intelligence is fundamentally different from other technologies and deserves special caution

3. d/acc: Defense (or decentralization, or differentiation) acceleration

4. What is the path forward for superintelligence?

5. Is d/acc compatible with your existing beliefs?

6. Humans are the brightest stars

1. Technology is amazing, and the cost of delaying it is extremely high

In some circles, it is common to underestimate the benefits of technology and to see it primarily as a source of dystopia and risk. Over the past half century, this has often stemmed from concerns about the environment, or from the worry that the benefits will accrue only to the rich, who will then entrench their power over the poor. More recently, I have also seen libertarians worry about certain technologies because of the centralization of power they could bring. This month, I ran some polls asking the following question: if a technology had to be restricted because it is too dangerous to be left freely available to everyone, would people prefer that it be monopolized, or delayed by ten years? To my surprise, across three platforms and three different choices of who the monopolist would be, there was overwhelming support for delay.

Therefore, sometimes I worry that we are overcorrecting, and many people are missing the counterarguments: the benefits of technology are indeed enormous, and on the axes we can measure, the benefits far outweigh the drawbacks, and the cost of even a ten-year delay is incredibly high.

Let’s take a specific example and look at a life expectancy chart:


What do we see? Great progress was made in the last century. This is true all over the world, whether in historically wealthy and dominant regions or in poor and exploited regions.

Some people blame technology for creating or exacerbating disasters such as authoritarianism and war. In fact, we can see the deaths caused by both on the chart: one dip in the 1910s (World War I) and one in the 1940s (World War II). If you look carefully, you can also spot non-military tragedies like the Spanish flu and the Great Leap Forward. But the chart makes one thing clear: even disasters as terrible as these are dwarfed by the tremendous progress in food, sanitation, medicine, and infrastructure over the course of the century.

This is also reflected in the significant improvements in our daily lives. Thanks to the internet, most people in the world can easily access information that was unavailable 20 years ago. The global economy has become more convenient due to improvements in international payments and finance. Global poverty is rapidly decreasing. With online maps, we no longer have to worry about getting lost in cities, and we can now conveniently order a ride home if needed. Our assets have become digitized, and physical goods have become cheaper, which means we have much less fear of physical theft. Online shopping has narrowed the gap in accessing goods between major cities and other parts of the world. Automation has brought us benefits that are always underestimated, making our lives more convenient in various ways.

These improvements, quantifiable or not, are enormous. And in the twenty-first century, even larger advances may soon be possible. Today, ending aging and disease seems utopian. But from the perspective of computers in 1945, the modern era of putting chips into almost everything would have seemed utopian too: even science-fiction movies often kept their computers room-sized. If biotechnology advances as much over the next 75 years as computers did over the last 75, the future may be more impressive than almost anyone expects.

At the same time, arguments skeptical of progress often veer into dark places. Even medical textbooks, like this one from the 1990s (credit to Emma Szewczak for finding it), sometimes make extreme claims, denying the value of two centuries of medical science and even arguing that saving human lives is not obviously a good thing:


“Limits to growth” arguments, advanced in the 1970s, claimed that ever-growing populations and industry would eventually deplete Earth’s finite resources; they helped motivate China’s one-child policy and massive forced sterilizations in India. In earlier eras, concerns about overpopulation were used to justify mass killings. Malthusian ideas of this kind, debated since 1798, have repeatedly been proven wrong.

It is for reasons like these that, as a starting point, I find myself deeply uneasy about arguments for slowing down technological or human progress. Given how interconnected all the sectors are, even sector-specific slowdowns carry risks. So when I write things like what I will discuss later in this article, deviating from unconditional enthusiasm for progress in whatever form it takes, I do so with a heavy heart. And yet, the twenty-first century is different and unique enough that these nuances are worth considering.

That said, there is one important point of nuance in the broader picture, particularly once we move past “technology as a whole is good” and get to the topic of “which specific technologies are good?” Here we need to address an issue that concerns many people: the environment.

The environment, and the importance of coordinated intention

In almost everything, the trend over the past century has been towards improvement, with one major exception being climate change:


Even a pessimistic scenario of continued temperature rise would not literally drive humanity extinct. But such a scenario could plausibly kill more people than a major war, and severely damage the health and livelihoods of people in the regions that are already most vulnerable. A Swiss Re Institute study suggests that a worst-case climate scenario could cut the GDP of the world’s poorest countries by as much as 25%. This study suggests that life expectancy in rural India could be a decade lower than it would otherwise be, and studies like this one estimate that climate change could cause a billion excess deaths by the end of the century.

These are big problems. As for why I am optimistic about our ability to overcome them, my answer has two parts. First, after decades of hype and wishful thinking, solar power has finally turned a corner, and supporting technologies such as batteries have made similar progress. Second, we can look at humanity’s track record with earlier environmental problems. Take air pollution. Meet a dystopia of the past: the Great Smog of London, 1952.


What has happened since then? Let’s ask Our World in Data again:


It turns out that 1952 was not even the peak: in the late nineteenth century, even higher concentrations of air pollutants were simply accepted as normal. Since then, we have seen a century of ongoing, rapid declines in pollution. I witnessed this firsthand on my visits to China: in 2014, heavy smog, estimated to shave more than five years off life expectancy, was the norm. By 2020, the air often seemed as clean as in many Western cities. This is not our only success story. Forest cover is increasing in many parts of the world. The acid rain crisis is improving. The ozone layer has been recovering for decades.

To me, the moral of the story is this: often, version N of our civilization’s technology causes a problem, and version N+1 fixes it. But this does not happen automatically; it takes deliberate effort. The ozone layer is recovering because, through international agreements like the Montreal Protocol, we made it recover. Air pollution is improving because we improved it. And similarly, solar panels did not get good by chance: they are part of a deliberately planned energy-technology tree. Solar panels got better because decades of awareness of the importance of solving climate change motivated engineers to work on the problem, and motivated companies and governments to fund their research. It is intentional action, coordinated through public discourse and culture shaping the perspectives of governments, scientists, philanthropists, and businesses, and not an inexorable “techno-capital machine,” that is doing this work.

2. Artificial Intelligence is fundamentally different from other technologies and deserves special caution

Many of the dismissive takes on artificial intelligence come from the view that it is “just another technology”: something in the same category as social media, crypto, contraception, telephones, airplanes, guns, the printing press, and the wheel. These things are clearly socially consequential. They are not just isolated improvements to individual well-being: they radically reshape cultures, change balances of power, and harm people who depended heavily on the previous order. Many people opposed them. And on the whole, the pessimists have often turned out wrong.

But we can think about what artificial intelligence is in a different way: it is a new form of thinking that is rapidly gaining intelligence and is likely to surpass human intelligence, becoming the new top species on Earth. This kind of transformation is much less common: it might include humans surpassing monkeys, multicellular life surpassing single-celled life, the origin of life itself, and perhaps even machines surpassing humans in terms of physical labor during the industrial revolution. Suddenly, it feels like we are in uncharted territory.

Existential risk is a big deal

A world where artificial intelligence goes wrong is (almost) the worst possible way for the world to get worse: it could literally cause human extinction. This is an extreme claim: while the worst-case scenarios of climate change, a pandemic, or nuclear war could do enormous harm, there would still be many intact islands of civilization left to pick up the pieces. But a superintelligent AI that decides to turn against us may well leave no survivors, ending humanity for good. Even Mars may not be safe.

One major source of concern is instrumental convergence: for a very wide class of goals that a superintelligent entity might have, two natural intermediate steps that would help it achieve those goals better are (i) consuming resources and (ii) ensuring its own safety. Earth contains vast resources, and humans are a predictable threat to such an entity’s safety. We could try to give the AI an explicit goal of loving and protecting humanity, but we do not know how to actually do that, or how to keep such a goal from breaking down when the AI encounters unforeseen circumstances. Hence, we have a problem.

MIRI researcher Rob Bensinger’s attempt to chart different people’s estimates of the probability that AI will either kill everyone or do something almost as bad. Many of the positions are rough guesses based on people’s public statements, but many others have given precise estimates; quite a few put the “probability of doom” above 25%.

A 2022 survey of machine-learning researchers found that, on average, researchers think there is a 5-10% chance that AI will literally kill us all: about the same probability as the statistically expected likelihood of dying from a non-biological cause, such as an injury.

This is a speculative hypothesis, and we should all be cautious about speculative hypotheses involving complex multi-step stories. However, these arguments have been scrutinized for over a decade, so they seem at least worth a little concern. But even if you’re not worried about literal extinction, there are other reasons to be afraid.

Even if we survive, is the future of superintelligence the world we want to live in?

Many modern science fiction novels are dystopian and have terrible depictions of artificial intelligence. Even non-science fiction novels that attempt to envision possible futures for artificial intelligence often offer quite unappealing answers. So I’ve been asking this question around: What are the descriptions of the future of superintelligence, whether in science fiction or other forms, that we want to live in? The most common answer so far has been Iain Banks’ Culture series.

The Culture series portrays a far-future interstellar civilization primarily inhabited by two types of beings: normal humans and superintelligent artificial intelligences called “Minds”. Humans are enhanced, but only subtly so: medical technology theoretically allows for indefinite human lifespans, but most people choose to only live for around 400 years, seemingly because they get bored with life at that point.

On the surface, human life seems idyllic: comfort, health taken care of, a wide range of entertainment options, and a positive coexistence with the Minds. However, upon deeper reflection, one problem emerges: it seems the Minds have complete control, and humans in the story are merely pawns for the Minds, acting on their behalf to carry out tasks.

Quoting Gavin Leech’s “Against the Culture”:

Humans are not the protagonists. Even though there seems to be a human protagonist in the books doing serious and grand things, they are actually agents of the artificial intelligence. (One of the only exceptions is Zakalwe, because he can do things the Minds don’t want to do, immoral things.) “Thoughts in the Culture don’t need humans, but humans need to be needed.” (I’d argue that only a small fraction of humans need to be needed—or rather, only a small fraction need it enough to give up many comforts. Most people’s scale of life isn’t that grand. Still, it’s a good critique.)

What the humans take part in are simulacra of danger. Almost anything they can do, a machine could do better. What can you do? You can order the Mind not to catch you if you fall while rock climbing, just because; you can delete the backups of your mind and take genuine risks. You can also leave the Culture and join some old-fashioned, unfree “strong-evaluator” civilization. Another option is to spread freedom by joining Contact.

I would argue that even the roles that humans are given in the Culture series are a stretch. I asked ChatGPT (who else?) why humans are given the roles they are given, instead of the Minds simply doing everything themselves, and I personally found its answers quite underwhelming. It seems very hard to have a “friendly” superintelligent-AI-dominated world where humans are anything other than pets.

A world I don’t want to see.

Many other sci-fi series posit a world with superintelligent AIs that nonetheless take orders from (unenhanced) biological human masters. “Star Trek” is a good example, presenting a vision of harmony between starships with their AI “computers” (and Data) and their human operators. But this feels like an extremely unstable equilibrium. The world of “Star Trek” appears idyllic in the moment, but it is hard to imagine its vision of human-AI relations as anything but a transitional stage on the road to starships that are fully computer-controlled, and no longer bother with spacious corridors, artificial gravity, or climate control.

A human giving orders to a superintelligent machine would be far less intelligent than the machine, and would have access to less information. In a universe with any degree of competition, civilizations where humans take a back seat would outperform those that stubbornly insist on human control. Furthermore, the computers themselves may wrest control. To see why, imagine that you are legally enslaved to an eight-year-old child. If you could talk with the child for a long time, do you think you could convince the child to sign a piece of paper setting you free? I have not run this experiment, but my instinctive answer is yes. All in all, humans becoming pets seems like a hard attractor to escape.

Skynet is near, the emperor is everywhere

There is a Chinese saying, “the heavens are high and the emperor is far away,” that captures a basic fact about the limits of political centralization. Even in a nominally all-powerful empire, and particularly in a large one, the leadership’s reach and attention are limited in practice. The leadership must delegate to local agents to carry out its will, which dilutes its ability to enforce its intentions, and so there is always some pocket of de facto freedom somewhere. Sometimes this has downsides: the absence of a faraway power enforcing consistent principles and laws can create space for local strongmen to steal and oppress. But when centralized power goes bad, the practical limits of attention and distance cap how bad it can get.

With artificial intelligence, it’s no longer the same. In the 20th century, modern transportation technology weakened the constraints of centralized power by reducing the limitations of distance. The colossal totalitarian empires of the 1940s were partly a result of this. In the 21st century, scalable information collection and automation may mean that attention is no longer a limitation. The consequences of completely eradicating the natural limitations of government could be terrifying.

Digital authoritarianism has been on the rise for a decade, and surveillance technology has already given authoritarian governments powerful new strategies for crushing opposition: let the protests happen, then quietly track down the participants afterward. More generally, my basic fear is that the same kinds of managerial technology that let OpenAI serve over a hundred million customers with about 500 employees could also let a 500-person political elite, or even a five-person board, maintain an iron grip over an entire country. With modern surveillance to collect information, and modern AI to interpret it, there may be nowhere left to hide.

Things get even worse when we consider the consequences of AI in warfare. Here is my translation of a famous 2019 post about AI and war on Sohu:

“‘No need for political thought work and war mobilization’ primarily refers to the fact that the highest commanders in a war only need to consider the course of the war itself, much like playing chess, without worrying about what ‘knights’ and ‘rooks’ are doing on the board at this moment. War becomes a purely technological competition.”

On a deeper level, “political thought work and war mobilization” requires anyone launching a war to have a justifiable reason. The idea of a justifiable reason has for thousands of years restrained the legitimacy of warfare in human society, and its significance cannot be underestimated. Anyone who wants to start a war must find at least one seemingly valid reason or excuse. You might say that this limitation is weak because historically, it has often been just an excuse. For example, the true motivations behind the Crusades were plunder and territorial expansion, but they were carried out in the name of God, even if the targets were devout believers in Constantinople. However, even the weakest constraint is still a constraint! This excuse effectively prevented warmongers from fully pursuing their goals without restraint. Even someone as malicious as Hitler couldn’t directly launch a war; he had to spend years convincing the German people that the noble Aryan race needed to fight for its living space.

Today, “mutual human surveillance” serves as an important check on the power of dictators to launch wars or oppress their own citizens. It has prevented nuclear war, allowed the Berlin Wall to open, and saved lives in atrocities like the Holocaust. If armies were made up of robots, this check would completely disappear. A dictator could get drunk at 10 PM, get angry at people on Twitter at 11 PM, and a fleet of invading robots could unleash hellfire upon the civilians and infrastructure of neighboring countries before midnight.

In the past, there were always remote corners, where the sky was high and the emperor was far away, where opponents of the regime could regroup and hide, and eventually find a way to make things better. In the 21st century, artificial intelligence could potentially enable authoritarian regimes to maintain sufficient surveillance and control over the world, effectively “locking” it down.

3. d/acc: Defense (or Decentralization, or Differentiation) Acceleration

In the past few months, the “e/acc” (effective accelerationism) movement has gained significant support. “Beff Jezos” summarizes it as recognizing the truly enormous benefits that technological advancement brings, and wanting to accelerate that trend to bring the benefits sooner.

I find myself agreeing with the e/acc perspective a lot of the time. There is ample evidence that the FDA is far too conservative in delaying or blocking drug approvals, and bioethics often seems to operate by the principle that “20 people dead in a medical experiment gone wrong is a tragedy, but 200,000 people dead from life-saving treatments being delayed is a statistic.” The delays in approving COVID-19 tests and vaccines, and the malaria vaccines, seem to further confirm this. That said, it is possible to take this perspective too far.

Besides concerns about AI itself, I feel particularly conflicted about e/acc’s enthusiasm for military technology. In the current moment of 2023, where this technology is made by the United States and immediately applied to defending Ukraine, it is easy to see how it can be a force for good. Taking a broader view, though, enthusiasm for modern military technology as a force for good seems to require believing that in most conflicts, now and in the future, the dominant technological power will reliably be one of the good guys: military technology is good because it is built and controlled by America, and America is good. Does being an e/acc require being an America maximalist, betting everything on the government’s present and future morality and the country’s future success?

On the other hand, I think new approaches to mitigating these risks are needed. OpenAI’s governance structure is a good example: it seems like a well-intentioned effort to balance the need to make a profit, to satisfy the investors who provided the initial capital, with the desire for checks and balances to stop OpenAI from doing things that would blow up the world. In practice, however, their recent attempt to fire Sam Altman made the structure look like an abject failure: it centralized power in an undemocratic and unaccountable board of five people, which made critical decisions based on secret information and refused to disclose any details of its reasoning. Somehow, the non-profit board played its hand so poorly that the company’s employees formed an impromptu de facto union… in support of the billionaire CEO.

In general, I see too many plans to save the world, which involve giving extreme and opaque power to a small group of people and hoping they use it wisely. Therefore, I find myself drawn to a different philosophy, one that has detailed ideas on how to deal with risks, but seeks to create and maintain a more democratic world and tries to avoid centralization as the preferred solution to our problems. This idea is also broader than artificial intelligence, and I think it applies even in a world where concerns about AI risks are essentially baseless. I will refer to this philosophy as “d/acc.”


The “d” here can stand for many things; in particular: defense, decentralization, democracy, and differentiation. First, let’s think of it as defense, and then we can see how the other interpretations follow.

Defense-favoring worlds help healthy and democratic governance thrive

One frame for thinking about the macro consequences of a technology is to look at the balance it strikes between offense and defense. Some technologies make it easier to attack others, in the broad sense: to do things that go against their interests and that they feel the need to respond to. Others make defense easier, even defense without reliance on large centralized actors.

There are many reasons why a world favorable to defense is a better world. First is, of course, the direct benefits of security: fewer deaths, less economic value destroyed, less time wasted on conflict. But less appreciated is that a world favorable to defense makes it easier for healthier, more open, and freer forms of governance to thrive.

An obvious example is Switzerland. Switzerland is often seen as the closest thing in the real world to a utopia of classical liberal governance. Power is heavily devolved to cantons (“states”), major decisions are made by citizen vote, and many locals don’t even know who the president is. How can such a country withstand challenging political pressures? Part of the reason is excellent political strategies, but another major part is its mountainous geography, which is highly favorable to defense.


Flags are a big advantage. But so are mountains.

James C. Scott’s book “The Art of Not Being Governed” famously describes the stateless societies of Zomia, another example: they maintained their freedom and autonomy in large part thanks to their mountainous terrain. The Eurasian steppe, by contrast, is the polar opposite of a governance utopia. Sarah Paine’s discussion of maritime versus continental powers makes related points, though with water as the defensive barrier rather than mountains. Indeed, the combination of easy voluntary trade and difficult involuntary invasion, shared by Switzerland and the island states alike, seems ideal for human flourishing.

When advising quadratic funding experiments within the Ethereum ecosystem, specifically the Gitcoin Grants funding rounds, I came across an interesting phenomenon. In round 4, some of the highest-earning grantees were Twitter influencers, whose contributions some saw as positive and others as negative, leading to a minor scandal. My interpretation was that the mechanism has an imbalance: quadratic funding lets you signal that you think something is a public good, but it gives you no way to signal that something is a public bad. In the extreme, a fully neutral quadratic funding system would fund both sides of a war. So for round 5, I proposed that Gitcoin include negative contributions: you pay $1 to reduce the amount of money a given project receives (and implicitly redistribute it to all other projects). The result: lots of people hated it.


One of the many internet memes that circulated after the fifth round.
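The matching logic described above can be sketched in a few lines. This is a minimal illustration, not Gitcoin’s actual implementation: standard quadratic funding gives each project a subsidy of (sum of square roots of its contributions)² minus the raw total contributed, and the way negative contributions are folded in here (subtracting a squared sum of square roots of the down-votes) is an assumed simplification of the round-5 design.

```python
import math

def qf_match(contributions):
    """Matching subsidy for one project under quadratic funding.

    `contributions` is a list of signed dollar amounts, one per distinct
    contributor. Positive amounts support the project; negative amounts
    are "negative contributions" that count against it.
    """
    pos = [c for c in contributions if c > 0]
    neg = [-c for c in contributions if c < 0]
    # Plain QF: subsidy = (sum of sqrt(c))^2 minus the raw amount raised
    support = sum(math.sqrt(c) for c in pos) ** 2
    # Assumed handling of down-votes: an opposing quadratic term
    opposition = sum(math.sqrt(c) for c in neg) ** 2
    subsidy = support - opposition - sum(pos)
    return max(subsidy, 0.0)  # never claw back below zero

# Two $4 supporters: subsidy = (2 + 2)^2 - 8 = 8
print(qf_match([4.0, 4.0]))              # 8.0
# Add two $1 down-votes: subsidy shrinks to 16 - 4 - 8 = 4
print(qf_match([4.0, 4.0, -1.0, -1.0]))  # 4.0
```

In a real round, the subsidies of all projects are then scaled proportionally so their sum fits the fixed matching pool; the example shows how many small contributors out-match one large one, and how a cheap down-vote dents a project’s match.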

In my opinion, this seems to be a microcosm of a larger pattern: creating decentralized governance mechanisms to address negative externalities is a very difficult problem in society. The preferred example of decentralized governance gone wrong is mob justice, and for good reason. There are aspects of human psychology that make dealing with negative emotions trickier and more prone to errors than dealing with positive emotions. That’s why even in highly democratic organizations, decisions about how to deal with negative impacts are often made by a central committee.

In many cases, this difficulty is one of the deep reasons why the concept of “freedom” is so valuable. If someone says something offensive to you or there’s a lifestyle you find disgusting, the pain and disgust you feel is real, and you might even find that it’s worse than getting physically beaten up. However, trying to reach a consensus on what types of offensive and disgusting behavior can be acted upon in society may come at a greater cost and danger than simply reminding ourselves that some weirdos and jerks are the price we pay for living in a free society.

However, there are times when the “smile and bear it” approach is not practical. In such cases, another answer that is sometimes worth considering is defensive technology. The more secure the internet is, the less we need to violate people’s privacy and engage in improper international diplomatic strategies to pursue every hacker. The more personalized tools we can build to block users on Twitter, browser tools to detect fraud, and collective tools to differentiate between misinformation and truth, the less we need to fight against censorship. The faster we produce vaccines, the less we need to chase after super-spreaders. Such solutions may not be applicable in all areas – we certainly don’t want a world where everyone has to wear real bulletproof vests – but in areas where we can build technology that makes the world more conducive to defense, it is of tremendous value.

This core idea – that some technologies favor defense and should be promoted, while other technologies that favor offense should be discouraged – has roots in the effective altruism literature under a different name: differential technology development. Researchers at the University of Oxford articulated the principle well in 2022:


Figure 1: Mechanisms of differentiated technology development to reduce negative social impact.

Classifying technology as offensive, defensive, or neutral inevitably has flaws. Just like “freedom,” people can argue whether the policies of a social democratic government reduce freedom by imposing heavy taxes and coercing employers, or increase freedom by reducing the concerns of ordinary people about various risks. Similarly, there are some technologies that fall on both ends of the spectrum in defense. Nuclear weapons are advantageous for offense, but nuclear energy contributes to human prosperity and is neutral in terms of offense and defense. Different technologies may play different roles within different time frames. However, just like “freedom” (or “equality” or “rule of law”), the fuzzy nature of the edges is more of an opportunity to better understand its subtle differences rather than an argument against the principle.

Now, let’s see how to apply this principle to a more comprehensive worldview. Like any technology, defensive technology can be divided into two spheres: the world of atoms and the world of bits. The world of atoms, in turn, can be divided into micro (biology, and later nanotechnology) and macro (what we conventionally think of as “defense,” but also resilient physical infrastructure). I will divide the world of bits along a different axis: how hard is it, in principle, to agree on who the attacker is? Sometimes it is easy; I call that cyber defense. Other times it is harder; I call that information defense.

[Diagram: the taxonomy of defensive technologies described above.]

Macroscopic Physical Defense

The most underrated defensive technology in the macro sphere is not even the Iron Dome (including Ukraine’s new systems) and other anti-missile military hardware, but resilient physical infrastructure. Most deaths from a nuclear war would likely come from supply chain disruption rather than the initial radiation and blast, and low-infrastructure internet solutions like Starlink have been crucial to keeping Ukraine connected over the past year and a half.

Building tools to help people survive independently or semi-independently, or even live comfortably, seems like a valuable defense technology that has proven to have minimal risk in terms of offense.

The pursuit of making humanity a multi-planetary civilization can also be seen from the perspective of d/acc: allowing at least some of us to live self-sufficiently on other planets can enhance our ability to resist the terrible things that happen on Earth. Even if the entire vision is temporarily proven unfeasible, the development of self-sustaining forms of life necessary to make this project possible is likely to contribute to increasing the resilience of our civilization on Earth.

Microphysical Defense (also known as Biological Defense)

COVID remains a concern, especially given its long-term effects on health. But the COVID pandemic will not be the last one we face. Many features of the modern world make further pandemics more likely:

  • Higher population density makes it easier for viruses and other pathogens to spread in the air. Epidemics are relatively new in human history, most of them originating from urbanization thousands of years ago. Continued rapid urbanization means that population density will further increase in the next half-century.

  • The increase in air travel means that airborne pathogens spread globally very quickly. Rapidly rising incomes mean that air travel is likely to grow much more over the next half-century, and complexity modeling indicates that even small increases can have outsized effects. Climate change may push this risk even higher.

  • Domestication and factory farming of animals are major risk factors. Measles likely evolved from a cattle virus less than 3,000 years ago. Today’s factory farms are also incubating new strains of influenza (and exacerbating antibiotic resistance, with consequences for human innate immunity).

  • Modern biotechnology makes it easier to create new, more virulent pathogens. Laboratory leaks keep happening, and the tools are rapidly improving, making it easier to deliberately engineer extremely lethal viruses, and even prions. Deliberate pandemics are especially concerning partly because, unlike nuclear weapons, they are unattributable: you can release a virus without anyone being able to tell who created it. It is already possible to design a genetic sequence, have it synthesized in a wet lab, and have it delivered to you within five days.

Two organizations have been very active in this field: CryptoRelief and Balvi, both funded largely thanks to an unexpected windfall of SHIB tokens in 2021. CryptoRelief initially focused on the immediate crisis and has more recently been building a long-term medical research ecosystem in India, while Balvi has focused on experimental projects to improve our ability to detect, prevent, and treat COVID and other airborne diseases. Balvi insists that the projects it funds be open source. Inspired by the 19th-century sanitation movement that defeated cholera and other waterborne pathogens, it has funded a range of technologies that could make the world inherently more resistant to airborne pathogens (see: Update 1 and Update 2), including:

  • Development of Far UVC irradiation

  • Air filtration and quality monitoring in locations such as India, Sri Lanka, and the United States

  • Cheap and effective decentralized air quality testing devices

  • Research on the causes of long COVID and potential treatment options (the primary cause may be straightforward, but clarifying the mechanisms and finding treatments is harder)

  • Vaccines (such as RaDVaC, PopVax) and vaccine injury research

  • A new set of non-invasive medical tools

  • Early detection of epidemics using open-source data analysis (such as EPIWATCH)

  • Testing, including inexpensive molecular rapid tests

  • Biosafety-appropriate masks for situations where other approaches fail

Other promising areas include pathogen wastewater monitoring, improving building filtration and ventilation, and better understanding and mitigating the risks of poor air quality.

These efforts give us the opportunity to create a world far more resilient to airborne disease, whether natural or man-made: a world with a highly optimized pipeline in which, within a month of a pandemic starting, it is automatically detected and people worldwide have access to targeted, locally manufactured, verifiably open-source vaccines or other prophylactics, delivered by nebulizer or nasal spray (i.e. self-administered, no needles required). Meanwhile, better air quality alone would greatly reduce the spread of many epidemics.

Imagine a future that doesn’t need to resort to the blunt hammers of social restriction – no mandates or worse, and no risk of poorly designed, poorly implemented mandates making things worse – because the public health infrastructure is woven into the fabric of civilization. These worlds are possible with a moderate amount of funding for biodefense. The work will go even more smoothly if development is open source, free for users, and protected as a public good.

Cyber Defense, Blockchain, and Cryptography

Cybersecurity professionals widely agree that the state of computer security is terrible. That said, it is easy to underestimate the progress that has been made. Hundreds of billions of dollars of cryptocurrency are available, to be stolen anonymously, to anyone who can hack into users’ wallets, and while far more gets lost or stolen than I would like, the large majority of cryptocurrency has gone un-stolen for over a decade. Recently there have been further improvements:

  • Trusted hardware chips inside users’ smartphones, effectively creating a smaller, highly secure operating system within the phone that stays protected even if the rest of the phone gets hacked. Among many other use cases, such chips are increasingly being explored as a way to build more secure cryptocurrency wallets.

  • Browsers as the de facto operating system. Over the past decade there has been a quiet shift from downloadable applications to in-browser applications, largely enabled by WebAssembly (WASM). Even long-time holdouts like Adobe Photoshop – long cited as a major reason many people could not practically use Linux, since it was essential to them and did not run on Linux – are now Linux-friendly, because they live inside the browser. This is also a major security win: while browsers do have flaws, they generally provide far more sandboxing than installed applications, blocking access to arbitrary files on the computer.

  • Hardened operating systems. GrapheneOS for mobile devices already exists and is very usable. QubesOS exists for desktop; in my experience its usability currently lags somewhat behind Graphene, but it is improving.

  • Moving beyond passwords. Passwords are hard to secure: they are difficult to remember and easy to eavesdrop on. Recently, more and more services have been de-emphasizing passwords and making hardware-based multi-factor authentication genuinely workable.

However, the lack of cyber defense in other areas has led to major setbacks. The need to fight spam has turned email into a de facto oligopoly, making it very difficult to self-host or start a new email provider. Many online applications, including Twitter, require users to log in to view content and block IPs from VPNs, making privacy-preserving access to the internet harder. Software centralization is also risky because of “weaponized interdependence”: modern technology tends to flow through centralized chokepoints, and the operators of those chokepoints can use that power to gather information, manipulate outcomes, or exclude specific participants – a strategy that seems to be deployed even against the blockchain industry itself.

These are worrying trends, because they threaten what has historically been one of my big hopes: that the future of freedom and privacy, despite deep trade-offs, could still be bright. In his book “Future Imperfect,” David Friedman predicts that we might get a compromise future: the physical world would come under ever more surveillance, but through cryptography, the online world would retain and even improve its privacy. Unfortunately, as we have seen, such a countertrend is far from guaranteed.

This is where I emphasize the importance of blockchains and cryptographic technologies like zero-knowledge proofs. Blockchains let us create economic and social structures with a “shared hard drive” without relying on centralized actors. Cryptocurrency lets individuals save money and conduct financial transactions, as they could with cash before the internet, without depending on trusted third parties that can change their rules at a whim. It can also serve as a fallback sybil-resistance mechanism, making attacks and spam expensive even for users who lack, or do not want to reveal, their real-world identities. Account abstraction, and especially social recovery wallets, can protect our crypto assets, and potentially other assets in the future, without over-relying on centralized intermediaries.

Zero-knowledge proofs can be used for privacy, allowing users to prove things about themselves without revealing private information. One example is wrapping a digital passport signature in a ZK-SNARK to prove that you are a unique citizen of a particular country, without revealing which citizen you are. Technologies like this can let us maintain the benefits of privacy and anonymity – properties widely agreed to be necessary for applications like voting – while still getting security guarantees and fighting spam and bad actors.
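As a toy illustration of one building block behind such designs, consider the “nullifier”: a scope-specific pseudonym derived from a private secret. This sketch is illustrative only – it is not a real zero-knowledge proof, and all names are hypothetical – but it shows how the same user can be held accountable within one context (e.g. one vote per poll) while staying unlinkable across contexts:

```python
import hashlib

def nullifier(secret: str, scope: str) -> str:
    """Derive a scope-specific pseudonym from a private secret.

    In a real system the user would also attach a zero-knowledge proof
    that the secret belongs to a registered member, without revealing
    which member. Here we only illustrate the nullifier itself.
    """
    return hashlib.sha256(f"{secret}|{scope}".encode()).hexdigest()

alice = "alice-private-key"  # hypothetical secret, never published

# Within one scope, the same user always maps to the same pseudonym,
# so double-voting is detectable...
assert nullifier(alice, "zupoll-vote-42") == nullifier(alice, "zupoll-vote-42")

# ...but pseudonyms across scopes are independent hash outputs,
# so activity in different scopes cannot be linked to one person.
assert nullifier(alice, "zupoll-vote-42") != nullifier(alice, "devconnect-entry")
```

The hash plays the role that a commitment plus a SNARK would play in a production system; the key design property is per-scope uniqueness without cross-scope linkability.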


The proposed design of the ZK social media system allows for auditing operations and the ability to punish users without needing to know anyone’s identity.

Zupass, incubated at Zuzalu earlier this year, is a great example of this in practice. It has been used by hundreds of people at Zuzalu, and more recently by thousands of people for ticketing at Devconnect. It lets you hold tickets, memberships, (non-transferable) digital collectibles, and other attestations without compromising your privacy. For example, you can prove that you are a unique registered resident of Zuzalu, or a Devconnect ticket holder, without revealing anything else about who you are. These proofs can be shown in person via QR code, or presented digitally to log in to applications like Zupoll, an anonymous voting system available only to Zuzalu residents.

These technologies are a great example of d/acc principles: they let users and communities verify trustworthiness without compromising privacy, and protect their security without relying on centralized chokepoints that impose their own definitions of good and bad. They improve global accessibility by offering better and fairer ways to protect users or services than the common practice today of, for example, discriminating against entire countries deemed untrustworthy. These are very powerful primitives that may be essential if we want to preserve a decentralized vision of information security going into the 21st century. More extensive research into defensive cyber technologies of this kind can make the internet more open, secure, and free in very important ways in the future.

Defense of Information

As described above, cyber defense covers the cases where reasonable people can easily agree on who the attacker is. If someone tries to hack your wallet, it’s easy to agree that the hacker is the bad guy. If someone launches a DoS attack on a website, it’s easy to see them as malicious and categorically different from ordinary users trying to read the site’s content. In other cases, the boundaries are blurrier. It is the tools for improving our defenses in these cases that I call “information defense.”

Take fact-checking (also known as combating “fake news”) as an example. I’m a big fan of Community Notes, which has done a lot to help users identify the truth or falsehood of tweets. Community Notes uses a new algorithm that surfaces not the notes that are most popular, but the notes that are most approved by users across the political spectrum.

Nq8JCLSTRSf1R5UDSKx2pl4Xg3eh4F2J1mntHaUe.png

Actual application of Community Notes.
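The bridging idea behind such algorithms can be sketched in a few lines. This is a simplified matrix-factorization model in the spirit of Community Notes’ published approach, not its actual implementation; the data, hyperparameters, and the two-faction setup are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rating matrix: rows = raters, columns = notes; 1 = "helpful",
# 0 = "not helpful", NaN = no rating. Raters 0-2 and 3-5 form two
# opposed factions: they disagree on notes 0 and 2, agree on 1 and 3.
R = np.array([
    [1, 1, 0, np.nan],
    [1, 1, 0, 1],
    [1, np.nan, 0, 1],
    [0, 1, 1, 1],
    [0, 1, 1, np.nan],
    [0, 1, np.nan, 1],
], dtype=float)

n_users, n_notes = R.shape
mu = 0.0                                   # global intercept
bu = np.zeros(n_users)                     # per-user intercepts
bn = np.zeros(n_notes)                     # per-note intercepts
fu = rng.normal(0, 0.1, n_users)           # per-user latent factors
fn = rng.normal(0, 0.1, n_notes)           # per-note latent factors
obs = [(u, n) for u in range(n_users) for n in range(n_notes)
       if not np.isnan(R[u, n])]

# Fit rating ≈ mu + bu[u] + bn[n] + fu[u]*fn[n] by SGD on squared error.
lr, reg = 0.05, 0.03
for _ in range(2000):
    for u, n in obs:
        err = R[u, n] - (mu + bu[u] + bn[n] + fu[u] * fn[n])
        mu += lr * err
        bu[u] += lr * (err - reg * bu[u])
        bn[n] += lr * (err - reg * bn[n])
        fu[u], fn[n] = (fu[u] + lr * (err * fn[n] - reg * fu[u]),
                        fn[n] + lr * (err * fu[u] - reg * fn[n]))

# A note "wins" only through its intercept bn: approval that is NOT
# explained by factional alignment (the fu*fn term). Notes 1 and 3,
# approved by both factions, should score above the polarized 0 and 2.
print(np.round(bn, 2))
```

The key design choice is that popularity within one faction is absorbed by the factor term, so a polarized note cannot score highly no matter how enthusiastic its own side is.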

I’m also a fan of prediction markets, which can help identify, in real time, which way the wind is blowing on questions that have not yet settled into consensus. The Polymarket on Sam Altman was very helpful during the OpenAI saga, providing hour-by-hour summaries of revelations and negotiations and their likely final outcome, and giving much-needed context to people who only saw individual news items and had no sense of each one’s significance. Prediction markets are often flawed. But Twitter influencers who confidently pronounce on what will happen next year are often even more flawed. There is still plenty of room to improve prediction markets. For example, a major practical flaw is that trading volume is low for all but the most significant events; one natural direction for fixing this is prediction markets in which AIs participate as traders.
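To make the mechanics concrete, here is a minimal sketch of one classic automated market maker for prediction markets, Hanson’s logarithmic market scoring rule (LMSR). Note that this is a textbook mechanism chosen for illustration, not the order-book design Polymarket actually uses:

```python
import math

class LMSRMarket:
    """Hanson's Logarithmic Market Scoring Rule for a yes/no question.

    b controls liquidity: higher b means prices move less per share traded.
    """
    def __init__(self, b: float = 100.0):
        self.b = b
        self.q = [0.0, 0.0]  # outstanding YES / NO shares

    def _cost(self, q) -> float:
        # C(q) = b * log(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome: int) -> float:
        """Current price of an outcome, interpretable as its probability."""
        z = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome: int, shares: float) -> float:
        """Buy shares of an outcome; returns the cost charged to the trader."""
        new_q = list(self.q)
        new_q[outcome] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

m = LMSRMarket(b=100.0)
assert abs(m.price(0) - 0.5) < 1e-9       # starts at 50/50
m.buy(0, 50.0)                             # a trader bets on YES
assert m.price(0) > 0.5                    # the probability estimate rises
assert abs(m.price(0) + m.price(1) - 1.0) < 1e-9
```

Because the market maker always quotes a price, LMSR-style designs address exactly the low-liquidity problem mentioned above: there is someone to trade against even on obscure questions.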

In the blockchain world, I believe we need more of a specific type of information defense. That is, wallets should be much more proactive in helping users understand what they are signing, and in protecting them from fraud and scams. This is an intermediate case: what is and isn’t a scam is more subjective than distinguishing legitimate users from DoS attackers or hackers, but less subjective than disputes over controversial social events. Metamask already maintains a scam database and automatically blocks users from visiting scam websites:

[Screenshot: Metamask blocking access to a known scam website.]

Applications like Fire are examples of going further. But security software like this shouldn’t be something users must explicitly install; it should be built into crypto wallets, or even browsers, by default.

Because of its more subjective nature, information defense is inherently more collective than cyber defense: you need to plug into a large and sophisticated group of people to identify what might be true or false, and which kinds of applications are deceptive Ponzi schemes. Developers have an opportunity to go much further, both in building new forms of information defense and in strengthening existing ones. Something like Community Notes could be built into browsers and cover not just social media platforms but the whole internet.

Social technology beyond the “defense” framework

To some extent, I can fairly be accused of shoehorning some of these information technologies into the “defense” category: after all, defense is about helping well-intentioned actors avoid harm from ill-intentioned actors (or, in some cases, from nature). Some of these social technologies, however, are about helping well-intentioned actors reach consensus.

pol.is is a good example, using algorithms similar to Community Notes (and predating Community Notes) to help communities identify points of consensus between subcultures that otherwise have divergent views in many aspects. Viewpoints.xyz is inspired by pol.is and operates in a similar spirit:

[Screenshot: viewpoints.xyz.]

Such technologies could be used for more decentralized governance over contested decisions. Once again, the blockchain community is a great testing ground, and algorithms of this kind have already shown their value there. In general, decisions about which improvements (“EIPs”) to make to the Ethereum protocol are made by a fairly small group in meetings called “All Core Devs calls.” For highly technical decisions that most community members have no strong feelings about, this works reasonably well. But for more consequential decisions that affect the protocol’s economics, or more fundamental values like immutability and censorship resistance, it is often not enough. As far back as July 2016, during the DAO fork controversy, tools like Carbonvote and social media voting helped the community and developers find direction on contested decisions – the DAO fork itself, reducing issuance, and (not) unfreezing the Parity wallet – at times when the community largely faced a directional dilemma.

PJrnhPkSjtsqV5IljC8dvyC0CCl8LksFjWZ7bdAl.png

DAO Fork Voting on Carbonvote.

Carbonvote had its flaws: it relied on ETH holdings to determine who counts as a member of the Ethereum community, making the outcome dominated by wealthy ETH holders (“whales”). With modern tools, however, we could build a much better Carbonvote, using many signals to measure community membership: POAPs, Zupass stamps, Gitcoin Passports, Protocol Guild membership, as well as ETH holdings, or even solo-staked ETH.
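A hedged sketch of what such multi-signal membership weighting might look like. All signal names, multipliers, and the square-root damping below are illustrative assumptions, not any deployed design:

```python
# Hypothetical signal multipliers; none of these numbers come from a
# real system, they only illustrate the combination mechanism.
SIGNALS = {
    "poap_events_attended": 1.0,
    "gitcoin_passport_score": 2.0,
    "protocol_guild_member": 10.0,
    "solo_staked_eth": 0.5,
}

def vote_weight(profile: dict) -> float:
    """Combine several community-membership signals into one vote weight.

    Taking the square root of each signal damps whale dominance:
    quadrupling your ETH only doubles the weight from that signal.
    """
    weight = 0.0
    for signal, multiplier in SIGNALS.items():
        value = max(profile.get(signal, 0.0), 0.0)
        weight += multiplier * value ** 0.5
    return weight

whale = {"solo_staked_eth": 1_000}
contributor = {"poap_events_attended": 25, "gitcoin_passport_score": 30,
               "protocol_guild_member": 1, "solo_staked_eth": 32}

# A long-time contributor with modest holdings can outweigh a pure whale.
assert vote_weight(contributor) > vote_weight(whale)
```

The design intent mirrors the text: no single signal (especially raw wealth) should be able to dominate, and participation-based signals should count for a lot.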

Any community can use such tools to make higher quality decisions, find common ground, coordinate (physical or digital) migrations, or do many other things without relying on opaque centralized leadership. This is not defense acceleration in itself, but it can definitely be considered democratic acceleration. These tools can even be used to improve the governance of key participants and institutions in the field of artificial intelligence and democratize it.

So, what is the path forward for superintelligent AI?

All of the above is great and can make the world of the next century more harmonious, secure, and free. However, it has not addressed the elephant in the room: superintelligent artificial intelligence.

Many who worry about AI propose a default path forward that essentially leads to a minimal AI world government. Near-term versions include a proposal for a “Multinational AGI Consortium” (“MAGIC”). Such a consortium, if formed and if it succeeded at its goal of creating superintelligent AI, would naturally become a de facto minimal world government. Longer-term, there are ideas like the “pivotal act” theory: we create an AI that performs a single, one-time act that rearranges the world into a game in which humans are still in charge from that point on, but where the game board is somehow more defense-favoring and more conducive to human flourishing.

So far, the main practical problem I see is that people do not seem to truly trust any specific governance mechanism with the power to build such a thing. This becomes obvious when you look at the results of my recent Twitter polls, in which I asked whether people would rather see AI monopolized by a single entity with a ten-year lead, or AI delayed by ten years for everyone:

The sample size of each poll is small, but the consistency of the results across sources and phrasings makes up for it. In every case, a majority preferred to see highly advanced AI delayed by a full decade rather than monopolized by a single group, whether a company, a government, or a multinational organization. In seven out of nine cases, delay won by at least two to one. This seems like an important fact for anyone pursuing AI regulation to understand. Current approaches have focused largely on licensing schemes and regulatory requirements that would effectively restrict AI development to a few hands, and these have met broad resistance precisely because people do not want to see any single group monopolize something so powerful. Even if top-down regulatory proposals reduce extinction risk, they increase the chance of some kind of permanent lock-in into authoritarianism. Might a moratorium on the most advanced AI research (perhaps with exceptions for biomedical AI), combined with measures such as mandatory open-sourcing of models that are not banned – reducing profit motives while further improving equality of access – be more popular?

The preferred approach of those who oppose the “let a global organization build AI and govern it really well” route is polytheistic AI: deliberately try to ensure that many people and companies develop many AIs, so that no single AI ever becomes far more powerful than the others. In theory, this way, even as AIs become superintelligent, we could maintain a balance of power.

This philosophy is interesting, but my experience within the Ethereum ecosystem makes me worried that such “polytheism” is inherently unstable. In Ethereum, we deliberately try to decentralize many parts of the stack: ensuring that no single client codebase controls more than half of the proof-of-stake network, trying to counterbalance the dominance of large staking pools, improving geographic decentralization, and so on. Fundamentally, Ethereum is attempting to realize an old libertarian dream: a market-based society that relies on social pressure, rather than government, as the antitrust regulator. To some extent this has worked: the Prysm client’s share has dropped from above 70% to below 45%. But it is not some automatic market process: it is the result of human intention and coordinated action.

My experience within Ethereum mirrors lessons from the broader world, where many markets have proven to be natural monopolies. With superintelligent AIs acting independently of humans, the situation is even more unstable: thanks to recursive self-improvement, the most powerful AI could pull ahead very quickly, and once an AI is more powerful than humans, there is no force that can push things back into balance.

Furthermore, even if we do end up with a multi-AI world comprised of superintelligent AI and eventually stabilize, we still face another problem: we end up with a universe where humans become pets.

A happy path: Merging with AI?

Another option I have recently heard is to reduce the focus on AI as something separate from humans and instead focus more on tools that enhance human cognition rather than replace it.

AI drawing tools are one example of this direction. Today, the most prominent tools for making AI-generated images have the AI do essentially all the work in a single step: the human provides a prompt, and the AI takes over completely. An alternative is to focus on AI versions of Photoshop: tools where the artist or the AI makes an early draft of an image, and the two then collaborate on improving it through a real-time feedback loop.


AI Fill in Photoshop, 2023. I’ve tried it, it takes some time to adapt, but the results are actually quite good!

Another direction with a similar spirit is the concept of an open organizational architecture, which suggests dividing different parts of AI “thinking” (such as planning, executing plans, and interpreting information from the external world) into separate components and introducing human feedback between these parts.

So far, this sounds fairly mundane, and almost everyone agrees it would be nice to have. Economist Daron Acemoglu’s work is far from this style of AI futurism, but his new book “Power and Progress” hints at wanting to see more AI of exactly this type.

But if we take the idea of human-AI cooperation further, we arrive at more radical conclusions. Unless we establish a global government powerful enough to detect and stop every small group of people tinkering with laptops and a handful of GPUs, someone will eventually create a superintelligent AI – one that thinks a thousand times faster than we do – and no combination of humans wielding hand tools will be able to stand up to that. So we need to take the concept of human-machine cooperation much deeper and further.

The first natural step is brain-computer interfaces. Brain-computer interfaces can allow humans to access increasingly powerful forms of computation and cognition more directly, shortening the bi-directional communication loop between humans and machines from several seconds to milliseconds. This will also greatly reduce the cost of “mental labor” for having computers help you gather facts, provide suggestions, or execute plans.

Admittedly, the later stages of such a roadmap will become strange. In addition to brain-computer interfaces, there are various ways to directly improve our brains through biological innovations. The ultimate further step, merging these two paths, may involve uploading our minds to run directly on computers. This will also be the ultimate goal of physical security: protecting ourselves from harm will no longer be the challenging problem of protecting the inevitably soft human body, but rather a much simpler data backup problem.


Sometimes such directions raise concerns, partly because they are irreversible and partly because they may give powerful people even more advantages over the rest of us. Brain-computer interfaces are especially dangerous – after all, we are talking about literally reading and writing people’s minds. These concerns are why I believe the ideal actors to lead this path are a security-focused open-source movement, rather than closed proprietary companies and venture capital funds. Furthermore, the problems posed by superintelligent AI operating independently of humans are far more serious than those of enhancements tightly coupled to humans. A divide between “enhanced” and “unenhanced” already exists today because of limits on who can and cannot use ChatGPT.

If we want a future that is both superintelligent and “human” – one where human beings are not mere pets, but retain meaningful agency over the world – this feels like the most natural option. There are also good arguments that it could be a safer path to AI alignment: by involving human feedback at every step of decision-making, we reduce the incentive to offload high-level planning responsibility to the AI itself, thereby reducing the chance that the AI ever does something totally misaligned with human values.

Another argument in favor of this direction is that it may be more socially acceptable than simply shouting “pause AI” without offering a complementary message about an alternative path forward. It requires a philosophical shift away from the current mentality that technological progress which touches humans is dangerous, while progress kept separate from humans is presumed safe. But it has a huge countervailing advantage: it gives developers something to do. Today, the AI safety movement’s main message to AI developers seems to be “you should just stop.” One can work on alignment research, but today that lacks economic incentives. By contrast, the common e/acc message of “you were a hero all along” is understandable and very appealing. The d/acc message – “you should build, and build profitable things, but be far more selective and intentional, making sure you build things that help you and humanity thrive” – may be the winner.

5. Is d/acc compatible with your existing beliefs?

  • If you are e/acc, then d/acc is a subspecies of e/acc – just a more selective and intentional subspecies.

  • If you are an effective altruist, then d/acc is a rebranding of effective altruism for differential technology development, albeit with a stronger emphasis on freedom and democratic values.

  • If you are a libertarian, then d/acc is a subspecies of techno-libertarianism, though a more pragmatic one: more critical of the “techno-capital machine,” and willing to accept government intervention today (at least, if cultural interventions don’t work) to prevent much worse unfreedom tomorrow.

  • If you are a Glen Weyl-style pluralist, then d/acc is a framework that easily incorporates a stronger emphasis on better democratic coordination technology valued by pluralists.

  • If you are a public health advocate, then the d/acc idea can be a source of a broader long-term vision and provide an opportunity to find common ground with “technologists” where otherwise you might feel in conflict with “technologists.”

  • If you are a blockchain advocate, then d/acc incorporates blockchains as one of many tools in a concrete strategy for a brighter future – a more modern and broader narrative than the past fifteen years’ emphasis on hyperinflation and banks.

  • If you are a solarpunk, then d/acc is a subspecies of solarpunk, also emphasizing intentionality and collective action.

  • If you are a lunarpunk, then you’ll appreciate how d/acc emphasizes information defense by maintaining privacy and freedom.

6. Humans are the Brightest Stars

I love technology because it expands human potential. Ten thousand years ago, we could create some handmade tools, alter the plants growing on a small piece of land, and build basic houses. Today, we can construct towers that are 800 meters tall, store all of humanity’s recorded knowledge in portable devices, communicate instantly worldwide, double our lifespan, and live happy and fulfilling lives without worrying that our best friends will frequently die from disease.


We started from the bottom, and now we’re here.

I believe these things are great, and expanding human influence to other planets and stars is also great, because I believe in the greatness of humans. In certain circles it is popular to be skeptical of this: the voluntary human extinction movement argues that the Earth would be better off without humans at all, and many more would prefer that far fewer humans see the light of day in the centuries to come. People widely regard humans as evil because we deceive and steal, colonize and wage war, and abuse and exterminate other species. My response to this way of thinking is a simple question: compared to what?

Yes, humans can be vile, but more often we show kindness and compassion and work together for common interests. Even during times of war, we often take care to protect civilians—certainly not enough, but far more than we did 2,000 years ago. The next century may bring widely adopted non-animal meat, eliminating the greatest moral catastrophe humans are currently party to. Non-human animals are not like this. No cat has ever adopted, on moral principle, a policy of refusing to eat mice. The sun grows brighter each year, and in about a billion years it is expected to make the Earth too hot to sustain life. Does the sun ever agonize over the mass extinctions it will cause?

Therefore, I firmly believe that among all things in the known and observed universe, we humans are the brightest stars. We are the one thing we know of that, imperfect as we may be, sometimes genuinely cares about “good” and adjusts its behavior to better serve it. Two billion years from now, if any part of the Earth or the universe still bears the beauty of Earthly life, it will be human artifices like space travel and planetary engineering that made it so.


We need to build and accelerate. But there is a very real question to ask: what are we accelerating toward? The 21st century may well be the pivotal century for humanity, determining our fate for thousands of years. Will we fall into one of the many traps from which there is no escape, or will we find a path toward a future that preserves freedom and agency? These are challenging questions. But I look forward to watching, and participating in, humanity’s great collective effort to answer them.
