a16z Dialogues with Solana Co-founders: People Should Try to Create Greater Ideas Instead of Replicating Existing Ones
Original Title: Debating Blockchain Architectures (with Solana)
Host: a16z crypto General Partner Ali Yahya and a16z crypto deal team partner Guy Wuollet. Guest: Anatoly Yakovenko, Solana co-founder and Solana Labs CEO. Compiled by: Qianwen, ChainCatcher
“But what I want to say is that people should try to create greater ideas instead of repeating what already exists. The best metaphor I’ve heard is that when people discover cement, everyone focuses on building bricks with cement, and then someone thinks, I can build a skyscraper. They came up with a way to combine steel, cement, and architecture, which no one could have imagined. The new tool is cement. You just need to figure out what a skyscraper is and then go build it.”
In this episode, a16z crypto talks to Solana Labs Co-Founder and CEO Anatoly Yakovenko, who previously worked at Qualcomm as a senior engineer and engineering manager.
Summary
- The ultimate goal of decentralized computing
- The concept behind Solana
- Differences between Solana and Ethereum
- The future development of blockchain
- Web3 community and development
- Talent recruitment for Web3 startups
The ultimate goal of decentralized computing
a16z crypto: First, I want to know your perspective on the ultimate goal of decentralized computing. How do you view blockchain architecture?
Anatoly Yakovenko: My stance is quite extreme. I believe that settlement will become less and less important, just like in traditional finance. You still need someone to provide guarantees, but these guarantees can be achieved in many different ways. I think what is truly valuable to the world is having a globally distributed, globally synchronized state, which is the real challenge. You can think of it as the role of Google’s search index for Google, or Nasdaq for the financial market.
From a macro perspective, blockchain systems are permissionless, programmable, and highly open, but there is still some kind of market at the back of the stack. For all these markets, achieving global synchrony at speeds approaching the speed of light is very valuable because everyone can use it as a reference. You can still operate local markets, but if there is fast synchronization of global prices, global finance will become more efficient. I believe this is the ultimate goal of blockchain, to synchronize as many states as possible at the speed of light.
a16z crypto: If cryptocurrencies and blockchain achieve mainstream adoption, what will be the biggest driving force for activities on the blockchain?
Anatoly Yakovenko: I think the form will still be similar to Web2, but it will be more transparent and realize the vision of the long tail distribution. There will be various small-scale companies on the Internet that can control their own data, rather than a few dominant players as it is now (although these big companies are also doing great things). I think in the long run, creators should have more control and more autonomy in publishing, and be able to achieve a truly meaningful internet with broad distribution and markets.
a16z crypto: Another way to think about or propose this question is how to balance it. You said you think settlement will become less important in the future. I’m curious, as Solana is a hub for a large amount of global business, especially financial activities, how can it accelerate or complement the goals you just mentioned?
Anatoly Yakovenko: The Solana system is not designed as a store of value. In fact, it tolerates network failure poorly and assumes as much available internet connectivity as possible; in practice it depends on most of the world's cross-border communication and finance staying free and open. That makes it different from a token that can serve as an emergency refuge (a "bunker coin"). Of course, I do think there is also a need for a bunker coin that can survive local geopolitical conflicts.
However, optimistically speaking, the world is becoming ever more tightly connected. I believe we will see multi-terabit connections, and in that world everything is fully interconnected. I think this globally synchronized state machine can absorb a great deal of execution.
From experience, settlement can happen in many places, because settlement is easy to guarantee. I want to emphasize again that I am taking this position for the sake of discussion. Since 2017 we have seen hundreds of proof-of-stake networks, many different instances of these designs, and we have basically never seen a quorum (voting algorithm) failure, because settlement is relatively easy to achieve. Once you have a solid Byzantine fault-tolerant mechanism among, say, 21 decentralized participants, you will not see settlement failures. Other scaling issues have been solved as well. Empirically, Tendermint is very viable; although we went through the Luna crash early on, the problem was not in the consensus mechanism.
I think we are spending too much on settlement, in security, resources, and engineering, while spending far too little on execution, which is where most of the profitability of the financial industry lies. Personally, I believe that if these technologies are to have real impact at global scale, they must beat traditional finance on price, fairness, speed, and so on. That is where we need to focus research and competition.
a16z crypto: So settlement is one axis people choose to optimize blockchains along. People may over-optimize blockchains for settlement and neglect other properties such as throughput, latency, and composability, which often trade off against settlement security. Can you talk about Solana's architecture?
Anatoly Yakovenko: The job of Solana's architecture is to transmit information from everywhere in the world to every participant in the network as fast as possible. So there is no need for sharding or complex consensus protocols; we actually want to keep things simple. Or you could say we were lucky enough to solve one computer science problem, clock synchronization, by using a verifiable delay function as a time source for the network. Think of two radio towers transmitting at the same time on the same frequency: they interfere. The first protocol people came up with when building cellular networks was to give each tower a clock and have them alternate transmissions according to a schedule.
One metaphor: the Federal Communications Commission (FCC) is like a truck full of enforcers. If your tower transmits out of step with the licensed schedule, they drive over and shut it down. That inspired Solana to use a verifiable delay function to schedule block producers so collisions do not occur. In a network like Bitcoin, if two block producers produce a block at the same time, a fork occurs, much like noise in a cellular network. If you can force all block producers to transmit in assigned time intervals, you get a clean time-division protocol in which each producer takes its turn according to the schedule and they never collide. Forks stop occurring, and the network never enters a noisy state.
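The scheduling idea above can be sketched in a few lines of Python. This is an illustration, not Solana's implementation: a SHA-256 hash chain stands in for the verifiable delay function (producing tick N takes N sequential hashes, so the chain acts as a clock), and a round-robin table stands in for the leader schedule.

```python
import hashlib

def poh_chain(seed: bytes, num_ticks: int) -> list:
    """A Proof-of-History-style hash chain: tick N is SHA-256 of tick
    N-1, so producing N ticks takes N sequential hashes (no shortcut),
    while anyone can re-verify the chain afterwards."""
    ticks = []
    state = hashlib.sha256(seed).digest()
    for _ in range(num_ticks):
        state = hashlib.sha256(state).digest()
        ticks.append(state)
    return ticks

def leader_for_slot(schedule: list, slot: int) -> str:
    """TDMA-style rotation: each producer transmits only in its own
    slot, so two producers never collide on the same interval."""
    return schedule[slot % len(schedule)]

chain = poh_chain(b"genesis", 5)
# Verifying a step is just recomputing one hash.
assert hashlib.sha256(chain[0]).digest() == chain[1]

validators = ["alice", "bob", "carol"]
print(leader_for_slot(validators, 0))  # alice
print(leader_for_slot(validators, 4))  # bob
```

The point of the sketch is the asymmetry: generating the chain is inherently serial (it encodes the passage of time), while verifying it can be split across many cores.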
After that, everything we do is operating system and database optimization. We propagate blocks globally BitTorrent-style, sending erasure-coded pieces to different machines; the result ends up looking very much like data availability sampling and has the same effect. Nodes then forward pieces to one another, reconstruct the block, vote, and so on. The main design idea of Solana is that every part of the network and codebase should need nothing more than additional cores to scale.
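The BitTorrent-style propagation relies on erasure coding so a block can be rebuilt even when some pieces are lost in transit. Here is a toy Python sketch; all names are illustrative, and a production system like Solana's propagation layer uses Reed-Solomon codes over many shreds, not a single XOR parity.

```python
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_shreds(block: bytes, k: int) -> list:
    """Split a block into k equal-size data shreds plus one XOR parity
    shred. XOR parity tolerates exactly one lost shred; real systems
    use Reed-Solomon codes to tolerate many."""
    size = -(-len(block) // k)  # ceiling division
    data = [block[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    return data + [reduce(xor, data)]

def reconstruct(shreds: list) -> bytes:
    """Recover one missing shred by XOR-ing everything received,
    then reassemble the block (caller strips the padding)."""
    missing = shreds.index(None)
    recovered = reduce(xor, [s for s in shreds if s is not None])
    out = list(shreds)
    out[missing] = recovered
    return b"".join(out[:-1])  # drop the parity shred

shreds = make_shreds(b"example block payload", 4)
shreds[2] = None  # one shred lost in transit
block = reconstruct(shreds).rstrip(b"\0")
print(block)  # b'example block payload'
```

Each machine only ever uploads a fraction of the block, yet every machine ends up with the whole thing, which is what lets aggregate bandwidth scale with the number of nodes.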
If, within two years, we can get twice the cores for every dollar we spend, we can configure the network to run twice as many threads per block, or twice the computational work per block, so the network doubles its capacity. All of this happens naturally, without any changes to the architecture.
This goal comes out of my own experience. From 2003 to 2014 I worked at Qualcomm, where we could see mobile hardware and architecture improve every year. If you didn't design your software to scale the following year without being rewritten, you weren't doing your job as an engineer: devices were scaling so rapidly that otherwise you would have to rewrite the code to take advantage of them.
So you really need to think ahead, because everything you build will only move faster and faster. The biggest lesson of my engineering career is that a carefully designed algorithm can still be the wrong choice, because its benefits become marginal as hardware scales up, and implementing it now is a waste of time. If you can instead do something very simple that scales just by adding cores, you can probably achieve 95% of what you want.
Solana’s Building Philosophy
a16z crypto: Using proof of history as a way to synchronize time across validators is a very innovative idea, which is why Solana is different from other consensus protocols.
Anatoly Yakovenko: This comes down to Amdahl's Law, and it is why Solana is hard to replicate in terms of latency and throughput: classic consensus implementations are built on step functions. The entire network (Tendermint, for example) must reach consensus on the content of the current block before moving on to the next block.
A cell tower works off a timetable; you just transmit in your slot. Because there is no step function, the network can run fast. I think of it as a kind of streaming, though I don't know if that's the right word: producers keep transmitting continuously without stopping to wait for consensus. The reason we can do this is that we have a strict notion of time. To be honest, you could instead build redundant clock-synchronization protocols, but that process would be very difficult, a huge project requiring reliable clock synchronization.
This is the origin of Solana. Before I started building it, I was into trading and market-making, though I never made money at it. At the time, "Flash Boys"-style trading was prevalent in traditional finance: whenever I thought my algorithm was good enough, my orders would be delayed a little, take longer to reach the market, and the market data would come back slower too.
I think if we want to disrupt the financial industry, the basic goal of these open business systems is to never let this happen. This system is open, and anyone can participate. Everyone knows how to access it and how to obtain rights, such as priority or fair rights.
Within the limits allowed by physics and achievable by engineers, the fundamental problem is to achieve all this as fast as possible. If blockchain can solve this problem, it will have a significant impact on other parts of the world, benefiting many people globally. This could become a cornerstone, and then you can use it to disrupt advertising transactions and monetization models on the Internet, and so on.
a16z crypto: I think there is an important distinction between pure latency and malicious activity, especially in a single state machine. Perhaps you can explain in detail which one you think is more important and why.
Anatoly Yakovenko: You can't make the entire state atomic; that would mean one correct global lock over all state, which gives you a very slow sequential system. But you do need atomic access to portions of state, and that has to be guaranteed. It is hard to build software against remote, non-atomic state when you don't know what side effects it will have on your computation. So the idea is that a transaction either executes completely or fails completely, with no side effects. That is a property these computers must have; otherwise I don't think you can write reliable software for them. You simply cannot build reliable logic, let alone financially reliable logic.
You may be able to build a system that is only eventually consistent, but I think that is a different kind of software. So there is always tension between atomicity and performance, because guaranteeing atomicity ultimately means that at any given time a specific writer must be chosen to own a specific part of the state. To solve this you need a single sequencer to linearize events, and that creates points where value can be extracted and where fairness can be improved. These problems are genuinely hard, and not only for Solana: Ethereum and rollups face them too.
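The all-or-nothing semantics described above can be sketched with a toy ledger in Python (hypothetical names; a real runtime also handles locking and concurrency). All work happens on a private copy, and state becomes visible only at the commit point.

```python
class Ledger:
    """Toy account store with all-or-nothing transaction semantics."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def apply(self, transfers):
        """Apply a batch of (src, dst, amount) transfers atomically:
        all mutation happens on a private copy, and the single visible
        write is the commit at the end. If any step fails, no side
        effects escape."""
        working = dict(self.balances)
        for src, dst, amount in transfers:
            if working.get(src, 0) < amount:
                raise ValueError(f"insufficient funds in {src}")
            working[src] -= amount
            working[dst] = working.get(dst, 0) + amount
        self.balances = working  # commit point

ledger = Ledger({"alice": 100, "bob": 0})
try:
    # The second transfer overdraws, so the first must not land either.
    ledger.apply([("alice", "bob", 50), ("alice", "bob", 100)])
except ValueError:
    pass
print(ledger.balances)  # {'alice': 100, 'bob': 0}  (unchanged)
```

The trade-off Yakovenko describes lives in the commit step: something has to serialize commits to the same state, which is exactly where a sequencer sits.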
Solana and Ethereum
a16z crypto: One of the frequently debated issues, especially in the Ethereum community, is the verifiability of execution, which is very important to users because they don’t have very powerful machines to verify activities in the network. What is your opinion?
Anatoly Yakovenko: I think the ultimate goals of these two systems are very similar. If you look at the Ethereum roadmap, you will find that its idea is that the overall network bandwidth is greater than any individual node, and the network is already processing or computing more events than any single individual node. You have to take into account the security factors of such a system. There are also protocols for publishing fraud proofs, sampling schemes, and so on, all of which actually apply to Solana as well.
So if you step back, there is actually no difference. You have a system, like a black box, that produces more bandwidth than is practical for a typical user, so users need to rely on sampling techniques to establish the authenticity of the data, plus a very powerful gossip network that can spread fraud proofs to all clients. The guarantees Solana and Ethereum offer are the same. I think the main difference between the two is that Ethereum is largely constrained by its narrative as a global currency, especially the narrative of competing with Bitcoin as a store of value.
I think it makes sense to let users run very small nodes, even if they only partially participate in the network, rather than having the network run entirely by professionals. To be honest, I think that is a fair optimization: if you don't care about execution and only care about settlement, why not lower node requirements and let people participate partially? I just don't think doing so creates a trust-minimized or absolutely secure system for most people in the world; people still have to rely on data availability sampling and fraud proofs. And users only need to check the supermajority's signatures on the chain to verify whether something invalid was done.
On Solana, a single transaction declares the state touched by every party involved, and checking the supermajority's signatures over it is easy on any device, such as a browser on a mobile phone, because everything on Solana is declared up front. That actually makes building on Solana easier than on the EVM or a similar smart contract environment, where contracts can access any state and jump between states arbitrarily during execution. In some ways it is almost simpler. But from a high-level perspective, users still have to rely on data availability sampling and fraud proofs; in that respect all the designs are the same.
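Because a transaction declares up front which state it touches, a runtime can batch non-conflicting transactions for parallel execution. Below is a minimal greedy scheduler sketch in Python, assuming each transaction declares its write set; this is an illustration of the idea, not Solana's actual Sealevel implementation.

```python
def schedule_batches(txs):
    """Greedy scheduler for transactions that declare their write sets
    up front (as Solana transactions declare the accounts they touch):
    transactions with disjoint write sets join the same batch and could
    run in parallel; conflicting ones wait for a later batch."""
    batches = []
    for tx_id, writes in txs:
        for batch in batches:
            if all(writes.isdisjoint(w) for _, w in batch):
                batch.append((tx_id, writes))
                break
        else:  # conflicts with every existing batch
            batches.append([(tx_id, writes)])
    return batches

txs = [
    ("t1", {"alice", "bob"}),
    ("t2", {"carol", "dave"}),   # disjoint from t1: same batch
    ("t3", {"bob", "erin"}),     # writes "bob" like t1: next batch
]
batches = schedule_batches(txs)
print([[tx for tx, _ in b] for b in batches])  # [['t1', 't2'], ['t3']]
```

An EVM-style contract cannot be scheduled this way up front, because its state accesses are only discovered during execution; that is the concrete difference the passage describes.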
a16z crypto: I think one difference between the two lies in zero-knowledge proofs and validity proofs versus fraud proofs. You seem to think a zkEVM is nearly impossible to audit and won't mature in the next few years. So why didn't Solana prioritize zero-knowledge proofs and validity proofs the way Ethereum did?
Anatoly Yakovenko: I think there are two separate questions here. One is prioritization: there is a company called Light Protocol building zero-knowledge proofs for applications on Solana. The proofs are fast; users don't notice them when interacting with the chain.
In fact, you can compose them: a single Solana transaction can call five different zk programs. So this environment can save users computation or give them privacy, but it doesn't validate the entire chain. The reason I think validating the entire chain is hard is that zero-knowledge systems handle long chains of sequential state dependencies poorly, the classic example being a VDF (Verifiable Delay Function). When you try to prove a long sequential chain of recursive SHA-256 hashes, the system falls over, because sequential dependencies in execution massively increase the number of constraints the system must encode. Proving also takes a long time: I don't know if this is the best result in the industry, but the latest result I saw on Twitter was roughly 60 milliseconds to prove a single SHA-256 hash. That is an eternity for one instruction.
So sequential, classical computation is necessary. And in an environment designed for execution there are many markets, which means many sequential dependencies. A market gets hot, everyone submits transactions against one trading pair, and everything around that pair depends on it. Sequential dependencies like that are actually very large in execution, and they make the proving pipeline very long.
Solana does not prohibit anyone from running a zero-knowledge prover that recursively proves the entire computation, if that is feasible. But what users need is for their transactions to be written to the chain quickly, in microseconds or milliseconds, with fast access to state and some guarantees over it. That is where the benefit lies.
Therefore, I think we first need to solve that problem, which means being practically competitive with traditional finance. Once that is achieved, we can study zero knowledge and figure out how to provide guarantees for users who don't want to verify the chain or follow these events in real time; maybe a proof once every 24 hours or something similar. There are two different use cases: first we must truly solve the market mechanism, and then serve the long tail of other users.
a16z crypto: It sounds like you're saying validity proofs and ZK proofs are excellent for settlement but unhelpful for execution, because the latency is too long and their performance still needs to improve.
Anatoly Yakovenko: So far that has held true. It's my intuition, and the reason is simple: the more active a chain is, the more hot spots it depends on. Workloads are not perfectly parallel pieces of code that never interact with each other.
a16z crypto: A counter-argument might be that zero-knowledge proofs are making exponential progress because so much is being invested in them. Maybe in 5 or 10 years the overhead will fall from roughly 1000x today to something feasible. You have a hardware engineering background, and I would love to hear your thoughts: isn't it more efficient to have one node perform the computation and generate a proof, then distribute the proof to everyone else, rather than having every node redo the computation itself?
Anatoly Yakovenko: That trend does help zero-knowledge systems. But more and more is happening on chain; the number of constraints will grow far faster than you can add hardware, and you keep adding hardware anyway. That is my intuition. My feeling is that as demand increases, as the computational load on the chain grows, it becomes increasingly difficult for zero-knowledge systems to keep up at low latency. I'm not even sure it is 100% feasible. Most likely you can build a system that handles very large recursive batches, but you still have to run classical execution and take snapshots, say, every second. Then you spend an hour of compute in a large parallel proving farm verifying between snapshots and start recomputing from there. But that takes time, and I think it's a challenge.
I'm not sure ZK can catch up unless demand stabilizes, though I think demand eventually will level off. Assuming hardware keeps improving, at some point demand for cryptocurrency will saturate, just as Google's searches per second may already have saturated. Then you will start to see ZK catch up. I think we are still far from that point.
a16z crypto: Another major difference between these two models is the rollup-centric Ethereum worldview, which is essentially a pattern of sharding compute, data availability, bandwidth, and network activity. It is conceivable that this achieves much larger total throughput in the end, since you can stack almost unlimited rollups on top of a single rollup, but it means compromising on latency. So which matters more, the overall throughput of the network or access latency? Or both?
Anatoly Yakovenko: I think the main problem is that you have rollups and sequencers, people extract value from building the sequencers and rollups, and in such a system you end up with more or less shared sequencers whose operations are no different from Citadel, Jump, brokers, and traders: they route orders. Those systems already exist; this design doesn't actually break the monopoly. I think the better path is to build a completely permissionless commercial system that keeps those intermediaries from inserting themselves and starts capturing the value of a globally synchronized state machine.
It is also very likely that the monolithic design is cheaper to use in practice, because the alternative amounts to creating a bunch of separate small channels (pipes).
In general, any given pipe is priced on that pipe's remaining capacity, not on the network's overall capacity, and it is hard to build a system that genuinely shares bandwidth across the whole network. In a rollup design you can try to place blocks anywhere, but the rollups all end up competing and bidding separately. A single huge pipe is different: it is priced on the remaining capacity of one aggregated, chain-wide pipe. Because it aggregates all the bandwidth, pricing ends up lower while speed and performance end up higher.
Block Space and the Future
a16z crypto: I once heard you say that you don’t think the demand for block space is infinite. Do you think that when web3 achieves mainstream adoption, the demand for block space by blockchain will reach a balance point?
Anatoly Yakovenko: Just imagine telling engineers at Qualcomm that people have infinite demand for cellular bandwidth and that the system should be designed for infinity. That would be absurd.
In reality, you set a target and design for that demand: how much hardware is needed, when to launch, what the simplest implementation is, what deployment costs, and so on. My intuition is that 99.999% of the most valuable transactions need less than 100,000 TPS; that's my gut guess. And a 100,000 TPS system is actually quite feasible: current hardware can do it, Solana's hardware can do it. I think 100,000 TPS may cover block space demand for the next 20 years.
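The 100,000 TPS figure can be sanity-checked with back-of-envelope arithmetic. The transaction size below is an assumed round number for illustration, not a Solana parameter.

```python
# Back-of-envelope bandwidth for 100,000 transactions per second.
tps = 100_000
tx_bytes = 250                  # assumed average transaction size
stream_bytes = tps * tx_bytes   # raw ingest per second
mb_per_s = stream_bytes / 1e6
mbps = stream_bytes * 8 / 1e6

print(mb_per_s, "MB/s")   # 25.0 MB/s
print(mbps, "Mbps")       # 200.0 Mbps of raw transaction ingest
```

Under these assumptions the raw ingest is on the order of a few hundred megabits per second, which is why a single well-provisioned machine, rather than sharding, can plausibly be the bottleneck-free design point he describes.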
a16z crypto: Could it be because block space is so affordable that people want to use it for various purposes, so the demand for it soars?
Anatoly Yakovenko: But there is still a floor price. What buyers pay must cover each validator's bandwidth cost, since egress costs dominate validation costs. If you have 10,000 nodes, you probably need to price the network's per-byte usage at something like 10,000 times normal egress cost, which sounds expensive.
a16z crypto: So I guess my question is, do you think there will be a point where Solana reaches its limit, or do you think the monolithic architecture is already sufficient?
Anatoly Yakovenko: So far, people have done sharding because they built systems with much lower bandwidth than Solana; they hit capacity limits, and bidding for bandwidth pushed prices far above egress cost. Take the 10,000-node egress example: last time I checked, egress for Solana's validators costs about $1 per megabyte. That is the floor price. You can't stream video at that price, but it is low enough for search; you could basically put every query on chain and get results back from your search engine.
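The floor-price argument can be made concrete with the interview's own $1-per-megabyte egress figure; the transaction size is an assumption added here for illustration.

```python
# Floor price per transaction when every byte a user submits must be
# replicated out to validators at egress cost.
egress_usd_per_mb = 1.0   # figure quoted in the interview
tx_bytes = 250            # assumed transaction size

floor_usd = egress_usd_per_mb * tx_bytes / 1e6
print(f"${floor_usd:.6f} per transaction")  # $0.000250
```

A fraction of a tenth of a cent per transaction is far too expensive for video frames but trivial for a search query or a transfer, which is exactly the distinction being drawn.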
a16z crypto: I think this is actually an interesting point because we posed the question “What is the ultimate goal of blockchain expansion?” at the beginning of the podcast, which means that scalability is the most important issue for blockchain.
Chris has used a similar analogy before: the significant progress in artificial intelligence over the past decade is largely attributable to better hardware, which was the real key. I think blockchain scalability serves the same purpose: if we can achieve a significant increase in TPS (transactions per second), everything will work. But an interesting counterpoint is that Ethereum only completes about 12 transactions per second, yet Ethereum itself, with its relatively high fees, still has higher throughput than any individual L2 (Layer 2). On Solana, many simple transfers cost very little. When we discuss this, we usually conclude that if throughput reaches the next level, there will be many new applications we can't reason about today. Yet Solana has been a place for building applications for a few years now, and much of what gets built is quite similar to what is built on Ethereum.
Do you think higher throughput or lower latency will unleash many new applications? Or will most of the things built on the blockchain in the next 10 years be very similar to the designs we have already proposed?
Anatoly Yakovenko: Actually, I think most applications will be very similar. The most difficult thing to crack is how to establish a business model, such as how to apply these new tools? I think we have already discovered these tools.
The reason why Ethereum transactions are so expensive is that its state is very valuable. When you have this state, anyone can write to it, and they will establish economic opportunity costs to be the first to write to this state, which effectively increases the fees. This is the reason why valuable transaction fees are generated on Ethereum. To achieve this, many applications need to create this valuable state, make people willing to continuously write to it, and make people start competing to raise fees.
a16z crypto: Let me offer a counter-opinion. I think we easily underestimate the creativity of the developers and entrepreneurs in this field. If you look back at history, at the first wave of networks and the internet boom that started in the 1990s, it took a long time to develop the main drivers of interesting applications. We have only truly had programmable blockchains since Ethereum, around 2014, and something like Solana has only existed for about 4 years. People have not actually spent much time exploring and designing yet.
The fact is that the number of developers in this field is still very small. For example, we have tens of thousands of developers who know how to write smart contracts and truly understand the prospects of blockchain as a computer. Therefore, I think it is still early to develop interesting ideas on the blockchain. The design space it creates is so vast that I guess we will be amazed by what people will create in the future. They may not only be related to transactions, markets, or finance. They may appear in the form of shared data structures that are very valuable but fundamentally unrelated to finance.
Decentralized social networks are a good example, where the social graph is placed on the chain as a public product, allowing entrepreneurs and developers to build upon it. Since the social graph is on the blockchain and open for all developers to access, it becomes a valuable state maintained by the blockchain. You can imagine people wanting to publish a large number of transactions for various reasons, such as real-time updates to this data structure. If these transactions are cheap enough, developers will find ways to leverage them.
Historically, whenever computer speed increases, developers find ways to use the additional computing power to improve their applications. Our computing power is never enough. People always want more computing power, and I believe the same will happen with blockchain computers. And there won’t be a limit, maybe not an infinite limit, but I think the upper limit for the demand for block space is much higher than what we can imagine.
Anatoly Yakovenko: But on the other hand, use cases for the internet were discovered early on, such as search, social graphs, and e-commerce, back in the 90s.
a16z crypto: Some things are hard to predict. Shared bikes were hard to predict, and in fact the form search eventually took was hard to predict too. The heavy use of streaming video in social networks, for example, was unimaginable at the beginning.
I think it's the same here. We can list some applications people might build on the blockchain, but some applications feel impossible even to imagine given the current limitations of the infrastructure. Once those limitations are lifted and more people come into the field to build, there may be many heavyweight applications we can only speculate about. So if we let it develop freely, we may be surprised at how powerful it becomes.
Anatoly Yakovenko: There is an interesting card game called “dot bomb,” where the goal is to lose money as slowly as possible. You can’t actually win or make money. You manage a group of different startups using internet ideas from the 90s. Without exception, every so-called bad idea, like online grocery delivery and online pet stores, became at least a billion-dollar business at some point after 2010. So, I think many ideas may be initially bad or fail in their initial implementation but eventually find good adoption in the future.
The Future Adoption of Blockchain
a16z crypto: So, the question is, what do you think is the key for blockchain to go from its current applications to becoming mainstream on the internet? If it’s not scalability, then what are the other hindering factors, such as cultural acceptance of blockchain? Is it privacy issues? User experience?
Anatoly Yakovenko: This reminds me of the development history of the Internet. I remember how the whole experience changed. After I went to college, I got an email address. Every working person had an email address. I started receiving links with various content, and then the user experience on the Internet improved. For example, Hotmail was born and Facebook also developed.
Because of this, people’s thinking changed. They came to understand what the Internet was. At first, people had trouble even understanding what a URL was, what it means to click on something, or what it means to reach a server. We have the same problem with self-custody. We need to make people truly understand these concepts: what a seed phrase means, what wallets and transactions mean. People’s mental model needs to change, and that change is slowly happening. I think once every user who eventually buys cryptocurrency and deposits it in a self-custodial wallet has had this experience, they will understand it. So far, though, not many people have.
a16z crypto: You have built a mobile phone. Maybe you can tell us where the inspiration came from, and how you think the rollout is going so far?
Anatoly Yakovenko: My experience at Qualcomm made me realize that this is a bounded problem. We can solve it without the whole company pivoting to the phone business. So for us it was a low-cost bet that might change the crypto or mobile industry.
It was worth doing. We collaborated with a company to manufacture a device, and when we launched specific cryptocurrency functionality with them, we got great feedback from users and developers, who saw it as an alternative to the app store. But everything is unknown. For example, are crypto applications, under current macro conditions, compelling enough that people are willing to switch from iOS to Android? Some are, but not many. Launching a device is very difficult; basically every device launched outside of Samsung and Apple has ended in failure. The reason is that the production lines of Samsung and Apple are extremely well optimized, and any new startup is far behind those giants in hardware.
Therefore, you need some almost religious reason to convince people to change, and perhaps cryptocurrency is that reason. We haven't proven this yet, but we haven't disproven it either. We just haven't seen a breakthrough use case where self-custody is the key feature people need, one they are willing to change their behavior for.
a16z crypto: You are one of the few founders who can build both hardware and decentralized networks. Decentralized protocols or networks are often compared to building hardware because they are very complex. Do you think this metaphor is valid?
Anatoly Yakovenko: It's like my previous work at Qualcomm. A hardware problem can cause enormous trouble: if a bug slips through to tape-out, the company may be burning millions of dollars a day until it is fixed, which can be catastrophic. In a software company, you can still quickly identify problems and patch the software around the clock, which makes things much easier.
Community and Development
a16z crypto: Solana has done an excellent job in building its community, with a very strong community. I’m curious, what methods did you take in building the company and the ecosystem?
Anatoly Yakovenko: Luck certainly played a part. We started Solana Labs in 2018, toward the end of the previous cycle. Many of our competitors raised several times more funding than we did, and our team was very small. We didn't have enough money to build and optimize an EVM, so we built a runtime that we believed could demonstrate the key property: a scalable, unrestricted blockchain whose performance is not limited by the number of nodes or by significant latency. Those were the aspects where we really wanted a breakthrough.
At that time, we only focused on building this fast network, without paying much attention to other aspects. In fact, when the network was launched, we only had a very rudimentary explorer and command-line wallet, but the network speed was very fast. This was also the key attraction for developers because there were no other fast and cheap networks available as alternatives, and there were no programmable networks that could offer this speed, latency, and throughput.
This is actually why developers were able to thrive. Because you couldn't simply copy and paste Solidity code onto Solana, everyone was starting from scratch, and building from zero is really the onboarding process for an engineer. If you can rebuild the primitives you know from stack A in stack B, you learn stack B from start to finish, and if you can accept its trade-offs, you may become its advocate.
If we had more funding, we might have made a mistake by trying to build EVM compatibility. But in fact, our engineering time was limited, which forced us to prioritize the most important thing, which is the performance of this state machine.
My intuition is that if we can remove the limitations on developers and give them a very large, very fast, and low-cost network, they can unleash their potential. And this has indeed happened, which is amazing and admirable. I’m not sure if we would have been successful if the timing wasn’t right, for example, if the macro environment wasn’t right at that time. We announced it on March 12th, and then on March 16th, the stock market and the cryptocurrency market both crashed by 70%. I think those three days of timing may have saved us.
a16z crypto: Another important factor here is how to win developers?
Anatoly Yakovenko: It's a bit counterintuitive, but you have to chew glass to build your first program. It requires people to truly invest their time; we call it "chewing glass".
Not everyone will do it, but once enough people do, they build libraries and tools that make it easier for the next developer. For developers it's actually a point of pride, and naturally libraries get built and the software expands. I think this is what we really want the developer community to build and chew on, because it gives those people real ownership, really makes them feel like they own the ecosystem. We try to solve the problems they can't solve, like long-term protocol issues.
I think that’s where this spirit comes from. You’re willing to chew glass because you get rewarded, you get ownership of the ecosystem. We can focus on making the protocol cheaper, faster, and more reliable.
a16z crypto: What are your thoughts on the developer experience, and on the role programming languages will play as the field gains more mainstream adoption? It's quite difficult to break into this field, learn the tools, and learn how to think in them.
In the new paradigm, programming languages may play an important role in this aspect because the security of smart contracts has become an important task for engineers in this field. The risks involved are significant. Ideally, we will eventually see a world where programming languages provide much more help through tools than they do now, such as formal verification, compilers, and automation tools that allow you to determine if your code is correct.
Anatoly Yakovenko: I think formal verification is necessary for all DeFi applications. Many innovations happen here, such as creating new markets, which are the places with the greatest hacker threats and the places that really need formal verification and similar tools.
I think many other classes of applications are quickly converging on a single canonical implementation that becomes trusted in practice. Once you can establish a single standard for a class of problems, things get much easier than they are for a new startup building a new DeFi protocol: nobody has written that code before, so you bear a lot of implementation risk, have to convince people to trust it, and ask them to put money at risk in the protocol. That's where you need all the tools: formal verification, better compilers, the Move language, and so on.
a16z crypto: The programming world is undergoing an interesting change. In the past, most programming was traditional imperative programming, similar to JavaScript: you write some code, it's probably incorrect and breaks, and then you fix it.
However, more and more applications are mission-critical, and for those you need a completely different programming approach that gives better guarantees about the correctness of the code you write. On the other hand, another type of programming is emerging: machine learning, which synthesizes programs from data. Both of these are eating into classical imperative programming. There will be less ordinary JavaScript code in the world, more code written by machine learning algorithms from data, and more code written with formal techniques that look more like math and formal verification.
Anatoly Yakovenko: Yes, I can even imagine that at some point a verification-friendly smart-contract language emerges and you ask an LLM to translate it into Solidity or into Solana's Anchor. Two years ago people might not have believed that, but with GPT-4 there have been many significant advances.
a16z crypto: I like this idea. You can use an LLM to generate program specifications that meet the requirements of certain formal verification tools. Then, you can ask the same LLM to generate the program itself. Then, you can run the formal verification tool in the program to see if it really meets the specification. If it doesn’t, it will give you an error, which you can feed back to other LLMs to try again. You can keep doing this until you generate a verifiable, formally verified program.
Ecosystem and Talent Recruitment
a16z crypto: We are discussing how to build a strong ecosystem. Many blockchains decentralize almost immediately after their launch, to the point where the core team is no longer involved in forum discussions or trying to help other partners participate. However, it seems that you have been very involved from the start, which I think could be a major advantage in building the Solana ecosystem.
Anatoly Yakovenko: To quote a saying, decentralization is not about the absence of leadership, but about diverse leadership. I still remember how difficult it was to take Linux seriously in a company like Qualcomm, to the point where even the idea of running Linux on mobile devices seemed laughable. When I joined, the whole community was trying hard to convince everyone that open source was meaningful, and I think that’s what we need to do. The network needs to be decentralized.
But that doesn’t mean there is no leadership. In fact, you need a lot of experts constantly telling people about the benefits of using this specific network and its architecture, constantly getting more people to join and cultivating more leaders who can spread knowledge and help others around the world. But that doesn’t mean everything happens under one roof. If the network and code are open, anyone can contribute and run it. Naturally, it becomes decentralized. You will naturally see leadership emerge from unexpected places.
Our goal is to develop everything around us so that our voice is one among many, rather than silencing others. We pay a lot of attention to hackathon participants and try to connect them with each other and draw them into this cycle. It's like a flywheel: we try to connect people with developers from all over the world, have as many one-on-one interactions as possible, and then get them to participate in hackathons, compete, and build their first or second products.
Among cryptocurrency users, only a few products can enter the market, obtain venture capital, and have a scalable user base. In my opinion, this means that we don’t have enough creativity. We don’t have enough founders aiming for targets and finding truly scalable business models. Therefore, we need a large number of companies to compete and see if they can come up with brilliant ideas. This is the biggest challenge.
a16z crypto: One related question is how to involve the community in developing parts of the core protocol itself. This is one of the trickiest balancing issues for any blockchain ecosystem. On one hand, you can have active community involvement, but on the other hand, it may reduce your flexibility. Additionally, governance processes involve more people, making coordination difficult. On the other hand, you can control things in a more top-down manner and develop faster as a result. But in terms of community involvement, you will be influenced in some way. How do you strike a balance?
Anatoly Yakovenko: In general, when I was working at the foundation, we saw people actively contributing to whatever they wanted to work on. Then they go through a proposal process, perhaps with a grant or something else attached. It's similar to an interview process: when I hire someone for the lab, the person may not fit the culture or it may not work out for some other reason, which doesn't mean they are bad, just that something didn't click. Similarly, you find engineers who have been submitting code and contributing to the codebase; they already understand the culture of merging code and working in the open-source way. When you find people who can solve problems on their own, you can provide funding. That matters, because you want truly excellent people who can land code and are willing to work on it long-term.
a16z crypto: What do you think is the best way to run decentralized governance protocols today?
Anatoly Yakovenko: For the L1, the approach we take seems to work: like Linux, keep moving forward while avoiding a veto from any participant as much as possible. It takes the path of least veto. To be honest, there are many participants who can veto any change they think is bad or shouldn't be made. But we must make the system faster, more reliable, and less memory-hungry, and nobody will oppose those changes.
Ideally, we have a related process where you publish a design and everyone spends three months discussing it. So, before merging, everyone has plenty of opportunity to look at the code and decide whether it is good or bad. This process may seem a bit lengthy, but it actually isn’t. If you’ve worked in a large company, like Google or Qualcomm, you know you have to talk to a lot of people, push it, make sure all key stakeholders, like key people who touch the codebase, can accept it, and then slowly get it done. It is more difficult to make radical reforms. Because many smart people are looking at the same thing, they may find real errors and then make the final decision.
a16z crypto: How do you consider talent recruitment?
Anatoly Yakovenko: In terms of engineering, our requirements are often high, and we usually hire quite experienced personnel. My approach to recruitment is that in the early stages, I will put effort into something so that I know how it should be done, and then I will tell the new employees how I did it. I don’t expect them to complete it within 90 days or surpass me. I can evaluate them during the interview and tell them about the problem I am solving. I need someone to take over so that I can focus on unknown tasks. In a startup company, if you are the CEO, it’s best not to give others unknown problems because you don’t know if they can solve them.
When the ecosystem develops to a certain extent, a PM is needed. At that time, I spent too much time answering questions, and I was still answering questions until 2 a.m. I thought, someone else should do this, and now I know what this job is all about.
a16z crypto: How important do you think privacy will be for blockchain in the future?
Anatoly Yakovenko: I think the whole industry will undergo a transformation. First, some visionary people will focus on privacy, and then suddenly, large payment companies or other companies will adopt this technology, and it will become the standard. I think it needs to become a feature – if you don’t have this feature, you cannot compete. We have not reached a mature market yet, but I think we will. Once many people use blockchain, every merchant in the world will need to guarantee privacy. This is just the minimum requirement.
a16z crypto: What impact does the Solana architecture have on MEV? Does the leader have too much authority to reorder transactions?
Anatoly Yakovenko: Our initial idea was to assign more than one leader to each slot. If we can get as close to the speed of light as possible, which is about 120 milliseconds around the globe, then you can run a discrete batch auction every 120 milliseconds, globally. Users can choose among all available block producers: the nearest one, or the one offering the best discount. In theory this could be the most efficient way to operate financially: either I optimize for latency and send my transaction to the nearest block producer, or I optimize for price and accept the extra latency. This theory of multiple leaders per slot is untested, but we are getting closer to it, and I think it may be achievable, perhaps next year.
I think once we achieve this solution, we will have a very powerful system that essentially forces competition and minimizes MEV.
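The trade-off described above, latency versus price across concurrent block producers in each batch window, can be shown with a toy model. The producer names, latencies, and discounts here are purely illustrative, not Solana parameters.

```python
# Toy model of the multi-leader choice: within each ~120 ms batch
# window a user picks a block producer either by lowest latency or
# by best price discount. All values below are made up for illustration.

from dataclasses import dataclass

@dataclass
class Producer:
    name: str
    latency_ms: float    # round-trip time from the user to this producer
    discount_bps: float  # price improvement offered, in basis points

def choose(producers: list[Producer], prefer: str = "latency") -> Producer:
    """Pick a producer: minimize latency, or maximize discount."""
    if prefer == "latency":
        return min(producers, key=lambda p: p.latency_ms)
    return max(producers, key=lambda p: p.discount_bps)

producers = [
    Producer("nearby",  latency_ms=15,  discount_bps=0),
    Producer("faraway", latency_ms=110, discount_bps=12),
]

print(choose(producers, "latency").name)   # the latency-sensitive choice
print(choose(producers, "discount").name)  # the price-sensitive choice
```

The point of the model is that with multiple leaders per slot the choice moves to the user, which is what forces producers to compete on either proximity or price.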
a16z crypto: What is your favorite system optimization in the Solana architecture?
Anatoly Yakovenko: I like the way we propagate blocks, which was one of our early ideas and one of the things we really needed to get right. We can scale the number of nodes in the network and transmit a large amount of data, but each node has a fixed, limited egress capacity, which is the outbound load it has to bear.
Viewed from a higher level, every leader, when creating a block, slices it into shreds and creates erasure codes for those shreds. It then transmits the shreds to nodes, which forward them to other nodes in the network. Because all the data is mixed with erasure codes, once nodes receive it the data is very reliable: the number of nodes propagating it is very large, and recovery fails only if something like 50% of the nodes fail, which is highly unlikely. So it's a very cool optimization, very low cost and high performance.
a16z crypto: How do you view the future application development of cryptocurrencies? How will users who are not familiar with blockchain adopt it in the future?
Anatoly Yakovenko: I think we have some breakthrough applications and payment methods, as using cryptocurrencies for payments has clear advantages compared to traditional systems. I believe that once regulations are in place and Congress passes a few bills, payments will become a breakthrough use case. Once we have a means of payment, I think the other aspect will also develop, such as social applications, which can be messaging apps or social graph apps. These apps are currently growing slowly. I think they are in the golden age of takeoff and will reach truly significant numbers.
Once we reach mainstream adoption, it is possible to iterate, understand what people really want, and provide them with these products. People should use products for their utility, not for the tokens.
a16z crypto: Do you have any advice for builders in this field or builders outside of this field? Or any advice for those who are curious about cryptocurrencies and Web3?
Anatoly Yakovenko: What I want to say is that now is the best time. The current market is relatively quiet on a macro level, with not much noise, so you can focus on the fit between your product and the market. When the market turns around, these discoveries will greatly accelerate your development. If you want to work in the field of artificial intelligence, you should not be afraid to start an AI company or a cryptocurrency company or any other company now. You should try and build these ideas.
But what I want to say is that people should try to create greater ideas instead of repeating what already exists. The best metaphor I’ve heard is that when people discovered cement, everyone focused on using cement to build bricks, and then someone thought, maybe I can build skyscrapers. They figured out a way to combine steel and architecture, which no one could have thought of. The new tool is cement, you just need to figure out what a skyscraper is and then go build it.