a16z Interviews Solana Co-founder: Pursue Originality, Not Replication

Anatoly Yakovenko mentioned in an interview that because Solana had limited scale and funding at the beginning, the team could only design the network around one goal, being the fastest, without taking other aspects into account. If they had had more money at the time, they would have made the mistake of trying to build yet another EVM network. As it turned out, the market and developers of that period were in desperate need of a fast, low-cost network, which laid the foundation for the powerful developer community Solana has today.
Original title: Debating Blockchain Architectures (with Solana)
Hosts: Ali Yahya, General Partner, a16z crypto; Guy Wuollet, Partner, a16z crypto
Guest: Anatoly Yakovenko, Co-founder of Solana and CEO of Solana Labs
Translation: Qianwen, ChainCatcher
“But what I want to say is that people should try to create greater ideas instead of repeating what already exists. The best analogy I’ve heard is that when people discovered cement, everyone focused on using cement to build bricks, and then someone thought, I can build a skyscraper. They figured out a way to combine steel, cement, and architecture, which no one could have imagined. The new tool is cement. You just need to figure out what a skyscraper is and then build it.”
In this episode, a16z crypto talks to Anatoly Yakovenko, co-founder and CEO of Solana Labs. Anatoly Yakovenko previously worked at Qualcomm as a senior engineer and engineering manager.
Summary

- The ultimate goal of decentralized computing
- The idea behind Solana
- Differences and similarities between Solana and Ethereum
- The future development of blockchain
- Web3 community and development
- Recruiting talent for Web3 startups
The ultimate goal of decentralized computing
a16z crypto: First of all, I want to know how you view the ultimate goal of decentralized computing? How do you view blockchain architecture?
Anatoly Yakovenko: My position is quite extreme. I think settlement will become less and less important, just as in traditional finance. You still need someone to provide guarantees, but those guarantees can be achieved in many different ways. What I believe is truly valuable to the world is a globally distributed, globally synchronized state, and that is the real challenge. You can think of it as playing the role that Spanner plays for Google, or that Nasdaq plays for the financial markets.
From a macro perspective, blockchain systems are permissionless, programmable, and highly open, but at the base of the stack there is still some kind of market. For all of these markets, a global, fully synchronized state, updated as close to the speed of light as possible, is very valuable, because everyone can use it as a reference. You can still operate in local markets, but if there is fast, synchronized global pricing, the global financial system becomes more efficient. I believe that is the ultimate goal of blockchain: to synchronize as much state as possible at the speed of light.
a16z crypto: If cryptocurrencies and blockchain achieve mainstream adoption, what will be the biggest driving force on the blockchain?
Anatoly Yakovenko: I think the form will still be similar to Web2, but it will be more transparent, realizing the vision of a long-tail distribution – there will be various small-scale companies on the Internet that can control their own data, instead of a few dominant players (although what these large companies are doing is also great). I think, in the long run, creators should have more control and more autonomy in publishing, in order to achieve a truly meaningful Internet with wide distribution and market.
a16z crypto: Another perspective to consider or ask this question is how to weigh the options. You said you think settlement will become less important in the future. I’m curious, as Solana is a hub for a large amount of global business, especially financial activities, how can it accelerate the achievement of the ultimate goal you just mentioned, or complement it?
Anatoly Yakovenko: The Solana system is not designed as a store of value; in fact, it has a low tolerance for network failures, because it is designed to use all available resources on the internet as quickly as possible. It really depends on most of the world's communications and finance being free and cross-border. It is different from a token that can serve as an emergency shelter (a bunker coin), and of course I think there is also a need for bunker coins that can survive local geopolitical conflicts.
From an optimistic point of view, the connections between things in the world are becoming ever closer. I think we will see tens-of-gigabit interconnections between us; in that world you have a fully interconnected planet, and I think this globally synchronized state machine can absorb a great deal of the execution.
From experience, settlement can happen in many places, because settlement is easy to guarantee. Again, I take this position for the sake of argument. Since 2017 we have witnessed hundreds of different networks with many different designs, and we have seen essentially no quorum failures, because settlement is relatively easy to achieve. Once you establish a proper Byzantine fault-tolerant mechanism among even 21 decentralized participants, you do not see settlement failures. Other scaling issues have also been worked out. Empirically, Tendermint is very viable; although we witnessed the Luna crash early on, the problem was not with the quorum mechanism.
I think we spend too much on settlement, in terms of security, resources, and engineering, and not enough on research into execution, which is where most of the financial industry's profits come from. Personally, I believe that if these technologies are to have real impact at global scale, they must beat traditional finance on price, fairness, speed, and other dimensions. That is where we need to focus our research and compete.
a16z crypto: Do you think settlement is one of the trade-offs you make when optimizing a blockchain? People may over-optimize blockchains for settlement and ignore other aspects such as throughput, latency, and composability, which are often at odds with settlement security. Can you talk about Solana's architecture?
Anatoly Yakovenko: The mission of Solana's architecture is to transmit information from all over the world to all participants in the network as fast as possible, simultaneously. So there is no need for sharding or complex consensus protocols; we actually want to keep things simple. Or, to put it another way, we were fortunate to solve one computer-science problem: clock synchronization (using a verifiable delay function as a source of time in the network). You can think of it as two radio towers transmitting at the same time on the same frequency, which creates noise. The first protocol people came up with when building cellular networks was to equip each tower with a clock and have them alternate transmissions based on time.
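The verifiable-delay-function idea can be sketched as a recursive hash chain. This is an illustrative model, not Solana's implementation (the function names are invented), but it shows the property Proof of History relies on: generating tick N requires N strictly sequential hashes, while anyone can verify ranges of the chain in parallel.

```python
import hashlib

def poh_chain(seed: bytes, num_ticks: int) -> list[bytes]:
    """Build a Proof of History-style hash chain.

    Producing tick N requires N sequential SHA-256 evaluations, so the
    chain cannot be generated faster by adding machines; its length is
    evidence that real time has passed.
    """
    ticks = []
    state = hashlib.sha256(seed).digest()
    for _ in range(num_ticks):
        state = hashlib.sha256(state).digest()
        ticks.append(state)
    return ticks

def verify(seed: bytes, ticks: list[bytes]) -> bool:
    """Re-run the chain and compare each tick.

    Real verifiers can split the chain into ranges and check them in
    parallel, since every tick is a claimed intermediate state.
    """
    state = hashlib.sha256(seed).digest()
    for tick in ticks:
        state = hashlib.sha256(state).digest()
        if state != tick:
            return False
    return True

chain = poh_chain(b"genesis", 1000)
assert verify(b"genesis", chain)
```

Because the chain can only be produced sequentially, its length itself serves as a clock, which is what lets validators agree on time without constantly talking to each other.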
An analogy: the Federal Communications Commission is like a truck full of enforcers. If your tower is transmitting out of sync with the licensed schedule, they will drive up to your tower and shut it down. Solana took inspiration from this, using a verifiable delay function to schedule block producers so that collisions do not occur. In a network like Bitcoin, if two block producers produce a block at the same time, a fork occurs, similar to the noise in a cellular network. If you can force all block producers to take turns producing blocks based on time, you get a clean time-division protocol in which each producer produces blocks on schedule without collisions. Forks need never occur, and the network never enters a noisy state.
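Given such a shared clock, the alternating-transmission idea becomes a deterministic leader schedule: each slot belongs to exactly one block producer, so honest producers never collide. A minimal round-robin sketch (the validator names and epoch length are hypothetical; Solana actually derives a stake-weighted schedule from a seeded shuffle):

```python
SLOTS_PER_EPOCH = 8
validators = ["A", "B", "C", "D"]  # hypothetical validator set

def leader_for_slot(slot: int) -> str:
    """Deterministic round-robin leader schedule.

    Every node derives the same schedule from the shared clock, so
    block production never collides: the time-division idea borrowed
    from cellular networks. (Solana weights the real schedule by
    stake; round-robin keeps the sketch minimal.)
    """
    return validators[slot % len(validators)]

schedule = [leader_for_slot(s) for s in range(SLOTS_PER_EPOCH)]
print(schedule)  # ['A', 'B', 'C', 'D', 'A', 'B', 'C', 'D']
```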
After that, everything we do is operating-system and database optimization. We propagate blocks globally in a BitTorrent-like fashion, transmitting erasure-coded pieces of each block to different machines. In the end this looks very similar to data availability sampling and has the same effect. The machines then forward pieces to each other, reconstruct the block, vote, and so on, continuously. The main design idea of Solana is that every part of the network and codebase is built so that it only needs more cores in order to scale.
If within two years we can get twice as many cores for every dollar we spend, we can adjust so that the number of threads per block doubles, or the computational work per block doubles. The network then achieves twice as much, and all of this happens naturally without any change to the architecture.
This is the main goal we really wanted to achieve, based on my experience. From 2003 to 2014 I worked at Qualcomm, and every year we saw improvements in mobile hardware and architecture. If you didn't design software to scale into next year's hardware without a rewrite, you weren't doing your job as an engineer, because the devices scale up quickly, and to take advantage of that you would otherwise have to rewrite the code.
So you really do need to think ahead: everything you build will only get faster. The biggest lesson of my engineering career is that you can choose a carefully designed, clever algorithm and it may still be the wrong choice, because as hardware scales, the benefit of such an algorithm becomes negligible, and its implementation complexity comes to feel like a waste of time. If you instead do the very simple thing that only needs more cores to scale, you can often achieve 95% of what you need.
The Idea Behind Solana
a16z crypto: Using Proof of History to synchronize time across validators is a very innovative idea, and it is what sets Solana apart from other consensus protocols.
Anatoly Yakovenko: This is partly Amdahl's Law at work, and it is why it is difficult for people to replicate Solana's latency and throughput while remaining permissionless. Classical consensus implementations are based on step functions: an entire network, like Tendermint, must agree on the contents of the current block before moving on to the next block.
A cell tower works off a timetable; it just sends its signal. Because there is no step function, the network can run fast. I think of this as a kind of synchronization, though I don't know if that is the right word: producers keep transmitting and never stop to wait for consensus to run. The reason we can do this is that we have a strict notion of time. Honestly, we could have built some classical clock-synchronization protocol instead for redundancy, but the process would be very difficult; making clock synchronization reliable is a huge engineering task.
This is the idea behind Solana. Before I started building Solana, I enjoyed trading and brokering, even though I never made any money at it. At the time, the "Flash Boys" were everywhere in traditional finance: whenever I thought my algorithm was good enough, my orders would be delayed, take longer to reach the market, and my data would come in slower too.
I believe that if we want to disrupt the financial industry, the basic goal of these open business systems should be to make that situation permanently impossible. The system is open, anyone can participate in it, and everyone knows how to access it and how rights, such as priority or fairness, are obtained.
Within the limits allowed by physics and within the reach of engineers, achieving all this as quickly as possible is what I think is the fundamental problem. If blockchain can solve this problem, it will have a very big impact on other parts of the world, and many people around the world will benefit. This could become a cornerstone, and then you can use it to disrupt advertising transactions and monetization models on the Internet, and so on.
a16z crypto: I think there is an important distinction between pure latency and malicious activity, especially within a single state machine. Perhaps you can elaborate on which one you think is more important and why.
Anatoly Yakovenko: You cannot make the entire state one atomic unit, because that would mean a single global lock over all state, which would make for a very slow ordering system. But you do need atomic access to state, and you need to guarantee it. It is difficult to build software that operates on non-atomic remote state, because you don't know what side effects it will have on your computation. So the idea is that when you submit a transaction, either it executes completely or it fails completely, with no side effects. That is a property these computers must have; otherwise I don't think you can write reliable software for them. You simply cannot build any reliable logic, let alone financially reliable logic.
You might be able to build an eventually consistent system, but I think that is a different kind of software. So there is always a tension between preserving atomic state and performance, because if you guarantee atomicity, it ultimately means that at any given moment you have to pick one specific writer globally for a specific part of the state. To solve that, you need a single sequencer that linearizes events, and that creates points where value can be extracted and where system fairness can be improved. These are genuinely hard problems, and not only Solana but also Ethereum and the Lightning Network face them.
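The "executes completely or fails completely" property can be modeled by running a transaction's instructions against a scratch copy of state and committing only on full success. This is a toy sketch of the semantics, not any chain's actual runtime:

```python
from copy import deepcopy

def execute_atomically(state: dict, instructions) -> bool:
    """Run every instruction against a scratch copy of the state.

    Commit only if all instructions succeed; on any failure the
    canonical state is untouched, so a transaction either executes
    completely or fails completely, with no side effects.
    """
    scratch = deepcopy(state)
    try:
        for instruction in instructions:
            instruction(scratch)
    except Exception:
        return False  # nothing leaks into `state`
    state.clear()
    state.update(scratch)
    return True

def transfer(src: str, dst: str, amount: int):
    def instruction(s: dict) -> None:
        if s[src] < amount:
            raise ValueError("insufficient funds")
        s[src] -= amount
        s[dst] += amount
    return instruction

balances = {"alice": 10, "bob": 0}
# The second transfer fails, so the first is rolled back with it.
ok = execute_atomically(balances, [transfer("alice", "bob", 4),
                                   transfer("alice", "bob", 100)])
assert not ok and balances == {"alice": 10, "bob": 0}
```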
Solana and Ethereum
a16z crypto: One of the frequently debated issues, especially in the Ethereum community, is the verifiability of execution, which is very important for users because they don’t have very powerful machines to verify activities on the network. What is your view on this?
Anatoly Yakovenko: I think the ultimate goals of the two systems are very similar. If you look at Ethereum's roadmap, the idea is that total network bandwidth is greater than that of any single node; the network already processes or computes more events than any single node does. You have to consider the security of such a system. There are also fraud-proof publication protocols, sampling schemes, and so on, all of which apply to Solana as well.
So if you step back and look at it, there is really not much difference. You have a system that works like a black box, producing more bandwidth than a random user can practically process, so users rely on sampling techniques to ensure the authenticity of the data, plus a very robust gossip network that can spread fraud proofs to all clients. The guarantees Solana and Ethereum provide are the same. I think the main difference is that Ethereum is largely constrained by its narrative as a global currency, especially in its competition with Bitcoin as a store of value.
I think it makes sense to let users run very small nodes, even if they only partially participate in the network, rather than having the network run entirely by professionals. Honestly, I think that is a fair optimization: if you don't care about execution and only care about settlement, why not lower node requirements and let people participate partially? But I don't think that alone creates a trust-minimized or absolutely secure system for most of the world. People still rely on data availability sampling and fraud proofs, and a user ultimately verifies whether something went wrong by checking the signatures of the majority of the network.
On Solana, a single transaction describes the fragments of state touched by everyone involved in that transaction. On any device, say a browser on a mobile phone, you can easily execute a single transaction along with the majority's signatures, because everything on Solana is specified up front. That actually makes this easier to build on Solana than on the EVM or any smart-contract environment where code can access any state and jump between states arbitrarily during execution. In some ways it is almost simpler. But at a high level, users still rely on DAS and fraud proofs, and in that respect all the designs are the same.
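The point about everything being specified up front has a concrete payoff: because a Solana transaction declares in advance every account it touches, a scheduler can cheaply batch non-conflicting transactions for parallel execution. A toy sketch of that idea (the greedy batching is illustrative, not Solana's actual scheduler, and the read/write distinction is omitted):

```python
def schedule(txs: list[set]) -> list[list[int]]:
    """Greedily pack transactions into parallel batches.

    A transaction joins a batch only if its declared account set does
    not overlap any account already claimed in that batch. Because the
    accounts are known before execution, this check is cheap.
    """
    batches: list[list[int]] = []
    claimed: list[set] = []
    for i, accounts in enumerate(txs):
        for batch, used in zip(batches, claimed):
            if not (accounts & used):
                batch.append(i)
                used |= accounts
                break
        else:
            batches.append([i])
            claimed.append(set(accounts))
    return batches

txs = [{"alice", "bob"}, {"carol", "dave"}, {"alice", "carol"}]
print(schedule(txs))  # [[0, 1], [2]] -- first two run in parallel
```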
a16z crypto: I think a difference lies in zero-knowledge validity proofs versus fraud proofs. You seem to think zkEVMs are almost impossible to audit and won't mature within a few years. I want to ask: why didn't Solana prioritize zero-knowledge validity proofs the way Ethereum has?
Anatoly Yakovenko: I think there are two separate questions here. One is how we prioritize them: there is a company called Light Protocol building zero-knowledge proofs for applications. The proofs are fast, and users won't notice them while interacting with the chain.
In fact, you can compose them: a single Solana transaction can call five different zk programs. So this environment can save compute for users or give users privacy, but it doesn't validate the entire chain. The reason I think validating the entire chain is hard is that zero-knowledge systems don't handle long chains of sequential state dependencies well; the most typical example is the VDF (verifiable delay function) itself. When you try to prove a sequential, recursive SHA-256 chain, the system blows up, because sequential state dependencies during execution massively increase the number of constraints, and proving takes a long time. I don't know whether this is the best result in the industry, but the latest I saw on Twitter was that proving a single SHA-256 hash takes about 60 milliseconds. That is a long time for a single instruction.
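The roughly 60 ms per-SHA-256 proving figure quoted above makes the sequential-dependency problem concrete: proof work for a chain of dependent hashes grows linearly with chain length, and the chain itself cannot be parallelized. A back-of-envelope calculation (the tick rate below is a hypothetical round number, not Solana's actual parameter):

```python
# Back-of-envelope cost of proving a sequential hash chain, using the
# ~60 ms per-SHA-256 proving figure quoted in the interview. The tick
# rate is a hypothetical round number; real provers batch and recurse,
# but sequential dependencies still force chain-length-linear work.
ms_per_hash_proof = 60
hashes_per_second_of_chain = 2_000_000  # assumed VDF tick rate

proof_ms = ms_per_hash_proof * hashes_per_second_of_chain
proof_hours = proof_ms / 1000 / 3600
print(f"{proof_hours:.1f} hours of proving per second of chain")
```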
So sequential, classical computation is necessary. And in an environment designed for execution there are many markets, which means many sequential dependencies: markets are very active, everyone submits transactions against the same trading pair, and everything around that pair depends on it. So the sequential dependency chains in execution are actually very long, which makes for a very slow proving system.
Solana does not prohibit someone from running a zero-knowledge prover in a recursive way to prove the entire computation if it is feasible. But what users need is for their information to be written to the chain quickly during a transaction, in microseconds or milliseconds, and they need quick access to the state and some guarantees on the state. That’s the key to getting benefits.
Therefore, I think we first need to solve this problem: being practically competitive with traditional finance. Once that is achieved, we can start looking at zero-knowledge and figure out how to provide these guarantees for users who don't want to verify the chain in real time, perhaps verifying once every 24 hours or something similar. So there are two different use cases: first we must really solve the market-mechanism problem, and then serve the long tail of other users.
a16z crypto: So what you're saying is that validity proofs and ZK proofs are excellent for settlement, but they don't really help execution, because latency is long and their performance still needs to improve.
Anatoly Yakovenko: So far, that's true. This is just my intuition, and the reason is simple: the more active a chain is, the more state-dependent hotspots it has. Real workloads are not perfectly parallel programs that never interact with each other; a lot of it is just messy, interdependent code.
a16z crypto: Another counter-argument could be that zero-knowledge proofs are experiencing exponential progress due to significant investment in this area. Maybe in 5 years, 10 years, the cost could decrease from the current 1000x to a more feasible level. You have a background in hardware engineering, and I’m curious to hear your thoughts on the idea of having a node perform calculations and generate proofs, then distribute the proofs to others, which might be more efficient than having each node compute on its own. What do you think about this viewpoint?
Anatoly Yakovenko: That trend helps with optimizing zero-knowledge proofs of individual programs. But more and more is happening on-chain, so the number of constraints will grow, and it will grow far faster than you can add hardware; you just keep adding hardware. That is my intuition: as demand grows and on-chain computation increases, it becomes harder and harder for zero-knowledge systems to keep up at low latency. I'm not even sure it is 100% feasible. Most likely you can build a system that handles very large recursive batches, but you still have to run classical execution and take snapshots every second, then allocate an hour of compute in a large parallel proving farm, verify between each pair of snapshots, and resume from there. That takes time, and I think it is a challenge.
I’m not sure if ZK can catch up unless the demand levels off, but I think the demand will eventually level off. Assuming hardware continuously improves, at some point the demand for cryptocurrency will saturate, just like Google’s search volume has possibly saturated at the present time. Then, you will start to see this happening. I think we are still far from that goal.
a16z crypto: Another major difference between these two models is Ethereum's rollup-centric worldview, which is essentially a pattern of sharding computation, data availability, bandwidth, and network activity. You can imagine it achieving greater total throughput in the end, because you can keep adding rollups almost without limit on top of the base layer, but that means compromising on latency. So which matters more, the overall throughput of the network or access latency? Or are both important?
Anatoly Yakovenko: I think the main issue is that with rollups and sequencers, people will extract value from how sequencing and rollups are constructed, and in such a system you end up, to some extent, with shared sequencers whose operations are no different from Citadel, Jump, brokers, and traders: they are all routing order flow. Those systems already exist; this design doesn't actually break the monopoly. I think the better approach is to build a completely permissionless commercial system where intermediaries cannot insert themselves and start capturing the value of a globally synchronized state machine.
It is also very likely that the monolithic design is actually cheaper to use. Sharding is like creating a bunch of separate small channels (pipes), and in general the pricing of any given channel is based on that channel's remaining capacity, not the overall network capacity; it is hard to build a system that genuinely shares bandwidth across channels. You can try to place blocks anywhere, as in rollup designs, but the rollups will all compete and bid against one another. With one huge pipe, the price is based on the remaining capacity of that single pipe; because it aggregates all of the bandwidth, its pricing is lower while the resulting speed and performance are higher.
Block Space and the Future
a16z crypto: I have heard you say that you do not believe that the demand for block space is infinite. Do you think that when web3 achieves mainstream adoption, the demand for block space will reach a balance point?
Anatoly Yakovenko: Imagine telling Qualcomm's engineers that people's demand for cellular bandwidth is infinite and that the design target is infinity; that would be absurd.
In reality, you design toward a target for that demand: how much hardware is needed, when to start, what the simplest implementation is, what the deployment cost is, and so on. My intuitive guess is that 99.999% of the most valuable transactions require less than 100,000 TPS. And implementing a system that achieves 100,000 TPS is actually quite feasible; current hardware can do it, and Solana's can. I think 100,000 TPS may cover block-space demand for the next 20 years.
a16z crypto: Could it be because block space is so affordable that people want to use it for various things, so the demand for it soars?
Anatoly Yakovenko: But there is still a floor price. The price paid must cover the bandwidth cost of every validator, since egress cost dominates validation cost. If you have 10,000 nodes, you have to price the network's per-byte usage at roughly 10,000 times the normal egress cost, and that sounds expensive.
a16z crypto: So I think this is a question. Do you think that at some point, Solana will reach its limit, or do you think the monolithic architecture is already sufficient?
Anatoly Yakovenko: So far, people have sharded because they built systems with far lower bandwidth than Solana, hit capacity limits, and saw bidding for bandwidth begin, pushing prices far above egress cost. Taking 10,000 nodes as an example, the last time I checked, the floor cost per megabyte of egress for Solana validators was about $1. You can't stream video at that price, but it is low enough for search; you could basically put every search on chain and get the results from your search engine.
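The floor-price argument reduces to simple arithmetic: every byte a user writes must be egressed by every validator, so the per-byte floor is roughly the node count times the per-byte egress cost. A sketch using the 10,000-node figure from the interview and an assumed cloud egress rate:

```python
# Floor price of block space from replication cost alone: every byte
# written must be egressed by every validator. The node count is from
# the interview; the cloud egress rate is an assumption.
nodes = 10_000
egress_usd_per_gb = 0.10            # assumed cloud egress price
egress_usd_per_mb = egress_usd_per_gb / 1024

floor_per_mb = nodes * egress_usd_per_mb
print(f"${floor_per_mb:.2f} per MB of block space")  # about $0.98/MB
```

Under this assumed egress rate, the floor comes out close to the $1 per megabyte figure quoted above.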
a16z crypto: I think this is actually an interesting point, because we raised the question of “what is the ultimate goal of blockchain scalability” at the beginning of the podcast, which means that scalability is the most important issue for blockchain.
Chris has used a similar analogy before. The progress of artificial intelligence in the past decade is largely attributed to better hardware, which is the real key. So I think when we talk about the scalability of blockchain, it is also for the same purpose. If we can achieve a significant increase in TPS, everything will work normally. But an interesting opposing point is that Ethereum can complete 12 transactions per second, and the throughput of Ethereum itself is still larger than any single L2, with relatively high fees. On Solana, many simple transfer transactions have low costs. When we discuss this issue, we usually conclude that if our throughput reaches the next order of magnitude, many new applications that we cannot reason or think about now will emerge. To some extent, over the past few years, Solana has been a place to build applications, and many things are very similar to things built on Ethereum.
Do you think higher throughput or lower latency will unleash many new applications? Or will most things built on the blockchain in the next 10 years be very similar to the designs we have already proposed?
Anatoly Yakovenko: Actually, I think most applications will be very similar. The most difficult thing to crack is how to establish a business model, such as how to apply these new tools? I think we have already discovered these tools.
The reason Ethereum transactions are so expensive is that its state is very valuable. When you have state that anyone can write to, people incur an economic opportunity cost to be the first to write to it, which effectively bids up the cost. That is why Ethereum generates such valuable transaction fees. To replicate this, applications need to create similarly valuable state that people are willing to keep writing to, so that they start competing and bidding up fees.
a16z crypto: Here I present a counterpoint. I think we easily underestimate the creativity of developers and entrepreneurs in the entire field. In fact, if you look back at history, such as the first wave of the internet and the web in the 1990s, it took us a long time to truly develop the main driver of interesting applications. For example, in the case of cryptocurrencies, it was not until around 2014 when Ethereum appeared that we truly had programmable blockchains, and things like Solana have only been around for about 4 years, so the time for people to explore and design is actually not long.
The fact is that there are still very few developers in this field. For example, we probably have tens of thousands of developers who know how to write smart contracts and truly understand the prospects of blockchain as a computer. Therefore, I think it is still early to develop interesting ideas on the blockchain. The design space it creates is so vast that I guess we will be surprised by what people will create in the future. They may not only be related to transactions, markets, or finance. They may appear in the form of shared data structures that are very valuable but fundamentally unrelated to finance.
Decentralized social networks are a good example, where the social graph is placed on the chain as a public good, allowing entrepreneurs and developers to build upon it. Because the social graph is on the blockchain and open, it becomes a valuable state maintained by the blockchain. You can imagine people wanting to publish a lot of transactions for various reasons, such as real-time updates to this data structure. If these transactions are cheap enough, developers will find ways to leverage them.
Historically, whenever computer speed increases, developers find ways to utilize the extra computing power to improve their applications. Our computing power is never enough. People always want more computing power, and I think the same will happen with blockchain computers. And there won’t be a limit, maybe not infinite, but I think the upper limit on demand for block space will be much higher than we imagine.
Anatoly Yakovenko: But on the flip side, the use cases of the internet were discovered quite early, such as search, social graphs, and e-commerce, back in the 90s.
a16z crypto: There are some things that are hard to predict. Bike-sharing was hard to predict; even the form that search ultimately took was hard to predict. The heavy use of streaming video inside social networks was unimaginable at first.
I think, just like here, we can think of some applications that people might build on the blockchain. But given the current limitations and infrastructure constraints, some of these applications feel impossible to imagine. Once these limitations are lifted and more people enter the field to build, we can speculate and there may be many heavyweight applications in the future. So if we let it evolve freely, we may be surprised at how powerful it becomes.
Anatoly Yakovenko: There’s an interesting card game called “dot bomb,” the goal of the game is to lose money as slowly as possible, you can’t actually win or earn money. You manage a group of different startups with ideas from the 90s internet. Without exception, every so-called bad idea, like online grocery delivery and online pet stores, became at least a billion-dollar business at some point after 2010. So I think many ideas may start off bad or fail in the initial implementation, but eventually they will be adopted well in the future.
The Future Adoption of Blockchain
a16z crypto: So the question is, what do you think is the key for blockchain to go from its current applications to becoming mainstream on the internet? If not scalability, then what are the other obstacles, such as cultural acceptance of blockchain? Is it privacy issues? User experience?
Anatoly Yakovenko: This reminds me of the development history of the internet. I still remember how the whole experience changed. After I went to college, I got an email address. Every working person had an email address. I started receiving links with various content. Then the user experience on the internet improved, such as the emergence of Hotmail and the development of Facebook.
Because of this, people’s thinking changed, and they came to understand what the internet is. At first, people found it difficult to understand what a URL is. What does it mean to click on something? What does it mean to connect to a server? We have the same problem with self-custody. We need to make people truly understand these concepts: what does a seed phrase mean? What do wallets and transactions mean? People’s thinking needs to change, and this change is happening slowly. I think once every user who buys cryptocurrency and deposits it into a self-custodial wallet has this experience, they will understand. However, so far, not many people have had this experience.
a16z crypto: You have created a mobile phone. Perhaps you can tell us where the inspiration for making a mobile phone comes from and how you think the current promotion is going?
Anatoly Yakovenko: My experience at Qualcomm made me realize that this is a bounded problem. We can solve it without shifting the whole company to the mobile phone business. So for us, this is an opportunity with low marginal cost that could change the cryptocurrency or mobile industry.
It’s something worth doing. We worked with a company to manufacture a device, and when we collaborated with them to introduce specific cryptocurrency features, we received great feedback from people and developers, who saw it as an alternative to the app store. But everything is unknown: for example, whether cryptocurrency applications, under current macro conditions, are compelling enough that people are willing to switch from iOS to Android. Some people are willing, but not many. Launching a device is very difficult. Basically, every device launched outside of Samsung and Apple has ended in failure, because the production lines of Samsung and Apple are well optimized, and any new startup is far behind these giants in terms of hardware.
So you need some “religious” reason to convince people to change, and perhaps cryptocurrency is that reason. We haven’t proven this yet, but we haven’t disproven it either. We just haven’t seen a breakthrough use case where self-custody is the key feature people need and are willing to change their behavior for.
a16z crypto: You are one of the few founders who can build both hardware and decentralized networks. Decentralized protocols or networks are often compared to building hardware because they are very complex. Do you think this metaphor is valid?
Anatoly Yakovenko: It’s like my previous work at Qualcomm. Hardware problems cause enormous trouble. For example, if a chip tape-out has a bug, the company can bleed tens of millions of dollars a day until it is fixed, which can be catastrophic. In a software company, you can still quickly find problems and patch the software around the clock, which makes things easier.
Community and Development
a16z crypto: Solana has done an excellent job in building its own community and has a very strong community. I am curious, what methods did you take to build the company and establish the ecosystem?
Anatoly Yakovenko: It can be said that there was a bit of luck involved. We started as Solana Labs in 2018, at the end of the previous cycle. Many of our competitors raised several times more funds than us, and our team was very small. We didn’t have enough funds to build and optimize an EVM, so we built a runtime that we believed could demonstrate the key feature: a scalable, unrestricted blockchain whose speed, latency, and throughput would not be constrained by the number of nodes. We really wanted to make breakthroughs in those three aspects.
At that time, we focused only on building this fast network, without paying much attention to anything else. In fact, when the network launched, we had only a very simple block explorer and a command-line wallet, but the network was very fast. That was the key attraction for developers, because at the time there was no other fast, cheap network that could serve as an alternative, and no programmable network offering that combination of speed, latency, and throughput.
This is actually why developers were able to build. Because they couldn’t just copy and paste Solidity code, everything started from scratch, and building from scratch is an engineer’s entry point. If you can rebuild the primitives you are familiar with from stack A in stack B, you end up learning stack B from start to finish, and if you can accept its trade-offs, you become its advocate.
If we had more funding, we might have made a mistake by trying to build EVM compatibility. But in fact, our engineering time was limited, which forced us to prioritize the most important thing, which is the performance of this state machine.
My intuition is that if we can remove the limitations on developers and give them a very large, very fast, and low-cost network, they can remove their own constraints. And this has indeed happened, which is amazing and admirable. I’m not sure if we would have been successful if the timing wasn’t right, for example, if the macro environment wasn’t right at that time. We announced on March 12th, and then on March 16th, the stock market and the cryptocurrency market both crashed by 70%. I think those three days might have saved us.
a16z crypto: Another important factor here is how to win developers?
Anatoly Yakovenko: This is a bit counterintuitive: building your first program requires people to truly invest their time, which we call “chewing glass.”
Not everyone will do this, but once enough people do, they build the libraries and tools that make things easier for the next developer. For developers, this is something to be proud of: libraries get built naturally, and the software naturally expands. This is what we really want the developer community to build and chew on, because it makes them feel that they genuinely own the ecosystem. We try to solve the problems they can’t solve themselves, like long-term protocol issues.
I think that is the origin of this spirit: you are willing to chew glass because you get rewards from it, you get ownership of the ecosystem. We can then focus on making the protocol a cheaper, faster, and more reliable network.
a16z crypto: What are your thoughts on the developer experience, and what role will programming languages play once the field gains more mainstream adoption? It is quite difficult to get into this field, learn how to use these tools, and learn how to think.
In the new model, programming languages may play an important role here, as smart contract security has become a core task for engineers in this field. The risks involved are huge. Ideally, we will eventually see a world where programming languages provide much more help through tooling, such as formal verification, compilers, and automation tools that can help you determine whether your code is correct.
Anatoly Yakovenko: I believe that formal verification is necessary for all DeFi applications. Many innovations happen here, such as building new markets, which are the places where hackers pose the greatest threats and where formal verification and similar tools are truly needed.
I think many other applications are quickly converging toward single canonical implementations that are trusted to work. Once you can establish a single standard for a certain class of problem, things are much easier than for a new startup building a new DeFi protocol: no one has written that kind of code before, so you must bear a lot of implementation risk and then convince people to take a risk by putting money into the protocol. That is where you need all the tools: formal verification, compilers, the Move language, and so on.
a16z crypto: The programming world is changing in a very interesting way. In the past, most programming was traditional imperative programming, similar to JavaScript: you write some code, it is likely to be incorrect and break, and then you fix it.
However, more and more applications are mission-critical, and for these you need a completely different way of programming, a paradigm that gives better assurance that the code you write is correct. On the other hand, another type of programming is emerging: machine learning, where programs are synthesized from data. Both of these are eating away at traditional imperative programming. There will be less and less ordinary JavaScript-style code in the world, more code written by machine learning algorithms based on data, and more code written with formal techniques that look more like mathematics and formal verification.
Anatoly Yakovenko: Yes, I can even imagine, at some point, a verifier-optimized smart contract language, where you tell an LLM to translate it into Solidity or Solana’s Anchor framework. Two years ago people might not have believed it, but with GPT-4 there have been many significant advancements.
a16z crypto: I like this idea. You can use an LLM to generate program specifications that satisfy certain formal verification tool requirements. Then, you can ask the same LLM to generate the program itself. Then, you can run formal verification tools on the program to see if it really meets the specification requirements. If not, it will give you an error, which you can feed back to other LLMs to try again. You can keep doing this until you generate a verifiable and formally verified program.
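The generate-verify-retry loop described above can be sketched in a few lines. This is a hypothetical illustration: `llm_generate` and `formally_verify` are stand-in stubs, not real LLM or verifier APIs, and the "programs" are just strings.

```python
# Hypothetical sketch of the loop: generate a program with an LLM, run a
# formal verifier against the spec, and feed any counterexample back in.

def llm_generate(spec, feedback=None):
    # Stub: a real system would call an LLM here. We simulate a model that
    # only produces a correct program after it has seen verifier feedback.
    if feedback is None:
        return "transfer(a, b, amount)"  # first attempt: no balance check
    return "require(balance[a] >= amount); transfer(a, b, amount)"

def formally_verify(program, spec):
    # Stub verifier: returns (ok, counterexample). A real tool would check
    # the program against the formal specification.
    if "require" in program:
        return True, None
    return False, "balance can go negative"

def generate_verified(spec, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):
        program = llm_generate(spec, feedback)
        ok, counterexample = formally_verify(program, spec)
        if ok:
            return program  # formally verified against the spec
        feedback = counterexample  # feed the error into the next attempt
    raise RuntimeError("no verified program found within the round budget")

print(generate_verified("no negative balances"))
```

The key design point is that the verifier's counterexample, not a human, closes the loop, so the process can iterate until a program passes verification.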
Ecosystem and Talent Recruitment
a16z crypto: We are discussing how to build a strong ecosystem. Many blockchains become decentralized almost immediately after launch, to the point where the core team no longer participates in forum discussions or tries to help other partners get involved. But you seem to be very committed from the start, from the network launch to entering the market. I think this may be a significant advantage in building the Solana ecosystem.
Anatoly Yakovenko: To quote a saying, decentralization is not the absence of leadership, but rather diverse leadership. I still remember how difficult it was to take Linux seriously in a big company like Qualcomm, even the idea of running Linux on mobile devices seemed ridiculous. When I joined, the entire community was working hard to convince everyone that open source was meaningful, and I think that’s what we need to do, the network needs to be decentralized.
But that doesn’t mean there is no leadership. In fact, you need a lot of experts constantly telling people the benefits of using this specific network and its architecture, constantly getting more people to join and cultivating leaders who can spread knowledge and educate worldwide. But that doesn’t mean everything happens under one roof. If the network and the code are open, anyone can contribute and run it. Naturally, it becomes decentralized. You will naturally see leadership emerge from unexpected places.
Our goal is to foster development all around, to make our voice one among many, not to silence others. We pay a lot of attention to hackathons and to fans, and we try to connect them to each other and involve them in this cycle. It’s like a flywheel: we connect people with developers from all over the world, have as many one-on-one interactions as possible, then get them all to participate in hackathons, compete, and push them to build their first or second product.
Among cryptocurrency users, only a few products can enter the market, receive venture capital, and have a scalable user base. In my opinion, this means we don’t have enough creativity. We don’t have enough founders aiming for targets and finding truly scalable business models that can reach millions of users. Therefore, we need a large number of companies to compete and see if they can come up with brilliant ideas. That is the biggest challenge.
a16z crypto: One related question is how to involve the community in developing part of the core protocol itself? This is one of the trickiest balancing issues for any blockchain ecosystem. On one hand, you can allow the community to actively participate, but on the other hand, your flexibility may be reduced. Additionally, governance processes involve more people and can be difficult to coordinate. On the other hand, you can also control things in a more top-down manner and therefore develop faster. But in terms of community involvement, you will be influenced to some extent. How do you find a balance?
Anatoly Yakovenko: In general, when I was working at the foundation, we saw people actively contributing to the things they wanted to work on. Then they would go through a proposal process involving a grant or other incentives. It’s similar to an interview process: when I hire someone for the lab, sometimes the company culture doesn’t match the person, or there’s some other reason; it doesn’t mean the person is bad, just that something didn’t click. Similarly, you’ll find engineers already submitting code and contributing to the codebase. They already know how to get code merged and how open-source development works. When you find people who can solve problems on their own, you can give them funding, which is very important, to make sure you find truly excellent people who submit code and are willing to work on it long-term.
a16z crypto: What do you think is the best way to run decentralized governance protocols today?
Anatoly Yakovenko: For the L1, the approach we take seems to be effective, like Linux: move forward while avoiding vetoes from any participant as much as possible, taking the path of least vetoes. Honestly, there are many participants who can veto any change because they feel it is not good or shouldn’t be made. But when we make the system faster, more reliable, and less memory-hungry, no one opposes those changes.
Ideally, we have a process where you publish a design and everyone spends three months discussing it. So, before merging, everyone has plenty of chances to look at the code and decide whether it’s good or bad. This process may seem lengthy, but in practice it isn’t. If you’ve worked at a large company like Google or Qualcomm, you know you have to talk to a lot of people, push the change, make sure all the key stakeholders who touch the codebase can accept it, and then slowly get it done. Radical reforms are harder, but because many smart people are looking at the same thing, they might really find mistakes before the final decision is made.
a16z crypto: How do you consider talent recruitment?
Anatoly Yakovenko: In terms of engineering, our requirements are often high, and we usually hire quite experienced personnel. My hiring approach is that in the initial stage, I will put effort into something so that I know how it should be done. Then I will tell the new employees that this is how I do it. I don’t expect them to complete it within 90 days or surpass me. I can evaluate them during the interview and tell them about the problem I am currently solving. I need someone to take over so that I can focus on unknown things. In a startup, if you are the CEO, it’s best not to give others an unknown problem because you don’t know if they can solve it.
When the ecosystem developed to a certain point, a PM was needed. By then I was spending too much time answering questions, sometimes until 2 am. I realized someone else should do this, and by then I knew exactly what the job entailed.
a16z crypto: How important do you think privacy will be for blockchain in the future?
Anatoly Yakovenko: I think the whole industry will undergo a transformation. First, some visionary people will pay attention to privacy, and then suddenly, large payment companies or other companies will adopt this technology and it will become the standard. I think it needs to become a feature – if you don’t have this feature, you can’t compete. We haven’t reached the level of market maturity yet, but I think we will. Once many people use blockchain, every merchant in the world will need privacy. That’s just the minimum requirement.
a16z crypto: What impact does the Solana architecture have on MEV? Does the leader have too much power to reorder transactions?
Anatoly Yakovenko: Our initial idea was to have more than one leader per slot. If we can get as close to the speed of light as possible, about 120 milliseconds, then you can have a discrete batch-time auction every 120 milliseconds globally. Users can choose among all available block producers: either I optimize for latency and send to the nearest block producer, or I optimize for the highest rebate and accept the extra latency. In theory, this could be the most efficient way to run finance. This is a theory, and we haven’t tested having multiple leaders per slot, but we are close to this goal and I think it may be feasible, perhaps achievable next year.
I think once we achieve this solution, we will have a very powerful system that essentially forces competition and minimizes MEV.
a16z crypto: What is your favorite system optimization in the Solana architecture?
Anatoly Yakovenko: I like the way we propagate blocks, which was one of our early ideas and one of the things we really needed to get right. We can scale the number of nodes in the system, and we can transmit a large amount of data, but each node has a fixed, limited amount of egress bandwidth that it must share.
At a high level, when creating a block, the leader breaks it into fragments called shreds and erasure-codes them. It then transmits the shreds to nodes, and each node forwards them to other nodes on the network. Because the data is mixed with the erasure coding, it remains highly recoverable as long as enough shreds are received. The number of nodes propagating the data is very large, so the probability of failure is very low unless something like 50% of the nodes fail, which is very unlikely. So this is a very cool optimization: very low cost, very high performance.
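The counting argument behind this scheme can be illustrated with a toy simulation. This sketch only models the threshold property of an erasure code (any k of n shreds suffice), not the actual Reed-Solomon mathematics Solana uses, and the shred counts and loss rate are invented for illustration.

```python
import random

# Toy model of erasure-coded block propagation: a leader emits n shreds
# (data plus parity); with an MDS erasure code, any k of them are enough
# to reconstruct the block, so the network tolerates substantial loss.

def recoverable(received_shreds, k):
    # Threshold property of an MDS code: k distinct shreds reconstruct
    # the block, regardless of which k arrive.
    return len(set(received_shreds)) >= k

def simulate(n_shreds=32, k=16, drop_rate=0.3, seed=1):
    # Drop each shred independently with probability `drop_rate` and
    # check whether the survivors still reconstruct the block.
    rng = random.Random(seed)
    received = [i for i in range(n_shreds) if rng.random() > drop_rate]
    return recoverable(received, k)

print(simulate())                 # 2x redundancy usually survives 30% loss
print(simulate(drop_rate=0.55))   # past ~50% loss, recovery starts failing
```

With k = n/2, the block survives until roughly half the shreds are lost, which matches the ~50% failure threshold described above.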
a16z crypto: How do you view the future application development of cryptocurrencies? How will users who do not understand blockchain adopt blockchain in the future?
Anatoly Yakovenko: I think we have some breakthrough applications and payment methods, because using cryptocurrencies for payments has obvious advantages compared to traditional systems. I think once regulations are in place and Congress passes a few bills, payments will become a breakthrough use case. Once we have a means of payment, I think the other side will also develop, such as social applications, which can be messaging apps or social graph apps. These applications are currently growing slowly. I think they are in a golden period of takeoff and will reach a truly significant number.
Once mainstream adoption is achieved, it is possible to iterate, understand what people really want, and provide them with these products. People should use products for their utility, not for tokens.
a16z crypto: What advice do you have for builders in this field or builders outside of this field? Or any advice for those curious about cryptocurrencies and Web3?
Anatoly Yakovenko: What I want to say is that now is the best time. The current market is relatively low in terms of macro, there is not much noise, and you can focus on the fit between the product and the market. When the market turns, these discoveries will greatly accelerate your development. If you want to work in the field of artificial intelligence, people should not be afraid to start AI companies or cryptocurrency companies or other companies now. You should try and build these ideas.
But what I want to say is that people should try to create greater ideas instead of repeating what already exists. The best analogy I’ve heard is that when people discovered cement, everyone focused on building bricks with cement. Then one person thought, maybe I can build a skyscraper. They figured out a way to combine steel bars and construction, which no one thought of. The new tool is cement, you just need to figure out what a skyscraper is and then build it.