Scroll Talk #1: Scroll and ZK Take the Journey Together


Source: Scroll CN

Scroll Talk is a podcast program hosted by Scroll CN. Through various formats, we will have conversations with the Scroll team and Scroll ecosystem projects to help everyone better understand Scroll.

In this episode, we invited Ye Zhang, co-founder of Scroll, to discuss topics related to Scroll and ZK, including the design and trade-offs of zkEVM, choice of proof systems, hardware-accelerated proving networks, and the future of ZK.

F.F: Hello everyone, welcome to Scroll Talk. Today, we are delighted to have Ye Zhang, co-founder of Scroll, here with us. Previously, Scroll CN has published many interviews and speeches about Ye. This should be the first time we have a face-to-face conversation with Ye, and we would like to express our gratitude for his presence. Ye has great influence in the zero-knowledge proof community, but we still want Ye to start with a simple self-introduction.

Ye: Hello, and thank you to Scroll CN for arranging this interview. I would also like to express my gratitude to Scroll CN for their contributions in the Chinese community, including the high-quality translations that have helped us have a significant impact in the Chinese community. Let me briefly introduce myself. Hello, everyone, my name is Ye Zhang, and I am one of the co-founders of Scroll. My main focus is on research related to zero-knowledge proofs. I have concentrated on three main directions.

The first direction is hardware acceleration of zero-knowledge proofs. I started working on this direction about 5 years ago because back then, the biggest bottleneck for using zero-knowledge proofs was the slow proof generation. For example, applications like Zcash took a long time to generate a proof for a transaction, possibly 10 minutes or even longer. This resulted in zero-knowledge proofs not being widely adopted by many systems due to low proof efficiency. So, my first research direction was how to accelerate proof generation using GPU, FPGA, and ASIC hardware.

The second direction is more inclined towards the cryptography and mathematics behind zero-knowledge proofs. Because zero-knowledge proofs are complex cryptographic protocols involving a lot of mathematical concepts like polynomials, my main research work involves reading a lot of papers and finding ways to optimize existing algorithms, which is more theoretical.

The third direction is more application-oriented, focusing on designing architectures and circuits for zkEVM and how to generate proofs for zkEVM.

In summary, my research covers three main areas: hardware acceleration of zero-knowledge proofs, theoretical algorithms of zero-knowledge proofs, and applications related to zero-knowledge proofs.

In Scroll, I mainly focus on research-related work, including zero-knowledge proof research, protocol design research, and some company-related strategies.

F.F: Thank you, Ye. So, we know that you have been doing research on ZK. What prompted you to start Scroll and continue to delve into the ZK field? What motivates you to keep moving forward?

Ye: This is a unique story. Nowadays, most people hear about zero-knowledge proofs because they realize blockchain has a demand for them. My path was the opposite: I was attracted to ZK first and only later discovered it could be used in blockchain. During my undergraduate years, I was researching hardware acceleration algorithms alongside a senior student in the lab. At the time, AI acceleration was gaining popularity, but I wasn't very interested in AI. Parameter tuning in AI didn't offer a mathematical model I could reason about: there was no way to understand why training would yield certain parameters and results. I preferred mathematics with stronger determinism, where I could know the probability of events occurring. So I naturally gravitated towards cryptography and number theory. That's when I came across zero-knowledge proof algorithms, realized they had a significant need for hardware acceleration, and started studying this field.

Later, while researching acceleration algorithms, I discovered that the algorithms themselves held more allure than just improving hardware performance. They involve clever polynomial constructions and protocol designs. If you dive deep into any zero-knowledge proof protocol, you will discover its ingenuity: it encodes programs as polynomials, verifies properties of those polynomials by evaluating them at points, and finally compresses the proof into a very small size. The entire mathematical structure is remarkably ingenious. So initially I entered the zero-knowledge proof industry purely because I was captivated by its mathematical constructions. Only later did I discover that what I was researching happened to address blockchain's biggest problem at its core: scalability.
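(Note: the "checking polynomials at points" idea can be made concrete with a small sketch. The Python snippet below is purely illustrative, not Scroll code; it shows the Schwartz-Zippel principle underlying many proof systems: two low-degree polynomials that agree at a single random point in a large field are, with overwhelming probability, the same polynomial, so one evaluation can stand in for checking them everywhere.)

```python
import random

P = 2**61 - 1  # a large prime field (an illustrative choice)

def eval_poly(coeffs, x):
    """Evaluate a polynomial (lowest-degree coefficient first) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):  # Horner's rule
        acc = (acc * x + c) % P
    return acc

f = [3, 0, 5, 7]  # 3 + 5x^2 + 7x^3
g = [3, 0, 5, 7]  # a second polynomial, perhaps produced by a different computation
r = random.randrange(P)  # one random evaluation point

# Distinct degree-d polynomials agree at a random point with probability at most d/P,
# so agreement here means they are almost surely identical everywhere.
assert eval_poly(f, r) == eval_poly(g, r)
```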

Later, I also became aware of the thriving Ethereum ecosystem, which has a great open-source community that aligns well with my personal beliefs. Its research atmosphere, open-source attitude, and pursuit of academic rigor completely attracted me. At the same time, I realized that blockchain is not just a fictional concept, but a real architecture that can solve many problems in people’s lives. It could be the next generation of financial infrastructure. Many people truly need transparency and resistance to censorship. So, I believe blockchain has real-world applications, and at the same time, my technology can address this problem.

At the beginning of 2021, the timing was perfect because the efficiency of zero-knowledge proofs had increased by 2-3 orders of magnitude. When a technology sees such a significant improvement, it presents enormous opportunities, whether for startups or other new ventures. Problems that couldn't be solved before can now be addressed. At that time, I saw zkEVM as the biggest opportunity, as few people were working on it and fewer still had achieved results. With the right momentum and that accumulated technology, we started working on Scroll.

In fact, during my PhD, I was also researching zero-knowledge proofs. However, I realized a limitation of academia: at Scroll, one has great flexibility in ZK-related research, whereas in school you must collaborate with an advisor, which limits your research to a specific direction.

But with Scroll, you have more flexibility. Since you're solving real-world industry problems, the impact of solving them is greater. Furthermore, you aren't limited to a single PhD direction, as you can collaborate with more people through grants and other means. So in essence, I'm doing at Scroll the same research I would do in a PhD, but in industry it carries more influence, solves real-world problems, and lets me collaborate with a broader range of people. That's why I find this path more appealing than pursuing a PhD.

ZK Technology Development and Future

F.F: I understand. Thank you Ye, I can see the fascinating mathematical charm behind ZK, which has captivated you to continue your research in this field. From what I’ve heard, a major breakthrough happened a couple of years ago, something similar to ChatGPT’s sudden emergence this year, with a significant impact. Is that correct?

Ye: Yes, absolutely. However, it wasn’t like ChatGPT, an explosion that caught everyone’s attention instantly. It was a process involving multiple factors. For example, the hardware acceleration aspect I had been researching can boost the efficiency of zero-knowledge proofs by 10 to 100 times. Additionally, new representations of polynomial circuits, like using higher-order custom gates and lookup tables, can lead to a 10-fold reduction in costs. Moreover, recursive proofs can aggregate numerous proofs, saving a lot of verification costs. So, I believe these three factors combined resulted in a tremendous efficiency improvement.

Ultimately, the result is increased efficiency, but unlike ChatGPT, it is the outcome of the efforts of cryptography experts and hardware engineers.

F.F: Since we are talking about AI, what do you think about the combination of ZK and AI, including the recent release of Worldcoin using ZKML technology? What differences do you see between the intersection of ZKML and the separate fields of ZK and AI?

Ye: There are indeed many people working on ZKML, but I believe the direction is still in its early stages. It has some applications: verifying that a photo was really taken by your camera and not heavily edited; proving that an audio recording came from a specific person; or verifying that Microsoft is serving the same model to everyone, since when you send an input and get an output back, you can't tell whether different individuals are being served different models. There are small applications like these, but I haven't yet seen demand significant enough for ZKML to be applied as widely as AI.

For example, with ChatGPT, the few companies that own such models hold absolute leverage. They don't need to prove to you that they used a particular model, and you can't demand that they do. Things would change only if there were a market of ten ChatGPT-like companies, ChatGPT declined to offer such proofs, and one competitor was willing to; then users with that demand would choose the competitor's service. But currently only a few companies in the market are capable of building models like ChatGPT, and they lack strong motivation or incentives to do this for you. So I think this road is still quite long, especially for problems like photos and audio, where many challenges remain; you may even need hardware support to build such a system.

Overall, I believe there is still a long way to go. ZKML may enable some new approaches to gaming strategies and liquidity management, and it may find some small use cases, but finding larger applications with real product-market fit will take time to validate. Moreover, ZKML cannot prove that the training process was correct; it can only prove that an inference was computed correctly. This further limits what it can do, and I think there is still some distance to cover.

Most ZKML companies are still building tools. I know some are trying to convert code written in TensorFlow or PyTorch directly into ZK circuits to generate proofs, which could be an interesting direction. They are starting with DSLs and SDKs and encouraging new innovation, but it is still early. Ultimately it may develop into ZK-ification of general computation, with libraries specialized for ML primitives such as matrix multiplication or convolution; that would benefit these applications more than focusing solely on ZKML. I believe there is still a long way to go.

There is also Daniel Kang, a professor at UIUC, who has given a lecture at Scroll's ZK Symposium; if anyone is interested in this area, you can check out our series.

F.F: Alright, thanks Ye. And from what I’ve heard, ZKML is still in its early stages. It’s currently focused on smaller development directions, and when it comes to general computation, it’s still quite early. It may only find its place in the market when there is a high demand for privacy. Looking at the bigger picture of ZK, Vitalik made a statement saying that ZK and blockchain are equally important concepts. How do you view this perspective?

Ye: I completely agree because ZK really solves a lot of problems that blockchain alone can’t handle. It’s a perfect combination. Blockchain can’t solve the scalability issue, but ZK can compress computations and deal with scalability. Blockchain is always transparent, it can’t address privacy concerns, but ZK can hide information and solve privacy problems. So, in my opinion, ZK and blockchain are a natural and excellent combination.

In addition, ZK’s support for general computation is growing rapidly, so I think it has a great opportunity. For example, in terms of privacy, like private transactions, privacy pools, and even some on-chain poker games where you don’t want others to see your cards after they’re dealt on the chain, ZK can hide information through zero-knowledge proofs. This type of hidden information game can only be achieved on the blockchain through ZK. In terms of privacy, ZK Identity is also an interesting direction with great potential. How to get billions of users to adopt blockchain may require us to digitize some existing identity systems using ZK, so that people are willing to share information on the blockchain.

In terms of scalability, techniques like Rollups, which compress computation, and coprocessors, which move computation off-chain and post only proofs on-chain, provide a very good combination of on-chain and off-chain solutions.

There are also some other interesting small directions with a lot of potential. Some teams are building ZK cross-chain bridges or ZK hardware to provide services. But I think it will take a few more years to fully mature. Whether it’s in terms of the convenience of developer SDKs or the efficiency and security of ZK, there is still a long way to go.

F.F: I understand, thank you Ye. Based on your description, ZK and blockchain complement each other. Besides the application scenarios we mentioned earlier, from the perspectives of efficiency and fairness, what changes do you think this technological innovation will bring to the real world?

Ye: I think we can turn any computation into something that doesn’t require trust, which is a very powerful feature. You can throw any computation onto a trustless platform and have it return a result, generating proof to verify its correctness. This ensures the accuracy and verifiability of your computations. Then there are various applications such as identity, privacy, scalability, and more, as I mentioned.
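(Note: a minimal mock of the prove-and-verify workflow described above. All names are hypothetical and the "proof" is a placeholder hash rather than real cryptography; the sketch only shows the shape of the interface, in which heavy execution and proving happen off-chain while verification is a cheap check that never re-runs the program.)

```python
import hashlib
import json

def prove(program, inputs):
    """Run the computation off-chain and return (output, proof).
    The placeholder 'proof' is a hash; a real system would emit a SNARK here."""
    output = program(*inputs)
    transcript = json.dumps({"prog": program.__name__, "in": inputs, "out": output})
    proof = hashlib.sha256(transcript.encode()).hexdigest()
    return output, proof

def verify(program, inputs, output, proof):
    """Cheap check a contract or client could run. A real verifier is succinct:
    it checks the proof against a commitment to the program without re-executing it."""
    transcript = json.dumps({"prog": program.__name__, "in": inputs, "out": output})
    return proof == hashlib.sha256(transcript.encode()).hexdigest()

def heavy_computation(n):
    return sum(i * i for i in range(n))

out, pi = prove(heavy_computation, [10_000])
assert verify(heavy_computation, [10_000], out, pi)
```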

F.F: I see, thank you Ye. So, ZK might have a significant impact on general-purpose computing, providing both privacy and trustworthiness, which is a promising direction. If you weren’t working on Scroll and had the opportunity for a second venture in the ZK field, which track or direction would you choose?

Ye: That’s a difficult question. I believe zkEVM is definitely the biggest direction; since it carries the traffic from Ethereum, it will undoubtedly be the largest. If I had to choose another track, I personally think coprocessors are quite promising: they can efficiently perform non-EVM computation while remaining verifiable. Another direction would be identity protocols. Building a robust identity system is a challenging task, and it can address many real-life issues. Especially after visiting Africa and witnessing the problems arising from underdeveloped financial infrastructure, I believe identity would be a significant direction.

If I had to make a personal choice: at a smaller scale, identity presents a great opportunity; with a powerful engineering team, you should aim for more complex tasks, and the ZK coprocessor could be a favorable direction, although that track already has many participants. Identity, by contrast, is a track that hasn't reached widespread adoption yet, and it requires not only technology but also a business strategy: you need to consider which business partners to collaborate with and whether you can convert their extensive data into ZK to rapidly expand your user base. Technology might be the smaller problem there.

If you are a highly innovative person, you can also explore the direction of ZK games. Games require excellent design skills to ZK-ify the hidden information. However, ZK is not a universal tool and cannot solve every privacy problem: the prover still has to know the information being proven. So designing a ZK game requires great ingenuity to make the most of it. If you are full of ideas and love playing games, thinking through your game logic and creating an interesting ZK game can also be a fascinating direction.

F.F: Thank you, Ye. You mentioned three directions earlier: the first being coprocessors, similar to what Axiom is working on; the second is the identity direction, which can be understood from what Worldcoin is doing as a special example; and the third would be games, which would be something that ordinary users could encounter in their daily lives. By the way, Ye, you mentioned that you recently returned from Africa. Did you have any insights or achievements while promoting and spreading ZK technology, including Ethereum, there?

Ye: This time was another unique experience. Let’s first give a brief introduction. In February of this year, Vitalik, Aya from the Ethereum Foundation, and a few others went to four countries in Africa. They spent almost a month there, organizing events with the African community, meeting African founders, and understanding the situation on this continent. Ethereum’s presence in the African community is still relatively small, so they went there to understand the current state of the community and what it needs, to spread the value of Ethereum. Their conclusion was that Ethereum is still a bit expensive.

They wanted to plan a Layer 2 trip, bringing Ethereum’s Layer 2 to Africa. Because people in Africa can’t afford Ethereum, they can only access the Ethereum world through Layer 2. So, sometime around April or May this year, through an introduction by Vitalik and the organizer of their previous trip, Joseph, we explored the opportunity to organize a Layer 2 trip. After the discussion, we found that our values were very much aligned. One of Scroll’s values is bringing real users and use cases into the blockchain, so we were excited to understand the real needs in Africa.

After going there, I realized it really is different, which gave me more confidence in the real use cases in developing and emerging countries. Before going to Africa, I, like many people I knew, had doubts about whether blockchain met a real need, or whether it was just a scam or a tool for issuing tokens. People with such views, whether in China, other parts of Asia, or in the West, in the US and Europe, understand blockchain in terms of whales and liquidity mining. They don't really need blockchain in their lives; they just see tools that can earn them more profit. At most, they might feel their assets are somewhat safer on the blockchain. It's not a particularly urgent tool for them.

We visited two countries, Kenya and Nigeria, and it became clear to us that people there really need blockchain as a platform for their daily lives. One very obvious example is that it’s not possible to transfer money directly between two neighboring countries through a bank; it requires a long detour. Their financial infrastructure is poorly developed and unable to support a global system.

So, what they really need first is a payment tool, and blockchain is extremely useful as a payment tool; it can truly change their lives. Because if they want to transact with neighboring countries, they need a blockchain payment medium. Many people say, “What can blockchain do? It’s just a global payment system, it sounds like a single-purpose tool.” But a global payment system can meet the needs of many people, especially in countries with underdeveloped financial infrastructure. However, in China, the US, and Europe, where infrastructure is well developed, you never have to worry about such problems.

The second thing is that their inflation is very high. Their currency may have experienced an inflation rate of 10% from the time we went there until now. Just imagine, holding Renminbi or dollars in your hand and seeing them depreciate by 10% in a month, while your investments might only grow by 3-4% per year, and prices keep rising. This greatly affects their lives, and stablecoins are a way for them to obtain dollars. They need dollars because the inflation rate of the dollar is relatively low, so they hope to acquire dollars. However, they cannot obtain dollars directly because they cannot open a bank account in the United States. So they actually buy USD stablecoins and hold some assets on the blockchain. Obtaining USDT is a very important way for them to prevent severe inflation. Maybe in China, holding Renminbi is sufficient, and only when buying cryptocurrencies is USDT needed. But for them, it is a real necessity in their daily lives. They frequently engage in OTC transactions and convert USDT into their own currency when they actually use it. So I think this is a significant use case, and in these countries and many other places, they truly have this demand.

The third thing is that their financial infrastructure is not sound, resulting in poor credit ratings and identification when borrowing. Therefore, when they borrow money, for example, borrowing $100 may take a month or more, and various approvals are required because information does not flow smoothly between financial institutions. This means that lending, which is a significant business for banks and many financial institutions, is very inefficient in their case. So I think this is also a huge opportunity.

There are many real-use blockchain applications in Africa. For example, if a good identity system could solve these problems and provide lending or other services on the blockchain, it would be very valuable. This was the first time I truly felt that the technology we build is changing the lives of people in many corners of the world, which is very important.

Part of Scroll’s values is to bring the next billion people into Ethereum. People often criticize BSC for being centralized and Ethereum for being expensive. But BSC has many real users because of Binance. For the first time, in Africa, I saw many people actually using Binance for payments because it is simple and easy to use. We hope to bring back these real users to Ethereum, and this is part of our mission. We want to bring back the next billion users to a more trustless Ethereum after reducing the fees through Layer 2. Because if you keep your money in a centralized exchange, there may be some problems. So we hope to put it on a Layer 2 solution and inherit the security of Ethereum. This is a great opportunity.

Imagine a future where cryptocurrencies play a crucial role in everyday life, and blockchain technology is adopted in the real world, especially in emerging economies.

  • Children in Turkey can buy ice cream on a hot summer day using stablecoins on Scroll, with just one click to convert their cryptocurrencies into Turkish Lira.

  • An elderly person in Argentina can receive government benefits and subsidies on Scroll, reducing fraud and ensuring fair distribution of funds.

  • Philippine businessmen can make cross-border remittances in seconds using Scroll, without the need for many intermediaries.

  • Farmers in Kenya can access loans through Scroll’s transparent credit scoring system, solving trust issues and increasing the utilization of working capital.

All these things will happen simultaneously, including institutional adoption, government-issued stablecoins, and relaxed legal compliance in different regions.

We believe that the next billion users will come from places that truly need cryptocurrencies. Scroll aims to bring these users into the crypto ecosystem and solve real-world problems such as financial inclusion, social coordination, and individual sovereignty.

Additionally, our second point is that Scroll's core values are not about excessive marketing and self-promotion in different places. We want to bring educational and research resources to Africa and similar regions to accelerate their learning in this field, not just for Scroll, but for blockchain education as a whole. Some projects have tried various strategies in Africa, but they mostly just hand out grants without much thought for long-term community development, which does not build a value-driven community. We want to do the right thing: bring educational resources to Africa, understand the real needs of local communities, and provide tailored help instead of just throwing money around. We genuinely care about the people and communities in these places. I think the same applies to many applications today: many apps just deploy from one blockchain to another, targeting the same airdrop hunters or Western users. If we can truly diversify our user base, it will be a huge advantage for the application ecosystem on our chain. Attracting people from different places to try and experience your app is a significant direction we are considering.

zkEVM

F.F: Thank you so much, Ye, for sharing your insights about Africa. It seems like developing countries indeed present a tremendous opportunity because they lack the infrastructure that our generation already has. This blank slate allows for the direct application of new infrastructure into their daily lives. And Scroll can leverage this market to bring blockchain to the next billion users.

So let’s talk specifically about Scroll’s development of zkEVM. The classification of zkEVM is a topic that has been discussed over and over, but we all know it’s a trade-off between performance and compatibility. Scroll has been working with the PSE team to build Type1 zkEVM, which is the most compatible option. Our question is, with the development of zk technology, is it possible to break this trade-off in the future? Or will everyone naturally choose a more compatible direction once the performance improves?

Ye: First, let’s talk about our technology stack. Since the beginning of 2021, we have been working with the ZK team of the Ethereum Foundation, also known as the PSE team, to build Type1 zkEVM. About half of the codebase contributions come from us, and the other half comes from PSE, with some occasional contributions from the community. We have always been supportive of this community-driven open-source development atmosphere and have been committed to contributing code to Ethereum. The goal of this project is to build a Type1 zkEVM that can truly be used on Ethereum’s Layer 1, to change Ethereum’s roadmap and build its future, not just for ourselves. It is a community version of Type1 zkEVM built by us, PSE, and other community contributors. So it’s not just our effort, but everyone’s contribution.

As for Scroll itself, we need a mainnet: a version with complete product features and more thorough audits. According to our current evaluation, the proof cost for Type1 is 10 times that of Type2, so we believe that even if you want to build a Type1 zkEVM, you need to get there in stages and test your architecture along the way. The best approach is to first build a Type2 version. Ethereum's architecture is also constantly evolving: by the time you have a Type1 with sufficient performance, the architecture will have changed, and you may need further adjustments. The main difference between Type1 and Type2 lies in whether the storage layout is shared with Ethereum.

Currently, our focus is on getting a Type2 to a state where it is usable as a product. The current codebase evolved from the community version we collaborated on: we changed its storage, redesigned some modules accordingly, optimized the GPU prover, and optimized many other things, ultimately compressing the proof time on our GPU prover to about 10 minutes. This is a very efficient zkEVM. At the same time, we will continue to help Ethereum build a Type1 zkEVM, to see how it can become more robust and to build Ethereum's future. So our mission is to build a high-performance, product-ready, and comprehensively audited Type2 zkEVM, while also helping Ethereum build a Type1 zkEVM.

Because we believe there is still a significant performance gap and a need for practical testing, we are focusing on a Type2 zkEVM. This does not affect compatibility: all contracts and tools such as Foundry, Remix, and Hardhat are fully compatible without any plugins. Additionally, our upcoming testnet and mainnet will support precompiled contracts like pairing, demonstrating excellent compatibility. However, we believe that entering the next stage will still require a lot of effort. Furthermore, we adhere to the principle that without proofs, a chain cannot be considered a secure Layer2; we prioritize security in zkEVM and zkRollup.

Regarding our development status, it is already quite advanced. All opcodes, including push0, are supported. We may be the only zkRollup that supports push0; in fact, not just among zkRollups, we may be the first Rollup to do so. We are also the only zkRollup that supports the pairing precompile and can verify pairings. This is the progress and compatibility of our own development, and our audits are already underway. You can check out our Sepolia blog for more information.

As for the direction of compatibility among the various parties, my personal speculation is that everyone will ultimately move towards stronger compatibility, except for Starkware. Since it has Kakarot to support zkEVM, it won't consider another direction and should stick with the Cairo language. After zkSync went live, the feedback we received from developers is that it still requires a lot of code changes, and they still don't fully trust its security. For contracts, security is the most important aspect: even if efficiency is high, requiring code changes and re-auditing pushes the cost onto developers, which is not a very sustainable direction. Therefore, we believe achieving high compatibility and sparing developers any code modifications is extremely important, and I think everyone will strive in this direction.

However, I don't think Type1 is a very accurate way to generalize. I simply speak of Ethereum equivalence, EVM equivalence, and language-level compatibility; I find that a more intuitive and direct description, as there is no precise definition. Vitalik only loosely proposed a classification method, which I believe can change. It's hard to define a company's vision based on the current stage. Our plan is to first launch our testnet and mainnet on Type2, run practical and performance tests, and then consider other upgrades while continuously helping Ethereum build a Type1 zkEVM. Labeling things Type1 and Type2 is not precise, because these are phased goals.

In summary, if you want to implement zkEVM, it must be developed towards compatibility because the proof technology has significantly improved. It can make zkEVM very fast, so sacrificing compatibility for just two or three times more efficiency is unnecessary. I even question whether it could be faster. Therefore, I believe it will continue to develop toward better compatibility.

F.F: Thanks Ye. Scroll has always focused on improving performance while maintaining compatibility. Other platforms may choose different trade-offs, but ultimately everyone may move towards Ethereum compatibility, except for Starkware. From a user's perspective, there are two main aspects: speed and cost. In terms of speed, Scroll has consistently maintained a block time of around 3 seconds; Polygon used to have a block time of over 10 seconds, recently reduced to around 3 seconds, and Linea currently also has a block time of around 3 seconds. What determines the 3-second block time? Is it determined by the performance of the sequencer? Currently everyone is using centralized sequencers, and well-known Alt L1 platforms have even shorter block times, and they operate at the consensus layer. What considerations or bottlenecks are there in this area?

Ye: The 3-second block time we designed can actually be made even faster. The current 3 seconds is based on our present network capacity: if the block time were shorter, more provers would be needed to prove blocks promptly, and if it were too short, posting data on-chain could become a bottleneck. Additionally, the actual throughput of a zkRollup typically does not reach several thousand TPS, but rather only a few tens. So, to be honest, a very fast block time doesn't provide much benefit. A centralized sequencer can achieve high speeds, but that is only an intermediate phase; it ultimately depends on the network capacity of the provers and the bottleneck of posting data on-chain.

Actually, you can estimate the maximum throughput and then derive the block time. Block time is also a trade-off. For example, a 3-second block time with a 10-million-gas block is equivalent to a 30-second block time with a 100-million-gas block. If you only consider block time, you could produce a block containing a single transaction every few milliseconds, as Arbitrum did previously (I'm not sure about its current state). So I think it's better to consider block size as well. A better metric might be gas per second: how much gas you can process per second. This is more scientific than raw throughput or block time. It depends entirely on whether the bottleneck is on-chain or with the provers. It's not solely about the efficiency of a centralized sequencer; there are many contributing factors.
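(Note: the gas-per-second point in numbers. The figures below are illustrative, not Scroll's actual parameters.)

```python
# Throughput is gas processed per second, independent of how it is sliced
# into blocks: a 3 s block of 10M gas and a 30 s block of 100M gas are
# equivalent, while block time alone says nothing about throughput.

def gas_per_second(block_gas_limit, block_time_s):
    return block_gas_limit / block_time_s

print(gas_per_second(10_000_000, 3))    # ~3.33M gas/s
print(gas_per_second(100_000_000, 30))  # ~3.33M gas/s: same throughput despite 10x the block time
print(gas_per_second(1_000_000, 3))     # ~0.33M gas/s: same 3 s block time, 10x less throughput
```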

The second aspect is that once everyone decentralizes, it also depends on the chosen decentralization scheme. Consensus protocols can be divided into permissioned and permissionless; BFT-style protocols may be faster, while longest-chain protocols may be slower. These are choices different Rollups make based on their own philosophies: some prioritize fast finality and user experience, while others prioritize decentralization and may sacrifice the advantage of fast block times.

F.F: Understood, thank you Ye. So block time comes down to each chain's own choices. We'd also like to ask about cost. Currently, zkSync and Polygon zkEVM have both gone live on mainnet, and we noticed their gas fees are actually higher than those of optimistic Rollups. However, zkSync and Polygon zkEVM may be two different situations: zkSync has so many interactions that gas prices are pushed up, while Polygon zkEVM has so few interactions that the fixed L1 costs are amortized over fewer transactions, naturally leading to higher fees. Scroll's transaction fees on the testnet are very low; after mainnet launch, how do you plan to address these two issues?

Ye: You can take a look at our Sepolia blog, where we describe many of the optimizations we have made; there are many technical summaries there. After the Goerli testnet, we did a tremendous amount of optimization. There is a graph in the Sepolia blog explaining how we compress proofs. Specifically, we transformed the previous one-proof-per-block structure into a three-layer Block, Chunk, Batch structure: block proofs are aggregated into a chunk proof, and chunk proofs are aggregated again into a batch proof. Across these two major aggregation layers there are further internal layers to compress and verify proofs, and we have done a lot of work to reduce verification cost. We are also exploring better algorithms for recursive proofs. Another significant optimization is controlling the frequency of block production and of submitting data to the chain, especially for the cross-chain bridge; we managed to reduce gas costs by 50%, and you can find this in the blog as well.
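(Note: a toy sketch of the Block, Chunk, Batch aggregation described above. `aggregate` is a placeholder for a real recursive-proof step; the point is the structure, in which many block proofs collapse into a single proof verified on L1.)

```python
from dataclasses import dataclass

@dataclass
class Proof:
    covers: list  # block numbers this proof attests to

def prove_block(block_number):
    return Proof(covers=[block_number])  # one proof per block

def aggregate(proofs):
    # Placeholder for a recursive step that verifies child proofs inside a
    # circuit and emits one proof attesting to all of them.
    return Proof(covers=[b for p in proofs for b in p.covers])

blocks = [prove_block(i) for i in range(8)]                    # 8 block proofs
chunks = [aggregate(blocks[i:i + 4]) for i in range(0, 8, 4)]  # 2 chunk proofs
batch = aggregate(chunks)                                      # 1 batch proof posted to L1
print(batch.covers)  # all 8 blocks covered by a single on-chain verification
```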

In the future, there will be many more directions for optimization. For example, we have recently been researching how to upgrade our cross-chain bridge after EIP-4844, using blobs to further reduce gas fees; we will have a dedicated blog article on this. Additionally, I think there is another crucial aspect: why is zkEVM still more expensive than OP? It's because all the ZK projects have been focused on ZK itself and on making zkEVM perform well. The technology is already very complex, so we haven't reached the stage of deep cost optimization yet. OP is different; they have been live for a long time, and cost optimization is naturally a concern for them. The ZK teams are just starting to work on this. Take on-chain data compression: reducing the amount of data stored on the chain without moving the data elsewhere. Previously we had to post the original data on the main chain; now we can post compressed data that can still be restored, and we just need to prove within the ZK circuit that the compressed data is equivalent to the original uncompressed data. Most ZK teams probably haven't done this yet, but once we do, there will be a significant reduction in cost. So there are still many opportunities to lower costs with ZK-friendly compression algorithms. Right now everyone is focused on making zkEVM perform well, but cost will become a very important topic in the future.
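(Note: a toy illustration of the data-compression idea, using run-length encoding as a stand-in for a real ZK-friendly codec. In the actual design, the final equality would be enforced as a constraint inside the circuit, so only the compressed form has to be published on-chain.)

```python
def rle_compress(data):
    """Run-length encode bytes into (value, count) pairs."""
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((data[i], j - i))
        i = j
    return out

def rle_decompress(pairs):
    return b"".join(bytes([v]) * n for v, n in pairs)

original = b"\x00" * 40 + b"\xff" * 24  # calldata with long runs, common in practice
compressed = rle_compress(original)

# The circuit's job, conceptually: prove decompress(compressed) == original,
# so publishing `compressed` is as trustworthy as publishing `original`.
assert rle_decompress(compressed) == original
print(len(original), "bytes ->", len(compressed), "run-length pairs")
```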

F.F: Got it, thanks Ye. I understand one way to reduce cost is through proof aggregation, and another is to compress costs further through choices in the DA layer. In terms of solving cost and speed, is there a possibility that Scroll will also adopt an L3 solution like Starknet? Kakarot, which recently raised funding, is an EVM written in the Cairo language. Could Scroll also build another EVM, whether with an architecture like the current L1-L2 one, or like Kakarot, using contracts to implement an EVM?

Ye: Regarding Layer 3, our current focus is on our own development, on creating a user-friendly and complete system, rather than blindly chasing floating narratives. We believe that anyone who wants to build a Layer 3 can simply fork Scroll and deploy it on Scroll: our code is open source, and forking and deploying is easy. As I mentioned before, everyone is telling the Layer 3 story, but supporting Layer 3, especially SNARK-based Layer 3, requires supporting the pairing precompile. Besides us, we haven't seen any other zkRollup that supports the pairing precompile and can verify pairings. It's like missing the most important piece while telling an even grander story. We hope to do more things rather than just tell a story.

Actually, you can see which chain has more ZK applications. On our chain there are many native ZK applications, because we support pairing. Telling a story is one thing; whether you can actually support it is another. In my view, Layer 3 on Scroll is very easy to support: you can fork, deploy, and verify. Scroll's Layer 3 is easy to build, and only we can support SNARK-based ZK Layer 3.

Then the third point is that Kakarot is a very different design. Kakarot is an application on Starkware; it is not a Layer 3, though it may develop into one in the future. For now, it is a program written in Cairo that uses Starknet's own sequencer. That is not like building another Layer 3 on Scroll; it's more like writing an EVM in Solidity on Scroll and having users send transactions to that EVM to execute.

F.F: Got it, thanks Ye. Another recent narrative is that zkSync has just launched its ZK Stack, which supports one-click deployment of L3s and L2s. Previously, OP and Arbitrum also promoted their own solutions. How does Scroll view the RaaS and zkRaaS race now? Will Scroll also release its own solution?

Ye: In my personal opinion, this is indeed not our current focus, and if people want to use the Scroll Stack or SDK, they can simply fork it. We don’t need to come up with a trendy name to make everyone feel like they have to use it. It’s not our current priority, but if everyone wants to use it, it’s easy to use. That’s the first reason.

And personally, I think, for example, OP Stack is the most popular right now. But one current debate is that the framework claims to be very flexible; how flexible is it really? Can it support zero-knowledge proofs, or Arbitrum's fraud proofs? This standard is worth discussing, and it will take a long time to determine what a good technology-stack standard is. I believe it aligns more with Scroll's values to build together with the community: if we want to establish such a standard, it must be built through the community, for example by working with Arbitrum, Optimism, zkSync, and Polygon to push a standard in advance. Only that way can all Layer2s stay unified. Otherwise, Arbitrum will not swap its stack for the OP Stack to become compatible; each party will keep promoting its own stack, and they will never be compatible with each other. In that case, a stack is just a fork, not a truly flexible framework.

Added to that, OP Stack is an immature framework: it lacks proofs, which are the most crucial part. A large number of forked chains will consider themselves Layer2, but none of them meet the security standard of a Layer2. People use Layer2 because they believe in Ethereum's security, yet no such Layer2 can achieve Ethereum-equivalent security, because none of them has proofs. This is being widely promoted, which is not a good thing for the crypto field; everyone values narrative over real security. Our own proof system is not yet mature either, and we don't want to push for marketization and attract funding through heavy marketing until we believe the framework is mature enough. This is the second reason we don't consider it a priority.

The third reason is that we have operated a complete zkRollup ourselves, and we know how complex it is to run such a system. You need to consider contract upgradability, sequencer stability, and your own prover network and economic model; there are many complex things to manage. I think only a few teams are capable of running such a stack or genuinely need to, and most teams are not yet in a position to run and maintain their own Rollup. If a Layer2 gets rug pulled in the future or has major problems, that is bad for the entire Layer2 space. Overall, promoting a Layer2 stack is a good direction, but if accidents happen and Layer2 becomes a meme, that is not what we want to see.

The last reason is interoperability. Different applications running on their own chains means their interaction is not that trustless. I think this is a big problem and it splits a system that originally had interoperability. Our current focus at Scroll is to build that default Layer2 and attract the largest network effect, capturing some interoperable and security-demanding use cases. This is currently the most important thing for us, and we don’t rule out the possibility of considering related directions in the future. But for now, it’s a bit early, as we are still examining its requirements and unresolved issues regarding interoperability.

ZK Hardware Acceleration and Prover Network

F.F: Got it, thanks Ye. From what you’ve said, it seems like Scroll is taking a very pragmatic approach to advancing their Layer2, focusing on solving practical problems rather than following trends. You mentioned earlier that the current focus of Layer2 development is on addressing performance bottlenecks rather than discussing frameworks. We also know that you previously published a groundbreaking paper in the field of hardware acceleration called PipeZK. So, I believe that Scroll should be far ahead of its competitors when it comes to hardware acceleration. What are the latest developments in hardware acceleration for Scroll? Can you reveal any details about the current partnership models and technology upgrades?

Ye: Let me provide some additional background. We were the first team to explore hardware acceleration. In addition to PipeZK, we also have GZKP. We have studied FPGA, ASIC, and GPU acceleration, so we have expertise in this area. At present, we have many partners, such as Cysic, who lean towards using ASICs to support the prover network. However, we don't develop FPGAs and ASICs ourselves, because they require specialized teams and skills. Internally, our team focuses on a GPU solution, writing CUDA code (Note: CUDA® is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on GPUs) to create a faster GPU prover. We are a software-focused team, and our intention is to encourage more people to run our GPU code rather than monopolize the market with a powerful ASIC. We want this market to be a fair competition in which everyone strives to become the fastest and best performer, whether with ASIC, FPGA, or GPU. We will release a version where users can become provers using our GPU implementation, which has been optimized to be roughly 10 times faster than a CPU prover. We are continuously improving its performance and considering the selection of the next-generation proof system.

As for specific partnership models, we are still exploring. Many hardware companies have already committed to accelerating our stack, and seeing the community working on this is very heartening. We will remain neutral, guiding them on how to run our benchmarks and providing education and assistance, rather than anointing anyone as the ultimate winner. We are also considering organizing prover competitions to incentivize faster proof generation. When our mainnet goes live, we will not immediately adopt a fully decentralized prover network; we believe the entire system needs to be validated on mainnet, despite the testnet having run for nearly half a year. We already have several promising proposals for decentralizing our prover and sequencer. Initially, we may transition slowly towards decentralization through competitions.

F.F: Okay, thank you Ye. Just now I heard that Scroll might focus more on software optimization and collaborate with other companies on the hardware side. Regarding the decentralized prover network mentioned earlier, the community is very interested in it; as far as I remember, Scroll was the first to propose decentralizing the prover. Because many idle GPUs appeared after Ethereum's transition from proof-of-work to proof-of-stake, Scroll's prover network is a great opportunity for both parties. On behalf of the community, I'd like to ask: what are the GPU requirements for participating in Scroll's prover network, or what needs to be adapted? And when can we expect to test it, approximately?

Ye: Currently, the requirements for our provers are one CPU plus two or four GPUs. Our GPU requirements are actually relatively low: as long as you have a 1080 with 8 GB of memory, you can run our prover. However, the CPU-side requirement is quite high, at least 200 GB of memory, so being a prover comes with a higher cost. This is a major issue, and we are looking into whether we can divide zkEVM blocks into smaller segments and prove them piecewise, and so on.

F.F: If developers and community members want to try it, when will they be able to test the prover?

Ye: Our CPU prover is completely open source, so anyone can run it at any time. After mainnet launch, we will continue to optimize our GPU prover and adapt it to a wider range of GPU models; at that point anyone can try running it. However, actually joining the prover network will take more time, because you need a system that supports the network. Our entire system is designed with decentralization in mind, but we still need to design the specific incentive and penalty models, so truly integrating into the network will take a while. If you just want to run it, you can do so now, because the prover code is open source.

F.F: Okay, thank you Ye. I'm also looking forward to the launch of the prover network. Decentralizing the prover network involves many coordination problems; does Scroll have any candidate solutions to share? We have researched other chains: Polygon is implementing a Proof of Efficiency (PoE) scheme that allows permissionless proof submission where the fastest prover wins, and there are proof-market approaches like those of Mina and =nil; Foundation. Does Scroll have any innovative solutions in this regard?

Ye: Regarding the decentralization of our network, one thing that is quite certain is that we will have Prover-Sequencer Separation (PSS), similar to Ethereum Layer1's Proposer-Builder Separation (PBS). We will ensure that sequencer and prover are separate roles, but the specific design on each side is a long-term question, because the two interact: if you design the prover first and then the sequencer, you run into problems, since the prover design may influence the sequencer and vice versa. There are many possible schemes, involving things like how transaction fees are split between the sequencer and the prover in the incentive model, but we have not yet determined which one to adopt.

Our current philosophy leans towards avoiding a design where the fastest prover always wins. If your system depends on the fastest prover, other provers in the community who can't beat it and go unrewarded for a long time may leave, and the whole system then depends on that single fastest prover. That prover may lose the motivation to keep upgrading its proving power, and when it leaves, your system has a single point of failure. So we will try our best to avoid such a design. As for specifics, we will gradually make various proposals public, select one, and discuss it with the community, but it's still relatively early.

F.F: I see, from what I’ve heard, the Scroll solution may lean towards avoiding the fastest prover. The hope is that there will be a state of free competition among provers, which would motivate the network to develop and thrive in the long run.

Ye: Exactly, it’s not always about the fastest prover winning.

Proof Systems

F.F: In that case, I believe it's the best approach, one that allows more participants from the community to join. Now let's talk about proof systems. When people think of ZK, they usually think of two types of proof systems: STARK and SNARK. Ye, you've mentioned in your talks that proof systems are becoming modular. Does that mean these classifications no longer apply? When we discuss proof systems, should we divide them based on the components used in the front end and back end? And STARK is no longer exclusive to Starkware. I remember STARKs are resistant to quantum attacks; could SNARKs also gain such a property in the future?

Ye: Yes, the difference between SNARK and STARK is indeed very small. It's a difference in one component, the polynomial commitment scheme: STARK's distinctive component is FRI. SNARKs can also be quantum-resistant now; if you consider that important, there are SNARKs like Plonky2 that resist quantum attacks, and if we swapped the polynomial commitment in our Halo2-based system to FRI, it would also be quantum-resistant. So I don't think it's a particularly meaningful distinction, and I don't believe quantum resistance is currently the primary consideration. People use FRI not mainly for quantum resistance but because it generates proofs faster, and prover efficiency may matter more for proof systems. FRI is indeed an important direction for the future, as it makes proving faster; at the same time, its verification cost is high, so you need to keep reducing that cost through recursion. That's also a direction we are exploring.

Proof systems are indeed highly modular, and what we hope to promote is a community standard where everyone can use the same proof-system framework, one that can support FRI, STARK, and SNARK. That's what we want to see.

F.F: Got it, thanks Ye. Let's go into more detail here. Both layers of Scroll's current two-layer proof system use Halo2, and what we're curious about is: if there is no single best choice, only the most suitable one, does that mean Halo2 is the most suitable proof system for zkEVM?

Ye: I don't think so, because Halo2 is really a code framework. When we use Halo2, we mostly treat it as a modular framework for proof systems: add KZG and it becomes PLONK; add FRI and it becomes a STARK; you can plug in various components to create new proof systems. That is what Halo2 is, and describing a specific proof system requires many further qualifiers.
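(Note: a sketch of this modularity, with the polynomial commitment scheme as the swappable component. Class and method names are illustrative, not Halo2's actual API.)

```python
from abc import ABC, abstractmethod

class PolynomialCommitment(ABC):
    """The component that differs between 'PLONK-like' and 'STARK-like' systems."""
    @abstractmethod
    def commit(self, poly): ...
    @abstractmethod
    def open(self, poly, point): ...

class KZG(PolynomialCommitment):
    """Pairing-based: constant-size proofs, requires a trusted setup."""
    def commit(self, poly):
        return f"kzg-commit({poly})"
    def open(self, poly, point):
        return sum(c * point**i for i, c in enumerate(poly)), "kzg-opening"

class FRI(PolynomialCommitment):
    """Hash-based: faster prover, larger proofs, plausibly post-quantum."""
    def commit(self, poly):
        return f"fri-commit({poly})"
    def open(self, poly, point):
        return sum(c * point**i for i, c in enumerate(poly)), "fri-opening"

def prove_with(pcs, poly, point):
    # The surrounding protocol is unchanged; only the commitment backend swaps.
    return (pcs.commit(poly), *pcs.open(poly, point))

print(prove_with(KZG(), [1, 2, 3], 5))  # roughly "PLONK-flavoured"
print(prove_with(FRI(), [1, 2, 3], 5))  # roughly "STARK-flavoured"
```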

For zkEVM, there are several promising directions. One is the traditional approach we currently use, Halo2 with KZG, whose security model has been tested over time. Another is Halo2 with FRI, or using Plonky2 or STARKs to implement zkEVM, which is also efficient; according to Polygon's data, their proof system is highly efficient, so we are considering this direction as well. The main difference is that it does not rely on elliptic curves, which saves a lot of elliptic-curve finite-field computation and makes it very fast. Also, with FRI they can use the small Goldilocks (64-bit) field to make zkEVM faster.

Another major direction is folding: Hypernova, Supernova, and the various other Nova-family proof systems. In principle, FRI is faster than PLONK mainly because of its different field representation, where finite-field elements are smaller and faster. The main principle of folding is that when you need to prove 100 identical programs, other proof systems may require generating 100 proofs, or putting the 100 programs together and generating one big proof. With folding, you can fold these 100 instances together at small cost and prove only the final folded one. This reduces prover cost, which is why it's very fast. This direction also has great potential, but there are still many unresolved issues, for example lookup tables across different programs, and the lack of a mature development framework for building programs on Nova. We are still observing whether it is applicable and how efficient it is. I think it is highly efficient for proving many repetitive circuits like Keccak and ECDSA. It's a good direction; for example, one could gradually replace the performance-critical parts with folding or FRI. However, there are many issues involved, such as connecting with the remaining parts and auditing the security of the overall system.
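(Note: a heavily simplified sketch of folding. Real schemes such as Nova fold relaxed R1CS instances and carry error terms; the snippet only shows the shape, in which many instances of the same program are combined via a random challenge and only the folded result is proven once.)

```python
import random

def fold(instances):
    """Combine many same-shape instances into one via a random linear combination."""
    r = random.randrange(1, 2**64)  # verifier's random challenge
    folded = [0] * len(instances[0])
    weight = 1
    for inst in instances:
        folded = [f + weight * x for f, x in zip(folded, inst)]
        weight *= r  # powers of r keep the instances from cancelling each other
    return folded

instances = [[i, i + 1, i + 2] for i in range(100)]  # 100 runs of the same program
folded = fold(instances)
# One proving pass over `folded` replaces 100 separate proofs. Recursion, by
# contrast, still generates all 100 proofs and then merges them afterwards.
print(len(folded), "values to prove instead of", len(instances), "full proofs")
```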

So I think this is a direction that needs careful comparison. Internally, we have done a lot of benchmarking on how to build a system that can fairly compare folding and FRI. There will be many benchmark results and articles discussing our conclusions on why we think the next-generation proof system should evolve in this direction.

F.F: Let me follow up on this, is Folding somewhat similar to aggregating layers recursively on the circuit? And when we talked about proof aggregation earlier, we were referring to aggregating at the proof level.

Ye: Yes, it’s a rough idea like that. But it is very different from recursion because it really combines what needs to be proven linearly and only proves it once. On the surface, it seems similar, but there are actually many differences.

Speaking in detail, let me give you a not-so-vivid example. Let’s say you have 100 assignments to write. The traditional way of proving is to write them one by one. Recursion is like having 100 people, each writing one assignment, and finding a way to integrate them together. Folding is more like a person taking a long pen with 100 heads, writing once and completing all 100 assignments. It’s like this person has a lazy idea and compresses the tasks together and only writes once.

In recursion, the total workload is not reduced. Everything still has to be written; you just find a way to aggregate the results, like stacking up the 100 assignments written by 100 people so the teacher only grades once.

F.F: So Folding is like using carbon paper with 100 sheets underneath, so you can write all the assignments at once.

Ye: Yes, it gives that kind of feeling, but it’s not quite that magical; there are still costs. For example, when you write through 100 sheets of carbon paper, the imprint at the bottom comes out lighter.

F.F: So we should expect some upgrades to the proof system in Scroll’s future. Speaking of upgrading the proof system: in the earlier architecture, the Geth client submits execution traces to the provers, and that part doesn’t seem tightly coupled. Is the proof system also a component, so that upgrading the proof system is like swapping out a component?

Ye: Yes.

F.F: Another trend is the lookup singularity proposed by Barry Whitehat last year, along with Lasso and Jolt recently released by a16z, which bring significant optimizations and upgrades to lookups. Ye, how do you view this trend?

Ye: I think this is also a very promising direction. Their core idea is to make very large lookup tables practical. Lookup tables used to hold around 2^10 or 2^20 entries; now they can build tables with more than 2^100 entries. It is a very interesting direction, but constructing circuits using only lookup tables is quite challenging. Their idea is to make lookups so cheap that everyone can use lookup tables to prove all kinds of constraints. In reality, however, most previous lookup arguments like Caulk, Baloo, and cq can only handle fixed lookup tables, not dynamic ones. I haven’t yet looked closely at whether the new architecture can support dynamic lookup tables. If it can, it would be a very impressive and powerful design that could be applied to zkEVM. So I think we still need to observe for another month or so, to see how much of the circuit can be replaced with lookup tables, and then examine the efficiency. We have already started looking into this direction. Just this morning we shared a paper on lookup tables, though we haven’t shared the zkEVM paper yet. I expect we will have some conclusions of our own in another week or two.
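
The intuition behind such enormous tables is that they are structured, so a lookup decomposes into lookups in tiny subtables and the big table is never materialized. Here is a minimal sketch of that decomposition idea, reducing membership in a conceptual 2^32-entry range table to four lookups in a single 256-entry byte table; the commitment and argument machinery of schemes like Lasso is omitted entirely.

```python
# Sketch: a structured 2^32-entry "range table" never needs to exist.
# Checking v in [0, 2^32) reduces to checking each 8-bit limb of v
# against one tiny 256-entry table, plus checking that the limbs
# reconstruct v exactly (so no bits above 32 are silently dropped).

BYTE_TABLE = set(range(256))           # the only table we materialize

def in_range_2_32(v: int) -> bool:
    """Check v is in [0, 2^32) via four byte-limb lookups."""
    if v < 0:
        return False
    limbs = [(v >> (8 * i)) & 0xFF for i in range(4)]
    reconstructed = sum(limb << (8 * i) for i, limb in enumerate(limbs))
    return all(limb in BYTE_TABLE for limb in limbs) and reconstructed == v

print(in_range_2_32(123_456_789))   # True
print(in_range_2_32(2**32))         # False: bit 33 is lost on reconstruction
```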

Scroll’s Vision and Values

F.F: Alright, thank you Ye for sharing so many updates on the proof system. The theme of our talk today is Scroll and ZK taking the journey together. Currently, Scroll’s expected mainnet launch is in Q3 or Q4. We would like to hear your optimistic expectations: in the foreseeable future, what ideal state do you hope Scroll and ZK can reach?

Ye: We expect the mainnet to go live in Q3~Q4 (Note: the actual launch date was 2023.10.17). Our vision is to bring the next billion users into the Ethereum ecosystem through Scroll. We have always been committed to open-source development, community collaboration, and maintaining neutrality. We believe that Layer2, as a scaling solution, is not only about technical scalability, inheriting Ethereum’s security, and increasing TPS. The most important thing is to inherit Ethereum’s good qualities, such as decentralization and neutrality. There are things Ethereum would never do, such as aggressive marketing campaigns, yet many Layer2 solutions already do them.

Therefore, I hope Scroll can always uphold its beliefs and values as it develops. We will not do what Ethereum would not do. We aspire to become the default Ethereum scaling solution. Currently, Layer2 solutions are all moving in directions that are not fully aligned with Ethereum; they have their own market strategies and launch goals. We want to be one of the few Layer2 solutions that remain highly aligned with Ethereum. We believe this is the only way to attract applications that truly care about security and value Ethereum, and that a long-term community can only be fostered through mutual attraction driven by intrinsic values.

New Layer2 solutions will always emerge, new chains will always come into play, and there will be cycles: everyone migrates from one chain to another, each striving to become a key player in the ecosystem. We hope that even after several cycles, those who believe in Scroll today will continue to believe in Scroll. No matter what we do, we will always uphold our values and beliefs, maintain a neutral position, and continue to build an excellent technical platform. In terms of community development, our focus is on technology rather than promoting Scroll in different places. We aim to bring education and resources to different regions, creating a better atmosphere for discussing technology rather than just spreading the word about Scroll itself.

Furthermore, we aim to become the most trustworthy Layer2 platform. In the future, we will implement various measures to strengthen our security, whether through multiple proofs or other solutions, to ensure exceptional security. I believe the most critical applications will pay attention to these values, and that is what really matters to us. We will always think long term, asking what will be most important in the next three to five years, and then focusing our development in that direction instead of chasing short-term market trends.

In general, my expectation is that throughout our development we can always adhere to our beliefs and ideas, and grow the ecosystem the Ethereum way, building an infinite garden. We won’t promote a project just because of its good relationship with us, nor should anyone’s success be attributed solely to our promotion. We hope everyone can see our commitment to our values and consider us the Layer2 with the most potential, so that people come to deploy with us of their own accord and become early ecosystem partners. We certainly value early ecosystem partners, but we will not compromise our neutrality. Neutrality is crucial for a protocol: you cannot be biased toward certain applications, even though staying unbiased may go against human nature. We also hope that through our technology, the education we provide, and communities like this one, everyone gradually aligns with Scroll, rather than us simply using money to get ahead. In reality, we have declined many collaboration invitations, which might look like missed opportunities from the perspective of many chains. In the short term it is possible to boost market capitalization and on-chain activity through all kinds of campaigns; the other approach is to be true to ourselves and uphold our original principles. Scroll will always choose the latter, and that is the ideological difference between us and other Layer2 solutions.

F.F: Thank you very much, Ye. We also fully agree with Scroll’s long-term values, including alignment with Ethereum, decentralization, and prioritizing security. We believe that only strong long-term values can drive Scroll further. We are also eager to see Scroll’s development in the future, both with Ethereum and with ZK. Thank you so much, Ye, for participating in our podcast today.
