ABCDE Research Report: New ZK Use Cases, an In-Depth Discussion of Coprocessors and the Various Solutions

Authors: Kris & Laobai, ABCDE; Mo Dong, Celer Network

With the popularity of co-processor concepts in recent months, this new ZK use case has received increasing attention.

However, most people are still relatively unfamiliar with the coprocessor concept; in particular, its precise positioning (what a coprocessor is and what it is not) remains vague. Nor has there been any systematic comparison of the technical solutions in the coprocessor track. This article hopes to give the market and users a clearer picture of the coprocessor track.

I. What Is a Coprocessor, and What Is It Not?

If you had to explain what a coprocessor is to a non-technical person or to a developer in a single sentence, how would you put it?

I think Dr. Mo Dong's formulation is probably the closest to a standard answer: a coprocessor, in plain terms, gives smart contracts the capabilities of Dune Analytics.

How do we break down this statement?

Picture how we use Dune: you want to LP on Uniswap V3 to earn some fees, so you open Dune and look up the trading volume of various pairs on Uniswap, the fee APR over the past 7 days, the typical price ranges of the mainstream pairs, and so on.

Or, back when StepN was hot, you flipped sneakers but weren't sure when to sell, so you watched the data on Dune every day: daily trading volume, new user counts, sneaker floor prices… planning to sell quickly once growth slowed or the trend turned down.

Of course, you're not the only one watching this data; Uniswap's and StepN's development teams are watching it too.

This data is very meaningful – it can not only help determine changes in trends but also be used to do more interesting things, just like the “big data” approach commonly used by major Internet companies.

For example, recommending similar shoes based on the style and price of shoes that users frequently buy and sell.

For example, launching a “user loyalty reward program” based on the length of time users hold the founding shoes, giving loyal users more airdrops or benefits.

For example, based on the TVL or trading volume that LPs or traders bring to Uniswap, launching a VIP program similar to a CEX's, giving traders lower trading fees or LPs a larger share of fees…

Now, here's the problem: when the big Internet companies play with big data + AI, it's basically a black box. They can do whatever they want, and users can't see inside and don't care.

But in Web3, transparency and trustlessness are our natural political correctness – we refuse black boxes!

So when you try to implement the scenarios above, you face a dilemma: either you do it through centralized means, "manually behind the scenes", using Dune to collect the indexed data and deploy the results by hand; or you write a set of smart contracts that automatically fetch the data on-chain, do the computation, and deploy the results automatically.

The former lands you in a "politically incorrect" trust problem.

With the latter, the gas cost incurred on-chain will be astronomical, and your (project's) wallet can't afford it.

This is where the coprocessor comes in. It combines the two approaches: the "manual backstage" step now "proves its own innocence" by technical means. In other words, the off-chain "indexing + computation" part proves its own correctness with ZK, and the result is then fed to the smart contract. That settles the trust problem, and the massive gas cost disappears. Perfect!

Why is it called a "coprocessor"? The term comes from the history of the GPU in Web 2.0. The GPU was introduced as a separate piece of computing hardware, independent of the CPU, because its architecture could handle workloads the CPU fundamentally could not, such as massively parallel repetitive computation and graphics processing. It is exactly this "coprocessor" architecture that gave us today's stunning CG films, games, AI models, and more; the coprocessor architecture was a leap in computing system architecture.

Now the various coprocessor teams hope to bring the same architecture to Web3. Here the blockchain plays the role of the CPU of Web3: whether L1 or L2, it is inherently unsuited to tasks involving "heavy data" and "complex computation logic". A blockchain coprocessor is therefore introduced to help handle these computations, greatly expanding the possibilities of blockchain applications.

So, let’s summarize what the co-processor does:

  1. Retrieve data from the blockchain, and use a ZK proof to prove that the retrieved data is authentic.
  2. Perform the corresponding computation on that data, and again use a ZK proof to prove that the result is correct. Smart contracts can then consume the result in a "low-cost + trustless" way (a minimal sketch follows this list).
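To make the two steps concrete, here is a minimal, hypothetical sketch in Python. None of these names (Proof, prove_inclusion, prove_computation) belong to any real project's API; real coprocessors produce actual ZK proofs where this sketch uses placeholder objects.

```python
# Hypothetical sketch of the coprocessor flow; placeholder objects stand in
# for real ZK proofs, and all names here are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class Proof:
    claim: str       # what the proof attests to
    payload: bytes   # stand-in for the actual ZK proof bytes

def prove_inclusion(chain: str, block: int, slot: str) -> tuple[int, Proof]:
    """Step 1: read a storage value and prove it really is on-chain."""
    value = 42       # in practice, read via an archive node
    return value, Proof(f"{chain}@{block}:{slot} == {value}", b"...")

def prove_computation(values: list[int]) -> tuple[int, Proof]:
    """Step 2: compute off-chain and prove the computation was done right."""
    result = sum(values) // len(values)   # e.g. average fee over N blocks
    return result, Proof(f"avg == {result}", b"...")

# An on-chain contract would verify both proofs and then accept `result`
# trustlessly; gas is paid only for verification, not for the computation.
values, proofs = [], []
for blk in (100, 101, 102):
    v, p = prove_inclusion("ethereum", blk, "uniswap.fees")
    values.append(v); proofs.append(p)
avg, avg_proof = prove_computation(values)
```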

Some time ago, StarkWare popularized a concept called Storage Proof (also called State Proof), which essentially covers step 1. Many ZK-based cross-chain bridge technologies, such as Herodotus and Lagrange, likewise focus on step 1.

A coprocessor is nothing more than step 1 plus step 2: extract the data trustlessly, then run a trustless computation on top of it.

So, to describe it more precisely with a relatively technical term, the co-processor should be a superset of Storage Proof/State Proof and a subset of Verifiable Computation.

One important thing to note is that the co-processor is not Rollup.

Technically, a rollup's ZK proof resembles step 2 above, while step 1, "retrieving the data", is handled directly by the sequencer. Even a decentralized sequencer obtains the data through some competition or consensus mechanism, not through a ZK Storage Proof. More importantly, besides its computation layer, a ZK rollup must also maintain a permanent storage layer, just like an L1 blockchain, whereas a ZK coprocessor is "stateless": once a computation is done, it does not need to retain any state.

From the application scenario point of view, the coprocessor can be seen as a service plugin for all Layer1/Layer2, while Rollup is a new execution layer that helps scale the settlement layer.

II. Why ZK instead of OP?

Having read this far, you may wonder: does the coprocessor really have to use ZK? It sounds a lot like "The Graph with ZK added", and we don't seem to have any "serious doubts" about the results The Graph gives us.

That's because when you use The Graph, you usually aren't dealing with real money. These indexes serve off-chain front-end services, showing things like trading volume and transaction history, and they can be provided by many data-index providers such as The Graph, Alchemy, or Zettablock. But that data cannot be fed back into a smart contract, because doing so adds extra trust in the indexing service. Once data is linked to real money, especially large amounts of TVL, that extra trust matters. Imagine a friend asks to borrow $100; you might hand it over without blinking. But what about $10,000, or even $1 million?

Then again, do all coprocessor scenarios really require ZK? After all, within rollups we have both the OP and ZK routes, and the recently popular ZKML has a corresponding OPML branch. So does the coprocessor track have an OP branch too, an "OP coprocessor"?

Actually, there is, but we are keeping the specific details confidential for now; we will release more detailed information soon.

III. Which coprocessor is the best – a comparison of several common coprocessor technology solutions on the market

Brevis

The architecture of Brevis consists of three components: zkFabric, zkQueryNet, and zkAggregatorRollup.

Below is an architecture diagram of Brevis:

[Figure: Brevis architecture diagram]

zkFabric: Collects block headers from all connected blockchains and generates ZK consensus proofs attesting to the validity of those block headers. Through zkFabric, Brevis achieves a multi-chain interoperable coprocessor: one blockchain can access arbitrary historical data from another blockchain.

zkQueryNet: An open marketplace of ZK query engines that accepts data queries from dApps and processes them. The query engines use the verified block headers from zkFabric to process these queries and generate ZK query proofs. The engines range from highly specialized functions to general-purpose query languages, to meet different applications' needs.

zkAggregatorRollup: A ZK rollup blockchain that serves as the aggregation and storage layer for zkFabric and zkQueryNet. It validates proofs from both components, stores verified data, and submits the zk-validated state root to all connected blockchains.
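Based purely on the description above, the three components compose roughly as follows. This is a hypothetical Python sketch of the data flow, not Brevis's actual API; the class names and placeholder byte strings are ours.

```python
# Hypothetical sketch of how the three Brevis components fit together;
# class names and placeholder proofs are illustrative only.

class ZkFabric:
    """Proves block headers of connected chains via light-client circuits."""
    def verified_header(self, chain: str, height: int) -> dict:
        return {"chain": chain, "height": height, "proof": b"zk-consensus-proof"}

class ZkQueryNet:
    """Open query-engine marketplace; answers dApp queries against headers
    already verified by zkFabric and emits a ZK query proof."""
    def __init__(self, fabric: ZkFabric):
        self.fabric = fabric
    def query(self, chain: str, height: int, q: str) -> dict:
        header = self.fabric.verified_header(chain, height)
        return {"anchor": header, "query": q, "proof": b"zk-query-proof"}

class ZkAggregatorRollup:
    """Verifies both proof types, stores the results, and posts a
    zk-validated state root to every connected chain."""
    def settle(self, result: dict) -> bytes:
        assert result["proof"] and result["anchor"]["proof"]  # placeholder checks
        return b"zk-validated-state-root"

rollup = ZkAggregatorRollup()
result = ZkQueryNet(ZkFabric()).query("bnb", 35_000_000, "SUM(volume) WHERE user=0xabc")
state_root = rollup.settle(result)  # contracts on connected chains read this root
```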

zkFabric plays the key role of generating proofs for block headers, so ensuring the security of this component is crucial. Its architecture diagram is shown below:

[Figure: zkFabric architecture diagram]

zkFabric is based on zero-knowledge proofs (ZKPs), making it fully trustless: it relies on no external validating entity, and its security comes entirely from the underlying blockchains and mathematically reliable proofs.

The zkFabric Prover network implements circuits for the light client protocol of each blockchain, which generate validity proofs for block headers. Provers can utilize accelerators such as GPUs, FPGAs, and ASICs to minimize proof time and cost.
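To illustrate what such a light-client circuit proves, here is the core check in plain Python for an Ethereum-PoS-style sync committee, using the py_ecc BLS library. This is a sketch of the light-client rule, not Brevis's circuit code; inside zkFabric, logic of this kind runs inside a ZK circuit so that the target chain only verifies one succinct proof instead of hundreds of BLS signatures.

```python
# Sketch of an Ethereum-PoS-style light-client check (not actual circuit code).
from py_ecc.bls import G2ProofOfPossession as bls

# Toy committee of 4 validators (real sync committees have 512 members).
secret_keys = [bls.KeyGen(bytes([i]) * 32) for i in range(4)]
committee = [bls.SkToPk(sk) for sk in secret_keys]

header_root = b"\x11" * 32                 # stand-in for a block header hash
signatures = [bls.Sign(sk, header_root) for sk in secret_keys[:3]]  # 3 of 4 sign
aggregate = bls.Aggregate(signatures)

def light_client_check(pubkeys, bits, root, agg_sig) -> bool:
    # 1. Require a 2/3 supermajority of the committee to have signed.
    if sum(bits) * 3 < len(pubkeys) * 2:
        return False
    # 2. Verify the aggregate BLS signature of the participating members.
    participating = [pk for pk, b in zip(pubkeys, bits) if b]
    return bls.FastAggregateVerify(participating, root, agg_sig)

assert light_client_check(committee, [1, 1, 1, 0], header_root, aggregate)
```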

zkFabric relies on the security assumptions of the underlying blockchains and of the cryptographic protocols it uses. However, to ensure liveness, at least one honest relayer is needed to synchronize the correct fork, so zkFabric uses a decentralized relayer network rather than a single relayer. This relayer network can build on existing structures, such as the state guardian network in Celer Network.

Prover Allocation: The Prover network is a decentralized ZKP Prover network that requires selecting a Prover for each proof generation task and paying fees to these Provers.

Current deployment:

Light client protocols are currently implemented for various blockchains, including Ethereum PoS, Cosmos Tendermint, and BNB Chain, as examples and proofs of concept.

Brevis has partnered with Uniswap hooks, which greatly expand what customized Uniswap pools can do. However, compared with CEXs, Uniswap still lacks effective data-processing capabilities for building features that rely on large amounts of user transaction data, such as volume-based loyalty programs.

With Brevis, hooks can solve this challenge: they can now read a user's or an LP's complete historical on-chain data and run customizable computations over it in a fully trustless way.

Herodotus

Herodotus is a powerful data access middleware that provides smart contracts with the ability to access current and historical on-chain data across Ethereum layers, as follows:

  • L1 states from L2s
  • L2 states from both L1s and other L2s
  • L3/App-Chain states to L2s and L1s

Herodotus introduces the concept of storage proofs, which combine inclusion proofs (confirming the existence of data) and computation proofs (validating the execution of multi-step workflows) to prove the validity of one or more elements in a large dataset, such as the entire Ethereum blockchain or a rollup.

At its core, a blockchain is a database, and the data in it is secured cryptographically using data structures such as Merkle trees and Merkle Patricia trees. The distinctive property of these structures is that, once data has been securely committed to them, evidence can be generated to confirm that the data is contained within the structure.

The use of Merkle trees and Merkle Patricia trees strengthens the security of the Ethereum blockchain. Because the data at each level of the tree is hashed, it is almost impossible to alter the data without detection: any change to a data point changes the corresponding hashes all the way up the tree, which are publicly visible in the block headers. This fundamental property of blockchains provides a high level of data integrity and immutability.

Furthermore, these trees enable efficient data verification through the use of inclusion proofs. For example, when verifying the inclusion of a transaction or the state of a contract, there is no need to search the entire Ethereum blockchain; only the paths within the relevant Merkle trees need to be verified.
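To make the inclusion-proof idea concrete, here is a minimal binary Merkle tree in Python (an illustrative simplification assuming a power-of-two number of leaves, with SHA-256 standing in for Ethereum's keccak256 and Merkle-Patricia Trie). The principle is the same: recompute the root from a leaf plus its sibling hashes and compare it with the publicly known root, in O(log n) work.

```python
# Minimal binary Merkle tree with inclusion proofs (illustrative simplification:
# SHA-256 and power-of-two leaf counts; Ethereum uses keccak256 and MPTs).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(leaves: list[bytes]) -> list[list[bytes]]:
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(leaves: list[bytes], index: int) -> list[bytes]:
    """Collect the sibling hash at each level on the path from leaf to root."""
    proof = []
    for level in merkle_levels(leaves)[:-1]:
        proof.append(level[index ^ 1])   # sibling of the current node
        index //= 2
    return proof

def verify(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_levels(txs)[-1][0]
assert verify(b"tx2", 2, prove(txs, 2), root)   # log-size proof, no full scan
```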

Herodotus defines storage proofs as a fusion of:

  • Inclusion proofs: these confirm the existence of specific data within a cryptographic data structure (such as a Merkle tree or Merkle Patricia tree), ensuring that the data in question really exists in the dataset.
  • Computational proofs: these verify the execution of a multi-step workflow, proving the validity of one or more elements within a vast dataset, such as the entire Ethereum blockchain or a rollup. Beyond indicating that data exists, they also validate the transformations or computations applied to it.
  • Zero-knowledge proofs: these reduce the amount of data a smart contract needs to interact with, allowing it to confirm the validity of a claim without processing all the underlying data.

Workflow

1. Obtain the block hash

Every piece of data on the blockchain belongs to a specific block, and the block hash is that block's unique identifier, summarizing all of its contents via the block header. In the storage proof workflow, the first step is to identify and verify the block hash of the block containing the data of interest; everything else builds on this step.

2. Obtain the block header

Once the relevant block hash is obtained, the next step is to access the block header associated with it. To do so, the provided block header is hashed, and the resulting hash is compared against the verified block hash.

There are two ways to obtain the hash:

  1. Using the BLOCKHASH opcode (limited to the 256 most recent blocks)
  2. Querying the Block Hash Accumulator for the hashes of historical blocks that have already been verified

This step ensures that the block header being processed is genuine. After this step is completed, the smart contract can access any value within the block header.
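As a sketch of this step in Python (using the rlp and eth_utils packages): the block hash is by definition the keccak256 of the RLP-encoded header, so verifying a candidate header is a hash-and-compare.

```python
# Sketch of step 2: check that a candidate RLP-encoded header matches a
# trusted block hash, then expose its decoded fields.
import rlp
from eth_utils import keccak

def verify_header(header_rlp: bytes, trusted_block_hash: bytes) -> list:
    # keccak256(rlp(header)) is, by definition, the block hash.
    assert keccak(header_rlp) == trusted_block_hash, "header mismatch"
    return rlp.decode(header_rlp)   # list of raw header fields
```

In Herodotus's flow, the trusted block hash here comes either from the BLOCKHASH opcode or from the Block Hash Accumulator described above.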

3. Determine the required root (optional)


With the block header, we can delve into its contents, especially:

stateRoot: the cryptographic digest of the entire blockchain state at the time the block was created.

receiptsRoot: the cryptographic digest of all transaction results (receipts) in the block.

transactionsRoot: the cryptographic digest of all transactions included in the block.

These roots can be used to verify whether the block contains a specific account, receipt, or transaction.
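Continuing the same sketch: once the header is verified, the three roots are just fixed fields of the decoded header (indices follow the Ethereum header layout: stateRoot is field 3, transactionsRoot field 4, receiptsRoot field 5).

```python
# Sketch of step 3: pull the three roots out of a verified block header.
import rlp
from eth_utils import keccak

def header_roots(header_rlp: bytes, trusted_block_hash: bytes) -> dict:
    assert keccak(header_rlp) == trusted_block_hash, "header mismatch"
    fields = rlp.decode(header_rlp)
    return {
        "stateRoot": fields[3],          # digest of the world state
        "transactionsRoot": fields[4],   # digest of the block's transactions
        "receiptsRoot": fields[5],       # digest of the block's receipts
    }
# Each root anchors its own Merkle-Patricia Trie, against which account,
# transaction, or receipt inclusion proofs can then be checked (step 4).
```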

4. Validate data based on selected roots (optional)

With the selected root, and given that Ethereum uses a Merkle-Patricia Trie structure, we can use a Merkle proof to verify that the data exists in the trie. The verification steps vary with the depth and location of the data within the block.

Currently supported networks:

  • From Ethereum to Starknet
  • From Ethereum Goerli* to Starknet Goerli*
  • From Ethereum Goerli* to zkSync Era Goerli*

Axiom

Axiom provides a way for developers to query block headers, accounts, or storage values from the entire history of Ethereum. Axiom introduces a new approach based on cryptographic linking. All results returned by Axiom are verified on-chain through zero-knowledge proofs, so smart contracts can use them without any additional trust assumptions.

Axiom recently released halo2-repl, a browser-based halo2 REPL in JavaScript. It lets developers write ZK circuits in ordinary JavaScript, without having to learn a new language such as Rust, install proving libraries, or wrangle dependencies.

Axiom consists of two main technical components:

  • AxiomV1 — Ethereum blockchain cache starting from Genesis.
  • AxiomV1Query — Smart contract for executing queries against AxiomV1.

Caching block hash values in AxiomV1:

The AxiomV1 smart contract caches Ethereum block hashes since the genesis block, in two forms:

First, it caches the Keccak Merkle root of each batch of 1024 consecutive block hashes. These Merkle roots are updated via ZK proofs that verify that each block header either hashes to one of the 256 most recent block hashes directly accessible to the EVM, or chains back to a block already present in the AxiomV1 cache, forming a commitment chain.

Second, Axiom stores a Merkle Mountain Range of these Merkle roots, starting from the genesis block. The Merkle Mountain Range is built on-chain and is updated from the first form of the cache, the Keccak Merkle roots.
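The two cache forms can be sketched as plain data structures. The following Python mirrors the shapes described above (with SHA-256 standing in for Keccak, and without the ZK proofs that gate updates on-chain); it is our illustration, not Axiom's contract code.

```python
# Illustrative sketch of AxiomV1's two cache forms (SHA-256 stands in for
# Keccak, and the ZK proofs that gate on-chain updates are omitted).
import hashlib

def h(a: bytes, b: bytes = b"") -> bytes:
    return hashlib.sha256(a + b).digest()

def batch_root(block_hashes: list[bytes]) -> bytes:
    """Form 1: Merkle root of 1024 consecutive block hashes."""
    level = block_hashes
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

class MMR:
    """Form 2: append-only Merkle Mountain Range over the batch roots.
    Peaks of perfect subtrees are kept; appending merges equal-height peaks."""
    def __init__(self) -> None:
        self.peaks: list[tuple[int, bytes]] = []   # (height, node hash)
    def append(self, leaf: bytes) -> None:
        height, node = 0, leaf
        while self.peaks and self.peaks[-1][0] == height:
            _, left = self.peaks.pop()
            node, height = h(left, node), height + 1
        self.peaks.append((height, node))

mmr = MMR()
for batch in range(3):   # three batches of 1024 block hashes each
    hashes = [h(bytes([batch]) + i.to_bytes(4, "big")) for i in range(1024)]
    mmr.append(batch_root(hashes))
print([height for height, _ in mmr.peaks])   # -> [1, 0]: peaks for 3 leaves
```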

Performing queries in AxiomV1Query:

The AxiomV1Query smart contract serves batch queries, providing trustless access to arbitrary data about historical Ethereum block headers, accounts, and account storage. Queries can be initiated on-chain and are completed on-chain via ZK proofs checked against the block hashes cached in AxiomV1.

These ZK proofs check whether the queried on-chain data sits directly in the block header, or in the block's account or storage trie, by verifying inclusion (or non-inclusion) proofs against the Merkle-Patricia Trie.

Nexus

Nexus aims to build a universal platform for verifiable cloud computing using zero-knowledge proofs. It is currently machine-architecture-agnostic, supporting RISC-V, WebAssembly, and the EVM. Nexus uses the SuperNova proof system; the team has measured proof generation as requiring about 6 GB of memory, and plans to optimize this further so that ordinary user devices can generate proofs.

Specifically, the architecture is divided into two parts:

  • Nexus Zero: a decentralized verifiable cloud computing network supported by zero-knowledge proofs and the universal zkVM.
  • Nexus: a decentralized verifiable cloud computing network driven by multi-party computation, state machine replication, and a universal WASM virtual machine.

Nexus and Nexus Zero applications can be written in traditional programming languages, currently supporting Rust with plans to support more languages in the future.

Nexus applications run in a decentralized cloud-computing network, which is essentially a "serverless blockchain" connected directly to Ethereum. Nexus applications therefore do not inherit Ethereum's security; in exchange, the smaller network scale gives them access to greater computational capability (compute, storage, and event-driven I/O). Nexus applications run on a dedicated cloud that reaches internal consensus and provides "proofs" of verifiable computation (attestations via verifiable threshold signatures, rather than actual zero-knowledge proofs) submitted to Ethereum.

Nexus Zero applications, by contrast, do inherit Ethereum's security, because they are general-purpose programs accompanied by zero-knowledge proofs that can be verified on-chain over the BN-254 elliptic curve.

Since Nexus can run any deterministic WASM binary in a replicated environment, it is expected to serve as a source of validity, decentralization, and fault tolerance for proof-generating applications, including zk-rollup aggregators, optimistic-rollup aggregators, and other provers, such as Nexus Zero's own zkVM.
