zkML: Emerging Projects and Infrastructure for zk+Machine Learning

Author: Justin McAfee, 1kx Research Analyst; Translation: Blockingxiaozou

Proving machine learning (ML) model inference with zkSNARKs will be one of the most important advances in smart contract development this decade. This development opens up an exciting design space, allowing applications and infrastructure to evolve into more complex, intelligent systems.

By adding machine learning capabilities, smart contracts can become more autonomous and dynamic, allowing them to make decisions based on real-time on-chain data rather than being limited to static rules. Smart contracts will be more flexible and able to adapt to a variety of scenarios, including those that may not have been anticipated when the contract was initially created. In short, machine learning functionality makes any smart contract we put on-chain more automated, accurate, efficient, and flexible.

In many ways, it is surprising that smart contracts do not already embed ML models, given the prominent role machine learning plays in most applications outside of web3. The main reason is cost: running these models on-chain is computationally prohibitive. For example, FastBERT, a compute-optimized language model, requires around 1,800 MFLOPs (million floating-point operations), which is not feasible to run directly on the EVM.

When considering on-chain ML models, the focus is on the inference stage: applying a trained model to make predictions about real-world data. For smart contracts to operate at the scale of ML, they must be able to ingest such predictions, but as noted above, running ML models directly on the EVM is infeasible. zkSNARKs provide a solution: anyone can run a model off-chain and generate a succinct, verifiable proof that the expected model did indeed produce a specific result. This proof can be published on-chain and ingested by smart contracts, making the contracts smarter.

In this article, we will:

· Explore the potential applications and use cases of on-chain ML;

· Examine emerging projects and infrastructure builds related to zkML;

· Discuss some of the challenges faced by existing implementations and the future of zkML.

1. A Quick Introduction to ML

Machine learning (ML) is a field of artificial intelligence (AI) focused on developing algorithms and statistical models that allow computers to learn from data and make predictions or decisions. ML models typically have three main components:

· Training Data: A set of input data used to train a machine learning algorithm for prediction or classification of new data. Training data can take many forms, such as images, text, audio, numerical data, or combinations of these types of data.

· Model Architecture: The overall structure or design of a particular machine learning model. It defines the type and number of layers, activation functions, and connections between nodes or neurons. The choice of architecture depends on the specific problem and data being used.

· Model Parameters: The values or weights learned by a model during training, which are used for prediction. These values are iteratively adjusted by optimization algorithms to minimize the error between predicted and actual results.

Model generation and deployment can be divided into two stages:

· Training Stage: In the training stage, the model is exposed to labeled datasets and its parameters are adjusted to minimize the error between predicted and actual results. The training process typically involves several iterations or epochs, and the accuracy of the model is evaluated on a separate validation set.

· Inference Stage: The inference stage refers to using a trained machine learning model to make predictions on new, unseen data. The model receives input data and applies learned parameters to generate output data, such as classification or regression predictions.
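To make the two stages concrete, here is a minimal sketch in Python using PyTorch; the toy model, data, and hyperparameters are illustrative assumptions rather than anything from the article.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Toy labeled training data: 64 samples, 4 features, 2 classes.
X, y = torch.randn(64, 4), torch.randint(0, 2, (64,))

# Training stage: iteratively adjust parameters to minimize prediction error.
for _ in range(100):
    optimizer.zero_grad()
    loss_fn(model(X), y).backward()
    optimizer.step()

# Inference stage: apply the learned parameters to new, unseen input.
# zkML focuses on proving this step, i.e., that the output really came
# from running the committed model on the given input.
with torch.no_grad():
    x_new = torch.randn(1, 4)
    prediction = model(x_new).argmax(dim=1)
    print(prediction.item())
```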

zkML currently focuses on the inference stage of ML models rather than the training stage, primarily due to the computational complexity of verifying training in-circuit. However, this focus on inference is not a serious limitation: we expect very interesting use cases and applications to arise from the inference stage alone.

2. Scenarios for Verifying Inference

There are four possible scenarios for verifying inference:

· Private Input, Public Model. A model consumer (MC) may want to keep their input confidential from the model provider (MP). For example, an MC may want to prove the result of a credit-scoring model to a lender without disclosing their personal financial information. This can be achieved with a pre-commitment scheme and by running the model locally.

· Public Input, Private Model. A common problem in ML-as-a-Service (MLaaS) is that the MP may want to hide its parameters or weights to protect intellectual property, while the MC wants to verify that the claimed inference really came from the specified model in an adversarial setting. Consider that an MP has an incentive to run a lighter-weight model to save costs when serving inferences to an MC. With an on-chain commitment to the model weights, the MC can audit the private model at any time (see the sketch after this list).

· Private Input, Private Model. This scenario arises when the data used for inference is highly sensitive or confidential and the model itself is hidden to protect IP. An example is auditing a healthcare model with private patient information. Compositions of zero-knowledge proofs (ZK) with multi-party computation (MPC), or variants of fully homomorphic encryption (FHE), can serve this scenario.

· Public Input, Public Model. When every aspect of the model can be public, zkML serves a different use case: compressing and verifying off-chain computation for the on-chain environment. For larger models, verifying a succinct ZK proof of an inference is more cost-effective than re-running the model on-chain.
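As a toy illustration of the "public input, private model" case, the MP can publish a commitment to its weights on-chain; the hashing and serialization choices below are illustrative assumptions, not a specific protocol.

```python
import hashlib
import json

def commit_weights(weights) -> str:
    """Hash a canonical serialization of model weights into a commitment."""
    blob = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# The MP publishes this digest on-chain once. Later, each zk proof asserts
# that the inference was produced by weights matching the commitment,
# without revealing the weights, so an MC can detect a swapped-in
# lighter-weight model.
onchain_commitment = commit_weights([0.12, -0.98, 0.33])
print(onchain_commitment)
```

A production system would more likely use a circuit-friendly hash such as Poseidon so the commitment can be opened cheaply inside the proof itself.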

3. Applications and Opportunities

Verified ML inference opens up a new design space for smart contracts. Here are some crypto-native applications:

(1) DeFi

· Verifiable off-chain ML oracles. The growing use of generative AI may push the industry toward signature schemes for digital content. Signed data is ZK-friendly, making it composable and trustworthy. ML models can process signed data off-chain to make predictions and classifications (e.g., classifying election results or weather events). Such off-chain ML oracles can trustlessly settle real-world prediction markets, insurance protocol contracts, and more by verifying the inference and publishing the proof on-chain.

· ML-parameterized DeFi applications. Many aspects of DeFi can be further automated. For example, lending protocols could use ML models to update parameters in real time. Today's lending protocols largely trust off-chain models run by organizations to determine collateral factors, loan-to-value (LTV) ratios, liquidation thresholds, and so on, but community-trained open-source models that anyone can run and verify may be a better alternative.

· Automated trading strategies. A common way to demonstrate the performance of a financial model strategy is to provide investors with backtests. However, there is no way to verify that the strategy actually follows the model when executing trades: investors must trust that it does. zkML provides a solution, in which an MP can supply proofs of the financial model's inference when taking specific positions. This is especially useful for DeFi-managed treasuries.

(2) Security

· Fraud monitoring for smart contracts. ML models can be used to detect potentially malicious behavior and pause contracts, without relying on slow human governance or centralized entities to decide whether a contract should be paused.

(3) Traditional ML

· A decentralized, trustless implementation of Kaggle. A protocol or marketplace could allow MCs or other interested parties to verify the accuracy of a model without the MP disclosing its weights. This would be useful for model sales, model-accuracy competitions, and more.

· Decentralized prompt markets for generative AI. Prompt creation for generative AI has evolved into a sophisticated craft, with the best outputs often requiring many modifier tags. External parties may be interested in purchasing these sophisticated prompts from their creators. zkML serves two purposes here: 1) verifying the prompt's output, assuring potential buyers that the prompt really does create the desired images; and 2) allowing prompt owners to retain ownership even after a sale, generating verified images for buyers while the prompt itself remains hidden.

(4) Identity Verification

· Replacing private keys with privacy-preserving biometric authentication. Private key management remains one of the biggest frictions in the web3 user experience. Deriving private keys from facial recognition or other unique factors is one possible zkML solution.

· Fair airdrops and contributor rewards. ML models can be used to build detailed user profiles that determine airdrop allocations or contributor rewards based on multiple factors. This can be particularly powerful when combined with identity solutions. In this scenario, one possibility is to have users run an open-source model that evaluates their participation in the application, as well as higher-level ecosystem participation such as governance forum posts, to infer their allocation. They then provide this proof to the contract to receive their token allocation.

(5) Web3 Social

· Filtering for web3 social media. The decentralized nature of web3 social applications will lead to more spam and malicious content. Ideally, a social media platform could use an open-source ML model agreed upon by the community and publish a proof of the model's inference whenever it chooses to filter a post.

· Advertising/recommendations. As a social media user, I may be willing to see personalized advertising, but I want my preferences and interests kept confidential from advertisers. I could run a model over my preferences locally, which feeds information to the media application so that it surfaces content I want. In this scenario, advertisers may be willing to pay end users for this arrangement, though such models would likely be far less sophisticated than today's targeted-advertising models.

(6) Creator Economy/Games

· In-game economic rebalancing. ML models can be used to dynamically adjust token issuance, supply, burns, voting thresholds, and more. One possible pattern: if a certain rebalancing threshold is met and a proof of the inference is verified, the contract enacts a rebalancing of the in-game economy.

· Novel on-chain games. Co-op and other innovative on-chain games can be created in which a trustless AI model acts as a non-player character (NPC). Every action the NPC takes is published on-chain with a proof that anyone can verify to confirm the correct model is running. In Modulus Labs' Leela vs. the World, players want assurance that they are facing the stated 1900-ELO AI, not Magnus Carlsen. Another example is AI Arena, a Super Smash Bros.-style AI fighting game, where players in a high-stakes competitive environment want assurance that the models they trained have not been tampered with and that no cheating occurs.

4. Emerging Projects and Infrastructure

The zkML ecosystem can be roughly divided into four categories:

· Model-to-Proof Compilers: Infrastructure for compiling models from existing formats (e.g., PyTorch, ONNX) into verifiable computational circuits.

· General Proof Systems: Proof systems capable of verifying arbitrary computation traces.

· zkML-Specific Proof Systems: Proof systems designed specifically to verify the computation traces of ML models.

· Applications: Projects that handle unique zkML use cases.

(1) Model-to-Proof Compilers

Most of the attention in the zkML ecosystem has gone to model-to-proof compilers. Typically, these compilers convert high-level ML models written in frameworks such as PyTorch or TensorFlow into ZK circuits.

EZKL is a library and command-line tool for proving inference over deep learning models in zk-SNARKs. With EZKL, you can define a computational graph in PyTorch or TensorFlow, export it as an ONNX file along with some sample inputs in a JSON file, and point EZKL at these files to generate a zkSNARK circuit. With recent performance improvements, EZKL can now prove an MNIST-sized model in about 6 seconds using 1.1 GB of RAM. EZKL has seen notable early adoption and has served as infrastructure for various hackathon projects.
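A minimal sketch of the workflow just described: export a small PyTorch graph to ONNX plus JSON sample inputs. The toy model is an assumption, and the EZKL commands in the comments are paraphrased from its documented flow rather than exact current flags; consult the EZKL docs for the live interface.

```python
import json
import torch
import torch.nn as nn

# Toy computation graph to be proven (illustrative, not from the article).
model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 1))
dummy_input = torch.randn(1, 3)

# Export the graph as ONNX and some sample inputs as JSON: the two
# artifacts EZKL is pointed at.
torch.onnx.export(model, dummy_input, "network.onnx")
with open("input.json", "w") as f:
    json.dump({"input_data": dummy_input.tolist()}, f)

# EZKL then turns these into a zkSNARK circuit and proof, roughly:
#   ezkl gen-settings -M network.onnx
#   ezkl compile-circuit -M network.onnx
#   ezkl setup / ezkl prove / ezkl verify
# (command names are indicative, not exact)
```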

Circomlib-ml by Cathie So contains a variety of ML circuit templates for Circom, covering some of the most common ML functions. Also developed by Cathie, keras2circom is a Python tool that converts Keras models into Circom circuits using the underlying circomlib-ml library.
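For comparison, a sketch of the Keras side of that pipeline; the architecture is illustrative, and the keras2circom invocation in the comment is indicative rather than the documented interface.

```python
import tensorflow as tf

# A small Keras CNN of the kind circomlib-ml provides templates for
# (convolution, pooling, dense layers); the layer choices here are
# illustrative assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(4, 3, activation="relu"),
    tf.keras.layers.AveragePooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.save("mnist_cnn.h5")

# keras2circom would then translate the saved model into a Circom circuit
# built from circomlib-ml templates (invocation indicative):
#   python main.py mnist_cnn.h5
```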

LinearA has developed two frameworks for zkML: Tachikoma and Uchikoma. Tachikoma converts neural networks into pure-integer form and generates a computation trace. Uchikoma converts TVM's intermediate representation into programming languages that lack floating-point support. LinearA plans to support Circom, using field arithmetic, and Solidity, using signed and unsigned integer arithmetic.

zkml by Daniel Kang is a framework for constructing proofs of ML model execution in ZK-SNARKs. At the time of writing, it can prove an MNIST circuit in about 16 seconds using around 5 GB of memory.

More general model-to-proof compilers include those from Nil Foundation and Risc Zero. Nil Foundation's zkLLVM is an LLVM-based circuit compiler that can verify computational models written in popular programming languages such as C++, Rust, and JavaScript/TypeScript. It is more general-purpose infrastructure than some of the other model-to-proof compilers mentioned in this article, but it remains well suited to complex computations like zkML, and it becomes particularly powerful when combined with their proof market. Risc Zero has built a general-purpose zkVM targeting the open-source RISC-V instruction set, supporting mature languages like C++ and Rust as well as the LLVM toolchain. This allows seamless integration between host and guest zkVM code, similar to Nvidia's CUDA C++ toolchain but with a ZKP engine in place of a GPU. Like Nil Foundation's stack, Risc Zero can be used to verify the computation trace of an ML model.

(2) General Proof Systems

Improvements in proof systems have been the main driver behind zkML, particularly the introduction of custom gates and lookup tables. This is mainly because of ML's reliance on non-linearity. In short, non-linearities are introduced through activation functions (e.g., ReLU, sigmoid, and tanh) applied to the outputs of linear transformations inside a neural network. These non-linearities are difficult to implement in ZK circuits because of the constraints of arithmetic gates. Bitwise decomposition and lookup tables help solve this problem by precomputing the possible results of a non-linearity into a lookup table; interestingly, this is far more computationally efficient in ZK.
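A small sketch of the idea: precompute the non-linearity over the quantized input domain so that, inside a Plonkish circuit, proving ReLU reduces to a single table-row check instead of bit decomposition. The 16-bit domain is an illustrative assumption.

```python
# Precompute quantized ReLU over a 16-bit activation range (illustrative).
DOMAIN = range(-2**15, 2**15)
relu_table = {x: max(x, 0) for x in DOMAIN}

def relu_lookup(x_q: int) -> int:
    # In-circuit, the prover shows that the pair (x_q, y_q) appears as a
    # row of relu_table via a lookup argument, rather than re-deriving
    # max(x_q, 0) from arithmetic constraints over bit decompositions.
    return relu_table[x_q]

assert relu_lookup(-7) == 0 and relu_lookup(5) == 5
```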

For this reason, Plonkish proof systems are the most popular backends for zkML. Halo2's and Plonky2's table-style arithmetization schemes handle neural-network non-linearities well through lookup arguments. Halo2 also has a vibrant developer-tooling ecosystem and great flexibility, making it the de facto backbone for many projects, including EZKL.

Other proof systems have their own advantages. Among R1CS-based systems, Groth16 is notable for its small proof sizes and Gemini for its handling of extremely large circuits with a linear-time prover. STARK-based systems like Winterfell are also useful, especially through tools like Giza, which takes the trace of a Cairo program as input and uses Winterfell to generate a STARK proof of the output's correctness.

(3) zkML-Specific Proof Systems

Some progress has been made in designing efficient proof systems that handle the complex, circuit-unfriendly operations of advanced ML models. Benchmarks from Modulus Labs show that systems like zkCNN, built on the GKR proof system, and composition-based systems like Zator generally outperform their general-purpose counterparts.

zkCNN is a method for proving the correctness of convolutional neural networks with zero-knowledge proofs. It uses the sumcheck protocol to prove fast Fourier transforms and convolutions with linear prover time, which is faster than computing the result asymptotically. Several improvements and generalizations to interactive proofs were also introduced, including verification of convolutional layers, ReLU activations, and max pooling. zkCNN is particularly interesting because Modulus Labs' benchmark report found that it outperformed other general-purpose proof systems in both proof generation speed and RAM consumption.

Zator is a project exploring the use of recursive SNARKs to verify deep neural networks. The current constraint on verifying deep models is fitting the entire computation trace into a single circuit; Zator proposes instead to verify one layer at a time using recursive SNARKs, incrementally verifying N steps of repeated computation. It uses Nova to fold N instances of the computation into a single instance that can be verified in one step. With this approach, Zator was able to snark a network with 512 layers, as deep as most production AI models today. Zator's proof generation and verification times are still too long for mainstream use cases, but its composition technique is interesting nonetheless.
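A conceptual sketch of the layer-at-a-time approach follows (this is not Nova's actual API): each step attests to one layer's computation relative to a commitment to the previous layer's output, so the circuit only ever needs to hold a single layer.

```python
import hashlib

def commit(x) -> str:
    """Toy commitment to an intermediate activation (stand-in for a real one)."""
    return hashlib.sha256(repr(x).encode()).hexdigest()

def prove_layer(prev_commitment, layer_fn, x):
    # A real step proof would be a SNARK for the statement:
    #   "y = layer_fn(x) and commit(x) == prev_commitment"
    y = [layer_fn(v) for v in x]
    return {"prev": prev_commitment, "out": commit(y)}, y

# Chain the step proofs across the layers; a folding scheme like Nova then
# compresses the N step instances into one instance verified in one step.
x = [1.0, -2.0, 3.0]
proofs, c = [], commit(x)
for layer in [lambda v: 2 * v, lambda v: max(v, 0.0)]:
    step, x = prove_layer(c, layer, x)
    proofs.append(step)
    c = step["out"]
```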

(4) Applications

Given that zkML is still in its early stages, most effort to date has gone into the infrastructure described above. Nevertheless, a number of application-focused projects are already underway.

Modulus Labs is one of the most diversified projects in the zkML space, working on both example applications and related research. On the application side, Modulus Labs has demonstrated zkML use cases with RockyBot (an on-chain trading bot) and Leela vs. the World (a chess game in which everyone plays against a verified instance of the Leela chess engine). The team has also branched into research, writing The Cost of Intelligence, which benchmarks the speed and efficiency of various proof systems across models of different sizes.

Worldcoin is applying zkML to build a privacy-preserving proof-of-personhood protocol. Worldcoin uses custom hardware to process high-resolution iris scans, which are inserted into its Semaphore implementation. These can then be used for operations such as membership attestation and voting. Worldcoin currently relies on a trusted execution environment with a secure enclave to verify camera-signed iris scans, but its ultimate goal is to use a ZKP to attest to correct inference of the neural network, providing cryptographic-level security guarantees.

Giza is a protocol for deploying AI models on-chain in a fully trustless manner. Its technology stack includes the ONNX format for representing machine learning models, the Giza Transpiler for converting those models into the Cairo program format, the ONNX Cairo Runtime for executing models in a verifiable and deterministic way, and the Giza Model smart contract for deploying and executing models on-chain. While Giza could also be classified as a model-to-proof compiler, it positions itself as a marketplace for ML models, one of the more interesting applications in the space today.

Gensyn is a decentralized hardware supply network for training ML models. Specifically, they are designing a probabilistic audit system based on gradient descent, and they use model checkpoints to enable a decentralized GPU network to serve full-scale model training. While their zkML application is specific to their own use case (ensuring that when a node downloads and trains part of a model, its updates to the model are honest), it demonstrates the power of combining ZK and ML.

ZKaptcha focuses on the bot problem in web3, providing captcha services for smart contracts. Its current implementation has end users produce a proof of human work by completing a captcha, which is verified by on-chain validators and accessed by smart contracts with a few lines of code. Today it relies mainly on ZK, but it plans to implement zkML in the future, analogous to existing web2 captcha services that analyze mouse movements and other behavior to determine whether a user is human.

The zkML space is still in its early days, but many applications have already been prototyped at the hackathon level. These include AI Coliseum (an on-chain AI competition that uses ZK proofs to verify ML outputs), Hunter z Hunter (a photo scavenger hunt that uses the EZKL library to verify the outputs of an image-classification model with Halo2 circuits), and zk Section 9 (which converts AI image-generation models into circuits for minting and verifying AI art).

5. Challenges Faced by zkML

Although zkML is improving and being optimized rapidly, some core challenges remain. They span both technical and practical issues, specifically:

· High-precision quantization

· Circuit size (especially for multi-layer networks)

· Efficient proofs of matrix multiplication

· Adversarial attacks

Quantization is the process of converting the floating-point numbers that most ML models use to represent parameters and activations into fixed-point numbers, which is essential for working with the field arithmetic of ZK circuits. Quantization's impact on model accuracy depends on the level of precision used. Lower precision (i.e., fewer bits) typically reduces accuracy because of rounding and approximation errors, but several techniques can minimize this impact, such as fine-tuning the model after quantization and quantization-aware training. Additionally, Zero Gravity, a hackathon project at zkSummit 9, showed that alternative neural network architectures developed for edge devices, such as weightless neural networks, can be used to avoid the circuit quantization problem altogether.
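A minimal sketch of post-training fixed-point quantization and the rounding error it introduces; the 8-bit symmetric scheme and the toy weights are illustrative assumptions.

```python
import numpy as np

def quantize(w: np.ndarray, bits: int = 8):
    """Map floats onto signed fixed-point integers with a shared scale."""
    scale = (2 ** (bits - 1) - 1) / np.max(np.abs(w))
    return np.round(w * scale).astype(np.int32), scale

def dequantize(w_q: np.ndarray, scale: float) -> np.ndarray:
    return w_q / scale

w = np.random.randn(1000).astype(np.float32)
w_q, scale = quantize(w, bits=8)
# The rounding error shrinks as the bit width grows; this is the
# precision/accuracy trade-off discussed above.
print("max error:", np.abs(w - dequantize(w_q, scale)).max())
```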

Beyond quantization, hardware is another key challenge. Once an ML model is properly represented as a circuit, verifying a proof of its inference is cheap and fast thanks to ZK's succinctness. The challenge lies not with the verifier but with the prover: RAM consumption and proof generation time grow rapidly with model size. Certain proof systems (such as GKR-based systems that use the sumcheck protocol and layered arithmetic circuits) and composition techniques (such as wrapping Plonky2, which has efficient proving times but poor proof sizes for larger models, in Groth16, whose proof size does not grow with model complexity) are better suited to handling these issues, but managing the trade-offs is a core challenge when building a zkML project.

There is also work to be done on the adversarial front. First, if a trustless protocol or DAO chooses to deploy a model, there remains a risk of adversarial attacks during training (e.g., training a model to exhibit a specific behavior when it sees a particular input, which could later be used to manipulate inference). Federated learning techniques and training-stage zkML may be ways to minimize this attack surface.

Another core challenge is that when a model is meant to be privacy-preserving, there is a risk of model-stealing attacks. Although a model's weights can be obfuscated, the weights can in theory be reverse-engineered given enough input-output pairs. This risk primarily applies to smaller models, but it is a risk nonetheless.

6. Expanding Smart Contracts

While challenges remain in optimizing models to fit ZK's operational constraints, improvements are happening at an exponential pace, and some expect that with further hardware acceleration we will soon be able to cover a much broader range of machine learning. zkML has progressed from 0xPARC's zk-MNIST demo in 2021 (showing a small MNIST image-classification model executed in a verifiable circuit) to Daniel Kang verifying ImageNet-scale models less than a year later. In April 2023, the accuracy of that ImageNet-scale model was further improved from 79% to 92%, and although proving times are still slow, large networks such as GPT-2 appear feasible in the near term.

We believe zkML is a rich and evolving ecosystem that seeks to expand the capabilities of blockchains and smart contracts, making them more flexible, adaptable, and intelligent.

Although zkML is still in its early stages of development, it is already showing promising prospects. With the advancement and maturity of technology, we can expect to see more innovative use cases of zkML on-chain.
