a16z: AI Combined with Blockchain Creates Four New Business Models


Authors: Dan Boneh (Professor at Stanford University and Senior Research Advisor at a16z crypto), specializing in cryptography, computer security, and machine learning; Ali Yahya (General Partner at a16z crypto), formerly of Google Brain and a core contributor to Google's machine learning library TensorFlow.

Organized & compiled by: Qianwen, ChainCatcher

Neal Stephenson's science fiction novel "The Diamond Age" features an artificial intelligence device that serves as a person's mentor throughout life. When you are born, you are paired with an artificial intelligence that knows you intimately: it understands your preferences, follows you throughout your life, helps you make decisions, and guides you in the right direction. This sounds wonderful, but you definitely don't want this technology to fall into the hands of middleman giants, because that would give one company enormous control and raise a host of privacy and sovereignty issues.

We hope this technology can truly belong to each individual, and so a vision emerges: you can use blockchain to achieve this. You can embed artificial intelligence in smart contracts, and with the power of zero-knowledge proofs you can keep your data private. Over the next few decades this technology will become smarter and smarter, and you will be able to do anything you want with it, or change it in any way you wish.

So what is the relationship between blockchain and artificial intelligence? What kind of world will artificial intelligence lead us to? What are the current status and challenges of artificial intelligence? What role will blockchain play in this process?

AI and Blockchain: Mutual Counterbalance

The development of artificial intelligence, including scenarios like the one described in "The Diamond Age", has long been underway, but it has recently taken a leap forward.

First of all, artificial intelligence is largely a top-down technology of centralized control, while cryptography is a bottom-up technology of decentralized cooperation. In many ways, cryptocurrency is the study of how to build decentralized systems that achieve large-scale human cooperation without true centralized control. From this perspective, it is natural for these two technologies to come together.

Artificial intelligence is a sustaining innovation: it strengthens the business models of existing technology companies and helps them make top-down decisions. The best example is Google, which determines the content presented across billions of users and billions of page views. Cryptocurrencies, by contrast, are a fundamentally disruptive innovation whose business models differ fundamentally from those of large tech companies. This is therefore a movement led by rebels on the sidelines, rather than by incumbents.

Artificial intelligence is also closely tied to privacy, and the two interact. As a technology, AI has established incentive mechanisms that leave users with less and less privacy, because companies want access to all of our data, and models trained on more and more data become more effective. At the same time, artificial intelligence is not perfect: models can carry biases that lead to unfair outcomes, which is why there are now many papers on algorithmic fairness.

I believe artificial intelligence is heading down a path where everyone's data is aggregated into large-scale model training to optimize the models. Cryptocurrencies are evolving in the opposite direction: increasing personal privacy and giving users sovereignty over their data. In that sense, cryptography is a counterweight to artificial intelligence, as it helps us distinguish content created by humans from content created by AI. In a world flooded with AI-generated content, cryptography will become an important tool for maintaining and preserving human content.

Cryptocurrency is like the Wild West because it has no authority and anyone can participate. You have to assume that some of the participants are malicious. Therefore, there is now a greater need for tools to help you filter honest participants from dishonest ones, and machine learning and artificial intelligence can be very beneficial in this regard.

For example, some projects use machine learning to identify suspicious transactions, which can then be flagged before they are submitted to the blockchain. This can effectively prevent users from accidentally sending all their funds to an attacker or doing something they will later regret. Machine learning can also help predict in advance which transactions may be exposed to MEV.
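
As a toy illustration of the kind of classifier such projects might run, the sketch below scores a transaction on a few handcrafted features before it is signed. The feature names, weights, and threshold are invented for illustration, not taken from any real wallet or trained model.

```python
# Toy stand-in for an ML transaction classifier: score a few handcrafted
# features and flag high-risk transactions before they are signed.
# Feature names, weights, and the threshold are purely illustrative.

def risk_score(tx, known_addresses):
    score = 0.0
    if tx["to"] not in known_addresses:          # first interaction with this address
        score += 0.4
    if tx["value_fraction_of_balance"] > 0.9:    # drains almost the whole wallet
        score += 0.4
    if tx["grants_unlimited_approval"]:          # unlimited token approval
        score += 0.3
    return score

def should_flag(tx, known_addresses, threshold=0.5):
    return risk_score(tx, known_addresses) >= threshold

tx = {"to": "0xattacker", "value_fraction_of_balance": 0.99,
      "grants_unlimited_approval": False}
print(should_flag(tx, known_addresses={"0xfriend"}))  # True: new address + near-full drain
```

A real wallet would replace the handwritten rules with a model trained on labeled scam transactions, but the flag-before-submit flow would look the same.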

Just as LLMs can be used to detect false data or malicious activity, these models can also be used to generate false data. The most typical example is the deepfake: you can create a video of someone saying something they never said. But blockchain can actually help mitigate this problem.

For example, there are timestamps on the blockchain that show on what date you said certain things. If someone falsifies a video, you can use timestamps to deny it. All this data, the truly authentic data, is recorded on the blockchain and can be used to prove that the deepfake video is indeed fake. So I think blockchain may help combat forgery.
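
The timestamping idea can be sketched in a few lines: hash the authentic recording and anchor the hash with a timestamp. The in-memory list standing in for the chain is of course an assumption; a real system would write the record to an actual ledger.

```python
import hashlib
import json
import time

# Minimal sketch: anchor a content hash with a timestamp so the
# recording's existence at a given time can later be proven. The
# "chain" here is just a list; a real deployment would use a ledger.

chain = []

def anchor(content: bytes) -> dict:
    record = {"sha256": hashlib.sha256(content).hexdigest(),
              "timestamp": int(time.time())}
    chain.append(record)
    return record

def verify(content: bytes) -> bool:
    digest = hashlib.sha256(content).hexdigest()
    return any(r["sha256"] == digest for r in chain)

original = b"video bytes of what I actually said"
anchor(original)
print(verify(original))                  # True: hash is anchored on chain
print(verify(b"deepfaked video bytes"))  # False: no matching anchored hash
```

Anyone can later check a disputed clip against the anchored hashes: the authentic recording matches, the deepfake does not.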

We can also rely on trusted hardware to achieve this. Cameras and devices like our phones could sign the images and videos they capture as a standard. The standard is called C2PA, and it stipulates how cameras sign data. In fact, Sony now has a camera that can generate C2PA signatures on the photos and videos it takes. This is a complex topic that we won't go into in detail here.

Usually, newspapers do not publish photos exactly as the camera captured them; they crop and edit them. Once the photos are edited, the recipients, the final readers viewing them in a browser, no longer see the originals, so C2PA signature verification can no longer be performed.

The question is, how can users confirm that the image they see really derives from an image correctly signed by a C2PA camera? This is where ZK technology comes in. You can prove that the edited image is the result of downsampling and grayscale conversion applied to the correctly signed original. In this way, we can replace the C2PA signature with a succinct zk proof attached to each published image, and readers can still confirm that what they see is faithful to the real image. Therefore, zk technology can be used to counter disinformation.
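
The relation such a zk proof would attest can be made concrete. In this minimal sketch the verifier simply recomputes the grayscale and 2x-downsample transform in the clear; in a real system that recomputation happens inside a zero-knowledge circuit, so the signed original never has to be revealed. The tiny 2x2 image and the hash-as-signature are illustrative stand-ins.

```python
import hashlib

# Sketch of the *relation* a ZK proof would attest: the published image
# equals grayscale + 2x downsampling of a camera-signed original. Here
# we recompute the transform directly; a real system would prove the
# same statement in zero knowledge without revealing the original.

def grayscale(img):  # img: rows of (r, g, b) pixels
    return [[(r + g + b) // 3 for (r, g, b) in row] for row in img]

def downsample2x(img):  # average each 2x2 block of a grayscale image
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) // 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

def digest(img):  # stand-in for what the camera actually signs
    return hashlib.sha256(str(img).encode()).hexdigest()

original = [[(255, 0, 0), (0, 255, 0)],
            [(0, 0, 255), (255, 255, 255)]]
signed_digest = digest(original)

published = downsample2x(grayscale(original))

# Verifier's check (done inside the ZK circuit in a real system):
ok = (digest(original) == signed_digest and
      published == downsample2x(grayscale(original)))
print(ok)  # True
```
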

How does blockchain break the deadlock?

Artificial intelligence is essentially a centralized technology. It benefits greatly from economies of scale, since running in a single data center makes things more efficient. In addition, the data, the machine learning models, and the machine learning talent are usually controlled by a few technology companies.

So how do we break the deadlock? Cryptocurrencies can help decentralize artificial intelligence using technologies such as zkML, which can be applied to data centers, databases, and the machine learning models themselves. For example, on the computation side, zero-knowledge proofs let users prove that the process of running inference or training a model was performed correctly.

In this way, you can outsource the process to a large community. In such a distributed setup, anyone with a GPU can contribute computing power to the network and help train models, without relying on a single large data center that concentrates all the GPUs.

From an economic perspective, it is uncertain whether this makes sense. But with the right incentives, at least, a long-tail effect can be achieved: you can tap all available GPU capacity and let all these people contribute computing power to model training or inference, replacing the large technology companies that control everything. To achieve this, various important technical issues must be addressed. In fact, there is a company called Gensyn that is building a decentralized GPU computing market primarily for training machine learning models. In this market, anyone can contribute their own GPU computing power, and anyone can use the computing power available in the network to train their own large-scale machine learning models. This would be an alternative to centralized big technology companies such as OpenAI, Google, and Meta.

Imagine this scenario: Alice has a model she wants to protect, so she sends it to Bob in encrypted form. Bob receives the encrypted model and needs to run his own data through it. How can this be done? This requires so-called fully homomorphic encryption, which performs computation directly on encrypted data. If a user has an encrypted model and plaintext data, they can run the encrypted model on the plaintext data and obtain an encrypted result. They send the encrypted result back to Alice, who decrypts it and sees the plaintext result.
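
Fully homomorphic encryption at this scale is still research-grade, but the flavor of "Bob computes on Alice's encrypted model with his plaintext data" can be shown with FHE's simpler additively homomorphic cousin, the Paillier cryptosystem. In the sketch below (toy key sizes and invented weights, nothing production-grade), Bob evaluates Alice's encrypted linear model on his plaintext features, and only Alice can decrypt the score.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic, a weaker cousin of
# FHE). Key sizes here are far too small for real use; real Paillier
# uses primes of 1024+ bits.

def lcm(a, b):
    return a * b // math.gcd(a, b)

p, q = 10007, 10009
n = p * q
n2 = n * n
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because we use g = n + 1

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Alice encrypts her model weights and sends them to Bob.
weights = [3, 5]
enc_weights = [enc(w) for w in weights]

# Bob evaluates the linear model on his *plaintext* features:
# Enc(w)^x = Enc(w*x), and multiplying ciphertexts adds plaintexts,
# so the product below is Enc(sum of w_i * x_i).
features = [2, 4]
enc_score = 1
for cw, x in zip(enc_weights, features):
    enc_score = (enc_score * pow(cw, x, n2)) % n2

# Bob returns enc_score; only Alice holds the key to decrypt it.
print(dec(enc_score))  # 26, i.e. 3*2 + 5*4
```

Paillier only supports additions and plaintext multiplications, which is enough for linear models; evaluating a full neural network this way is exactly the scaling challenge the text describes.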

This is actually a technology that already exists. The question is, can we scale it up to larger models? This is a fairly big challenge that requires more companies’ efforts.

Current Situation, Challenges, and Incentive Mechanisms

I think achieving decentralization in computing is important. The first issue is the validation problem. You can use ZK to solve this problem, but currently these technologies can only handle smaller models. The challenge we face is that the performance of these cryptographic primitives is far from sufficient for training or inference on super large models. There is a lot of work being done to improve the performance of the proof process so that larger workloads can be efficiently proven.

Meanwhile, some companies are also using other, non-cryptographic techniques, such as game-theory-based mechanisms that allow many independent participants to work together. This is an optimistic approach that does not rely on cryptography, but it still serves the larger goal of decentralizing artificial intelligence: helping create an AI ecosystem that is an alternative to companies like OpenAI.

The second major issue is the problem of distributed systems. For example, how do you coordinate a large community to contribute to a network in a way that makes it feel integrated and unified? There are many challenges involved, such as how to decompose the workload of machine learning in a sensible way and distribute different workloads to different nodes in the network, as well as how to efficiently complete all of this work.

Current technologies can basically be applied to medium-sized models, but not to models as large as GPT-3 or GPT-4. Of course, we have other methods. For example, we can have multiple people train models and then compare the results, which creates a game-theoretic incentive mechanism. It incentivizes people not to cheat. If someone cheats, others may complain that their computed training results are incorrect. As a result, the cheater will not receive rewards.
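
A minimal sketch of that incentive mechanism: give the same training task to several workers, pay everyone who matches the majority result, and pay nothing to anyone who disagrees. The worker names, result hashes, and reward amount are all illustrative.

```python
import hashlib
from collections import Counter

# Sketch of the game-theoretic check: the same task goes to several
# workers, the majority result is paid, and disagreeing workers
# (presumed cheaters) receive nothing. All values are illustrative.

def settle(results, reward_per_worker=10):
    # results: {worker_name: hash_of_reported_result}
    majority_hash, _ = Counter(results.values()).most_common(1)[0]
    return {worker: (reward_per_worker if h == majority_hash else 0)
            for worker, h in results.items()}

honest = hashlib.sha256(b"model-weights-v1").hexdigest()
results = {"alice": honest, "bob": honest, "carol": "0" * 64}  # carol cheats
print(settle(results))  # {'alice': 10, 'bob': 10, 'carol': 0}
```

Real designs add refinements (random spot-checks so not every task needs redundant execution, stake that can be slashed), but the core incentive is this majority comparison.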

We can also decentralize the data sources used to train large-scale machine learning models. Instead of relying on a centralized institution to collect all the data and train the models itself, this can be accomplished by creating a marketplace, similar to the computing marketplace described earlier.

We can also view it from an incentive perspective, encouraging people to contribute new data to a large dataset for training models. The difficulties involved are similar to the validation challenges. You must somehow validate that the data people contribute is indeed good data, not duplicate data, randomly generated junk data, or unrealistically generated data.
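
Two of the cheapest such checks can be sketched directly: rejecting exact duplicates by content hash, and rejecting low-entropy junk strings. The entropy threshold is an arbitrary illustrative number, and a real marketplace would layer much stronger validation (provenance, model-improvement tests) on top.

```python
import hashlib
import math
from collections import Counter

# Two cheap pre-filters for contributed data: reject exact duplicates
# via content hashing, and reject low-entropy "junk". The threshold is
# illustrative; real validation would be far more thorough.

seen_hashes = set()

def shannon_entropy(text: str) -> float:
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def accept(sample: str, min_entropy: float = 2.0) -> bool:
    h = hashlib.sha256(sample.encode()).hexdigest()
    if h in seen_hashes:                       # exact duplicate
        return False
    if shannon_entropy(sample) < min_entropy:  # e.g. "aaaaaaaa"
        return False
    seen_hashes.add(h)
    return True

print(accept("a genuinely new sentence about rare events"))  # True
print(accept("a genuinely new sentence about rare events"))  # False: duplicate
print(accept("aaaaaaaaaaaa"))                                # False: low entropy
```
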

In addition, it is important to ensure that the data does not undermine the model in any way, otherwise the model's performance will actually deteriorate over time. Perhaps we need a combination of technical and social solutions, where community members can also establish credibility through reputation metrics, so that the data they contribute becomes more trustworthy.

Otherwise, it would take a very long time to truly achieve coverage of data distribution. One of the major challenges of machine learning is that the model can only cover the distribution range achieved by the training dataset. If there are inputs far beyond the distribution range of the training data, then your model may behave completely unpredictably. In order for the model to perform well in edge cases, black swan data points, or data inputs that may be encountered in the real world, we need a dataset that is as comprehensive as possible.

Therefore, if you have an open, decentralized market that provides data for datasets, you can allow anyone in the world who has unique data to provide this data to the network, which is a better way. Because if you try to do this as a central company, you have no way of knowing who owns this data. Therefore, if you can create an incentive mechanism for these individuals to come forward and provide this data voluntarily, then I believe you can actually achieve significantly better coverage of the long tail data.

So we must have some mechanism to ensure that the data you provide is authentic. One approach is to rely on trusted hardware, embedding some trusted hardware in the sensors themselves, and we only trust data that is correctly signed by the hardware. Otherwise, we must have other mechanisms to distinguish the authenticity of the data.

There are currently two important trends in machine learning. First, methods for measuring the performance of machine learning models are constantly improving, though they are still early, and it remains genuinely difficult to judge one model's performance against another's. The second trend is that we are getting better at understanding how models work.

Therefore, based on these two points, at some point, I may be able to understand the impact of the dataset on the performance of machine learning models. If we can understand whether the dataset contributed by a third party contributes to the performance of the machine learning model, then we can reward this contribution and create incentives for the existence of this market.

Imagine if you could create an open market where people contribute trained models to solve specific types of problems, or if you created a smart contract that embeds some kind of test, and if someone could use zkml to provide a model and prove that the model can solve the test, this would be a solution. You now have the tools needed to create a market where people are incentivized to contribute machine learning models that can solve certain problems.
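
A sketch of such a bounty, with the smart contract modeled as a plain Python class and the submitted model run in the open; in a real zkML design the submitter would instead prove in zero knowledge that the model passes the test. The task, accuracy threshold, and reward are invented for illustration.

```python
# Sketch of an on-chain bounty: a "contract" escrows a reward and pays
# the first submitter whose model passes an embedded test. Here the
# model runs in the open; a zkML version would verify a proof instead.

class ModelBounty:
    def __init__(self, test_cases, required_accuracy, reward):
        self.test_cases = test_cases          # [(input, expected_output), ...]
        self.required_accuracy = required_accuracy
        self.reward = reward
        self.winner = None

    def submit(self, submitter, model):
        if self.winner is not None:
            return 0                           # bounty already claimed
        correct = sum(1 for x, y in self.test_cases if model(x) == y)
        if correct / len(self.test_cases) >= self.required_accuracy:
            self.winner = submitter
            return self.reward
        return 0

# Toy task: classify a number as positive (1) or not (0).
bounty = ModelBounty([(5, 1), (-3, 0), (0, 0), (7, 1)],
                     required_accuracy=1.0, reward=100)
print(bounty.submit("alice", lambda x: 1 if x > 0 else 0))  # 100: passes all tests
print(bounty.submit("bob",   lambda x: 1))                  # 0: already claimed
```
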

How does AI intersect with encryption to form a business model?

I believe the vision behind the intersection of cryptocurrency and artificial intelligence is that you can create a set of protocols to distribute the value obtained from this new technology of artificial intelligence to more people, where everyone can contribute and share the benefits brought by this new technology.

Therefore, the people who can profit are those who contribute computing power, contribute data, or contribute new machine learning models to the network, so that better machine learning models can be trained to solve more important problems.

The network demand side can also benefit. They use this network as the infrastructure to train their machine learning models. Perhaps their models can contribute some interesting things, such as the next generation of chat tools. In these scenarios, these companies will be able to derive value because they have their own business models.

The creators of the network will also profit, for example by creating a token for the network and distributing it to the community. All these people will have collective ownership of this decentralized network, which provides compute, data, and models, and will capture some of the value from all the economic activity conducted on it.

You can imagine that for every transaction conducted on this network, every payment method for computation fees, data fees, or model fees may incur certain charges, which will go into a treasury controlled by the entire network. Token holders collectively own this network. This is essentially the business model of the network itself.
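
The fee flow can be sketched as simple accounting: every payment for compute, data, or models is split between the provider and a treasury owned by token holders. The 2% protocol fee and the provider names are arbitrary illustrative numbers.

```python
# Sketch of the network's business model: each payment for compute,
# data, or models is split between the provider and a token-holder
# treasury. The 2% fee is an arbitrary illustrative number.

PROTOCOL_FEE = 0.02
treasury = 0.0
provider_balances = {}

def pay(provider: str, amount: float) -> None:
    global treasury
    fee = amount * PROTOCOL_FEE
    treasury += fee
    provider_balances[provider] = (
        provider_balances.get(provider, 0.0) + (amount - fee))

pay("gpu-provider", 100.0)   # someone buys compute
pay("data-provider", 50.0)   # someone buys data
print(round(treasury, 2))                           # 3.0
print(round(provider_balances["gpu-provider"], 2))  # 98.0
```
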

AI Enhances Code Security

Many of you may have heard of Copilot-style tools for generating code. You can try using these generation tools to write Solidity contracts or cryptographic code. However, I want to emphasize that doing so is actually very risky, because many times these systems generate code that runs but is not secure.

In fact, we recently wrote a paper on this issue. It points out that if you ask such a tool to write a simple encryption function, it will produce a correct encryption function but use it in an incorrect mode of operation, so you end up with insecure encryption.

You may wonder why this happens. One reason is that these models are trained on existing code from GitHub repositories, and many GitHub repositories are vulnerable to various attacks. So the code these models learn can work correctly yet be insecure; it is garbage in, garbage out. I therefore hope people will be cautious when using these generative models to write code, and will carefully check that the output really does what it should, and does it securely.

You can combine AI models with other tools to generate code while ensuring the process is error-free. For example, one idea is to use an LLM to generate a specification for a formal verification tool, then ask the same LLM instance to generate a program that complies with the specification, and use the formal verification tool to check whether the program truly complies. If any vulnerabilities appear, the tool will catch them; these errors can be fed back to the LLM, and ideally the LLM can revise its work and generate a corrected version of the code.
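
The generate-verify-repair loop described above can be sketched with both the LLM and the formal verifier stubbed out. The function names and the "fixed on the second attempt" behavior are stand-ins; the point is only the control flow, in which verifier errors feed back into the next generation attempt.

```python
# Skeleton of the generate-verify-repair loop, with the LLM and the
# verifier stubbed out. Only the control flow matters: verifier errors
# are fed back into the next generation attempt until the code passes.

def llm_generate(spec, feedback=None):
    # Stub: a real system would call an LLM with the spec plus any
    # verifier errors. Here we "fix" the code once feedback arrives.
    if feedback is None:
        return "def add(a, b): return a - b"    # first, buggy attempt
    return "def add(a, b): return a + b"

def formal_verify(code, spec):
    # Stub verifier: checks the code against the spec's test vectors.
    namespace = {}
    exec(code, namespace)
    return [f"add{args} != {expected}"
            for args, expected in spec
            if namespace["add"](*args) != expected]

def generate_verified(spec, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):
        code = llm_generate(spec, feedback)
        feedback = formal_verify(code, spec)
        if not feedback:                         # no errors: verified
            return code
    raise RuntimeError("no verified program found")

spec = [((1, 2), 3), ((5, 5), 10)]
print(generate_verified(spec))  # the corrected, spec-satisfying version
```
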

Finally, if you repeat this process, you will eventually get a piece of code that, ideally, fully satisfies the specification and can be formally verified against it. And because the specification is human-readable, you can check that this really is the program you wanted to write. In fact, many people have been evaluating LLMs' ability to find software vulnerabilities, for example in smart contracts, C, and C++ code.

So, will we reach a point where code generated by LLM is less likely to contain bugs than code generated by humans? For example, when we talk about autonomous driving, are we concerned that it is less likely to crash than human drivers? I think this trend will only become stronger, and the integration of artificial intelligence technology into existing toolchains will become more and more advanced.

You can integrate it into formal verification toolchains, and you can also integrate it into other tools, such as the tools mentioned earlier that check for memory management issues. You can also integrate it into unit testing and integration testing toolchains, so that LLM is not just operating in a vacuum. It can receive real-time feedback from other tools, connecting it to the real world.

I believe that by combining the use of large-scale machine learning models trained on all the data in the world, along with these other tools, it may make computational programs better than human programmers. Even though they will still make mistakes, they may be superhuman. This will be an important moment in software engineering.

Artificial Intelligence and Social Graph

Another possibility is that we may be able to build decentralized social networks that behave much like today's microblogging platforms, but with the social graph entirely on-chain, almost like a public good that anyone can build on top of. As a user, you control your identity on the social graph: you control your data, who you follow, and who can follow you. In addition, many companies could build gateways into the social graph, giving users experiences similar to Twitter, Instagram, TikTok, or whatever else they want to create.

But all of this is built on the same social graph, owned by no one and not completely controlled by a tech company worth billions of dollars.

This is an exciting world because it means it can be more vibrant, with an ecosystem built collectively by people. Each user can have more control over what they see and do on the platform.

But at the same time, users need to filter out the noise. For example, reasonable recommendation algorithms are needed to sift through all the content and surface the feeds you actually want to see. This opens a door for an entire market: a competitive environment of participants offering curation services. You can use AI-based algorithms to curate content for you, and as a user you decide whether to use a particular algorithm, perhaps the one created by Twitter or some alternative. You will also need machine-learning tools to help you filter noise and parse through junk, because in this world generative models can produce all the junk information in the world.

Why is human proof important?

A very relevant question is, in a world where AI-generated content is rampant, how do you prove that you are indeed human?

Biometric technology is one possible direction. One project, Worldcoin, uses iris scanning as biometric information to verify that you are a real, living person and not just a photo of an eye. The system relies on secure hardware that is difficult to tamper with, so it is hard to forge the zero-knowledge proof, which hides your actual biometric information, that comes out the other end.

On the internet, no one knows if you are a robot. Therefore, I think this is exactly where the project of human proof becomes very important, because it will be crucial to know whether you are interacting with a robot or a human. If you don’t have human evidence, then you cannot determine whether an address belongs to an individual or a group of people, or whether ten thousand addresses truly belong to one person or are just pretending to be ten thousand different people.

This is crucial in governance. If every participant in a governance system can prove that they are actually human and can prove it in a unique way because they only have one set of eyeballs, then the governance system will be more fair and not as plutocratic (based on the preference of the maximum amount locked in a smart contract).
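
The contrast can be sketched directly: the same set of votes tallied token-weighted versus one vote per verified human. The voter names, token balances, and the personhood check (membership in a set of verified addresses standing in for, say, an iris-scan credential) are all illustrative.

```python
from collections import Counter

# Sketch contrasting token-weighted voting with one-person-one-vote
# gated by a proof of personhood. `verified_humans` stands in for
# addresses holding a valid personhood proof; numbers are illustrative.

votes = [("whale", "yes", 1_000_000),
         ("a", "no", 10),
         ("b", "no", 10),
         ("c", "no", 10)]
verified_humans = {"whale", "a", "b", "c"}

def token_weighted(votes):
    tally = Counter()
    for voter, choice, tokens in votes:
        tally[choice] += tokens
    return tally.most_common(1)[0][0]

def one_person_one_vote(votes, humans):
    tally = Counter()
    for voter, choice, _tokens in votes:
        if voter in humans:              # each verified human counts once
            tally[choice] += 1
    return tally.most_common(1)[0][0]

print(token_weighted(votes))                        # 'yes': the whale decides
print(one_person_one_vote(votes, verified_humans))  # 'no': majority of humans
```
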

Artificial Intelligence and Art

Artificial intelligence models mean that we will live in a world of unlimited media richness, where communities around any specific media or narratives around specific media will become increasingly important.

For example, Sound.xyz is building a decentralized music streaming platform that lets artists and musicians upload music and connect directly with their communities by selling them NFTs. For example, you can comment on a track on the Sound.xyz website, and other listeners who play the song can see the comments, similar to the old SoundCloud feature. Purchasing NFTs also supports artists, helping them sustain themselves and create more music. But the beauty of all this is that it actually gives artists a platform to truly interact with their communities.

Because of the role of cryptocurrency here, you can create a community around a piece of music, and if a piece of music is only created by a machine learning model without any human elements, then this community would not exist.

Much of the music we encounter will be fully generated by artificial intelligence, so tools for building communities and telling stories around art, music, and other media will be very important in distinguishing the media we truly care about, want to invest in, and spend time engaging with from the generic mass of media.

There may be synergies between the two. For example, much music will be enhanced or generated by artificial intelligence, but with human elements still involved: creators use AI tools to make a new piece of music, yet they have their own vocal characteristics, their own artist pages, their own communities, and their own followers.

Now, there is a synergistic effect between these two worlds, and we have the best music because artificial intelligence has given us super abilities. But at the same time, we also have human elements and stories, which are coordinated and realized through encryption technology, allowing you to bring all these people together on one platform.

In terms of content generation, this is definitely a brand new world. So how do we differentiate between human-generated art and machine-generated art that needs to be supported?

This actually opens a door for collective art, art that is generated through the creative process of the entire community, rather than individual artists. There are already some projects doing this, where the community influences the blockchain through voting processes to generate art based on prompts from machine learning models. Perhaps you are not generating one piece of art, but ten thousand. Then you use another machine learning model, which is trained based on feedback from the community, to select the best one from these ten thousand works.
