Full Text of Gavin Wood’s Speech: How Polkadot is Shifting from “Chain-Centric” to “App-Centric”


Author | Gavin Wood

On June 28th, the annual flagship event of Polkadot, Polkadot Decoded, was held in Copenhagen, Denmark, where Web3 enthusiasts, builders, and investors from around the world gathered to discuss the latest developments in the Polkadot ecosystem.

The most surprising part of the conference was the appearance of Polkadot founder Gavin Wood as a special guest, who shared some weighty insights.

Gavin laid out Polkadot's future direction and proposed a new perspective on it: focus on the lower-level resource that blockchains need, the computing core, and view Polkadot as a multi-core computer, rather than limiting it to the existing picture of parallel chains and a relay chain.

He also proposed that Polkadot may retire the existing slot-auction model in the future and adopt a more flexible, core-centered way of allocating resources, such as monthly "bulk purchases" and on-demand "instant purchases" of cores.

The following text is compiled by PolkaWorld from Gavin’s speech content.

Polkadot 1.0

Polkadot can currently be called Polkadot 1.0.

At this stage, Polkadot's functionality is complete: everything described in the white paper seven years ago has been implemented, and the Polkadot 1.0 codebase is about to be released.

So what is Polkadot 1.0? In the original white paper, I wrote that "Polkadot is a scalable heterogeneous multi-chain." That is to say, it is a blockchain, but one with its own consensus mechanism, "BABE", that can provide security to other blockchains (the parallel chains).

To put it artistically, it looks something like this:

In the middle is the relay chain, which is responsible for crowdloans, auctions, balances, staking, governance, and so on; it is a relay chain carrying many functions. The small dots around it are parallel chains, whose security the relay chain also guarantees, and these parallel chains can communicate with one another.

So what product form does Polkadot offer? Slots: leases are sold in six-month periods, a slot can be held for at most two years at a time, and the crowdloan mechanism sits on top. Beyond that, there is no other way to use Polkadot. In Polkadot 1.0, the only product is the parallel chain slot.

A new perspective on Polkadot: a multi-core computer

There is a famous saying that tells us: if you want to truly understand the world, changing your perspective is crucial, even more important than going out to see a bigger world.

Therefore, here we will change our perspective and re-understand what Polkadot is.

The concepts of parallel chains and relay chains are good, and they are also the way I and many others understood Polkadot early on, and what we were building towards.

But over time, we found that what we were doing was actually quite different from what we originally envisioned. Sometimes, if you’re lucky, or if your team is strong, you may end up doing something cooler than what you initially set out to do.

In computer science, abstraction and generalization are important. Later, we found that the degree of abstraction and generalization we applied to Polkadot was far higher than we had imagined.

So what is the new perspective on Polkadot?

Polkadot is a multi-core computer

Firstly, what we are doing is not about chains, but about space and the underlying resources required by chains.

Secondly, Polkadot is a platform for builders to create applications for users to use. Essentially, it is not a platform for hosting blockchains. Chains happen to be one way to make Polkadot useful, but it may not be the only way.

Finally, it is very resilient. I think this is a more neutral term than "unstoppable", meaning that it can resist any attempt to force it to do something it was not originally intended to do, in other words, it can resist distortions of the original intent.

So overall, Polkadot is a provider of resilient, general-purpose, continuous computation. "Continuous" means it is not a matter of having one job, doing it, and being done; what we want to support are long-running tasks that persist even if they pause along the way. It's a bit like the "world computer" vision of 2015 and 2016.

So what is Polkadot from this perspective? It is a multi-core computer, with multiple cores running simultaneously and doing different things. From this angle, a blockchain running on one core is a parallel chain: it runs continuously on a core reserved for it. That is the new paradigm through which we can understand parallel chains.

What is the “Polkadot supercomputer”

So let’s take a closer look at this “Polkadot computer”.

The “Polkadot supercomputer” is multi-core and more powerful than ordinary computers. It has about 50 cores, which run continuously and in parallel.

According to our modelling, after a lot of benchmarking and optimization, the number of cores could be increased to 500-1,000 over the next few years.

The performance of each “core”

Let’s take a look at each “core”.

These cores are analogous to CPU cores: each has characteristics and attributes that you can describe, and essentially each is a thing that performs computation.

  • Bandwidth, which is the total amount of data in and out of the core, is about 1 MB/s.

  • Compute, that is, how much calculation it can do: roughly a Geekbench 5 score of 380.

  • Latency, the interval between two consecutive pieces of work: about 6 seconds.

As time passes and hardware advances, these figures will improve to some extent.
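
To keep these nominal figures in one place, a minimal sketch as a plain data structure might look like this; the type and field names are an editorial illustration, not part of the Polkadot codebase.

```rust
// Illustrative only: a plain-data summary of the nominal per-core figures
// quoted above. These names and types are not Polkadot APIs.
#[derive(Debug, Clone, Copy)]
struct CoreSpec {
    bandwidth_bytes_per_sec: u64, // total data in and out of the core (~1 MB/s)
    geekbench5_score: u32,        // rough underlying compute (~380)
    latency_secs: u32,            // interval between two consecutive pieces of work (~6 s)
}

const NOMINAL_CORE: CoreSpec = CoreSpec {
    bandwidth_bytes_per_sec: 1_000_000,
    geekbench5_score: 380,
    latency_secs: 6,
};

fn main() {
    println!("{:?}", NOMINAL_CORE);
}
```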

In the past, the only way these cores could be useful was through parallel chains. However, there are other ways to use cores to make them more universal and available to everyone.

Polkadot needs a more flexible allocation method

What does this mean?

The core is actually very flexible. It does not have to process one fixed task forever; it can easily switch what it is doing, just as a CPU switches between tasks. And since the core is flexible, the way cores are procured should be flexible too.

However, the slot auction model is not flexible enough. It was designed based on the original paradigm of Polkadot—a long-running single chain. Later, we had parallel threads as a supplement, but it was only a small step towards the correct paradigm.

And this model sets a high entry barrier for the Polkadot ecosystem. If you are like me, someone who likes to tinker with technology, you don't want to mess around with fundraising and marketing; you just want to deploy the code and see if it runs. Under the current model, I think we miss out on many potential collaborators like that.

A possible future—Flexible Polkadot

Below I will propose a possible future plan, which can be called the “Flexible Polkadot”.

We can abandon the lease-and-slot model and instead view Polkadot as a set of "cores". The time on these cores, what we now call "core time" (and previously referred to as "block space"), can be sold on a regular basis, and anyone can buy and use it.

My proposal is this: the sale of native core time on Polkadot (the primary market) would happen in two ways, bulk purchase and instant purchase.

Bulk purchase happens once a month, and each purchase covers four weeks of core time.

Instant purchase is a bit like the pay-as-you-go model of parallel threads: core time is bought on demand. The cost of using Polkadot, or more precisely of using Polkadot's cores, will be determined by the market. There may be several cores available on the market at a given moment, or there may be none; that is simply how the market works. For instant use, core time will be sold continuously.

That is to say, we maximize the flexibility and leave the rest to the market.
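
As a rough sketch of these two sale modes, one could model them like this; the type, fields, and placeholder prices are editorial assumptions, not part of any Polkadot interface.

```rust
// A sketch of the two proposed primary-market sale modes. The type and its
// fields are assumptions made for illustration, not a real Polkadot interface.
enum CoreTimeSale {
    /// Sold once a month at a fixed price; one purchase covers four weeks,
    /// and every buyer pays the same price.
    Bulk { fixed_price: u128, period_weeks: u8 },
    /// Bought on demand, like the pay-as-you-go model of parallel threads;
    /// the price is whatever the market sets at that moment.
    Instant { market_price: u128 },
}

fn main() {
    let sales = [
        CoreTimeSale::Bulk { fixed_price: 100, period_weeks: 4 },
        CoreTimeSale::Instant { market_price: 3 },
    ];
    for sale in &sales {
        match sale {
            CoreTimeSale::Bulk { fixed_price, period_weeks } => {
                println!("bulk: {period_weeks} weeks at fixed price {fixed_price}")
            }
            CoreTimeSale::Instant { market_price } => {
                println!("instant: current market price {market_price}")
            }
        }
    }
}
```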

Bulk Purchase

Let's look more closely at how bulk purchase would work. Note that this is not a final proposal, but a version put forward for discussion.

Core time is sold every four weeks, each time at a fixed price for the following four weeks of core time, and everyone pays the same price.

So, what do you do with these times after you get them?

  • You can assign it all to a single parallel chain. This matches the current situation, except that today it is not done month by month; each chain simply occupies a core exclusively.

  • You can designate several parallel chains to share a core and take turns using it.

  • You can put it on the spot market.

  • You can also split it up and sell the pieces separately, possibly via a separate parallel chain using NFTs and XCM.
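
The options just listed could be modelled, very loosely, as follows; the names and fields are editorial assumptions made only to summarize the list above.

```rust
// Illustrative enum of the ways a month of bulk core time could be used,
// mirroring the options listed above. Names are assumptions for this sketch.
enum CoreTimeUse {
    AssignToParachain { para_id: u32 },          // dedicate the core to one chain
    ShareAmongParachains { para_ids: Vec<u32> }, // several chains rotate on one core
    ListOnSpotMarket,                            // resell on the instantaneous market
    SplitAndTransfer { pieces: u8 },             // divide it and sell the pieces, e.g. as NFTs via XCM
}

fn main() {
    let plan = CoreTimeUse::ShareAmongParachains { para_ids: vec![2000, 2004, 2006] };
    match plan {
        CoreTimeUse::AssignToParachain { para_id } => println!("dedicated to para {para_id}"),
        CoreTimeUse::ShareAmongParachains { para_ids } => println!("rotating paras {para_ids:?}"),
        CoreTimeUse::ListOnSpotMarket => println!("listed on the spot market"),
        CoreTimeUse::SplitAndTransfer { pieces } => println!("split into {pieces} pieces"),
    }
}
```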

Rental control in bulk purchases

So if you want to hold a core for the long term, you naturally need some way to anticipate how the price will move.

I suggest a rule like this: when new bulk core time is allocated, the broker records the price and the buyer. The following month, that buyer may purchase again at a capped price (a limit will be set on how much the price can increase).
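
Assuming the cap is expressed as a maximum percentage increase per period (the 5% figure below is made up purely for illustration), the renewal rule might be computed like this:

```rust
// Sketch of the rental-control idea: the recorded buyer may renew next month
// at no more than the recorded price plus a bounded increase. The 5% cap used
// below is a made-up number, not a proposed parameter.
fn max_renewal_price(recorded_price: u128, cap_percent: u128) -> u128 {
    recorded_price + recorded_price * cap_percent / 100
}

fn main() {
    // If last month's bulk price was 100 and the cap is 5%, the existing
    // tenant can renew for at most 105, whatever the open-market price does.
    assert_eq!(max_renewal_price(100, 5), 105);
    println!("renewal capped at {}", max_renewal_price(100, 5));
}
```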

What does this mean for existing parallel chains?

  • Existing parallel chain leases will remain unchanged. For example, if you have won two years of slots, it will continue.

  • The pricing of bulk purchases is determined by governance.

  • I personally think that we should start at a relatively low price to reduce the threshold for participation.

  • Floor prices, rental controls, and priority transfer rights provide long-term price guarantees. We currently only guarantee usage for up to two years, but in theory, under this model, it could be renewed indefinitely.

In addition, parallel chains will have more flexible block times.

Currently, the block time of parallel chains is fixed at about 12 seconds, and will come down to 6 seconds after further optimization. In the future, I think parallel chain block times will be more flexible.

Parallel chains will have a "base speed". For example, a parallel chain might share a core with one or several other parallel chains and produce a block every 12 or 18 seconds. If higher throughput is needed, it can go to the spot market, or buy additional core time over the counter (OTC), for example from an enterprise chain.

Core time can also be compressed (sacrificing bandwidth to reduce latency). Compressing multiple parallel chain blocks into one relay chain core reduces latency but adds some bandwidth cost, since you pay a fixed overhead at the start and end of each block.

Core time can also be combined (adding cores to increase performance and reduce latency). You can use two cores in the same period to get two complete parallel chain blocks. This can bring the block time down from 12 seconds to 6 seconds, or even to 3 seconds.
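
The arithmetic behind compression and combination can be sketched as follows; this is an editorial simplification of the figures above, not a scheduler specification.

```rust
// Rough arithmetic for combining and compressing core time: the effective
// block time falls as more parachain blocks fit into the same period, whether
// by compressing several blocks into one core or by adding cores.
fn effective_block_time_secs(base_block_time_secs: u32, blocks_per_core: u32, cores: u32) -> u32 {
    base_block_time_secs / (blocks_per_core * cores)
}

fn main() {
    println!("{}", effective_block_time_secs(12, 1, 1)); // 12 s: one block, one core
    println!("{}", effective_block_time_secs(12, 2, 1)); // 6 s: two blocks compressed into one core
    println!("{}", effective_block_time_secs(12, 2, 2)); // 3 s: compression plus a second core
}
```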

All of the above is meaningful for existing parallel chains:

  • Get more transaction bandwidth when you need it

  • Lower cost when you don’t need it

  • Can be a high-performance multi-core chain

  • Can be a periodically running chain

  • Can be a pure pay-as-you-go chain

  • Can be a low-latency chain (e.g. one block per second)

  • Can plan long-term capital expenditures

So how can cores be used? Core time can be split and recombined.

Simple use of cores

This picture shows the current simple use of core time. From left to right, time goes forward gradually. Each row corresponds to a core on Polkadot. Currently, each of the five parallel chains occupies one core.

In fact, it doesn’t matter which core each chain is assigned to. Parallel chains can run on any available core without affecting performance, and these cores do not have any special affinity for a particular chain.

Flexible use of cores

Flexible core usage is also called exotic scheduling.

Intervals can be divided

The owner of an interval can divide it and trade the pieces. A parallel chain can run for a while, stop processing its own transactions, and let another parallel chain run instead.

In the figure, the light blue parallel chain stops for a while and then continues; the green chain does the same.

Can cross intervals

Multiple chains can take turns running on a core to share costs. For example, you may use 2/3 of the time, and another chain may use 1/3 of the time, such as the light blue and yellow chains in the figure.

Core compression is possible

The same core can process multiple blocks at the same time: verifying several blocks on one core gives a higher block rate and lower latency.

Cores can be combined

Multiple cores can be used together for more computing power, either momentarily or over the long term.

The same ParaID, the same "task", can be assigned to multiple cores at the same time, using two cores to process two blocks within one period. In the figure, for example, the orange chain has one core it uses permanently and another core that it uses intermittently.

Possible future direction: multiple chains share one core

Two or three chains could share the same core at the same time to reduce costs without increasing latency. This is a more speculative use.

Possible future direction: mix and match the above methods

In theory, all of the above methods are composable, and mixing and matching them will result in an extremely flexible and universal computing resource.
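
One loose way to picture such a composable schedule is as a per-core timeline of slots, each idle or assigned to a task; the following toy model is an editorial illustration, not how the relay-chain scheduler is actually implemented.

```rust
// A toy model of exotic scheduling: a single core's timeline is a sequence of
// slots, each idle or assigned to some task (identified here by a para ID).
#[derive(Debug, Clone, Copy, PartialEq)]
enum Slot {
    Idle,
    Task(u32),
}

fn main() {
    // One core over eight slots: para 2000 runs, pauses while para 2004 takes
    // a turn (interleaving), sits idle for one slot, then resumes.
    let core_timeline = [
        Slot::Task(2000), Slot::Task(2000), Slot::Task(2004),
        Slot::Task(2000), Slot::Task(2000), Slot::Task(2004),
        Slot::Idle,       Slot::Task(2000),
    ];
    let used = core_timeline.iter().filter(|s| **s != Slot::Idle).count();
    println!("core utilisation: {used}/{} slots", core_timeline.len());
}
```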

Chain-centered → Application-centered

Polkadot 1.0 is a chain-centered paradigm: isolated chains send messages to each other, which is essentially similar to a set of single chains plus cross-chain bridges, except that the parallel chains are connected through the relay chain.

This leads to a fragmented user experience. Users may use an application on one chain, but they also want to use the same application on another chain; that is, they want to use applications in a multi-chain way.

However, if we have a chain-centered paradigm, we will also have a chain-centered user experience. And if an application is not chain-centered, then everything will become difficult.

In reality, if we want to fully utilize the potential of Polkadot, the application needs to be deployed across chains, and it needs to be seamless across chains, at least for users, and ideally for developers as well.

This is an artistic schematic diagram of what Polkadot looks like:

In order to quickly launch Polkadot, we chose to put many of Polkadot’s application capabilities on the relay chain. But this is actually a trade-off.

The advantage is that we can deliver many features in a short amount of time before the technical foundation is fully completed, such as great staking, governance, tokens, and identity systems.

But it also has a cost. If we tie many things to one chain, some problems will arise. For example, the relay chain cannot always use its resources for its own job—ensuring network security and message transmission. And it induces everyone to form a chain-centered mindset.

In the past, we could only focus on one chain, and when going live, we put all of Polkadot’s features on the relay chain. That was our earliest goal. Unfortunately, the relevant tools have not kept up with this era in which both applications and users are cross-chain.

Now, system-level functions are turning to the paradigm of cross-chain deployment. System chains are more common, and the relay chain handles less and less. Applications need to be able to cross these chains, and they cannot make the user experience difficult as a result.

This is a schematic diagram I just drew half an hour ago. This is a better perspective for me to understand “what Polkadot is”.

Polkadot is not actually the relay chain in the middle, and the parallel chains are around it, at least not for those who come to the Polkadot ecosystem. In fact, Polkadot should be an integrated system, a computer that runs many applications.

Yes, there are boundaries between the business logic components of different chains (i.e. parallel chains), but this may not be as important to users as we think. What is more important is that users can do what they want to do, and do it easily, clearly, and quickly.

The circular dots on the diagram are applications, and the dashed lines that divide the dots mark out "paras". I don't want to call them parallel chains, because that would lead us into the mental trap of "each parallel chain corresponds to one core". That has been Polkadot's pattern so far, but it is not the only choice.

These dots should be able to communicate with each other across the dashed lines almost as easily as they communicate within them.

XCM

How is this achieved? This is where XCM comes in.

XCM is a language, and the transport layer that actually transmits messages is called XCMP. I admit that these two names are a bit confusing.

What is XCM for? Its function is to abstract common functions in the chain. It creates a descriptive language to describe what you want to do or what you want to happen.

As long as the receiving chain interprets this message honestly, everything is fine. Unfortunately, there is no guarantee that a chain will interpret your XCM message honestly. In a trustless environment, that is not ideal.

To use a trade analogy: XCMP, as the means of transport, gives us a secure channel, so nothing is stolen along the way and what is sent is guaranteed to be received. However, it does not give us a framework for creating binding terms between the different trading parties.

Let's take a more intuitive example: the European Union. What is it? Essentially, it is an alliance that you can join, a treaty framework under which different sovereign states agree to abide by specific treaties. But it is not perfect: although there is a common judiciary that can interpret each country's laws and check that they comply, it cannot prevent a country from changing its laws so that they no longer match the EU's requirements.

In Polkadot, we face a similar problem. XCM is a language for expressing intent; WebAssembly is the language for expressing the laws that a parallel chain abides by on Polkadot. You can think of Polkadot as the European Court of Justice (ECJ): it ensures that parallel chains comply with the logic they have declared, but that does not mean this logic cannot be legitimately changed by a parallel chain so that it refuses to honour the XCM language.

XCM is a language for expressing intents, such as "I'm going to transfer assets" or "I'm going to vote." This is not a problem between trusted system chains. It does become a problem between chains with different governance and legislative processes. We can do better in the Polkadot ecosystem.
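
To illustrate the flavour of "expressing intent", here is a deliberately simplified, editorial stand-in for an XCM-style program; the real xcm crate defines much richer, versioned instruction sets, and none of the variants below are its actual API.

```rust
// A simplified stand-in for an XCM-style program: the point is to state
// intent ("what should happen"), not to call a specific function on a
// specific chain. These variants are invented for illustration.
enum Instruction {
    WithdrawAsset { asset: &'static str, amount: u128 },
    DepositAsset { beneficiary: &'static str },
    Vote { referendum: u32, aye: bool },
}

fn main() {
    // "I'm going to transfer assets": the sender states the intent; the
    // receiving chain is trusted to interpret it honestly.
    let transfer: Vec<Instruction> = vec![
        Instruction::WithdrawAsset { asset: "DOT", amount: 10 },
        Instruction::DepositAsset { beneficiary: "some-account" },
    ];
    // "I'm going to vote": the same language expresses a different intent.
    let vote: Vec<Instruction> = vec![Instruction::Vote { referendum: 42, aye: true }];
    println!("transfer: {} instructions, vote: {} instruction", transfer.len(), vote.len());
}
```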

Accord

Here I propose a new term called Accord. Accord is a voluntary contract that crosses multiple chains. It’s like saying “I voluntarily agree to follow this business logic, and anything I do will not change that.” The chain itself cannot disrupt the logic of the contract.

Polkadot ensures faithful execution of this logic. An Accord is specific to a particular function, and any chain that joins it must follow its rules, which are likewise specific to that function.

To keep the barrier to entry low, proposing an Accord is permissionless. Because joining is voluntary, an Accord affects no one until it has been passed and registered.
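
Purely as a speculative sketch of the shape such a mechanism might take, an Accord could be pictured as fixed logic plus a voluntary membership list; every name below is invented for illustration and is not SPREE or any existing Polkadot API.

```rust
// Speculative sketch of an Accord: cross-chain business logic that is
// registered once, joined voluntarily by chains (identified by para ID),
// and that no member can later alter.
struct Accord {
    logic_hash: [u8; 32], // the fixed logic (e.g. a Wasm blob) whose faithful execution Polkadot guarantees
    members: Vec<u32>,    // chains that have voluntarily opted in
}

impl Accord {
    // Joining is voluntary and has no effect on chains that have not joined.
    fn join(&mut self, para_id: u32) {
        if !self.members.contains(&para_id) {
            self.members.push(para_id);
        }
    }
}

fn main() {
    let mut accord = Accord { logic_hash: [0u8; 32], members: Vec::new() };
    accord.join(2000);
    accord.join(2004);
    println!("{} members, logic hash is {} bytes", accord.members.len(), accord.logic_hash.len());
}
```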

This diagram is not the most accurate, but it gives you a general idea. The outer circle is Polkadot, with some small dots inside it; here the diagram is laid on its side. The Accord is a separate mechanism that governs its own local area.

Accords cannot exist in every system. As far as I know, Polkadot is the only system that can support them, because it is the only one in which every shard has the same level of security and each shard can be given its own specific state transition function. These properties let Polkadot achieve a mode of collaboration that other architectures (such as cross-chain bridges) cannot.

Those who are familiar with Polkadot may have heard of “SPREE”, which is the technology that can implement Accord.

Some scenarios for using Accord

Let’s look at some possible scenarios for Accord.

One of them is the asset hub.

Currently, if two chains want to interact over an asset, they must go through a third chain, namely the asset hub chain. If one of the chains is the asset's home chain, it is slightly different. But in general, if two unrelated chains want to trade a third-party asset, you must go the extra mile to establish a path.

With Accord, you don’t have to do that. You can think of it as an embassy that exists in a generic process space, scheduled on the same core as the parallel chain at the same time, but it is not part of the parallel chain business logic, but exists separately. This is a bit like an embassy having its own country’s laws, but its geographical location is in the host country. Similarly, Accord is external business logic, but it is recognized by everyone and exists locally.

Another example is the Multicast XCM Router. It can send a message that crosses multiple chains, and it can also be done in a certain order. For example, do one operation here and another operation there, but always with my permission. This is currently not possible.

Another example is the decentralized exchange, which can set up a forward station on multiple different chains so that exchanges can occur directly locally without the need for bidirectional channels.

These are just a few examples I can think of for now, and I believe that the potential of this technology will be further developed in the future.

Project CAPI

Let’s talk briefly about the user interface – Project CAPI. Its purpose is to allow Polkadot applications that span multiple chains to have smooth, user-friendly interfaces, even when using a light client.

Hermit Relay

That is, all user-level functions on the relay chain are moved out to system chains. For example:

  • Balance

  • Staking

  • Governance and Identity

  • Leasing of cores

Ultimately, Polkadot's functionality will span multiple parallel chains, freeing up space on the relay chain.

Building a Resilient Application Platform

Finally, I want to reiterate what we are doing and why we are doing it. It’s all about resilience.

The world is always changing, but when people have clear intentions, it is important that those intentions are respected. The systems we use today are not resilient enough, and they are built on rather old-fashioned ideas.

When your system is not underpinned by cryptography and game theory, bad things happen. For example, one news story reported a large-scale network attack that leaked the data of 6 million people, roughly one in every thousand people in the world. And such things happen frequently.

So how do we build a system that is immune to these threats? The first step, of course, is to build a system that is decentralized and secured by cryptography and game theory. But what, concretely, do we need to do?

Although we advocate “decentralization” every day, if everything has to go through the same RPC provider, it cannot be truly decentralized.

Decentralization depends on several factors:

  • Light clients: Smoldot and CAPI will enable high-performance, light-client-based UIs.

  • ZK primitives: build a feature-rich, high-performance library of ZK primitives. The first library is nearly complete and will provide privacy protection for on-chain groups, including the Fellowship.

  • Sassafras consensus: a new, fork-free consensus algorithm. It improves security and randomness and allows high-performance transaction routing; it improves the performance and user experience of parallel chains, and encrypted transactions prevent front-running, with potential MEV-related revenue.

  • Mixnet / onion routing: avoid leaking IP information about transactions; a general messaging system between users, chains, and OCWs.

  • Human decentralization: bring many, diverse people into the system. Encourage participation through governance, treasury spending, salaries, and subsidies, and absorb and maintain collective knowledge.

Remember the original intention

Finally, I would like to restate our original intention. Polkadot does not exist to create one specific application; it exists to provide a platform on which many applications can be deployed and can use one another's functionality, to improve the well-being of users. Ensuring that this vision is realised as soon as possible is Polkadot's mission.

If Polkadot cannot remain resilient to changes in the world, then building it will have been meaningless. Those changes might be other ways of achieving the same goal, or threats from external organizations that have grown weary of a trustless world.

