Cosmos, Polkadot vs. Layer 2 Stacks, Chapter (1): Technical Solution Overview

Introduction
Recently, various Ethereum Layer 2 solutions such as Optimism, zkSync, Polygon, Arbitrum, and StarkNet have launched their own stack solutions, aiming to provide open-source, modular codebases that allow developers to customize their own Layer 2.
It is well known that Ethereum today is notorious for its low throughput and high gas fees. Although Layer 2 solutions such as Optimism and zkSync Era have alleviated these issues, Dapps deployed on the EVM, whether on L1 or L2, still face a fundamental "compatibility" problem. This applies not only to the underlying code of Dapps needing to be compatible with the EVM, but also to Dapp sovereignty.
The first issue is at the code level. Because the EVM must accommodate every kind of application deployed on it, it is optimized for the average use case to cater to all types of users. As a result, it is not especially friendly to any particular Dapp. For example, GameFi applications may care most about speed and performance, while SocialFi users may prioritize privacy and security. Because of the EVM's one-size-fits-all nature, Dapps must sacrifice something; this is compatibility at the code level.
The second part is at the sovereignty level. Since all Dapps share the underlying infrastructure, the concepts of application governance and underlying governance have emerged. Application governance is undoubtedly subject to underlying governance. Specific requirements of some Dapps need to be supported through upgrades to the underlying EVM, resulting in a lack of sovereignty for Dapps. For example, the new features of Uniswap V4 require support for Transient Storage from the underlying EVM, which depends on the inclusion of EIP-1153 in the Cancun upgrade.
To solve these problems of Ethereum L1, low throughput and lack of sovereignty, Cosmos (2019) and Polkadot (2020) emerged. Both aim to help developers build their own customized chains, giving blockchain Dapps sovereign autonomy while achieving high-performance cross-chain interoperability, creating a fully interconnected network.
Today, after 4 years, L2s have also successively launched their own superchain network solutions, from the OP Stack, to the ZK Stack, then Polygon 2.0, Arbitrum Orbit, and finally StarkNet also launched the Stack concept.
What kind of collision and sparks will happen between the pioneer of the fully interconnected network CP (Cosmos Polkadot) and the various L2s? In order to provide a comprehensive and in-depth perspective, we will explore this topic through a series of three articles. This article, as the first chapter of this series, will summarize the various technical solutions. The second chapter will discuss the economic models and ecosystems of each solution and summarize the characteristics to consider when choosing between Layer 1 and Layer 2 Stack. In the final chapter, we will discuss how Layer 2 can develop its own superchain and summarize the entire series of articles.
I. Cosmos
Cosmos is a decentralized network of independent and parallel blockchains. By providing a universal development framework SDK, developers can easily build their own blockchains. Multiple independent and different application-specific blockchains are linked and communicate with each other, forming an interoperable and scalable fully interconnected network.
1. Structure Framework
When there are large numbers of application chains in the ecosystem, with each chain communicating and transferring tokens over the IBC protocol, the overall network becomes complex and hard to untangle, much like a spider web.
In order to solve this problem, Cosmos proposes a layered architecture that includes two types of blockchains: Hubs (central hub chains) and Zones (area chains).
Zones are regular application chains, while Hubs are blockchains designed specifically to connect Zones together, primarily serving communication between Zones. When a Zone establishes an IBC connection with a Hub, it can automatically access (i.e. send to and receive from) every other Zone connected to that Hub. This structure greatly reduces communication complexity.
It is also important to note that Cosmos and the Cosmos Hub are two completely different things. The Cosmos Hub is just one of the chains within the Cosmos ecosystem, primarily serving as the issuer of $ATOM and a communication center. You may think of the Hub as the center of the ecosystem, but in fact any chain can become a Hub. If one Hub became the center of the ecosystem, it would contradict the original intention of Cosmos, because Cosmos is fundamentally committed to the autonomy of each chain, with absolute sovereignty. If the Hub became the center of power, sovereignty would no longer be sovereignty. So when thinking about Hubs, it is important to keep this point in mind.
2. Key Technologies
2.1 IBC
IBC (Inter-Blockchain Communication) allows for the transfer of tokens and data between heterogeneous chains. In the Cosmos ecosystem, the underlying framework of the SDK is the same and must use the Tendermint consensus engine. However, heterogeneity still exists because chains within the framework may have different functionalities, use cases, and implementation details.
So how is communication achieved between chains with heterogeneous characteristics?
It only requires the consensus layer to have instant finality. Instant finality means that as long as fewer than one-third of the validators are faulty or malicious, blocks will never fork, so once a transaction is included in a block, it is final. Regardless of the differences in application cases and consensus details between heterogeneous chains, as long as their consensus layers provide instant finality, there are unified rules for interoperability between chains.
Below is a basic process for cross-chain communication, assuming we want to transfer 10 $ATOM from Chain A to Chain B:
- Tracing: Each chain runs a light node of the other chains, so each chain can verify the others.
- Bonding: First, lock the 10 $ATOM on Chain A, making them unavailable for use, and emit a locking proof.
- Relay: A relayer between Chains A and B delivers the locking proof.
- Validation: On Chain B, validate the proof against Chain A's blocks. If it is correct, 10 $ATOM are created on Chain B.
At this point, the $ATOM on chain B is not real $ATOM; it is just a voucher. The locked $ATOM on chain A cannot be used, but the voucher $ATOM on chain B can be used normally. When users redeem the vouchers on chain B, the vouchers are burned and the corresponding locked $ATOM on chain A is released.
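The lock-mint-burn-release flow described above can be sketched in a few lines of Python. This is a toy illustration, not real IBC code; the `Chain` class and function names are hypothetical, and the light-client verification step is elided.

```python
# Hypothetical sketch of an IBC-style lock-and-mint token transfer.
# Real IBC verifies a locking proof via a light client; that step is omitted.

class Chain:
    def __init__(self, name):
        self.name = name
        self.balances = {}   # account -> native token balance
        self.locked = 0      # tokens locked to back vouchers on other chains
        self.vouchers = {}   # account -> voucher balance (minted representations)

def ibc_transfer(src, dst, account, amount):
    """Lock `amount` on src, then mint an equal voucher balance on dst."""
    if src.balances.get(account, 0) < amount:
        raise ValueError("insufficient balance")
    src.balances[account] -= amount
    src.locked += amount                       # Bonding: lock on Chain A
    # A relayer carries the locking proof; dst's light client of src verifies it.
    dst.vouchers[account] = dst.vouchers.get(account, 0) + amount

def ibc_redeem(src, dst, account, amount):
    """Burn vouchers on dst, releasing the locked originals on src."""
    if dst.vouchers.get(account, 0) < amount:
        raise ValueError("insufficient vouchers")
    dst.vouchers[account] -= amount            # burn the voucher
    src.locked -= amount                       # release the escrowed tokens
    src.balances[account] = src.balances.get(account, 0) + amount

chain_a, chain_b = Chain("A"), Chain("B")
chain_a.balances["alice"] = 50
ibc_transfer(chain_a, chain_b, "alice", 10)
print(chain_a.balances["alice"], chain_a.locked, chain_b.vouchers["alice"])  # 40 10 10
```

Redeeming the vouchers reverses the flow: the voucher balance on B is burned and the escrow on A is released back to the account.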
However, the biggest challenge faced by cross-chain communication is not how to represent the data on one chain on another chain, but how to deal with situations such as chain forks and chain reorganizations.
Because each chain on Cosmos is an independent, self-governing chain with its own dedicated validator set, malicious behavior within a zone is entirely possible. For example, before chain B accepts messages from chain A, it should assess chain A's validator set and decide whether to trust that chain.

For example, suppose the small red dots in the image represent a token called ETM, and users in zones A, B, and C all want to use EVMOS to run Dapps within their own zones. Because asset transfers have been made through cross-chain communication, they have all received ETM.

If the Ethermint zone then launches a double-spending attack, zones A, B, and C will undoubtedly be affected, but the damage stops there: the remaining networks unrelated to ETM will not be attacked at all. This is also guaranteed by Cosmos, namely that even if such malicious message transmission occurs, it cannot affect the entire network.
2.2 Tendermint BFT
Cosmos adopts Tendermint BFT as its underlying consensus algorithm and consensus engine. It combines the blockchain's networking infrastructure and consensus layer into a general-purpose engine, and exposes the ABCI (Application Blockchain Interface) so that application logic written in any programming language can plug into the consensus and networking layers. Developers can therefore freely choose any language they like.
2.3 Cosmos SDK
Cosmos SDK is a modular framework launched by Cosmos, which simplifies the operation of building Dapps on the consensus layer. Developers can easily create specific applications/chains without having to rewrite the code for each module, greatly reducing development pressure. It is now also possible for developers to port applications deployed on EVM to Cosmos.
Source: https://v1.cosmos.network/intro
In addition, blockchains built using Tendermint and the Cosmos SDK are also leading the industry in new ecosystems and technologies, such as the privacy network Nym and Celestia, which provides data availability. It is precisely because of the flexibility and ease of use provided by Cosmos that developers can focus on project innovation without repetitive groundwork.
2.4 Interchain Security & Interchain Accounts
1) Interchain Security
Unlike the Ethereum ecosystem, which has L1 and L2, each application chain in the Cosmos ecosystem is a peer of the others, with no hierarchical relationship. For the same reason, interchain security is not as turnkey as on Ethereum. On Ethereum, the finality of all transactions is confirmed by Ethereum itself, so L2s inherit the underlying security. But for a self-built standalone blockchain, how should security be maintained?
Cosmos has launched Interchain Security, which essentially achieves shared security by sharing a large number of existing nodes. For example, a standalone chain can share a set of validation nodes with the Cosmos Hub to generate new blocks for the standalone chain. Because the nodes serve both the Cosmos Hub and the standalone chain, they can receive fees and rewards from both chains.
Source: https://medium.com/tokenomics-dao/token-use-cases-part-1-atom-a-true-staking-token-5fd21d41161e
As shown in the figure, the transactions originally generated within Chain X are generated and verified by the nodes of Chain X. If they share nodes with the Cosmos Hub ($ATOM), the transactions generated on Chain X will be verified and calculated by the nodes of the Hub chain to generate new blocks for Chain X.
In theory, choosing a chain with a large number of nodes and a more mature chain like the Hub chain is the preferred option for sharing security. Because if attackers want to attack such chains, they need to have a large amount of $ATOM tokens for staking, which increases the difficulty of the attack.
Moreover, the Interchain Security mechanism also greatly reduces the barriers to creating new chains. Generally speaking, if a new chain does not have particularly outstanding resources, it may take a lot of time to attract validators and cultivate an ecosystem. However, in Cosmos, because validators can be shared with the Hub chain, this greatly relieves the pressure on new chains and accelerates the development process.
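The security argument above, that borrowing the Hub's validator set raises the cost of an attack, can be illustrated numerically. This is a toy model with made-up stake figures, not actual Cosmos parameters; in Tendermint-style BFT, halting or forking a chain requires controlling more than one-third of the total voting power.

```python
# Toy illustration of interchain (shared) security: a consumer chain that
# reuses the provider chain's validator set inherits the provider's attack
# cost. Stake figures are hypothetical.

def attack_cost(validator_stakes, threshold=1/3):
    """Minimum stake an attacker must control to disrupt a BFT chain
    (more than one-third of total voting power)."""
    total = sum(validator_stakes)
    return total * threshold

hub_stakes = [1000, 800, 600, 400, 200]   # provider chain (e.g. Cosmos Hub)
new_chain_stakes = [50, 30, 20]           # standalone new chain

print(attack_cost(new_chain_stakes))  # ~33.3: cheap to attack on its own
print(attack_cost(hub_stakes))        # ~1000: cost once the Hub's set is shared
```

The same attack now requires roughly thirty times the stake, which is exactly why sharing validators with a large, mature chain like the Hub is the preferred option.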
2) Interchain Account
In the Cosmos ecosystem, because each application chain governs itself, applications cannot access each other. Therefore, Cosmos provides a cross-chain account that allows users to directly access all Cosmos chains that support IBC from the Cosmos Hub, so that users can access applications on Chain B from Chain A, achieving full chain interoperability.
II. Polkadot
Like Cosmos, Polkadot is committed to building an infrastructure that allows for the free deployment of new chains and interoperability between chains.
1. Structural Framework
1.1 Relay Chain:
The relay chain, also known as the main chain, can be understood as the sun in the solar system, the core part of the entire network, around which all the parachains rotate. As shown in the figure, a relay chain links many chains with different functions, such as transaction chains, file storage chains, and IoT chains, etc.
Source: https://medium.com/polkadot-network/polkadot-the-foundation-of-a-new-internet-e8800ec81c7
This is Polkadot’s layered scaling solution, where a relay chain is connected to another relay chain, achieving unlimited scalability. (Note: at the end of June this year, the founder of Polkadot, Gavin, proposed Polkadot 2.0, which may change the way Polkadot is understood.)
1.2 Parachains:
The relay chain has several parallel chain slots, and the parallel chains are connected to the relay chain through these slots, as shown in the figure:
Source: https://www.okx.com/cn/learn/slot-auction-cn
However, in order to obtain a slot, the participating parachains must stake their $DOT. Once a slot is obtained, the parachain can interact with the Polkadot mainnet through this slot and share security. It is worth mentioning that the number of slots is limited and will gradually increase. It is initially expected to support 100 slots, and the slots will be periodically reshuffled and allocated according to the governance mechanism to maintain the vitality of the parachain ecosystem.
Parachains that obtain slots can enjoy shared security and cross-chain liquidity in the Polkadot ecosystem. At the same time, parachains also need to provide certain returns and contributions to the Polkadot mainnet as a reward, such as handling most of the network’s transactions.
1.3 Parathreads:
Parathreads are another processing mechanism similar to parachains. The difference is that each parachain has a dedicated slot and can run continuously, while parathreads share a pool of slots and take turns using them.

When a parathread obtains the right to use a slot, it can temporarily work like a parachain, processing transactions, producing blocks, and so on. When its time period ends, the slot must be released for other parathreads to use.

Therefore, parathreads do not require long-term asset collateral; they only need to pay a fee each time they acquire a time period, so this can be described as a pay-as-you-go way to use a slot. Of course, if a parathread receives enough support and votes, it can be upgraded to a parachain and obtain a dedicated slot.

Compared to parachains, parathreads have lower costs and lower the entry barrier to Polkadot, but they cannot guarantee when slot time will be obtained and are less stable. They are therefore better suited for temporary use or for testing new chains; chains that hope to operate stably still need to upgrade to parachains.
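The pay-as-you-go slot sharing described above amounts to rotating one slot among several chains. A minimal sketch, assuming a simple round-robin rotation (real scheduling is auction- and fee-driven):

```python
# Toy model of slot sharing: parachains hold dedicated slots and run every
# block, while parathreads take turns on a shared slot. Names are illustrative.

from itertools import cycle

def schedule_shared_slot(parathreads, n_blocks):
    """Round-robin the shared slot among parathreads for n_blocks."""
    rotation = cycle(parathreads)
    return [next(rotation) for _ in range(n_blocks)]

print(schedule_shared_slot(["thread-A", "thread-B", "thread-C"], 5))
# ['thread-A', 'thread-B', 'thread-C', 'thread-A', 'thread-B']
```

Each parathread only pays when its turn comes up, which is why the model suits temporary or experimental chains rather than chains that need every block.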
1.4 Bridge:
Communication between parachains can be achieved through XCMP (as will be introduced later), and they share security and consensus. But what about heterogeneous chains?
It should be noted here that although the framework provided by Substrate makes the chains that access the Polkadot ecosystem homogeneous, as the ecosystem develops, there will inevitably be some mature and large-scale public chains that want to participate in the ecosystem. It is almost impossible for them to redeploy using Substrate. So how can communication between heterogeneous chains be realized?
Using a real-life example, if an iPhone wants to transfer files to an Android phone through a connection, a converter is needed to connect the different ports. This is the actual function of a bridge. It is a parallel chain that acts as an intermediary between the relay chain and the heterogeneous chain (external chain). It deploys smart contracts on the parallel chain and the heterogeneous chain, allowing the relay chain to interact with the external chain and achieve cross-chain functionality.
2. Key Technologies
2.1 BABE & GRANDPA
BABE (Blind Assignment for Blockchain Extension) is the block production mechanism of Polkadot. In simple terms, it randomly selects validators to produce new blocks, and each validator is assigned to a different time slot. Only the validator assigned to a particular time slot can produce blocks during that time slot.
Additional explanation:
- A time slot is a method used in a blockchain's block production mechanism to divide the time sequence. The chain's time is divided into slots that occur at fixed intervals, and each slot represents a fixed block production time.
- Within each time slot, only the validator assigned to that slot can produce blocks.
In other words, it is an exclusive time period. In time slot 1, validator 1, who is assigned to time slot 1, is responsible for block production. Each validator has a unique time slot and cannot produce blocks repeatedly.
The advantage of this approach is that random allocation maximizes fairness, as everyone has the opportunity to be assigned. And because the time slots are known, everyone can prepare in advance and unexpected block production will not occur.
Through this randomly allocated block production method, the orderly and fair operation of the Polkadot ecosystem is ensured. But how can we ensure that all nodes converge on the same chain? Next, we introduce another mechanism of Polkadot: GRANDPA.

GRANDPA is a mechanism for finalizing blocks. It solves the fork problem that may occur during block production. For example, if BABE validators 1 and 2 produce different blocks during the same time period, a fork occurs. At this point, GRANDPA comes into play and asks all validators: which chain do you think is better?

Validators examine the two chains and vote for the one they consider better. The chain that receives the most votes is confirmed by GRANDPA as the final chain, while the rejected chain is discarded.

GRANDPA therefore acts as the "grandpa" of all validators, serving as the ultimate decision maker and eliminating the fork risk that BABE may introduce. It allows the blockchain to ultimately settle on a single chain accepted by everyone.

In summary, BABE is responsible for randomly producing blocks, and GRANDPA is responsible for selecting the final chain. The two work together to ensure the safe operation of the Polkadot ecosystem.
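The division of labor between the two mechanisms can be sketched as a toy model: random slot assignment for block production, and majority voting to finalize one branch when production forks. This is an illustration of the idea only; real BABE uses VRF-based lotteries and real GRANDPA votes on chains with a two-thirds supermajority, and all names here are hypothetical.

```python
# Toy model: BABE randomly assigns block-production slots; GRANDPA picks
# one branch when a fork occurs. Not the real protocols, just their shape.

import random
from collections import Counter

def babe_assign(validators, n_slots, seed=42):
    """Randomly assign one validator to each time slot."""
    rng = random.Random(seed)
    return {slot: rng.choice(validators) for slot in range(n_slots)}

def grandpa_finalize(votes):
    """Finalize the fork branch that received the most validator votes."""
    tally = Counter(votes.values())
    return tally.most_common(1)[0][0]

validators = ["v1", "v2", "v3", "v4"]
slots = babe_assign(validators, 4)
print(all(v in validators for v in slots.values()))  # True: every slot has an owner

votes = {"v1": "chain-X", "v2": "chain-X", "v3": "chain-Y", "v4": "chain-X"}
print(grandpa_finalize(votes))  # chain-X
```

The key property the sketch preserves is the separation of concerns: slot assignment is known in advance (so block production is orderly), while finalization is a separate vote that discards losing branches.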
2.2 Substrate
Substrate is a development framework written in Rust that builds on the extensible components provided by FRAME, allowing it to support a wide variety of use cases. Any blockchain built with Substrate is natively compatible with Polkadot, can share security with other parachains and run in parallel, and also lets developers build their own consensus mechanisms, governance models, and more, evolving them continuously as needs change.
In addition, Substrate makes self-upgrades very convenient, because the runtime is an independent module that can be separated from the other components. When functionality needs updating, this running module can be swapped out directly. A parachain sharing consensus, as long as it stays in network and consensus sync with the relay chain, can update its operating logic directly without a hard fork.
2.3 XCM
If XCM is to be explained in one sentence, it is: a cross-chain communication format that allows different blockchains to interact with each other.
For example, Polkadot has many parallel chains. If parallel chain A wants to communicate with parallel chain B, it needs to package the information in XCM format. XCM is like a language protocol that everyone uses to communicate, enabling seamless communication.
XCM format (Cross-Consensus Message Format) is the standard message format used for cross-chain communication in the Polkadot ecosystem, and it has derived three different message delivery methods:
- XCMP (Cross-Chain Message Passing): under development. Messages can be transmitted directly between parachains or forwarded through the relay chain. Direct transmission is faster, while relay-chain forwarding is more scalable but adds latency.
- HRMP / XCMP-lite (Horizontal Relay-routed Message Passing): in use. A simplified interim version of XCMP in which all messages are stored on the relay chain; it currently handles most cross-chain messaging.
- VMP (Vertical Message Passing): under development. A protocol for passing messages vertically between the relay chain and parachains. Messages are stored on the relay chain and passed on by it after parsing.
For example, because XCM format contains various information, such as the amount of assets to be transferred and the receiving account, when sending a message, the HRMP channel or the relay chain will transmit this XCM format message. The receiving parallel chain will check if the format is correct, then parse the message content, and finally execute the instructions in the message, such as transferring assets to the specified account. This achieves cross-chain interaction, and the two chains successfully communicate with each other.
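The receive-side steps just described, check the format, parse the content, execute the instruction, can be sketched as follows. The field names and the single `transfer_asset` instruction are simplified placeholders, not the real XCM wire format.

```python
# Hypothetical sketch of a receiving parachain handling an XCM-like message:
# validate the format, parse it, then execute the embedded instruction.

def execute_xcm(message, balances):
    """Validate an XCM-like message, then apply its transfer instruction."""
    required = {"version", "instruction", "asset", "amount", "beneficiary"}
    if not required <= message.keys():
        raise ValueError("malformed XCM message")      # format check failed
    if message["instruction"] == "transfer_asset":
        acct = message["beneficiary"]
        balances[acct] = balances.get(acct, 0) + message["amount"]
    return balances

msg = {"version": 3, "instruction": "transfer_asset",
       "asset": "DOT", "amount": 5, "beneficiary": "alice"}
print(execute_xcm(msg, {}))  # {'alice': 5}
```

A malformed message is rejected before any state changes, mirroring the format check the receiving parachain performs before parsing.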
XCM, as a communication bridge, is very important for multi-chain ecosystems like Polkadot.
After understanding Cosmos and Polkadot, I believe you have some understanding of their visions and frameworks. Next, we will explain in detail what the Stack solutions introduced by ETH L2s are.
III. OP Stack
1. Structural Framework
According to the official documentation, OP Stack is a set of components maintained by the Optimism Collective. Today it takes the form of the software behind the Optimism Mainnet; eventually it will take the form of the Optimism Superchain and its governance. L2s developed with OP Stack can share security, a communication layer, and a common development stack, and developers can freely customize chains to serve any specific blockchain use case.
From the diagram, we can see that all OP Stack superchains communicate through the OP Bridge (the Superchain bridge) and rely on Ethereum as the underlying security and consensus layer to build a super-L2 network. The internal structure of each chain is divided into:
1) Data Availability Layer: Chains that use OP Stack can use this data availability module to retrieve their input data. Because all chains get their data from this layer, it has a significant impact on security. If a piece of data cannot be retrieved from this layer, it may not be possible to synchronize the chain.
From the diagram, it can be seen that OP Stack uses Ethereum and EIP-4844, in other words, it essentially accesses data on the Ethereum blockchain.
2) Ordering Layer: The sequencer determines how user transactions are collected and published to the data availability layer. In OP Stack, a single dedicated sequencer is used; however, this means the sequencer could delay or withhold transactions, though not for long. In the future, OP Stack will modularize the sequencer so that chains can easily change the sequencing mechanism.
In the diagram, both single-sequencer and multiple-sequencer modes can be seen. With a single sequencer, one dedicated party orders all transactions (a higher centralization risk), while with multiple sequencers the active sequencer is chosen from a predefined group of potential participants. Each chain developed on OP Stack can make its own specific choice.
3) Derivation Layer: This layer determines how raw data from the data availability layer is processed and passed to the execution layer via Ethereum's Engine API. From the diagram, OP Stack's derivation layer consists of Rollup and Indexer components.
4) Execution Layer: This layer defines the state structure within the OP Stack system. When the engine API receives input from the derivation layer, it triggers state transitions. From the diagram, it can be seen that under OP Stack, the execution layer is EVM. However, with slight modifications, it can also support other types of VMs. For example, Pontem Network plans to develop a Move VM L2 using OP Stack.
5) Settlement Layer: As the name suggests, it is used to handle the withdrawal of assets from the blockchain. However, such withdrawals require proving the state of the target chain to a certain third-party chain and processing the assets based on that state. The key is to allow the third-party chain to understand the state of the target chain.
Once a transaction is published on the corresponding data availability layer and finally confirmed, the transaction is also finally confirmed on the OP Stack chain. It cannot be modified or deleted without breaking the underlying data availability layer. The transaction may not have been accepted by the settlement layer yet, as the settlement layer needs to verify the transaction result, but the transaction itself is immutable.
This is also a mechanism for heterogeneous chains, as the settlement mechanisms of heterogeneous chains vary. Therefore, in OP Stack, the settlement layer is read-only, allowing heterogeneous chains to make decisions based on the state of OP Stack.
In this layer, OP Stack uses fault proofs from OP Rollup. Proposers publish asserted states, and if an assertion is not proven wrong within a certain period of time, it is automatically considered correct.
6) Governance Layer: From the diagram, it can be seen that OP Stack uses multi-signature + $OP tokens for governance. Multi-signature is typically used for managing upgrades to the Stack system components. When all participants have signed, the operation is executed. $OP token holders can vote and participate in governance of the community DAO.
In a sense, OP Stack combines traits of Cosmos and Polkadot: like Cosmos, it lets teams customize dedicated chains, and like Polkadot, those chains share security and consensus.
2. Key Technologies
2.1 OP Rollup
OP Rollup ensures security through data availability challenges and allows parallel execution of transactions. The specific implementation steps are as follows:
1) Users initiate transactions on L2.
2) The Sequencer batch processes the transactions and synchronizes the processed transaction data and new state root to the smart contract deployed on L1 for security verification. It is worth noting that the Sequencer generates its own state root while processing transactions and synchronizes it to L1.
3) After verification, L1 returns the data and state root to L2, and the transaction status of the user is securely verified and processed.
4) At this point, OP Rollup considers the state root generated by the Sequencer as optimistic and correct. It opens a time window for validators to challenge whether the state root generated by the Sequencer matches the transaction state root.
5) If no validator raises a challenge during the time window, the transaction is automatically considered correct. If fraud is detected and proven, the Sequencer that handled the transaction is punished accordingly.
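The optimistic settlement logic in steps 4 and 5 can be sketched as a small state machine: a batch finalizes unless a fraud proof lands inside the challenge window. All names and the window length are hypothetical.

```python
# Toy model of optimistic verification: a posted state root is assumed
# correct unless a successful challenge arrives within the window.

def settle(batch, challenges, window_blocks=100):
    """Return the batch's status once the challenge window has closed."""
    deadline = batch["posted_at"] + window_blocks
    for c in challenges:
        if c["batch"] == batch["id"] and c["block"] <= deadline and c["fraud_proven"]:
            return "reverted, sequencer slashed"
    return "finalized"   # no successful challenge in time: optimistic root stands

batch = {"id": 7, "state_root": "0xabc", "posted_at": 1000}
print(settle(batch, []))  # finalized
print(settle(batch, [{"batch": 7, "block": 1050, "fraud_proven": True}]))
# reverted, sequencer slashed
```

Note that a challenge arriving after the window (for example at block 1200 here) has no effect, which is exactly the trade-off: finality is delayed by the window, but honest batches need no proof at all.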
2.2 Cross-Chain Bridging
a) Intra-L2 Message Passing
Because OP Rollup uses fraud proofs, transactions must wait for the challenge window to close, which takes a long time and makes for a worse user experience. However, ZKPs (Zero-Knowledge Proofs) are costly and error-prone to implement, and batch ZKPs also take time to produce.
To solve the communication problem between L2 OP chains, OP Stack proposes modular proofs: using two proof systems for the same chain, developers building L2 Stacks can freely choose any bridging type.
Currently, OP provides:
- High security, high latency: fault proof (the standard high-security bridge)
- Low security, low latency: fault proof (a short challenge period for low latency)
- Low security, low latency: validity proof (using trusted chain attestors instead of ZKPs)
- High security, low latency: validity proof (once ZKPs are ready)
Developers can choose bridging focuses according to the needs of their own chains, for example, high-security bridging can be chosen for high-value assets… Diverse bridging technologies allow efficient movement of assets and data between different chains.
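The choice among the four bridge types above is essentially a lookup from a chain's (security, latency) requirements to a proof system. The sketch below makes that explicit; the profile names are illustrative summaries of the list above, not OP Stack APIs, and the two low-security low-latency options are collapsed into one entry.

```python
# Hypothetical selector mapping (security, latency) needs to a bridge profile.
# Profile strings paraphrase the four options listed above.

BRIDGES = {
    ("high", "high"): "fault proof (standard)",
    ("low", "low"): "fault proof with short challenge period, or trusted validity attestor",
    ("high", "low"): "zk validity proof (once ready)",
}

def pick_bridge(security, latency):
    """Return the bridge profile for the requested trade-off."""
    return BRIDGES.get((security, latency), "no matching profile")

print(pick_bridge("high", "high"))  # fault proof (standard)
print(pick_bridge("high", "low"))   # zk validity proof (once ready)
```

A chain carrying high-value assets would ask for ("high", "high") or ("high", "low"); a gaming chain that values responsiveness over absolute security might accept ("low", "low").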
b) Cross-Chain Transactions
Traditional cross-chain transactions complete asynchronously, which means a multi-leg transaction may only partially execute: one leg can succeed while the other fails.
To address this issue, OP Stack proposes the idea of a shared sequencer. For example, if a user wants to perform cross-chain arbitrage, chains A and B can share a sequencer and reach consensus on the timing of the transactions. Fees are only paid after the transactions are on-chain, and the chains sharing the sequencer bear the risk together.
c) Data Availability Scaling (Plasma)
Due to the limited capacity of Ethereum L1's data availability, publishing every superchain transaction to L1 does not scale.

Therefore, OP Stack proposes using a Plasma protocol to expand the amount of data accessible to OP chains, serving as an alternative data availability (DA) layer that supplements L1. By pushing transaction data availability down to the Plasma chain and recording only data commitments on L1, scalability is greatly improved.
IV. ZK Stack
1. Structure Framework
ZK Stack aims to build an open-source, composable, and modular code with the same underlying technology (ZK Rollup) as zkSync Era, allowing developers to customize their own ZK-driven L2 and L3 superchains.
Since ZK Stack is free and open-source, developers have the freedom to customize superchains according to their specific needs. Whether it is choosing a second-layer network running in parallel with zkSync Era or a third-layer network running on top of it, the possibilities for customization will be extensive.
According to Matter Labs, creators have complete autonomy to customize and shape various aspects of the chain, from choosing data availability modes to using their own project’s token for decentralized ordering.
Of course, these ZK Rollup superchains operate independently but rely on Ethereum L1 for security and validation.
Source: zkSync Document
From the figure, it can be seen that each superchain must use zkSync L2’s zkEVM engine to share security. Multiple ZKP chains run concurrently and aggregate block proofs on the settlement layer of L1, allowing for continuous expansion and the construction of more L3, L4…
2. Key Technologies
1) ZK Rollup
ZK Stack is built on ZK Rollup as the core technology, with the following main user processes:
Users submit their transactions, and the Sequencer batches the transactions into ordered batches and generates validity proofs (STARK/SNARK) on its own for state updates. The updated state is then submitted to the smart contract deployed on L1 for verification. If the verification passes, the asset state on the L1 layer is also updated. The advantage of ZK Rollup is the ability to mathematically verify through zero-knowledge proofs, resulting in higher technical and security levels.
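The batch-prove-verify flow just described can be sketched with a plain hash standing in for a real SNARK/STARK. This is only a shape-of-the-protocol illustration under that stand-in assumption: a real validity proof convinces L1 without revealing or re-executing the transactions, which a hash cannot do.

```python
# Simplified sketch of the ZK Rollup flow: the sequencer batches transactions
# and emits (new_state, proof); the L1 contract accepts only matching proofs.
# A SHA-256 commitment stands in for a real zero-knowledge proof.

import hashlib

def prove_batch(old_state, txs):
    """Sequencer side: apply txs and emit (new_state, proof) for L1."""
    new_state = old_state + sum(txs)              # toy state transition
    blob = f"{old_state}:{txs}:{new_state}".encode()
    return new_state, hashlib.sha256(blob).hexdigest()

def l1_verify(old_state, txs, new_state, proof):
    """L1 contract side: recompute the commitment and compare."""
    blob = f"{old_state}:{txs}:{new_state}".encode()
    return proof == hashlib.sha256(blob).hexdigest()

state, proof = prove_batch(100, [5, -3, 8])
print(state)                                      # 110
print(l1_verify(100, [5, -3, 8], state, proof))   # True
print(l1_verify(100, [5, -3, 8], 999, proof))     # False: tampered state rejected
```

The property the sketch preserves is that L1 only updates its view of the rollup's state when the submitted state and proof are consistent; a tampered state root fails verification.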
2) Cross-Chain Bridge
In the structure framework mentioned above, ZK Stack can achieve infinite scalability, continuously generating L3, L4, and so on. But how can interoperability between superchains be achieved?
ZK Stack introduces cross-chain bridges, which verify transactions occurring on superchains through Merkle proofs deployed on L1, essentially similar to ZK Rollup, but the transition is from L3-L2 instead of L2-L1.
ZK Stack supports smart contracts on different chains making asynchronous cross-chain calls to each other. Users can move their assets within minutes in a trustless manner without extra cost. For example, before a message can be processed on receiving chain B, sending chain A's state must be finalized up to the nearest chain that A and B share. In practice, the hyperbridge's communication delay is therefore only a matter of seconds, as chains can seal blocks every second at low cost.
Source: https://era.zksync.io/docs/reference/concepts/hyperscaling.html#l3s
Not only that, but L3 can also use compression technology to package the proofs. L2 will further expand the packaging, resulting in a greater compression ratio and lower cost (recursive compression), which can achieve trustless, fast (within minutes), and cheap (single transaction cost) cross-chain interoperability.
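The Merkle-proof verification that underpins this cross-chain messaging can be sketched as follows. This is a generic toy implementation, not zkSync's code: `merkle_root`, `merkle_proof`, and `verify` are hypothetical helpers, and a real bridge binds the root into a ZK-verified state commitment rather than trusting it directly.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over the given leaf messages."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling path for leaf `index`: list of (sibling_hash, sibling_is_left)."""
    proof, level, i = [], [h(l) for l in leaves], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = i ^ 1
        proof.append((level[sib], sib < i))
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the path from `leaf` to the root; True iff it matches."""
    node = h(leaf)
    for sib, is_left in proof:
        node = h(sib + node) if is_left else h(node + sib)
    return node == root
```

A receiving chain that trusts only the (proof-verified) root can accept any message accompanied by a valid sibling path, without seeing the full message set.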
5. Polygon 2.0
Polygon occupies a special place among L2 solutions: technically it is a standalone L1 that serves as an Ethereum sidechain. The Polygon team recently announced the Polygon 2.0 plan, which lets developers create their own ZK-powered L2 chains and unifies them through a novel cross-chain coordination protocol, making the entire network feel like a single chain.
Polygon 2.0 aims to support an unlimited number of chains, and cross-chain interactions can occur securely and instantly without additional security or trust assumptions, achieving infinite scalability and unified liquidity.
1. Structural Framework
Source: Polygon Blog
Polygon 2.0 consists of 4 protocol layers:
1) Staking Layer
The staking layer is a PoS (Proof-of-Stake) based protocol that uses staked $MATIC to secure the network, enabling decentralized governance and efficient management of validators.
As shown in the diagram, the staking layer of Polygon 2.0 introduces the Validator Manager and Chain Manager.
- Validator Manager: It manages a common pool of validators for all Polygon 2.0 chains. This includes validator registration, staking requests, unstaking requests, etc. It can be imagined as the administrative department for validators.
- Chain Manager: It manages the set of validators for each Polygon 2.0 chain, focusing more on the validation management of each chain. Unlike the Validator Manager, which is a public service, each Polygon chain has its own Chain Manager contract. It mainly focuses on the number of validators for each corresponding chain (related to the level of decentralization), additional requirements for validators, and other conditions.
The staking layer has already established the underlying architecture for the rules of each chain, allowing developers to focus on the development of their own chains.
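The division of labor between the two managers can be sketched roughly as follows. The names, rules, and data structures here are purely hypothetical stand-ins for illustration; they are not Polygon's actual contracts.

```python
class ValidatorManager:
    """Shared pool of validators for all chains (hypothetical sketch)."""
    def __init__(self):
        self.stakes = {}                      # validator -> staked amount

    def stake(self, validator, amount):
        self.stakes[validator] = self.stakes.get(validator, 0) + amount

    def unstake(self, validator, amount):
        if self.stakes.get(validator, 0) < amount:
            raise ValueError("insufficient stake")
        self.stakes[validator] -= amount

class ChainManager:
    """Per-chain manager selecting validators from the common pool
    according to chain-specific requirements (illustrative rules only)."""
    def __init__(self, pool: ValidatorManager, min_stake, max_validators):
        self.pool = pool
        self.min_stake = min_stake            # chain-specific entry bar
        self.max_validators = max_validators  # chain-specific set size

    def validator_set(self):
        eligible = [(v, s) for v, s in self.pool.stakes.items()
                    if s >= self.min_stake]
        eligible.sort(key=lambda x: -x[1])    # highest stake first
        return [v for v, _ in eligible[:self.max_validators]]
```

The point of the design: staking state lives once, in the shared pool, while each chain's manager only expresses its own selection policy on top of it.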
Source: Polygon Blog
2) Interoperability Layer
Interoperability protocols are crucial for the connectivity of the entire network. Achieving secure and seamless cross-chain messaging is something that every blockchain solution should continuously improve.
Currently, Polygon uses two contracts for support: Aggregator and Message Queue.
- Message Queue: It is mainly designed for the existing Polygon zkEVM protocol. Each Polygon chain maintains a local message queue in a fixed format, and these messages are included in the ZK proofs generated by the chain. Once the ZK proof is verified on Ethereum, any messages from the queue can be safely used by the receiving chain and address.
- Aggregator: The aggregator exists to provide more efficient service between Polygon chains and Ethereum. For example, aggregating multiple ZK proofs into one and submitting it to Ethereum for verification reduces storage costs and improves performance.
Once the ZK proof is accepted by the aggregator, the receiving chain can start optimistically accepting messages because it trusts the ZK proof, thus achieving seamless message delivery.
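The interplay of the two contracts can be sketched as below. This is an illustrative mock, not Polygon's interfaces: real aggregation recursively composes ZK proofs, whereas here a single hash merely stands in for the combined submission.

```python
import hashlib

def h(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

class MessageQueue:
    """Per-chain local queue; its digest is bound into the chain's ZK proof."""
    def __init__(self):
        self.messages = []

    def push(self, dest_chain, dest_addr, payload):
        self.messages.append((dest_chain, dest_addr, payload))

    def digest(self):
        # In the real protocol this commitment is part of the proven state.
        return h(repr(self.messages).encode())

class Aggregator:
    """Combines several chains' proofs into one L1 submission (mocked)."""
    def __init__(self):
        self.pending = []

    def collect(self, chain_id, queue_digest, proof):
        self.pending.append((chain_id, queue_digest, proof))

    def aggregate(self):
        # A real aggregator recursively composes the ZK proofs; here we
        # just commit to the collected set with one hash and clear it.
        combined = h(repr(sorted(self.pending)).encode())
        self.pending = []
        return combined
```

One aggregated submission amortizes the fixed L1 verification cost across every chain that contributed a proof, which is the efficiency gain described above.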
3) Execution Layer
The execution layer enables any Polygon chain to generate batches of ordered transactions, also known as blocks. Most blockchain networks (such as Ethereum, Bitcoin, etc.) use a similar format.
The execution layer consists of multiple components, such as:
- Consensus: Enables validators to reach consensus.
- Mempool: Collects transactions submitted by users and synchronizes them among validators. Users can also check the status of their transactions in the mempool.
- P2P: Enables validators and full nodes to discover each other and exchange messages.
- …
Given that this layer has been commoditized, existing high-performance implementations (such as Erigon) should be reused as much as possible.
4) Proof Layer
The proof layer generates proofs for each Polygon chain. It is a high-performance and flexible ZK proof protocol, typically consisting of the following components:
- Common Prover: A high-performance ZK prover that provides a clean interface, aiming to support any transaction type, i.e., state machine format.
- State Machine Constructor: It defines the framework for the state machine, used to build the initial Polygon zkEVM. This framework abstracts the complexity of the proof mechanism and simplifies it into a user-friendly, modular interface, allowing developers to customize parameters and build their own large-scale state machines.
- State Machine: It simulates the execution environment and transaction format that the prover is proving. The state machine can be implemented using the above constructor or completely customized, for example, using Rust.
2. Key Technologies
Source: Polygon Blog
1) zkEVM validium
In the Polygon 2.0 update, the team retains the original Polygon PoS chain while upgrading it to a zkEVM validium.
Source: Polygon Blog
Here’s a brief introduction: Validium and Rollup are both Layer 2 solutions designed to increase Ethereum’s transaction capacity and reduce transaction time. Compared to each other:
- Rollup bundles many transactions together and submits them as a batch to the Ethereum mainnet, utilizing Ethereum to publish transaction data and validate proofs, thus inheriting its unparalleled security and decentralization. However, publishing transaction data to Ethereum is costly and limits throughput.
- Validium does not need to submit all transaction data to the mainnet. It uses Zero-Knowledge Proofs (ZKP) to prove the validity of transactions, while providing transaction data off-chain. It also protects user privacy. However, Validium requires trust in the off-chain data providers and is relatively more centralized.
You can think of Validium as a lower-cost and more scalable Rollup. That said, the existing Polygon zkEVM already operates as a ZK Rollup and has achieved remarkable results: within just four months of its launch, its TVL surged to $33 million.
Source: DefiLlama
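To make the Rollup-versus-Validium cost intuition concrete, here is a back-of-the-envelope sketch. The gas numbers are made up for illustration and are not Polygon's real figures; the structural point is that a rollup pays for proof verification plus per-byte data publication on L1, while a validium pays only the fixed proof-verification cost.

```python
# Illustrative (made-up) unit costs; real figures vary with L1 gas prices.
CALLDATA_GAS_PER_BYTE = 16      # assumed cost of publishing one byte on L1
PROOF_VERIFY_GAS = 300_000      # assumed fixed cost of verifying one ZK proof

def rollup_batch_gas(n_txs: int, bytes_per_tx: int = 100) -> int:
    # Rollup: proof verification + full transaction data posted to L1.
    return PROOF_VERIFY_GAS + n_txs * bytes_per_tx * CALLDATA_GAS_PER_BYTE

def validium_batch_gas(n_txs: int, bytes_per_tx: int = 100) -> int:
    # Validium: proof verification only; transaction data stays off-chain,
    # so the L1 cost does not grow with batch size.
    return PROOF_VERIFY_GAS
```

Under these assumptions, the per-transaction cost of a validium batch falls toward zero as batches grow, while a rollup's per-transaction cost bottoms out at the calldata cost per transaction.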
In the long run, the cost of generating proofs for a zkEVM built on Polygon PoS may become a bottleneck for scalability. Although the Polygon team has worked to reduce batch costs, already bringing the cost of proving 10 million transactions down to $0.0259, why not use Validium, whose costs are lower still?
Polygon has released documentation stating that in future versions Validium will take over the execution previously handled by PoS, while the PoS chain itself is retained: the main role of PoS validators will be to ensure data availability and order transactions.
The upgraded zkEVM Validium will provide high scalability and low costs. It is especially suitable for applications with high transaction volume and low transaction fees, such as Gamefi, Socialfi, and DeFi. For developers, there is no need for any additional operations; just follow the mainnet updates to complete the Validium upgrade.
2) zkEVM rollup
Currently, Polygon PoS (soon to be upgraded to Polygon Validium) and Polygon zkEVM Rollup are two public networks in the Polygon ecosystem. After the upgrade, both networks will continue to use cutting-edge zkEVM technology, with one serving as an aggregator and the other as a verifier, bringing additional benefits.
Polygon zkEVM Rollup already provides the highest level of security, but at the cost of slightly higher fees and limited throughput. However, it is well-suited for applications that prioritize high-value transactions and security, such as high-value DeFi Dapps.
6. Arbitrum Orbit
As the current major L2 public chain, Arbitrum has surpassed $5.1 billion in TVL since its launch in August 2021, occupying nearly 54% of the L2 market share.
In March of this year, Arbitrum released the Orbit version, and prior to that, Arbitrum issued a series of ecological products:
- Arbitrum One: The first and core mainnet Rollup of the Arbitrum ecosystem.
- Arbitrum Nova: The second mainnet Rollup of Arbitrum, targeting projects that are cost-sensitive and have high transaction volume requirements.
- Arbitrum Nitro: A technical software stack that supports Arbitrum L2, enabling Rollups to be faster, cheaper, and more compatible with the EVM.
- Arbitrum Orbit: A development framework for creating and deploying L3 on the Arbitrum mainnet.
Today, we will focus on Arbitrum Orbit.
1. Structural Framework
Originally, if developers wanted to create a new L2 network in the Arbitrum ecosystem, they first had to submit a proposal for the Arbitrum DAO to vote on; only if it was approved could the new L2 chain be created. Developing L3s, L4s, L5s, and beyond on top of L2, however, requires no such approval: Orbit gives anyone a permissionless framework for deploying custom chains on Arbitrum L2.
Source: Whitepaper
As can be seen, Arbitrum Orbit aims to let developers customize their own Orbit L3 chains on top of Layer 2 networks such as Arbitrum One, Arbitrum Nova, or Arbitrum Goerli. Developers can customize the chain's privacy protocols, permissions, token economic model, community governance, and more, giving them maximum autonomy.
Of note, Orbit allows L3 chains to use their own native token as the unit of fee settlement, effectively letting each chain develop its own economy.
2. Key Technologies
1) Rollup & AnyTrust
These two protocols respectively support Arbitrum One and Arbitrum Nova. As previously mentioned, Arbitrum One is a core mainnet Rollup, and Arbitrum Nova is the second mainnet Rollup, which integrates the AnyTrust protocol to expedite settlement and reduce costs by introducing a “trust assumption”.
Arbitrum Rollup is an Optimistic Rollup, so we won't go into much detail here; instead we focus on the AnyTrust protocol.
The AnyTrust protocol primarily manages data availability and is overseen by a Data Availability Committee (DAC) of third-party organizations. By introducing a "trust assumption", it can greatly reduce transaction costs. The AnyTrust chain operates alongside Arbitrum One as a sidechain, with lower costs and faster transaction speeds.
So, what exactly is the “trust assumption” and why does it reduce transaction costs and require less trust?
According to Arbitrum's official documentation, the AnyTrust chain is operated by a committee of nodes under a minimal assumption about how many members are honest. For example, if the committee has 20 members, it is assumed that at least 2 are honest. Compared to BFT systems, which require 2/3 of members to be honest, AnyTrust lowers the trust threshold to a minimum.
In a transaction, because the committee promises to provide transaction data, the nodes do not need to record all the data of the L2 transactions on L1, but only need to record the hash value of the transaction batch, which greatly saves the cost of Rollup. This is also why AnyTrust Chain can reduce transaction costs.
Regarding the issue of trust: assume, as above, that at least 2 of the 20 members are honest. As long as 19 committee members sign a commitment to the correctness of the transaction data, it can be safely relied upon, because even if the one member who did not sign happens to be honest, at least 1 honest member must still be among the 19 signers.
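The honesty arithmetic above can be checked directly. This is a generic sketch of the counting argument; `anytrust_safe` is a hypothetical helper, not part of Arbitrum's code.

```python
def anytrust_safe(committee_size: int, honest_min: int, signatures: int) -> bool:
    """A commitment is safe iff the non-signers cannot account for
    all honest members, i.e. at least one signer must be honest."""
    non_signers = committee_size - signatures
    # If fewer than honest_min members abstained, then by the honesty
    # assumption at least one honest member is among the signers.
    return non_signers < honest_min

# 20 members, at least 2 honest: 19 signatures leave only 1 non-signer,
# so at least one of the 19 signers is guaranteed honest.
assert anytrust_safe(20, 2, 19)

# 18 signatures leave 2 non-signers: both honest members could have
# abstained, so nothing guarantees an honest signer.
assert not anytrust_safe(20, 2, 18)
```

In general, the minimum number of required signatures is `committee_size - honest_min + 1`, which for 20 members and 2 assumed-honest is exactly 19.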
What if members don’t sign or a significant number of members refuse to cooperate, causing the system to fail to operate normally? AnyTrust Chain can still operate, but it will fall back to the original Rollup protocol, and the data will still be published on Ethereum L1. When the committee is operating normally, the chain will switch back to a cheaper and faster mode.
Arbitrum introduced this protocol to meet the needs of applications that require high processing speed and low cost, such as the Gamefi field.
2) Nitro
Nitro is the latest version of Arbitrum's technology stack. Its central element is the prover, which compiles Arbitrum's classic interactive fraud proofs to WASM code. With all of its components complete, Arbitrum finished the upgrade at the end of August 2022, seamlessly migrating the existing Arbitrum One to Arbitrum Nitro.
Nitro has the following characteristics:
- Two-stage transaction processing: User transactions are first consolidated into a single ordered sequence, and then Nitro processes that sequence in order, producing deterministic state transitions.
- Geth: Nitro adopts the most widely used Ethereum client, Geth (go-ethereum), to support Ethereum's data structures, formats, and virtual machine, giving it better compatibility with Ethereum.
- Separate execution and proof: Nitro compiles the same source code twice, once as native code to execute transactions in Nitro nodes, and again as WASM for proving.
- OP Rollup with interactive fraud proofs: Nitro uses an Optimistic Rollup, including Arbitrum's innovative interactive fraud proofs, to settle transactions on the Ethereum base layer.
These features of Arbitrum provide technical support for L3 and L4 use cases, and Arbitrum can attract developers who seek customization to create their own custom chains.
7. Starknet Stack
Eli Ben-Sasson, co-founder of StarkWare, announced at the EthCC conference in Paris that Starknet is about to launch the Starknet Stack, allowing any application to deploy its own Starknet application chain in a permissionless manner.
Starknet's key technologies, such as STARK proofs, the Cairo programming language, and native account abstraction, have powered its rapid development. By using the Stack to customize their own Starknet application chains, developers can greatly expand network throughput, alleviate congestion on the mainnet, and achieve scalability.
Although the Starknet Stack is still a preliminary concept with no official technical documentation yet, Madara and LambdaClass are developing Starknet-compatible sequencer and stack components, and the core team is working toward the upcoming Starknet Stack as well, including components such as full nodes, execution engines, and provers.
It is worth noting that StarkNet recently submitted a proposal for a "Simple Decentralized Protocol" in hopes of changing the current situation where each L2 is operated by a single sequencer: while Ethereum itself is decentralized, L2s are not, and MEV income creates perverse incentives for a lone sequencer.
In the proposal, StarkNet lists some solutions, such as:
- L1 Staking and Leader Election: Community members can stake on Ethereum permissionlessly to join the Staker set. Based on the distribution of stake in the set and a random number from the L1 chain, a group of Stakers is randomly selected as the Leaders responsible for block production for an epoch. This not only lowers the barrier to becoming a Staker but also uses randomness to curb illicit MEV extraction.
- L2 Consensus Mechanism: Based on Tendermint, a Byzantine fault-tolerant consensus mechanism in which the Leader participates as a node. After consensus is confirmed, the Voter executes, and the Proposer calls the Prover to generate a ZKP.
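A stake-weighted random leader election of the kind described can be sketched as follows. This is hypothetical illustration only: the real proposal derives its randomness from L1, and none of the names or parameters here come from StarkNet's specification.

```python
import hashlib
import random

def elect_leaders(stakes: dict, seed: bytes, n_leaders: int) -> list:
    """Pick `n_leaders` distinct stakers, each with probability
    proportional to its share of the remaining stake.

    `seed` stands in for the L1-derived random number, so the same
    seed always yields the same leader set (verifiable by anyone)."""
    rng = random.Random(hashlib.sha256(seed).digest())
    pool = dict(stakes)                       # remaining candidates
    leaders = []
    for _ in range(min(n_leaders, len(pool))):
        total = sum(pool.values())
        r = rng.uniform(0, total)             # point on the stake line
        acc = 0.0
        for staker, weight in pool.items():
            acc += weight
            if r <= acc:
                leaders.append(staker)
                del pool[staker]              # at most one slot per epoch
                break
    return leaders
```

Because selection depends only on the stake distribution and the shared seed, any observer can recompute and verify the epoch's leaders, which is what makes the randomness resistant to sequencer favoritism.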
In addition, there are proposals for ZK proofs, L1 state updates, and other solutions. Combined with the previous major move to support community operation of Prover code without permission, this proposal by StarkNet aims to address the lack of decentralization in L2 and attempts to balance the blockchain trilemma, which is highly remarkable.
Source: https://starkware.co/resource/the-starknet-stacks-growth-spurt/
8. Conclusion
This chapter has surveyed the technical aspects of CP (Cosmos and Polkadot) and the various Layer 2 Stacks. The current Layer 2 Stack solutions effectively address Ethereum's scalability issues but also bring a series of challenges, especially around compatibility. The technology in L2 Stack solutions is not yet as mature as CP's, and even CP's design concepts from three to four years ago remain worth learning from for today's L2s. In purely technical terms, then, CP is still well ahead of Layer 2. However, advanced technology alone is not enough. In the next article, we will discuss token value and ecosystem development to weigh the advantages, disadvantages, and distinctive traits of CP and the L2 Stacks, and give readers a fuller perspective.
References:
https://medium.com/@eternal1997
https://medium.com/polkadot-network/a-brief-summary-of-everything-substrate-and-polkadot-f1f21071499d
https://tokeneconomy.co/the-state-of-crypto-interoperability-explained-in-pictures-654cfe4cc167
https://research.web3.foundation/Polkadot/overview
https://foresightnews.pro/article/detail/16271
https://v1.cosmos.network/
https://polkadot.network/
https://messari.io/report/ibc-outside-of-cosmos-the-transport-layer?referrer=all-research
https://stack.optimism.io/docs/understand/explainer/#glossary
https://www.techflowpost.com/article/detail_12231.html
https://gov.optimism.io/t/retroactive-delegate-rewards-season-3/5871
https://wiki.polygon.technology/docs/supernets/get-started/what-are-supernets/
https://polygon.technology/blog/introducing-polygon-2-0-the-value-layer-of-the-internet
https://era.zksync.io/docs/reference/concepts/hyperscaling.html#what-are-hyperchains
https://medium.com/offchainlabs