Reshaping the Boundaries of Computing: The Current Situation and Outlook of Decentralized Computing Power


Computing Power in Demand

When “Avatar” opened the era of 3D cinema in 2009 with its unmatched visuals, Weta Digital was the unsung hero behind it, handling the film’s entire visual effects rendering. In its 10,000-square-foot server farm in New Zealand, its computer cluster processed up to 1.4 million tasks per day and handled 8 GB of data per second. Even so, it took more than a month of continuous operation to complete all the rendering work.

This massive deployment of machines and capital helped “Avatar” achieve remarkable success in film history.

On January 3 of that same year, Satoshi Nakamoto mined the Bitcoin genesis block on a small server in Helsinki, Finland, receiving a block reward of 50 BTC. Ever since the birth of cryptocurrency, computing power has played a crucial role in the industry.

The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power.

—— Bitcoin Whitepaper

Under the PoW consensus mechanism, computing power contributes to the security of the chain, and the continuously rising hashrate reflects miners’ ongoing investment in hardware and their positive income expectations. The industry’s real demand for computing power has also greatly driven the development of chip manufacturers: mining chips have evolved through CPU, GPU, FPGA, and ASIC stages. Today, Bitcoin mining machines typically use ASICs (Application Specific Integrated Circuits), which execute one specific algorithm — SHA-256 — with maximum efficiency. The enormous economic rewards of Bitcoin keep driving demand for mining power, but highly specialized equipment and network effects have pushed participants, miners and machine manufacturers alike, toward capital-intensive, centralized development.
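As a rough illustration of why this workload maps so naturally onto ASICs, here is a toy Python sketch of Bitcoin-style proof-of-work: the same fixed double-SHA-256 function evaluated over and over with different nonces. The header bytes and difficulty are invented for illustration, and real block headers have a defined 80-byte layout this sketch ignores:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes the block header with SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce until the hash falls below the target.
    The entire job is one fixed function run billions of times, which is
    exactly the kind of work a single-purpose ASIC excels at."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        h = double_sha256(header + nonce.to_bytes(8, "little"))
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"toy-block-header", difficulty_bits=16)
print("found nonce:", nonce)
```

On real hardware the difficulty is vastly higher, which is why CPUs gave way to GPUs, FPGAs, and finally ASICs for this one algorithm.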


With the advent of Ethereum’s smart contracts, their programmability and composability brought wide adoption, especially in DeFi. This pushed the price of ETH steadily upward, and Ethereum, then still on PoW consensus, saw its mining difficulty gradually climb. Miners’ demand for Ethereum mining hardware grew by the day. Unlike Bitcoin’s ASICs, however, Ethereum mining required graphics processing units (GPUs) such as Nvidia’s RTX series. This made it accessible to general-purpose computing hardware, and even created a market shortage of high-end graphics cards.


Then, on November 30, 2022, OpenAI released ChatGPT, demonstrating the groundbreaking significance of AI. Users were amazed by the new experience: like a real person, it could fulfill all kinds of requests based on context. The new version launched in September this year added multimodal features such as voice and images, taking the user experience to a new stage.

On the other hand, GPT-4 involves over a trillion parameters across model pre-training and subsequent fine-tuning — the two most computationally demanding phases in the AI field. In the pre-training stage, the model learns from vast amounts of text to grasp language patterns, grammar, and contextual associations, enabling it to understand linguistic rules and generate coherent, contextually relevant text from input. After pre-training, GPT-4 is fine-tuned to better adapt to specific types of content or styles, improving performance and specialization for particular scenarios.

Because GPT uses the Transformer architecture, its self-attention mechanism lets the model attend to relationships between all parts of an input sequence simultaneously. As a result, computational requirements grow sharply, especially for long sequences, which demand massive parallel computation, storage of numerous attention scores, substantial memory, and high-speed data transfer. The dominance of this LLM architecture also signals the enormous cost of investing in large AI models: SemiAnalysis estimates that a single GPT-4 training run costs as much as $63 million. And to deliver a good interactive experience, GPT-4 also requires significant computing power for its daily operation.
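The quadratic cost of self-attention can be sketched in a few lines of NumPy. The single-head implementation below omits the learned Q/K/V projections for brevity, and the byte-count helper uses hypothetical sizes (fp16 scores, 32 heads); the point is only that the (n × n) score matrix quadruples when the sequence length doubles:

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Single-head self-attention on a (seq_len, d) input, with q = k = v = x
    for simplicity. The (seq_len, seq_len) score matrix is what makes memory
    grow quadratically with context length."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # (n, n) attention scores
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                  # row-wise softmax
    return w @ x

def score_matrix_bytes(seq_len: int, heads: int, bytes_per_el: int = 2) -> int:
    """Memory for the attention score matrices alone, one per head per layer."""
    return heads * seq_len * seq_len * bytes_per_el

out = self_attention(np.random.default_rng(0).standard_normal((128, 64)))
print(out.shape)

# Doubling the context length quadruples the score-matrix footprint:
print(score_matrix_bytes(4096, 32) / score_matrix_bytes(2048, 32))
```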

Classification of Computational Hardware

Here, we need to understand the main types of computational hardware currently available, and how CPUs, GPUs, FPGAs, and ASICs each meet different computing power requirements.

• Comparing CPU and GPU architectures: GPUs have far more cores, enabling them to handle many computing tasks simultaneously with stronger parallel capability, which is why they are widely used in machine learning and deep learning. CPUs have fewer cores and are better suited to single complex calculations or sequential tasks, but are less efficient at parallel workloads. Rendering tasks and neural network computations typically involve large amounts of repetitive, parallel arithmetic, so GPUs are more efficient and better suited than CPUs in this respect.


• Field Programmable Gate Arrays (FPGAs) are customizable logic-gate arrays — a form of semi-custom circuit within the application-specific integrated circuit (ASIC) field. Composed of numerous small processing units, an FPGA can be understood as a programmable digital logic chip. Current applications mainly focus on hardware acceleration, with other tasks still completed on the CPU, the two working together.

• An ASIC (Application Specific Integrated Circuit) is an integrated circuit designed for specific user requirements and specific electronic systems. Compared with general-purpose ICs, ASICs offer smaller size, lower power consumption, higher reliability and performance, better confidentiality, and lower cost when mass-produced. In Bitcoin mining, where only one specific computation needs to be performed, the ASIC is therefore the most suitable option. Google has also introduced the TPU (Tensor Processing Unit), an ASIC designed specifically for machine learning, though it currently offers this mainly as rented compute through Google Cloud.

• Compared to an FPGA, an ASIC is a fully specialized circuit: once the design is complete, the circuit is fixed. An FPGA integrates large arrays of basic logic gates and memory, and developers define circuits by flashing a configuration — a process that can be repeated. In today’s fast-moving AI field, however, custom or semi-custom chips cannot switch tasks or adapt to new algorithms quickly enough through configuration changes alone. The broad adaptability and flexibility of GPUs have therefore let them shine in AI. Major GPU manufacturers have also optimized for AI workloads: Nvidia, for example, launched the Tesla series and Ampere-architecture GPUs designed for deep learning, whose hardware units optimized for machine learning computation (Tensor Cores) let GPUs execute neural network forward and backward passes with higher efficiency and lower energy consumption. Nvidia also provides a wide range of tools and libraries, such as CUDA (Compute Unified Device Architecture), enabling developers to use GPUs for general-purpose parallel computing.
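A rough way to feel the CPU/GPU difference: the snippet below contrasts a per-element Python loop with a single data-parallel NumPy operation on a rendering-style workload (a simple brightness adjustment over two million values). NumPy here is only a CPU-side stand-in for the idea — on a real GPU, this pattern of identical arithmetic over millions of independent elements is exactly what the thousands of cores exploit:

```python
import time
import numpy as np

# A rendering- or neural-network-style workload: the same arithmetic applied
# independently to every element. Many simple cores (a GPU) win here; a few
# fast cores (a CPU) win on branchy, sequential code instead.
pixels = np.random.default_rng(0).random(2_000_000)

t0 = time.perf_counter()
out_loop = [p * 0.5 + 0.1 for p in pixels]   # sequential: one element at a time
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
out_vec = pixels * 0.5 + 0.1                 # one data-parallel operation
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s, vectorized: {t_vec:.4f}s")
```

The two results are identical; only the execution model differs, and the data-parallel form is the one that transfers to GPU hardware.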

Decentralized Computing Power

Decentralized computing power refers to the provision of processing capabilities through distributed computing resources. This decentralized approach usually combines blockchain technology or similar distributed ledger technologies to pool and distribute idle computing resources to users in order to achieve resource sharing, trading, and management.

Background

Strong demand for computing power hardware. The prosperity of the creator economy has led to the era of mass creation in the digital media processing field. The increasing demand for visual effects rendering has led to the emergence of specialized rendering outsourcing studios, cloud rendering platforms, and other forms. However, these methods also require significant upfront investment in computing power hardware.

Single source of computing power hardware. The development of the AI field has further intensified the demand for computing power hardware. GPU manufacturing companies, led by Nvidia, have earned a fortune in this AI computing power race. Their supply capacity has even become a crucial factor that can restrict the development of certain industries. Nvidia’s market value also surpassed $1 trillion for the first time this year.

Reliance on centralized cloud platforms for computing power provisioning. Currently, centralized cloud providers, represented by AWS, benefit the most from the surge in high-performance computing demand, having launched GPU cloud computing services. Taking AWS p4d.24xlarge as an example, renting this ML-dedicated HPC instance with 8 Nvidia A100 40GB GPUs costs $32.8 per hour, at an estimated gross margin of 61%. This has driven other cloud giants to compete and stockpile hardware to secure an advantage in the industry’s early stage.

Political and human intervention has led to imbalances in industry development. It is not difficult to see that GPU ownership is concentrated in organizations and countries with abundant funds and technology, on which high-performance computing clusters depend. This has led countries like the United States, a chip and semiconductor manufacturing powerhouse, to impose stricter restrictions on the export of AI chips in order to weaken other countries’ research capabilities in general artificial intelligence.

The allocation of computational resources is overly centralized. The initiative in the AI field is held by a few giant companies, currently represented by OpenAI, backed by Microsoft and benefiting from the abundant computational resources provided by Microsoft Azure. This makes every new product release by OpenAI reshape and integrate the AI industry, making it difficult for other teams to catch up in the field of large models.

So, in the face of high hardware costs, geographical limitations, and uneven industrial development, are there any other solutions?

This is where decentralized computational power platforms come into play, with the goal of creating an open, transparent, and self-regulating market to more effectively utilize global computing resources.

Adaptability Analysis

1. Decentralized computational power supply side

The high cost of hardware and artificial control on the supply side provide the foundation for the construction of decentralized computational power networks.

Looking at the composition of decentralized computational power, diverse computational power providers range from individual PCs and small IoT devices to data centers and IDCs. The accumulation of a large amount of computational power can provide more flexible and scalable computing solutions, helping more AI developers and organizations make more efficient use of limited resources. This can be achieved by utilizing idle computational power from individuals or organizations, but the availability and stability of this computational power are constrained by the limits set by the users themselves or the sharing limits.

A potential source of high-quality computing power is mining farms that pivoted to providing compute directly after Ethereum’s shift to PoS. For example, Coreweave, the leading integrated GPU compute provider in the United States, was formerly the largest Ethereum mining farm in North America and built on complete existing infrastructure. In addition, retired Ethereum mining rigs contain a large number of idle GPUs: reportedly, around 27 million GPUs were working on the network at the peak of Ethereum mining, and reactivating them could make them an important source of compute for decentralized computing power networks.

2. Decentralized computational power demand side

From a technical perspective, decentralized computational power resources are applicable to tasks such as graphic rendering and video transcoding, which involve less computational complexity. By combining blockchain technology and the web3 economic system to ensure the secure transmission of information and data, it brings tangible incentives to network participants and accumulates effective business models and customer bases. On the other hand, the AI field involves a large amount of parallel computing, communication between nodes, synchronization, and high requirements for network environments. Therefore, the current applications are mainly focused on fine-tuning, inference, AIGC, and other application layers.

From a business logic perspective, a market for the pure buying and selling of computing power offers little upside: the industry can only compete on supply chain and pricing, which happen to be the strengths of centralized cloud services. Such a market has a low ceiling and leaves little room for imagination. This is why networks originally focused on simple graphic rendering are seeking AI transformation. Render Network, for example, launched native integration of the Stability AI toolkit in 2023 Q1, allowing users to submit Stable Diffusion jobs and expanding its business beyond rendering into the AI field.

From the perspective of the main customer base, large B-side customers will clearly prefer centralized, integrated cloud services: they usually have ample budgets, work on underlying large models, and need compute aggregated in more efficient forms. Decentralized computing power therefore serves mainly small and medium development teams or individuals focused on model fine-tuning or application-layer development, who have no strict requirements on the form the compute takes. They are more price-sensitive, and decentralized computing power can fundamentally reduce upfront investment and thus overall usage costs: based on Gensyn’s cost estimates, V100-equivalent compute costs only $0.4 per hour, compared to $2 per hour for the AWS equivalent — an 80% reduction. Although this segment does not account for a large share of current industry spending, as AI applications continue to expand, the future market size should not be underestimated.
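Taking the cited figures at face value, the arithmetic behind the 80% claim is straightforward; the 1,000 GPU-hour job size below is a made-up example for scale:

```python
# Figures as cited from the Gensyn estimate above (per V100-equivalent hour):
decentralized_hourly = 0.40   # USD
aws_hourly = 2.00             # USD

# Relative savings of decentralized compute vs. the AWS equivalent.
savings = 1 - decentralized_hourly / aws_hourly
print(f"cost reduction: {savings:.0%}")

# Hypothetical 1,000 GPU-hour fine-tuning run under these assumptions:
hours = 1_000
print(f"AWS: ${hours * aws_hourly:,.0f} vs decentralized: "
      f"${hours * decentralized_hourly:,.0f}")
```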


From the perspective of services provided, it can be observed that the current projects resemble the concept of decentralized cloud platforms, providing a complete set of management processes from development, deployment, launch, distribution, to transactions. The benefit of this approach is to attract developers, who can use relevant tool components to simplify development and deployment, thereby improving efficiency. At the same time, it can attract users to the platform to use these complete application products, forming an ecological moat based on the network’s own computing power. However, this also presents higher requirements for project operation. It is particularly important to attract excellent developers and users and achieve retention.

Applications in different fields

1. Digital media processing

Render Network, a blockchain-based global rendering platform, aims to help creators with their digital creativity. It allows creators to expand GPU rendering work to global GPU nodes on demand, providing a faster and more cost-effective rendering capacity. After the creators confirm the rendering results, the blockchain network sends token rewards to the nodes. Compared to traditional methods of achieving visual effects, which require expensive upfront investment in establishing rendering infrastructure locally or increasing GPU expenses in purchased cloud services, Render Network offers a more affordable solution.


Since its establishment in 2017, Render Network users have rendered over 16 million frames and nearly 500,000 scenes online. Data from Render Network’s 2023 Q2 release also indicates a growing trend in rendering frame jobs and active node numbers. In addition, Render Network also launched native integration with the Stability AI toolset in 2023 Q1, allowing users to introduce Stable Diffusion jobs and expand their business beyond rendering into the AI field.


Livepeer allows network participants to contribute their own GPU computing power and bandwidth to provide real-time video transcoding services for creators. Broadcasters can send videos to Livepeer for various types of transcoding and distribution to end users, enabling the dissemination of video content. At the same time, users can easily pay for services such as video transcoding, transmission, and storage using fiat currency.


In the Livepeer network, anyone can contribute their personal computer resources (CPU, GPU, and bandwidth) to transcode and distribute videos in exchange for fees. The native token (LPT) represents the rights of network participants, with the quantity of staked tokens determining the node’s weight in the network and influencing their chances of receiving transcoding tasks. Furthermore, LPT helps guide nodes in completing assignments securely, reliably, and quickly.
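The stake-weighting idea described above can be sketched in a few lines — a toy model, not Livepeer’s actual selection algorithm, with invented node names and stake amounts — where a node’s chance of being assigned a transcoding job is proportional to its staked LPT:

```python
import random

def pick_transcoder(stakes: dict[str, float], rng: random.Random) -> str:
    """Toy stake-weighted selection: probability of winning a job is
    proportional to staked tokens."""
    nodes = list(stakes)
    weights = [stakes[n] for n in nodes]
    return rng.choices(nodes, weights=weights, k=1)[0]

rng = random.Random(42)
stakes = {"node-a": 700.0, "node-b": 200.0, "node-c": 100.0}

# Simulate many assignments; wins should track the 7:2:1 stake ratio.
wins = {n: 0 for n in stakes}
for _ in range(10_000):
    wins[pick_transcoder(stakes, rng)] += 1
print(wins)
```

The economic effect is that operators who commit more capital to the network both earn more work and have more to lose from misbehavior.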

2. Expansion in the AI Field

In the current AI ecosystem, the main participants can be roughly divided into:


Starting from the demand side, compute requirements differ markedly across stages of the industry. In underlying model development, for example, pre-training places extreme demands on parallel computing, storage, and communication, requiring large-scale compute clusters to complete the work; today that supply still comes mainly from self-built data centers and centralized cloud platforms. Subsequent stages — model fine-tuning, real-time inference, and application development — demand far less parallelism and inter-node communication, and this is where decentralized computing power can shine.

Among the projects that have garnered attention, Akash Network has made attempts in the field of decentralized computing power:

Akash Network combines different technology components to allow users to deploy and manage applications efficiently and flexibly in a decentralized cloud environment. Users can use Docker container technology to package applications and then deploy and scale them on the cloud resources provided by Akash through CloudMOS. Akash adopts a “reverse auction” mechanism, making its prices lower than traditional cloud services.
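The “reverse auction” idea can be sketched simply: providers bid to serve a deployment and the lowest offer wins — the inverse of a normal auction, which is why prices can undercut fixed cloud list prices. This is a minimal illustration with invented bids and units, not Akash’s actual matching logic:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str
    price: float   # hypothetical price units per billing period

def reverse_auction(bids: list[Bid]) -> Bid:
    """In a reverse auction, sellers compete downward: the deployment
    goes to the provider offering the lowest price."""
    return min(bids, key=lambda b: b.price)

bids = [Bid("dc-east", 1.8), Bid("home-rig", 0.9), Bid("idc-west", 1.2)]
winner = reverse_auction(bids)
print(winner.provider, winner.price)
```

In practice a tenant may also weigh provider reputation and capabilities, not price alone, but downward price competition is the core mechanism.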

Akash Network announced in August this year that it will launch its 6th mainnet upgrade, which will include support for GPUs in its cloud services, providing computing power to more AI teams in the future.


Gensyn.ai, one of the most closely watched projects in the industry this year, completed a $43 million Series A financing led by a16z. According to its public project documents, Gensyn is a Layer 1 PoS protocol on the Polkadot network focused on deep learning, aiming to push the boundaries of machine learning by creating a global supercomputing network that connects devices ranging from data centers with surplus computing power to individuals’ GPUs, custom ASICs, and SoCs.

To address some of the current issues in decentralized computing power, Gensyn draws on some new theoretical research results from academia:

1. It uses probabilistic proof-of-learning: metadata from gradient-based optimization processes is used to construct proofs that the relevant task was executed, speeding up verification;

2. The Graph-based Pinpoint Protocol (GPP) acts as a bridge, connecting the offline execution of DNNs (Deep Neural Networks) with the smart contract framework on the blockchain, resolving inconsistencies that commonly occur across hardware devices and ensuring consistent verification;

3. Similar to Truebit, it establishes a mechanism for economically rational participants to honestly execute distributed tasks through a combination of staking and punishment. This mechanism utilizes cryptography and game theory methods. This verification system is essential for maintaining the integrity and reliability of large-scale model training computations.
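The third mechanism can be illustrated with a toy payoff table for a Truebit-style verification game, in which slashing makes honest execution the economically rational strategy. All stakes, rewards, and rules below are invented for illustration and are not Gensyn’s actual parameters:

```python
def settle_task(solver_stake: float, verifier_stake: float,
                result_correct: bool, challenged: bool) -> dict[str, float]:
    """Toy payoffs for a staking/slashing verification game:
    - unchallenged correct work earns the task reward;
    - a false challenge costs the challenger their stake;
    - proven cheating transfers the solver's stake to the challenger."""
    reward = 10.0
    payouts = {"solver": 0.0, "verifier": 0.0}
    if not challenged:
        payouts["solver"] = reward
    elif result_correct:
        payouts["solver"] = reward + verifier_stake
        payouts["verifier"] = -verifier_stake
    else:
        payouts["solver"] = -solver_stake
        payouts["verifier"] = solver_stake
    return payouts

honest = settle_task(50.0, 50.0, result_correct=True, challenged=True)
cheat = settle_task(50.0, 50.0, result_correct=False, challenged=True)
print("honest solver:", honest, "| cheating solver:", cheat)
```

Under these (illustrative) numbers, cheating strictly loses money whenever a challenge occurs, while challenging honest work loses money for the challenger — so rational participants compute honestly and challenge only real faults.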

However, it is worth noting that the above mainly addresses verification of task completion, rather than the headline feature of the project documents: decentralized computing power for model training — in particular, the optimization of parallel computing, distributed hardware communication, and synchronization. Frequent inter-node communication, constrained by network latency and bandwidth, increases iteration time and communication costs; far from providing practical optimization, it can reduce training efficiency. Gensyn’s approach to node communication and parallel computing during training may involve complex coordination protocols to manage the distributed computation, but without more detailed technical information or a deeper understanding of its specific methods, the exact mechanism by which Gensyn achieves large-scale model training over its network will only become clear once the project launches.

We also take note of the Edge Matrix Computing (EMC) protocol, which applies computing power to scenarios such as AI, rendering, scientific research, and AI e-commerce through blockchain technology. It distributes tasks to different computing power nodes through elastic computing. This approach not only improves the efficiency of computing power utilization but also ensures the security of data transmission. Additionally, it provides a computing power marketplace where users can access and exchange computing resources, facilitating developers in deploying and reaching users faster. By combining with the economic form of Web3, it allows computing power providers to obtain real income based on actual user usage and protocol incentives, while AI developers benefit from lower inference and rendering costs. Below is an overview of its main components and functionalities:


EMC also plans to launch GPU-based RWA products. The key is to unlock hardware that is normally fixed inside data centers and put it into circulation as RWAs, increasing its liquidity. High-quality GPUs can serve as the underlying assets because computing power has become a valuable commodity in the AI field; the clear supply-demand imbalance there is unlikely to be resolved in the short term, keeping GPU prices relatively stable.

In addition, deploying IDC data centers to create computing clusters is a crucial part of the EMC protocol’s strategy. Operating GPUs in a unified environment enables more efficient handling of large-scale computing tasks such as pre-training models, which can cater to the needs of professional users. Furthermore, IDC data centers can host and run a large number of GPUs of the same high quality, ensuring technical specifications, and facilitating the packaging of these GPUs as RWA products for the market, opening up new DeFi possibilities.

In recent years, academia has witnessed the development and application of new theoretical frameworks and practices in the field of edge computing. Edge computing, as a supplemental and optimization approach to cloud computing, is seeing a growing number of AI applications shifting from the cloud to smaller IoT devices. These IoT devices are often compact, which has led to a preference for lightweight machine learning models that address issues such as power consumption, latency, and accuracy.


Network3 provides services to AI developers worldwide by building a dedicated AI Layer2. It optimizes and compresses AI models using AI algorithms, federated learning, edge computing, and privacy computing. By utilizing a large number of intelligent IoT hardware devices, Network3 can focus on small models and provide the corresponding computing power. Furthermore, by constructing a Trusted Execution Environment (TEE), Network3 enables users to complete training-related tasks securely by simply uploading model gradients, thereby protecting user data privacy.
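The gradient-upload pattern described above can be sketched as one FedAvg-style aggregation round: devices keep their raw data local and send only gradients, which a coordinator averages and applies to the shared model. This is a minimal illustration under assumed shapes and learning rate, not Network3’s implementation:

```python
import numpy as np

def federated_round(global_weights: np.ndarray,
                    device_gradients: list[np.ndarray],
                    lr: float = 0.1) -> np.ndarray:
    """One aggregation round: average the gradients uploaded by edge
    devices and take a gradient step on the shared model. Raw training
    data never leaves the devices."""
    avg_grad = np.mean(device_gradients, axis=0)
    return global_weights - lr * avg_grad

w = np.zeros(4)
grads = [np.array([1.0, 0.0, 2.0, 0.0]),   # gradient computed on device 1
         np.array([3.0, 0.0, 0.0, 0.0])]   # gradient computed on device 2
w = federated_round(w, grads)
print(w)
```

Real systems add weighting by local dataset size, secure aggregation, or TEEs on top of this basic loop; the privacy property comes from transmitting only the gradients.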

In conclusion:

• With the development of AI and other fields, many industries will undergo significant changes in their underlying logic. Computing power will rise in importance, and all aspects related to it will lead to extensive exploration within the industry. Decentralized computing power networks have their own advantages, as they can reduce centralization risks and serve as a complement to centralized computing power.

• The AI field itself stands at a crossroads. Teams can choose to build products on pre-trained large models or to train large models in their own domains — a choice that is often dialectical. Decentralized computing power can therefore meet different kinds of business needs. This trend is encouraging to witness: along with technological advances and algorithm iteration, breakthroughs in key areas are bound to come.

• Stay fearless, and plan for the long game.

 

Reference

https://www.semianalysis.com/p/gpt-4-architecture-infrastructure

https://medium.com/render-token/render-network-q2-highlights-part-2-network-statistics-ac5aa6bfa4e5

https://know.rendernetwork.com/

https://medium.com/livepeer-blog/an-overview-of-the-livepeer-network-and-lpt-44985f9321ff

https://mirror.xyz/1kx.eth/q0s9RCH43JCDq8Z2w2Zo6S5SYcFt9ZQaRITzR4G7a_k

https://mirror.xyz/gensyn.eth/_K2v2uuFZdNnsHxVL3Bjrs4GORu3COCMJZJi7_MxByo

https://docs.gensyn.ai/litepaper/#solution

https://a16zcrypto.com/posts/announcement/investing-in-gensyn/

https://www.pinecone.io/learn/chunking-strategies/

https://akash.network/blog/the-fast-evolving-ai-landscape/

https://aws.amazon.com/cn/blogs/compute/amazon-ec2-p4d-instances-deep-dive/

https://manual.edgematrix.pro/emc-network/what-is-emc-and-poc

https://arstechnica.com/gaming/2022/09/the-end-of-ethereum-mining-could-be-a-bonanza-for-gpu-shoppers/
