The Babbitt Accelerator Technology Open Course is Geekhub Global Online's program of in-depth dialogues and courses. We regularly invite experienced technologists from around the world to deconstruct blockchain technology online, delivering cutting-edge, high-quality blockchain content to the Geekhub technology community. Community members can also join the live interactions to explore blockchain technology and its future.
With the rapid development of cloud computing, big data, and artificial intelligence, information and data of all kinds have grown exponentially, and expectations for data storage, retrieval, and protection have risen accordingly. Yet data leaks remain frequent. In today's complex storage environments, the problems of traditional centralized storage surface again and again, while the advantages of distributed cloud storage become ever more apparent and attract growing interest.
As a leader in blockchain-based distributed cloud storage, Storj is committed to providing secure, end-to-end, decentralized cloud storage to users worldwide. Babbitt Accelerator invited Kevin Leffew, Storj's technology evangelist and lead of its global marketing and open-source partner programs, a former member of the Microsoft Azure Blockchain Workbench team, and founder of the Wake Forest University fintech club and blockchain education network, for a special interview on blockchain-based distributed cloud storage. What follows is a transcript of that conversation.
- Babbitt Accelerator Technology Open Class | Storj solves distributed cloud storage's data problems with erasure codes
8BTC Boost: Compared with replication, erasure codes still bring problems such as added system complexity and latency on long-distance downloads. Why does Storj use an erasure-code scheme anyway? Will it be kept long-term?
Storj: First of all, replication systems and erasure-code systems face the same long-distance download challenge, but in practice the problem is smaller with erasure codes. If you are downloading a file encoded with a 10-of-30 erasure code, it can be rebuilt from any 10 of its 30 pieces. This means the network only needs to request about 13 of the 30 segments, the extra 3 compensating for any segments that arrive late. Once the fastest 10 segments have been delivered, the remaining downloads are cancelled and the file is rebuilt. In a replication system, by contrast, the network must at least try to download an extra copy of each file fragment; otherwise a single delayed fragment delays the whole file.
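The download advantage described above can be sketched numerically. The simulation below uses hypothetical per-segment latencies (not Storj measurements): with a 10-of-30 code the download finishes when the 10th-fastest segment arrives, while downloading one replica split into 10 segments must wait for its slowest segment.

```python
import random

random.seed(7)

def erasure_download_time(latencies, k):
    """With a k-of-n erasure code, the file is rebuildable as soon as
    the k fastest segments arrive; slower requests are cancelled."""
    return sorted(latencies)[k - 1]

def replica_download_time(latencies):
    """Downloading one full replica means waiting for every one of its
    segments, so the slowest segment gates the whole file."""
    return max(latencies)

# Hypothetical latencies (ms) for 30 parallel segment requests.
latencies = [random.uniform(50, 500) for _ in range(30)]

t_erasure = erasure_download_time(latencies, k=10)
t_replica = replica_download_time(latencies[:10])  # one replica in 10 segments

print(f"erasure (10-of-30):        done at {t_erasure:.0f} ms")
print(f"replica (all 10 segments): done at {t_replica:.0f} ms")
```

Because the erasure download keeps only the 10 fastest of 30 responses while the replica download is gated by the maximum of its 10, the tail latency of slow nodes hurts replication far more.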
Erasure codes greatly reduce the expansion factor of stored data while maximizing reliability. With our erasure-code scheme we can achieve eleven nines (99.999999999%) of durability with an expansion factor under 3, meaning that for every 1GB of data stored, the network uses less than 3GB of raw storage. On a network using only replication, achieving the same reliability requires a factor of 16 (and storage nodes on that network correspondingly receive roughly 5 times less reward). Suppose a storage node earns $5 for every 1TB of data uploaded. On an erasure-code network with a factor of 3, that payment is split 3 ways, so each node receives about $1.66 per TB of static data stored. On a replication network it is split 16 ways, and each node gets about $0.31. What do you think happens when running a storage node becomes unprofitable? Nodes leave the network, driving up overall maintenance costs and increasing the risk of data loss.
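The reward arithmetic in that answer reduces to one division: the per-terabyte payment is split across every node holding a piece, so per-node income shrinks as the expansion factor grows.

```python
def per_node_reward(price_per_tb, expansion_factor):
    """The upload payment is split across every node holding a piece,
    so per-node reward shrinks as the expansion factor grows."""
    return price_per_tb / expansion_factor

# Figures from the interview: $5 paid per TB of data uploaded.
erasure = per_node_reward(5.00, 3)    # expansion factor < 3 with erasure codes
replica = per_node_reward(5.00, 16)   # factor 16 with pure replication

print(f"erasure-code network: ${erasure:.2f} per node per TB")  # ~$1.67
print(f"replication network:  ${replica:.2f} per node per TB")  # ~$0.31
```

The interview rounds the first figure down to $1.66; either way, a node on the replication network earns roughly 5 times less for the same terabyte of customer data.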
Erasure codes decouple durability from the expansion factor. That is, you can tune durability without increasing overall network load or changing the amount of static data stored on the network! Erasure codes are widely used in distributed storage systems and peer-to-peer storage systems; they are more complex than replication, but the trade-off is worth it. The scheme we adopted is Reed-Solomon, which has existed since 1960 and is used everywhere from CDs, deep-space communication, and barcodes to advanced RAID applications (virtually every application you can think of).
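The k-of-n property Reed-Solomon provides can be demonstrated with a toy encoder. This sketch works over the prime field GF(257) for readability; production systems (Storj's included, presumably) use optimized GF(2^8) arithmetic, so treat this only as an illustration of the idea that any k of n shares rebuild the data.

```python
# Toy Reed-Solomon-style erasure code over the prime field GF(257).
P = 257  # prime modulus; every byte value 0..255 fits in the field

def _lagrange_eval(points, t):
    """Evaluate, at x = t, the unique degree < len(points) polynomial
    through the given (x, y) points, working mod P."""
    total = 0
    for xj, yj in points:
        num = den = 1
        for xm, _ in points:
            if xm != xj:
                num = num * (t - xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def encode(message, n):
    """Interpret the k message bytes as polynomial evaluations at
    x = 0..k-1, then emit n shares (x, p(x)) for x = 0..n-1."""
    base = list(enumerate(message))
    return [(x, _lagrange_eval(base, x)) for x in range(n)]

def decode(shares, k):
    """Rebuild the message from ANY k shares by re-evaluating the
    interpolated polynomial at x = 0..k-1."""
    subset = shares[:k]
    return [_lagrange_eval(subset, i) for i in range(k)]

message = list(b"Hello")        # k = 5 data pieces
shares = encode(message, n=9)   # 9 shares; any 5 of them rebuild the file
print(decode(shares[4:], k=5))  # uses only shares 4..8
# → [72, 101, 108, 108, 111]
```

Note the encoding is systematic: the first k shares equal the message itself, and the remaining n - k are parity, which is why durability can be tuned (raise n) without touching the stored data.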
Many products in this domain (Filecoin, MaidSafe, Siacoin, GFS, Ceph, IPFS, etc.) use replication by default, which simply means storing multiple full copies of the data on different nodes. Our network offers that too! But when you do the math, as we have while running the world's largest distributed cloud storage network, you find that replication by default doesn't make sense.

8BTC Boost: Storj Labs is committed to making Storj more decentralized and has released specific technical solutions. Can you elaborate on them?
Storj: The V3 Storj network consists of eight core components: storage nodes, peer-to-peer communication and discovery, redundancy, metadata, encryption, auditing and reputation, data repair, and payments. We discuss each of these in detail in the V3 white paper.
8BTC Boost: In the white paper, Storj Labs mentions that a trusted set of Satellites will handle user and storage node selection. Does Storj have a detailed governance plan for Satellites, to prevent a Satellite from acting maliciously or otherwise misbehaving?
Storj: Storj is open source and anyone can run a Satellite, but Storj itself will operate a specific set of Satellites, called the Tardigrade Network, which will provide customers with a guaranteed level of service for data availability and durability under a service-level agreement (SLA) comparable to AWS's. We genuinely want partners and others to run their own Satellites and storage nodes, because the network is distributed, open source, and open for anyone to operate. However, as with all open-source technology, users will be understandably cautious about connecting to a random Satellite.
8BTC Boost: With V3, Storj stopped subsidizing storage nodes. Will that cause storage nodes to leave in the short term?
Storj: As mentioned in the first answer, migrating from the replication model to the erasure-code model lowers the expansion factor compared with storage nodes running on the V2 network, which significantly increases what each V3 storage node can earn. In addition, we have a waiting list of nearly 10,000 people interested in joining the network, sharing their hard drive capacity and bandwidth to earn Storj incentives. As we bring customers onto the new V3 network, payouts will grow, and we are confident the network will scale as quickly as we need it to.
In V2 we found that subsidizing nodes created a perverse incentive: operators spun up large numbers of nodes just to collect the base incentive, essentially a Sybil attack on the network. In V3 we are very deliberate about the incentive model and the economic game theory behind our methods, optimizing for good behavior even under adverse conditions.
8BTC Boost: There are four user roles in Storj: end users, storage node operators, demand providers, and network operators. How does Storj motivate each role to keep contributing to the network, and how does it prevent them from acting maliciously?
Storj: Unlike centralized solutions such as Amazon S3, Storj runs in an untrusted environment where individual storage providers are not necessarily trustworthy. Storj runs on the public Internet, and anyone can register a storage node. We use the BAR (Byzantine, Altruistic, Rational) model to discuss the types of participants in the network. A Byzantine node may violate the protocol for any reason: nodes that have been compromised and nodes that actively break the rules are both examples. In general, a Byzantine node is a bad actor, or a node optimizing some utility function unrelated to the one the protocol rewards.
Hardware failures aside, altruistic nodes are good participants: they follow the protocol even when deviating would be the rational choice. Rational nodes are neutral participants that participate or deviate according to their own best interest. Some distributed storage systems (e.g., data-center-based cloud object stores) effectively operate in an environment where every node can be assumed altruistic. Storj, by contrast, runs with each node in its own independent operating environment; there, most storage nodes are rational and a few are Byzantine (Storj assumes no altruistic nodes). We must design incentives so that the rational majority behaves as closely as possible to how altruistic nodes would, and so that the impact of Byzantine behavior is minimized or eliminated.
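The design goal for rational nodes can be made concrete with a payoff comparison. The model below is purely illustrative (none of the numbers or mechanisms come from Storj): a rational node cheats if deleting the data and gambling on audits pays better than honest storage, so audits and penalties must tip the expected value toward honesty.

```python
from dataclasses import dataclass

@dataclass
class Incentives:
    """Hypothetical payoff model for a rational storage node."""
    storage_reward: float  # payout for verifiably storing the data
    storage_cost: float    # cost of actually keeping the data on disk
    audit_prob: float      # chance a cheating node is caught per period
    penalty: float         # escrow/reputation loss when caught

    def honest_payoff(self):
        return self.storage_reward - self.storage_cost

    def cheating_payoff(self):
        # Delete the data (save the cost), keep the reward unless caught.
        return (1 - self.audit_prob) * self.storage_reward \
               - self.audit_prob * self.penalty

    def rational_node_is_honest(self):
        """A rational node mimics an altruistic one whenever honesty
        dominates the expected cheating payoff."""
        return self.honest_payoff() >= self.cheating_payoff()

no_audits = Incentives(storage_reward=5.0, storage_cost=1.0,
                       audit_prob=0.0, penalty=50.0)
audited = Incentives(storage_reward=5.0, storage_cost=1.0,
                     audit_prob=0.2, penalty=50.0)
print(no_audits.rational_node_is_honest(), audited.rational_node_is_honest())
# → False True
```

Without audits, cheating strictly dominates (keep the reward, skip the cost); with even a modest audit probability and a meaningful penalty, honesty becomes the rational strategy, which is the alignment the answer above describes.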
8BTC Boost: Storj currently uses an ERC20 token as its incentive. Will you consider moving to your own native token later? If not, what factors drive that decision?
Storj: There are many reasons Storj uses an ERC20 token. First, under the ERC20 model the token's security is tightly bound to the security of the Ethereum blockchain, making a 51% attack very difficult. In addition, wallet vendors, exchanges, DeFi innovators, and others widely support the ERC20 standard, which makes it easier for third parties to adopt Storj and integrate it into their platforms. If you look at platforms that develop and run their own chains, you will find that most of their development effort goes into supporting, mining, and maintaining that blockchain, and such a chain is rarely as secure as Ethereum. Above all, we are a cloud storage company: Storj mainly uses blockchain technology as a means of value transfer, so the performance of the token's blockchain has little to do with the performance of the storage network itself. That said, we are testing layer-2 solutions such as Raiden and micropayment channels to reduce the cost of running the network.
8BTC Boost: The Storj V3 white paper mentions compatibility with Amazon S3. What considerations led to that for the network as a whole, and does it bring a risk of centralization?
Storj: Compatibility with Amazon S3 simply means compatibility with the open S3 API standard. It does not mean we use Amazon data centers or any other Amazon infrastructure. Developers tend to choose the path of least resistance, so S3 compatibility lets developers, DevOps teams, and other Amazon S3 users easily migrate to the decentralized platform and enjoy the security, performance, and economic benefits of the Storj and Tardigrade networks. To a large extent we are a competitor to Amazon S3: by offering a simple migration path through an S3-compatible gateway and competing on the fundamentals (price, performance, security), we make moving a project to our platform simple and attractive.
Live broadcast notice: a Storj technology sharing session will be held on May 8; scan the QR code to join.