Proficient in IPFS: System Startup, Part 1

Today we begin looking at the IPFS system from the source code, using the Node.js implementation as the example. When we write the following code:

     const { createNode } = require('ipfs')
     const node = createNode()

Although these are only two short lines of code, a very large amount of code is executed internally. Let us see how the system executes and how it initializes itself.
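Before diving into the internals, here is a minimal usage sketch. It assumes the 'ready' and 'error' events that js-ipfs of this era emits once booting finishes, and uses the callback form of the version call; treat it as an illustration rather than part of the source walkthrough.

     const { createNode } = require('ipfs')

     const node = createNode()

     node.on('error', (err) => console.error('node failed to boot', err))

     node.on('ready', () => {
       // The node has finished initializing and starting; its components can now be used.
       node.version((err, version) => {
         if (err) throw err
         console.log('js-ipfs version:', version.version)
       })
     })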

When we execute the createNode function, the function that is actually executed is in the ipfs/core/index.js file. The code is as follows:

     module.exports.createNode = (options) => {
       return new IPFS(options)
     }

The IPFS object created above represents the IPFS system. It is defined in the same file and inherits from the EventEmitter class. When we create the IPFS object, its constructor is executed; below we analyze the constructor step by step.
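As an orientation, the outline below sketches the shape of that file. It is a simplified, hedged outline rather than the full source; the constructor body is exactly what the rest of this article walks through.

     const EventEmitter = require('events')

     // Simplified outline of the IPFS class in ipfs/core/index.js (not the full source).
     class IPFS extends EventEmitter {
       constructor (options) {
         super() // step 1 below: invoke the EventEmitter constructor
         // steps 2 onward (below): set options, choose the repository,
         // create the internal objects, attach the components, then boot
       }
     }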

  1. Call the parent class constructor.
  2. Set up the options used by the system. First, define the system default options.
     const defaults = {
       init: true,
       start: true,
       EXPERIMENTAL: {},
       preload: {
         enabled: true,
         addresses: [
           '/dnsaddr/node0.preload.ipfs.io/https',
           '/dnsaddr/node1.preload.ipfs.io/https'
         ]
       }
     }

    Second, verify that the options are valid.

     options = config.validate(options || {}) 

    Next, call the mergeOptions function to merge the default options with user-specified option arguments.

    Finally, handle the init and start options as follows (a short sketch of the effect follows the code):

     if (options.init === false) {
       this._options.init = false
     }

     if (!(options.start === false)) {
       this._options.start = true
     }
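    The conditions above mean that init stays enabled unless the caller passes exactly false, and start ends up true unless the caller passes exactly false. A minimal sketch of this behaviour (not the actual js-ipfs code) is shown below; resolveLifecycleFlags is a hypothetical helper used only for illustration.

     // Hypothetical helper illustrating the init/start flag handling above.
     function resolveLifecycleFlags (options = {}) {
       const flags = { init: true, start: true }
       if (options.init === false) flags.init = false
       if (options.start === false) flags.start = false
       return flags
     }

     console.log(resolveLifecycleFlags({}))                             // { init: true, start: true }
     console.log(resolveLifecycleFlags({ init: false }))                // { init: false, start: true }
     console.log(resolveLifecycleFlags({ init: false, start: false }))  // { init: false, start: false }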

  • If the repository specified in the options is a string, or if no repository is specified, then the default repository is used; otherwise, the user-specified repository object is used directly.
     if (typeof options.repo === 'string' ||
         options.repo === undefined) {
       this._repo = defaultRepo(options.repo)
     } else {
       this._repo = options.repo
     }

    The default repository definition is located in the runtime/repo-nodejs.js file. The contents of this file are relatively simple, as follows:

     'use strict'

     const os = require('os')
     const IPFSRepo = require('ipfs-repo')
     const path = require('path')

     module.exports = (dir) => {
       const repoPath = dir || path.join(os.homedir(), '.jsipfs')

       return new IPFSRepo(repoPath)
     }

    Because we did not specify the location of the repository, it defaults to the .jsipfs directory under the user's home directory. Below, we look at the IPFSRepo object, which is located in the index.js file of the ipfs-repo project. Its constructor is as follows:

     constructor (repoPath, options) {
       assert.strictEqual(typeof repoPath, 'string', 'missing repoPath')

       this.options = buildOptions(options)
       this.closed = true
       this.path = repoPath

       this._locker = this._getLocker()

       this.root = backends.create('root', this.path, this.options)
       this.version = version(this.root)
       this.config = config(this.root)
       this.spec = spec(this.root)
       this.apiAddr = apiAddr(this.root)
     }

    In the repository constructor, the buildOptions function is called first to set the repository's options. This function merges the user-specified options with the repository's default options and handles the storageBackends and storageBackendOptions options. Because only the repository path is specified when the default repository is created, and no other options are passed, the default repository uses the default options, which are as follows.

     {
       lock: 'fs',
       storageBackends: {
         root: require('datastore-fs'),
         blocks: require('datastore-fs'),
         keys: require('datastore-fs'),
         datastore: require('datastore-level')
       },
       storageBackendOptions: {
         root: {
           extension: ''
         },
         blocks: {
           sharding: true,
           extension: '.data'
         },
         keys: {
         }
       }
     }

    From the default options we can see that the lock is file-based, the root, blocks, and keys stores use ordinary file-system storage (datastore-fs), the datastore uses LevelDB storage (datastore-level), and block data files use the .data extension.

    After processing the repository options, the constructor sets up the repository itself. It calls the function defined in the backends.js file to create the root directory object of the repository according to the option configuration; by default this points to ~/.jsipfs. The version file, configuration file, storage spec, and so on are then set up in turn.

    Note that once these actions are completed, the repository object has been initially set up, but at this point no directories or files other than the root directory have actually been created; their actual creation does not happen until initialization starts. A hedged example of supplying a custom repository follows below.
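    As an example of the repository selection described earlier, a caller can either pass a path string (which goes through defaultRepo and the default IPFSRepo options) or construct an IPFSRepo itself to override storage options. The paths and option values below are purely illustrative.

     const { createNode } = require('ipfs')
     const IPFSRepo = require('ipfs-repo')

     // Variant 1: pass a string; createNode wraps it with the default repository options.
     const nodeA = createNode({ repo: '/tmp/my-jsipfs-repo' })

     // Variant 2: build the repository yourself, overriding one of the default
     // storageBackendOptions listed above (here, disabling sharding for blocks).
     const repo = new IPFSRepo('/tmp/my-custom-repo', {
       storageBackendOptions: {
         blocks: { sharding: false, extension: '.data' }
       }
     })
     const nodeB = createNode({ repo })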

  • Next, generate the objects that are needed internally.

     this._peerInfoBook = new PeerBook()
     this._peerInfo = undefined
     this._bitswap = undefined
     this._blockService = new BlockService(this._repo)
     this._ipld = new Ipld(ipldOptions(this._blockService, this._options.ipld, this.log))
     this._preload = preload(this)
     this._mfsPreload = mfsPreload(this)

    The _bitswap object is actually generated during the start phase. The block service object holds the repository object and the bitswap object; the system handles specific blocks by calling the put, get, and delete operations of the block service object. A specific block can be handled through the local repository, or it can be handled through the bitswap object. The bitswap object is empty for now and will not be generated and set until the system startup phase; a simplified sketch of this delegation follows.
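    The sketch below illustrates the delegation just described. It is not the real ipfs-block-service implementation, only a simplified model of how get and put can fall back to the local repository when no bitswap exchange has been set.

     // Simplified model of the block-service behaviour described above.
     class SimpleBlockService {
       constructor (repo) {
         this._repo = repo
         this._bitswap = null            // stays empty until the start phase
       }

       setExchange (bitswap) {           // called when the node starts
         this._bitswap = bitswap
       }

       hasExchange () {
         return this._bitswap !== null
       }

       get (cid, callback) {
         // With bitswap set, blocks can also come from other peers;
         // otherwise only the local repository is consulted.
         if (this.hasExchange()) {
           this._bitswap.get(cid, callback)
         } else {
           this._repo.blocks.get(cid, callback)
         }
       }

       put (block, callback) {
         if (this.hasExchange()) {
           this._bitswap.put(block, callback)
         } else {
           this._repo.blocks.put(block, callback)
         }
       }
     }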

  • Then attach the core components of the system, mainly those used for node initialization, startup, stopping, and shutdown.
     this.init = components.init(this)
     this.preStart = components.preStart(this)
     this.start = components.start(this)
     this.stop = components.stop(this)
     this.shutdown = this.stop
     this.isOnline = components.isOnline(this)

    These components will be explained in detail in later articles, so we will not go into them here; a short usage sketch follows.
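    As a hedged usage example of these components, a caller that disables automatic initialization and startup can drive the lifecycle manually. The callback style below matches the js-ipfs API of this era; adjust to promises if your version supports them.

     const { createNode } = require('ipfs')

     // Create a node without initializing or starting it automatically.
     const node = createNode({ init: false, start: false })

     node.init((err) => {                // creates the repo contents (keys, config, ...)
       if (err) throw err

       node.start((err) => {             // brings up networking, bitswap, etc.
         if (err) throw err
         console.log('node is online:', node.isOnline())

         node.stop((err) => {            // same as node.shutdown
           if (err) throw err
           console.log('node stopped')
         })
       })
     })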

  • Then attach the interaction-related components, including the file-related operations.
     Object.assign(this, components.filesRegular(this))
     this.version = components.version(this)
     this.id = components.id(this)
     this.repo = components.repo(this)
     this.bootstrap = components.bootstrap(this)
     this.config = components.config(this)
     this.block = components.block(this)
     this.object = components.object(this)
     this.dag = components.dag(this)
     this.files = components.filesMFS(this)
     this.libp2p = null // assigned on start
     this.swarm = components.swarm(this)
     this.name = components.name(this)
     this.bitswap = components.bitswap(this)
     this.pin = components.pin(this)
     this.ping = components.ping(this)
     this.pingPullStream = components.pingPullStream(this)
     this.pingReadableStream = components.pingReadableStream(this)
     this.pubsub = components.pubsub(this)
     this.dht = components.dht(this)
     this.dns = components.dns(this)
     this.key = components.key(this)
     this.stats = components.stats(this)
     this.resolve = components.resolve(this)

    The components attached in the two steps above can be called directly on the node object by a program, and they correspond to the relevant commands on the command line; a short example follows.
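    For instance, the filesRegular component attached above provides the familiar add and cat calls. The sketch below assumes a node that has already started (see the lifecycle example earlier) and uses the callback form of the API from this period.

     // `node` is assumed to be a started js-ipfs node.
     node.add(Buffer.from('hello from js-ipfs'), (err, results) => {
       if (err) throw err

       const cid = results[0].hash       // e.g. 'Qm...'
       console.log('added:', cid)

       node.cat(cid, (err, data) => {
         if (err) throw err
         console.log('read back:', data.toString())
       })
     })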

  • Finally, the boot function is called to start the system; a rough sketch of its flow is given after this list.
  • At this point, the IPFS node has, broadly speaking, been started, and the various commands above can be called to work with the system.
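    The boot flow itself is the subject of a later article. As a rough, hedged sketch based only on the options handled above (the real core/boot.js also checks whether the repository already exists on disk), boot decides whether to run init, whether to run start, and then signals readiness:

     // Rough sketch of the decisions boot makes; not the actual core/boot.js code.
     function boot (node) {
       const done = (err) => err ? node.emit('error', err) : node.emit('ready')

       const startIfWanted = (err) => {
         if (err) return done(err)
         node._options.start ? node.start(done) : done()
       }

       // Run init when requested, then start when requested, then signal readiness.
       node._options.init ? node.init(startIfWanted) : startIfWanted()
     }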

