Proficient in IPFS: the init function in the IPFS startup process

In the previous article, we looked at the boot function in the IPFS startup process. It acts as the conductor, controlling the entire startup of the IPFS system. There we briefly mentioned that startup is divided into two main steps: initialization and startup proper. The init function carries out the initialization step; only after the system is fully initialized can it be started. The init function lives in the core/components/init.js file. Let's open that file and continue our exploration.

  1. Check whether the opts argument is a function; if it is, the caller omitted the options, so treat opts as the callback and reset opts to an empty object.

     ```js
     if (typeof opts === 'function') {
       callback = opts
       opts = {}
     }
     ```

    The opts argument of the init function is supplied by the caller. By default it contains only a bits: 2048 property; a pass property is present only if the user specifies one.
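This argument shuffling is a common Node.js idiom: an optional options object followed by a callback. A minimal standalone sketch (the function name `demoInit` is illustrative, not the actual js-ipfs code):

```javascript
// Sketch of the callback-last, optional-options idiom used here; the
// function name demoInit is illustrative, not the actual js-ipfs code.
function demoInit (opts, callback) {
  if (typeof opts === 'function') {
    callback = opts   // caller passed only a callback
    opts = {}         // fall back to empty options
  }
  callback(null, opts)
}

demoInit((err, opts) => console.log(opts))                 // {}
demoInit({ bits: 1024 }, (err, opts) => console.log(opts)) // { bits: 1024 }
```

Both call styles reach the same code path, which is why init can be invoked with or without an options object.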

  2. Next, define the done callback. Its contents are shown below; we will analyze it at the end of this article.

     ```js
     const done = (err, res) => {
       if (err) {
         self.emit('error', err)
         return callback(err)
       }

       self.preStart((err) => {
         if (err) {
           self.emit('error', err)
           return callback(err)
         }

         self.state.initialized()
         self.emit('init')
         callback(null, res)
       })
     }
     ```
  • Call the init method of the IPFS state object to record the state transition; we skip its details here.
  • If the user specified a repository object in the options, that user-supplied repository is used, and the done function is called immediately.

     ```js
     if (opts.repo) {
       self._repo = opts.repo
       return done(null, true)
     }
     ```

    By default no repository is specified, so execution continues.

  • Set defaults for a few other options.

     ```js
     opts.emptyRepo = opts.emptyRepo || false
     opts.bits = Number(opts.bits) || 2048
     opts.log = opts.log || function () {}
     ```
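These `||` defaults are worth checking in isolation: `Number(opts.bits)` yields `NaN` for non-numeric input, and `NaN` is falsy, so the 2048 fallback also kicks in for bad values. A standalone sketch (the helper name `applyDefaults` is illustrative):

```javascript
// Sketch: how the || defaulting in init behaves for various inputs.
function applyDefaults (opts) {
  opts = Object.assign({}, opts) // don't mutate the caller's object
  opts.emptyRepo = opts.emptyRepo || false
  opts.bits = Number(opts.bits) || 2048
  opts.log = opts.log || function () {}
  return opts
}

console.log(applyDefaults({}).bits)               // 2048 (missing)
console.log(applyDefaults({ bits: 'abc' }).bits)  // 2048 (NaN is falsy)
console.log(applyDefaults({ bits: '4096' }).bits) // 4096 (string coerced)
```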
  • Call the mergeOptions method to merge the default configuration with the user-specified configuration. We already met this method before startup, so we won't repeat it here.
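The effect of such an option merge can be sketched as a recursive merge in which user values win; this is a hypothetical illustration of the behavior, not the actual js-ipfs implementation:

```javascript
// Hypothetical sketch of a deep option merge: user values override
// defaults, and nested plain objects are merged recursively.
function mergeOptions (defaults, user) {
  const out = Object.assign({}, defaults)
  for (const key of Object.keys(user || {})) {
    const d = defaults ? defaults[key] : undefined
    const u = user[key]
    if (d && u && typeof d === 'object' && typeof u === 'object' &&
        !Array.isArray(d) && !Array.isArray(u)) {
      out[key] = mergeOptions(d, u) // merge nested objects
    } else {
      out[key] = u // user value wins
    }
  }
  return out
}

const merged = mergeOptions(
  { bits: 2048, repo: { path: '/tmp/a', lock: 'fs' } },
  { repo: { path: '/tmp/b' } }
)
console.log(merged.bits)      // 2048
console.log(merged.repo.path) // '/tmp/b'
console.log(merged.repo.lock) // 'fs'
```

Note that untouched defaults (bits, repo.lock) survive while the user's repo.path takes precedence.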
  • Next comes another waterfall call. The flow inside it is fairly involved and important, so we need to look at it step by step.
    • First, the exists method of the repository object is called to check whether the repository exists. Internally this method only checks whether the repository's version file exists. Its result is passed to the second function.
    • The second function checks whether, according to the previous step, the repository already exists. If it does, an error is raised and the remaining steps are skipped.
  • Then, check whether a private key was specified in the options. If the user-supplied private key is already an object, it is passed directly to the next function; if it is a string, the peerId.createFromPrivKey method is called to build a node ID from the private key, and the result is passed to the next function; if no private key was provided, the peerId.create method is called to generate a random node ID, which is then passed to the next function.

    The specific code is as follows:

       ```js
       (exists, cb) => {
         self.log('repo exists?', exists)
         if (exists === true) {
           return cb(new Error('repo already exists'))
         }

         if (opts.privateKey) {
           self.log('using user-supplied private-key')
           if (typeof opts.privateKey === 'object') {
             cb(null, opts.privateKey)
           } else {
             peerId.createFromPrivKey(Buffer.from(opts.privateKey, 'base64'), cb)
           }
         } else {
           // Generate peer identity keypair + transform to desired format + add to config.
           opts.log(`generating ${opts.bits}-bit RSA keypair...`, false)
           self.log('generating peer id: %s bits', opts.bits)
           peerId.create({ bits: opts.bits }, cb)
         }
       }
       ```
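The waterfall pattern used throughout init comes from the async library: each task's results are passed as arguments to the next task, and any error short-circuits straight to the final callback. A minimal sketch of the mechanism:

```javascript
// Minimal sketch of async's waterfall: each task receives the previous
// task's results plus a callback; an error skips straight to done.
function waterfall (tasks, done) {
  const next = (i) => (err, ...results) => {
    if (err || i === tasks.length) return done(err, ...results)
    tasks[i](...results, next(i + 1))
  }
  next(0)()
}

waterfall([
  (cb) => cb(null, false), // "does the repo exist?"
  (exists, cb) => exists ? cb(new Error('repo already exists'))
                         : cb(null, 'peer-id'),
  (peerId, cb) => cb(null, { id: peerId })
], (err, res) => {
  console.log(err, res) // null { id: 'peer-id' }
})
```

If the second task reported `exists === true`, the third task would never run and the final callback would receive the error directly.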
  • Next, the third function is processed. First, set the Identity property of the configuration object based on the generated node ID object.
  • Then, depending on whether the pass option was specified, decide whether to generate Keychain options. Since pass is not set by default, no keychain configuration is generated here.

    Finally, the init method of the repository object is called to initialize the repository.

    The specific code is as follows:

       ```js
       (peerId, cb) => {
         self.log('identity generated')
         config.Identity = {
           PeerID: peerId.toB58String(),
           PrivKey: peerId.privKey.bytes.toString('base64')
         }
         privateKey = peerId.privKey
         if (opts.pass) {
           config.Keychain = Keychain.generateOptions()
         }

         // Initialize the repository
         self._repo.init(config, cb)
       }
       ```

    In the repository's init method, a series call is used. It invokes, in order, the open method of the repository's root object and then the set methods of the config object, the spec object, and the version object, which actually initializes the repository. Once these methods have run, three files — config, datastore_spec, and version — exist under the repository directory.
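The series helper (also from the async library) runs tasks one after another without chaining results between them, collecting each result into an array. A minimal sketch, with task names that merely echo the repo-init steps described above:

```javascript
// Minimal sketch of async's series: run tasks in order, collect results,
// stop at the first error.
function series (tasks, done) {
  const results = []
  const run = (i) => {
    if (i === tasks.length) return done(null, results)
    tasks[i]((err, res) => {
      if (err) return done(err)
      results.push(res)
      run(i + 1)
    })
  }
  run(0)
}

// Repo init conceptually does: open the root, then write config,
// spec, and version one after another.
series([
  (cb) => cb(null, 'root opened'),
  (cb) => cb(null, 'config written'),
  (cb) => cb(null, 'version written')
], (err, res) => console.log(res.length)) // 3
```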

  • Next, the fourth function is processed. It simply calls the open method of the repository object to open the repository.

     ```js
     (_, cb) => self._repo.open(cb)
     ```

    When this method was called earlier, the repository had not yet been initialized, so much of its logic was skipped. This time those steps will be executed.

  • The call to the root object's open method is no different from before, but when the _isInitialized method runs, the config object, the spec object, and the version object all exist now, so no error is generated this time. Execution therefore proceeds not to the final callback but to the next function, _openLock. Running this function creates a repo.lock directory inside the repository directory, marking the repository as in use by the current process so that another IPFS process cannot operate on it at the same time.

    Below, we take a closer look at the rest of the repository's open method:

    • Save the file lock object on the repository object. The code is as follows; we can skip over it.

         ```js
         (lck, cb) => {
           log('aquired repo.lock')
           this.lockfile = lck
           cb()
         }
         ```
    • Handle the datastore and blockstore objects. First, the backends.create method is called to generate the datastore object, which is saved in the property of the same name on the repository object; this also creates the datastore directory and its files under the repository directory. The create method is simple: based on its first argument it looks up a backend in the repository options' storageBackends, then constructs it with the path given by the second argument and the configuration given by the third, creating the corresponding directory or file under that path.

    Then, the backends.create method is called again to generate the base blockstore object, creating the blocks directory under the repository directory.

    Finally, the blockstore function is called to wrap the base blockstore object according to the configuration options.

    The specific code is as follows:

       ```js
       (cb) => {
         log('creating datastore, type: js-datastore-level')
         this.datastore = backends.create('datastore', path.join(this.path, 'datastore'), this.options)
         const blocksBaseStore = backends.create('blocks', path.join(this.path, 'blocks'), this.options)
         blockstore(
           blocksBaseStore,
           this.options.storageBackendOptions.blocks,
           cb)
       }
       ```

    For the configuration options used here, see the process of generating the repository object covered in the previous article.

  • Save the blockstore object on the repository object. At the end of the previous function, the next function was called with the final blockstore object as its argument, so the blocks parameter here is that final blockstore object, and it is saved on the repository object.

       ```js
       (blocks, cb) => {
         this.blocks = blocks
         cb()
       }
       ```
  • Generate the keys object. This function is straightforward: it calls the backends.create method to generate the keys object, saves it in the property of the same name on the repository object, and creates the keys directory under the repository directory.

       ```js
       (cb) => {
         log('creating keystore')
         this.keys = backends.create('keys', path.join(this.path, 'keys'), this.options)
         cb()
       }
       ```
  • Set the repository's closed flag. This function sets the closed property of the repository object to false.
  • At this point all the business logic of the repository's open method has executed and every directory and file exists, so we reach its final callback. Since no error occurred along the way, this callback simply invokes the outer callback, returning the flow of execution to the init function.
  • Next, the fifth function is processed, once the repository's open method has completed. Here the behavior depends on whether the user set a pass: if so, a keychain object is generated and saved in the IPFS object's property of the same name; if not, the next function is called directly. The code is as follows:

       ```js
       (cb) => {
         if (opts.pass) {
           const keychainOptions = Object.assign({ passPhrase: opts.pass }, config.Keychain)
           self._keychain = new Keychain(self._repo.keys, keychainOptions)
           self._keychain.importPeer('self', { privKey: privateKey }, cb)
         } else {
           cb(null, true)
         }
       }
       ```
  • Next, the sixth function is processed. It mainly generates the IPNS object, which we will cover in a later article, so we won't go into it here. The code is as follows:

       ```js
       (_, cb) => {
         const offlineDatastore = new OfflineDatastore(self._repo)

         self._ipns = new IPNS(offlineDatastore, self._repo.datastore, self._peerInfo, self._keychain, self._options)
         cb(null, true)
       }
       ```
  • Next, the seventh function is processed. It mainly creates an empty directory object and saves all the files under the init-files/init-docs/ directory into the repository. Let's see exactly how it does that.

     ```js
     (_, cb) => {
       if (opts.emptyRepo) {
         return cb(null, true)
       }
       const tasks = [
         (cb) => {
           waterfall([
             (cb) => DAGNode.create(new UnixFs('directory').marshal(), cb),
             (node, cb) => self.dag.put(node, {
               version: 0,
               format: 'dag-pb',
               hashAlg: 'sha2-256'
             }, cb),
             (cid, cb) => self._ipns.initializeKeyspace(privateKey, cid.toBaseEncodedString(), cb)
           ], cb)
         }
       ]

       if (typeof addDefaultAssets === 'function') {
         tasks.push((cb) => addDefaultAssets(self, opts.log, cb))
       }
       parallel(tasks, (err) => {
         if (err) {
           cb(err)
         } else {
           cb(null, true)
         }
       })
     }
     ```
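Unlike waterfall, parallel (again from the async library) starts all of its tasks and calls back once every one has finished, or as soon as any one of them errors. A minimal sketch of the mechanism:

```javascript
// Minimal sketch of async's parallel: start every task, report the
// first error, otherwise call done once all have completed.
function parallel (tasks, done) {
  let pending = tasks.length
  let failed = false
  if (pending === 0) return done(null)
  tasks.forEach((task) => {
    task((err) => {
      if (failed) return
      if (err) {
        failed = true
        return done(err)
      }
      if (--pending === 0) done(null)
    })
  })
}

parallel([
  (cb) => cb(null), // e.g. create the empty root directory
  (cb) => cb(null)  // e.g. add the default assets
], (err) => console.log(err)) // null
```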

    In this code, the first task creates an empty directory object, uses it to build a DAGNode, and then calls the put method of the IPFS object's dag component to save the generated node. Internally, put calls the method of the same name on the IPFS object's ipld component, which in turn uses the blockservice object to store the data; the blockservice either writes through the local repository object or through bitswap. During initialization the bitswap object does not exist yet, so the local repository object is used to save the generated DAGNode.

  • The addDefaultAssets variable is defined at the top of the file and is a function, so the second task executed is this function. Its main purpose is to save all the files under the init-files/init-docs/ directory into the repository; this is why, after initialization finishes, you can see many files in the repository's blocks directory — they are the documents just mentioned. The process of saving files will be explained in detail later, so we skip it here.

  • Handle the callback. Once the last function of the waterfall has finished, that is, once all the tasks have executed, the done callback defined earlier is invoked. Let's look at what it does.
  • It takes two parameters, representing the error and the result of the preceding execution. On success, the preStart method of the IPFS object is called to perform the pre-start step; once pre-start succeeds, the final callback runs and the flow of execution returns to the boot function, which then calls the system's start method.

    We will save the pre-start and start methods for detailed treatment in the next article. From the walkthrough above we can see that the init function initializes the system: it initializes or generates the repository, generates the node ID, saves the init-docs documents, and arranges the pre-start/start calls. In short, it prepares everything the system needs before it officially starts.

    Author: white zone