Improved monarch butterfly optimization for unconstrained global search and neural network training

Hossam Faris, Ibrahim Aljarah, Seyedali Mirjalili

Research output: Contribution to journal › Article

35 Citations (Scopus)

Abstract

This work is a seminal attempt to address the drawbacks of the recently proposed monarch butterfly optimization (MBO) algorithm. This algorithm suffers from premature convergence, which makes it less suitable for solving real-world problems. The position updating of MBO is modified to involve previous solutions in addition to the best solution obtained so far. To prove the efficiency of the Improved MBO (IMBO), a set of 23 well-known test functions is employed. The statistical results show that IMBO benefits from high local optima avoidance and fast convergence speed, which help this algorithm outperform the basic MBO and another recent variant called greedy strategy and self-adaptive crossover operator MBO (GCMBO). The results of the proposed algorithm are compared with nine other approaches in the literature for verification. The comparative analysis shows that IMBO provides very competitive results and tends to outperform current algorithms. To demonstrate the applicability of IMBO in solving challenging practical problems, it is also employed to train neural networks. The IMBO-based trainer is tested on 15 popular classification datasets obtained from the University of California at Irvine (UCI) Machine Learning Repository. The results are compared to a variety of techniques in the literature, including the original MBO and GCMBO. It is observed that IMBO improves the learning of neural networks significantly, proving the merits of this algorithm for solving challenging problems.
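The abstract states that IMBO's key change is a position update that mixes previous solutions with the best solution found so far. The exact operators and coefficients are defined in the paper itself; the sketch below only illustrates that general idea under assumed names (`alpha`, `beta`, `imbo_position_update` are all illustrative, not the paper's notation).

```python
import numpy as np

def imbo_position_update(population, best, rng, alpha=0.5, beta=0.5):
    """Hypothetical sketch of an IMBO-style position update.

    Each candidate is pulled toward the best solution found so far and is
    also influenced by a randomly chosen previous solution, which is the
    modification the abstract attributes to IMBO. Coefficients and operator
    details here are assumptions, not the paper's actual formulation.
    """
    n, dim = population.shape
    new_pop = population.copy()
    for i in range(n):
        partner = population[rng.integers(n)]  # a previous solution
        r1, r2 = rng.random(dim), rng.random(dim)
        new_pop[i] = (population[i]
                      + alpha * r1 * (best - population[i])      # pull toward best-so-far
                      + beta * r2 * (partner - population[i]))   # pull toward a previous solution
    return new_pop

# Usage: one update step on the sphere benchmark (one of the classic test functions)
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(10, 3))
fitness = (pop ** 2).sum(axis=1)
best = pop[fitness.argmin()]
pop2 = imbo_position_update(pop, best, rng)
```

Involving previous solutions in this way is one common route to the local-optima avoidance the abstract reports, since candidates are not all attracted to a single point.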

Original language: English
Pages (from-to): 445-464
Number of pages: 20
Journal: Applied Intelligence
Volume: 48
Issue number: 2
DOIs
Publication status: Published - 1 Feb 2018
Externally published: Yes

Keywords

  • Global optimization
  • MBO
  • Multilayer perceptron
  • Neural network
  • Optimization
