Adaptive grey wolf optimizer

Kazem Meidani, Amir Pouya Hemmasian, Seyedali Mirjalili, Amir Barati Farimani

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Swarm-based metaheuristic optimization algorithms have demonstrated outstanding performance on a wide range of optimization problems in both science and industry. Despite their merits, a major limitation of such techniques originates from non-automated parameter tuning and a lack of systematic stopping criteria, which typically lead to inefficient use of computational resources. In this work, we propose an improved version of the grey wolf optimizer (GWO), named the adaptive GWO (AGWO), which addresses these issues by adaptively tuning the exploration/exploitation parameters based on the fitness history of the candidate solutions during optimization. By controlling the stopping criterion based on the significance of fitness improvement, AGWO can automatically converge to a sufficiently good optimum in the shortest time. Moreover, we propose an extended adaptive GWO (AGWO Δ) that adjusts the convergence parameters based on a three-point fitness history. In a thorough comparative study, we show that AGWO is a more efficient optimization algorithm than GWO: it reduces the number of iterations required to reach solutions statistically equivalent to those of GWO, and it outperforms a number of existing GWO variants.
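The abstract's core ideas can be illustrated with a minimal sketch: a standard GWO loop whose convergence parameter `a` is additionally shrunk when the fitness history stalls, plus an early-stopping rule that halts once improvement stays insignificant. This is a hypothetical illustration of the general mechanism only; the adaptation rule (halving `a` on a stall), the tolerance `tol`, the `patience` window, and all other parameters here are assumptions, not the paper's actual formulas.

```python
import numpy as np

def adaptive_gwo_sketch(f, dim, bounds, n_wolves=20, max_iter=200,
                        tol=1e-6, patience=10, seed=0):
    """Hedged sketch of a fitness-history-adaptive GWO (not the paper's exact method)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    fit = np.apply_along_axis(f, 1, X)
    history = [fit.min()]  # best-so-far fitness per iteration
    stall = 0
    for t in range(max_iter):
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        # Baseline GWO decays `a` linearly from 2 to 0; here we additionally
        # halve it when the last iteration brought negligible improvement
        # (assumed adaptation rule, standing in for the fitness-history tuning).
        a = 2.0 * (1.0 - t / max_iter)
        if len(history) > 1 and history[-2] - history[-1] < tol:
            a *= 0.5  # shift toward exploitation when progress stalls
        for i in range(n_wolves):
            Xnew = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                D = np.abs(C * leader - X[i])
                Xnew += leader - A * D
            X[i] = np.clip(Xnew / 3.0, lo, hi)
        fit = np.apply_along_axis(f, 1, X)
        history.append(min(history[-1], fit.min()))
        # Stopping criterion: end the run after `patience` consecutive
        # iterations with insignificant fitness improvement (assumed rule).
        stall = stall + 1 if history[-2] - history[-1] < tol else 0
        if stall >= patience:
            break
    return history[-1], t + 1

# Example: minimizing the sphere function stops well before max_iter
# once improvement becomes insignificant.
best, iters = adaptive_gwo_sketch(lambda x: float(np.sum(x**2)), 5, (-5.0, 5.0))
```

The point of the sketch is the control flow, not the numbers: the fitness history drives both the exploration/exploitation balance and the termination decision, so the budget `max_iter` becomes an upper bound rather than the cost actually paid.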

Original language: English
Journal: Neural Computing and Applications
DOIs
Publication status: Accepted/In press - 2022

Keywords

  • Adaptive optimization
  • Fitness-based adaptive algorithm
  • Grey wolf optimizer
  • Metaheuristic optimization

