TY - JOUR
T1 - Adaptive grey wolf optimizer
AU - Meidani, Kazem
AU - Hemmasian, Amir Pouya
AU - Mirjalili, Seyedali
AU - Barati Farimani, Amir
N1 - Funding Information:
This work is supported by a start-up fund provided by CMU Mechanical Engineering, USA, and by funding from the National Science Foundation (CBET-1953222), United States.
Publisher Copyright:
© 2022, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
PY - 2022
Y1 - 2022
AB - Swarm-based metaheuristic optimization algorithms have demonstrated outstanding performance on a wide range of optimization problems in science and industry. Despite their merits, a major limitation of such techniques is non-automated parameter tuning and the lack of systematic stopping criteria, which typically lead to inefficient use of computational resources. In this work, we propose an improved version of the grey wolf optimizer (GWO), named adaptive GWO (AGWO), which addresses these issues by adaptively tuning the exploration/exploitation parameters based on the fitness history of the candidate solutions during optimization. By controlling the stopping criteria based on the significance of the fitness improvement, AGWO can automatically converge to a sufficiently good optimum in the shortest time. Moreover, we propose an extended adaptive GWO (AGWO-Δ) that adjusts the convergence parameters based on a three-point fitness history. In a thorough comparative study, we show that AGWO is more efficient than GWO, reducing the number of iterations required to reach solutions statistically equivalent to those of GWO, and that it outperforms a number of existing GWO variants.
KW - Adaptive optimization
KW - Fitness-based adaptive algorithm
KW - Grey wolf optimizer
KW - Metaheuristic optimization
UR - http://www.scopus.com/inward/record.url?scp=85122652282&partnerID=8YFLogxK
U2 - 10.1007/s00521-021-06885-9
DO - 10.1007/s00521-021-06885-9
M3 - Article
AN - SCOPUS:85122652282
SN - 0941-0643
JO - Neural Computing and Applications
JF - Neural Computing and Applications
ER -
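
The abstract describes the mechanism only at a high level, so the following Python sketch is a hedged illustration of the idea rather than the paper's method: a standard GWO loop whose convergence parameter `a` is adapted from the best-fitness history, with a stopping rule that fires when the fitness improvement becomes insignificant. The function name `agwo_sketch`, the multiplicative 0.95/1.05 update of `a`, and the `tol`/`patience` thresholds are all assumptions introduced here for illustration; the paper's exact update formulas differ.

```python
import numpy as np

def agwo_sketch(f, dim, bounds, n_wolves=20, max_iter=500,
                tol=1e-6, patience=20):
    """Minimal adaptive-GWO-style loop (illustrative only).

    The 0.95/1.05 update of `a` and the `tol`/`patience` stopping
    rule are assumptions standing in for the paper's actual
    fitness-history-based formulas.
    """
    lo, hi = bounds
    X = np.random.uniform(lo, hi, (n_wolves, dim))
    fit = np.apply_along_axis(f, 1, X)
    a = 2.0          # standard GWO starts at a = 2 and shrinks toward 0
    best_hist = []   # best-fitness history driving the adaptation
    stall = 0

    for _ in range(max_iter):
        order = np.argsort(fit)
        alpha, beta, delta = X[order[:3]]   # three leading wolves
        best_hist.append(fit[order[0]])

        # Adapt `a` from the fitness history (assumed rule): exploit
        # (shrink a) while the best fitness keeps improving, explore
        # (grow a, capped at 2) once progress stalls. The AGWO-Δ
        # variant would consult a three-point history here.
        if len(best_hist) >= 2:
            prev, cur = best_hist[-2], best_hist[-1]
            improving = (prev - cur) > tol * max(abs(prev), 1.0)
            a = a * 0.95 if improving else min(a * 1.05, 2.0)
            stall = 0 if improving else stall + 1

        # Adaptive stopping: quit after `patience` iterations without
        # a significant improvement in the best fitness.
        if stall >= patience:
            break

        # Standard GWO position update around alpha, beta, delta.
        acc = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1 = np.random.rand(n_wolves, dim)
            r2 = np.random.rand(n_wolves, dim)
            A = 2 * a * r1 - a
            C = 2 * r2
            acc += leader - A * np.abs(C * leader - X)
        X = np.clip(acc / 3.0, lo, hi)
        fit = np.apply_along_axis(f, 1, X)

    best = np.argmin(fit)
    return X[best], fit[best]

# Example: minimize the sphere function in 10 dimensions.
x_best, f_best = agwo_sketch(lambda x: float(np.sum(x**2)),
                             dim=10, bounds=(-10.0, 10.0))
```

The two-point comparison in the sketch matches the abstract's description of AGWO; swapping it for a three-point comparison over `best_hist[-3:]` would mirror the extended AGWO-Δ variant.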