Adaptive grey wolf optimizer
Author(s)
Meidani, Kazem
Hemmasian, AmirPouya
Mirjalili, Seyedali
Farimani, Amir Barati
Year published
2022
Abstract
Swarm-based metaheuristic optimization algorithms have demonstrated outstanding performance on a wide range of optimization problems in both science and industry. Despite their merits, a major limitation of such techniques stems from non-automated parameter tuning and the lack of systematic stopping criteria, which typically leads to inefficient use of computational resources. In this work, we propose an improved version of the grey wolf optimizer (GWO), named adaptive GWO (AGWO), which addresses these issues by adaptively tuning the exploration/exploitation parameters based on the fitness history of the candidate solutions during the optimization. By controlling the stopping criterion based on the significance of the fitness improvement during the optimization, AGWO can automatically converge to a sufficiently good optimum in the shortest time. Moreover, we propose an extended adaptive GWO (AGWO-Δ) that adjusts the convergence parameters based on a three-point fitness history. In a thorough comparative study, we show that AGWO is a more efficient optimization algorithm than GWO: it decreases the number of iterations required to reach solutions statistically equivalent to those of GWO, and it outperforms a number of existing GWO variants.
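The abstract describes the mechanism only at a high level. As a rough illustration, the sketch below combines the standard GWO position update with an adaptive schedule for the convergence parameter a and an improvement-based stopping rule in the spirit of AGWO. The specific decay rates, the significance test on the fitness history, and the names (agwo_sketch, tol, patience) are assumptions for illustration; the paper's exact update formulas are not given in this abstract.

```python
# A minimal sketch of an AGWO-style loop, assuming a simple adaptation
# rule for `a` and an improvement-based stopping criterion. Not the
# paper's exact method.
import numpy as np

def agwo_sketch(f, dim, bounds, n_wolves=30, max_iter=500,
                tol=1e-8, patience=20, seed=0):
    """Minimize f over [lo, hi]^dim with a GWO loop whose convergence
    parameter `a` adapts to the best-fitness history."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    fitness = np.apply_along_axis(f, 1, X)
    history = []        # best fitness per iteration, drives adaptation
    a = 2.0             # exploration/exploitation parameter
    stall = 0           # iterations without significant improvement

    for _ in range(max_iter):
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        history.append(fitness[order[0]])

        # Adaptive schedule (assumption): shrink `a` quickly while the
        # best fitness is still improving significantly, slowly when it
        # stagnates, replacing GWO's fixed linear decay 2*(1 - t/T).
        if len(history) >= 2:
            gain = history[-2] - history[-1]
            improving = gain > tol * max(1.0, abs(history[-2]))
            a = max(0.0, a - (0.02 if improving else 0.005))
            stall = 0 if improving else stall + 1
        if stall >= patience:   # stop: improvement no longer significant
            break

        # Standard GWO update: move each wolf toward alpha, beta, delta.
        X_new = np.empty_like(X)
        for i in range(n_wolves):
            pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2.0 * a * r1 - a          # step size/direction factor
                C = 2.0 * r2
                D = np.abs(C * leader - X[i])
                pos += leader - A * D
            X_new[i] = np.clip(pos / 3.0, lo, hi)
        X = X_new
        fitness = np.apply_along_axis(f, 1, X)

    best = np.argmin(fitness)
    return X[best], fitness[best]

# Example: minimize the 10-D sphere function.
x_best, f_best = agwo_sketch(lambda x: float(np.sum(x**2)),
                             dim=10, bounds=(-5.0, 5.0))
```

Under the same reading, the three-point variant AGWO-Δ would base the improvement test on the last three entries of the fitness history rather than the last two.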
Journal Title
Neural Computing and Applications
Note
This publication has been entered as an advance online version in Griffith Research Online.
Subject
Cognitive and computational psychology
Science & Technology
Computer Science, Artificial Intelligence
Metaheuristic optimization