How effective is the Grey Wolf optimizer in training multi-layer perceptrons

Research output: Contribution to journal › Article

221 Citations (Scopus)

Abstract

This paper employs the recently proposed Grey Wolf Optimizer (GWO) for training Multi-Layer Perceptrons (MLPs) for the first time. Eight standard datasets, including five classification and three function-approximation datasets, are utilized to benchmark the performance of the proposed method. For verification, the results are compared with those of some of the most well-known evolutionary trainers: Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), Evolution Strategy (ES), and Population-based Incremental Learning (PBIL). The statistical results show that the GWO algorithm provides very competitive results in terms of local optima avoidance. The results also demonstrate the high accuracy of the proposed trainer in both classification and function approximation.
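The core idea the abstract describes is to treat the MLP's weights and biases as a single real-valued vector, let each grey wolf represent one candidate vector, and use the network's error as the fitness the pack minimizes. The sketch below illustrates this under stated assumptions: it is not the paper's implementation, the XOR dataset and all network sizes and GWO settings (pack size, iterations, bounds) are hypothetical stand-ins, and the GWO update follows the standard alpha/beta/delta scheme with the control parameter `a` decreasing linearly from 2 to 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset (XOR) standing in for the paper's benchmarks.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

N_HIDDEN = 4
# Flat weight vector: input->hidden weights, hidden biases,
# hidden->output weights, output bias.
DIM = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1

def mse(w):
    """Decode a flat weight vector into a 2-4-1 MLP and return its MSE."""
    w1 = w[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b1 = w[2 * N_HIDDEN:3 * N_HIDDEN]
    w2 = w[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = w[-1]
    h = np.tanh(X @ w1 + b1)                   # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ w2 + b2))) # sigmoid output
    return np.mean((out - y) ** 2)

def gwo(fitness, dim, n_wolves=30, iters=300, lb=-10.0, ub=10.0):
    """Standard Grey Wolf Optimizer: wolves follow alpha, beta, delta."""
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        scores = np.array([fitness(w) for w in wolves])
        order = np.argsort(scores)
        alpha, beta, delta = wolves[order[:3]]   # three best wolves (copies)
        a = 2.0 - 2.0 * t / iters                # decreases linearly 2 -> 0
        for i in range(n_wolves):
            x_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                A = 2.0 * a * rng.random(dim) - a
                C = 2.0 * rng.random(dim)
                D = np.abs(C * leader - wolves[i])
                x_new += leader - A * D
            wolves[i] = np.clip(x_new / 3.0, lb, ub)
    scores = np.array([fitness(w) for w in wolves])
    return wolves[np.argmin(scores)], scores.min()

best_w, best_mse = gwo(mse, DIM)
print(f"best MSE found by GWO: {best_mse:.4f}")
```

Because the trainer only ever calls the fitness function, the same loop works for any network architecture or error measure; this derivative-free view is what lets GWO (like PSO or GA) escape local optima that gradient-based training can get stuck in.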

Original language: English
Pages (from-to): 150-161
Number of pages: 12
Journal: Applied Intelligence
Volume: 43
Issue number: 1
DOIs
Publication status: Published - 4 Jul 2015
Externally published: Yes

Keywords

  • Evolutionary algorithm
  • Grey Wolf optimizer
  • Learning neural network
  • MLP
  • Multi-layer perceptron