Abstract
This paper employs the recently proposed Grey Wolf Optimizer (GWO) for training Multi-Layer Perceptrons (MLPs) for the first time. Eight standard datasets, five classification and three function-approximation, are used to benchmark the proposed method. For verification, the results are compared with those of some of the best-known evolutionary trainers: Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), Evolution Strategy (ES), and Population-based Incremental Learning (PBIL). The statistical results show that the GWO algorithm provides very competitive performance with improved local-optima avoidance. They also demonstrate that the proposed trainer achieves a high level of accuracy in both classification and function approximation.
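To illustrate the idea behind the paper, the following is a minimal sketch (not the authors' code) of GWO used as an MLP trainer: each wolf is a flattened weight vector, fitness is mean-squared error, and positions are updated by encircling the three best wolves (alpha, beta, delta). The network size, bounds, and the XOR task are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (assumed for illustration): XOR classification
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

H = 4                        # hidden units (assumed)
DIM = 2 * H + H + H + 1      # W1 + b1 + W2 + b2, flattened

def mlp_mse(w):
    """Decode a candidate weight vector into a 2-H-1 MLP and return its MSE."""
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[4 * H]
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return np.mean((out - y) ** 2)

def gwo(fitness, dim, n_wolves=20, iters=200, lb=-2.0, ub=2.0):
    """Grey Wolf Optimizer: wolves move toward the alpha, beta, delta leaders."""
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    best_w, best_s = None, np.inf
    for t in range(iters):
        scores = np.array([fitness(w) for w in wolves])
        order = np.argsort(scores)
        if scores[order[0]] < best_s:            # keep the best-ever solution
            best_s, best_w = scores[order[0]], wolves[order[0]].copy()
        alpha, beta, delta = wolves[order[:3]].copy()
        a = 2 - 2 * t / iters                    # linearly decreases 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D        # encircling-prey update
            wolves[i] = np.clip(new_pos / 3, lb, ub)
    return best_w, best_s

best_w, best_err = gwo(mlp_mse, DIM)
```

The only network-specific parts are `mlp_mse` and `DIM`; the same `gwo` routine can train any architecture once its weights are flattened into one vector.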
| Original language | English |
|---|---|
| Pages (from-to) | 150-161 |
| Number of pages | 12 |
| Journal | Applied Intelligence |
| Volume | 43 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 4 Jul 2015 |
| Externally published | Yes |
Keywords
- Evolutionary algorithm
- Grey Wolf Optimizer
- Neural network learning
- MLP
- Multi-layer perceptron