Abstract
The learning process of artificial neural networks is considered one of the most difficult challenges in machine learning and has recently attracted many researchers. The main difficulty in training a neural network is its nonlinear nature and the unknown best set of controlling parameters (weights and biases). The main disadvantages of conventional training algorithms are local optima stagnation and slow convergence, which makes stochastic optimization algorithms a reliable alternative for alleviating these drawbacks. This work proposes a new training algorithm based on the recently proposed whale optimization algorithm (WOA). WOA has been shown to solve a wide range of optimization problems and to outperform existing algorithms, which motivated our attempt to benchmark its performance in training feedforward neural networks. For the first time in the literature, a set of 20 datasets with different levels of difficulty is used to test the proposed WOA-based trainer. The results are verified by comparison with the back-propagation algorithm and six evolutionary techniques. The qualitative and quantitative results show that the proposed trainer outperforms the current algorithms on the majority of the datasets in terms of both local optima avoidance and convergence speed.
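The abstract does not include source code, but the idea of a WOA-based trainer is straightforward to sketch: flatten all weights and biases of a multilayer perceptron into one position vector per whale, use the training mean squared error as the fitness function, and apply the standard WOA position updates (shrinking encirclement, random-whale exploration, and the logarithmic spiral). The minimal sketch below makes those assumptions explicit; names such as `woa_train_mlp`, the one-hidden-layer sigmoid architecture, and the population/bound settings are illustrative choices, not the authors' implementation.

```python
import numpy as np

def mlp_mse(vec, X, y, n_in, n_hid):
    """Decode a flat parameter vector into the weights/biases of a
    one-hidden-layer MLP and return its mean squared error on (X, y)."""
    i = 0
    W1 = vec[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = vec[i:i + n_hid]; i += n_hid
    W2 = vec[i:i + n_hid]; i += n_hid
    b2 = vec[i]
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))    # sigmoid hidden layer
    out = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))  # sigmoid output unit
    return float(np.mean((out - y) ** 2))

def woa_train_mlp(X, y, n_hid=5, n_whales=30, n_iter=200,
                  lb=-10.0, ub=10.0, seed=0):
    """Minimize MLP training error with the whale optimization algorithm."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    dim = n_in * n_hid + n_hid + n_hid + 1      # W1 + b1 + W2 + b2
    pos = rng.uniform(lb, ub, (n_whales, dim))  # each whale = one candidate net
    best = min(pos, key=lambda p: mlp_mse(p, X, y, n_in, n_hid)).copy()
    best_fit = mlp_mse(best, X, y, n_in, n_hid)
    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter              # a decreases linearly from 2 to 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random() - a      # A drawn from [-a, a]
            C = 2.0 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:                  # exploit: encircle the best whale
                    pos[i] = best - A * np.abs(C * best - pos[i])
                else:                           # explore: move toward a random whale
                    rand = pos[rng.integers(n_whales)]
                    pos[i] = rand - A * np.abs(C * rand - pos[i])
            else:                               # spiral update around the best whale
                l = rng.uniform(-1.0, 1.0)
                pos[i] = (np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l)
                          + best)
            pos[i] = np.clip(pos[i], lb, ub)
            f = mlp_mse(pos[i], X, y, n_in, n_hid)
            if f < best_fit:                    # greedy update of the best solution
                best, best_fit = pos[i].copy(), f
    return best, best_fit

# Toy usage on a synthetic binary problem (illustrative only):
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((100, 4))
    y = (X.sum(axis=1) > 2.0).astype(float)
    _, mse = woa_train_mlp(X, y)
    print(f"best training MSE: {mse:.4f}")
```

Treating the network's parameters as a single search vector is what lets a gradient-free optimizer like WOA sidestep the local optima that back-propagation can stagnate in, at the cost of many more fitness (forward-pass) evaluations per iteration.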
Original language | English
---|---
Journal | Soft Computing
Volume | 22
Issue number | 1
DOIs |
Publication status | Published - 1 Jan 2018
Externally published | Yes
Keywords
- Evolutionary algorithm
- MLP
- Multilayer perceptron
- Optimization
- Training neural network
- Whale optimization algorithm
- WOA