Ant lion optimizer: Theory, literature review, and application in multi-layer perceptron neural networks

Ali Asghar Heidari, Hossam Faris, Seyedali Mirjalili, Ibrahim Aljarah, Majdi Mafarja

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

110 Citations (Scopus)


This chapter proposes an efficient hybrid training technique (ALOMLP) based on the Ant Lion Optimizer (ALO) for training Multi-Layer Perceptron (MLP) neural networks. ALO is a well-regarded swarm-based meta-heuristic inspired by the intelligent hunting behavior of antlions in nature. In this chapter, the theoretical background of ALO is explained in detail first. Then, a comprehensive literature review is provided based on well-established works from 2015 to 2018. In addition, a convenient encoding scheme is presented and the objective function is defined mathematically. The proposed ALO-based training model is evaluated on sixteen standard datasets. The efficiency of ALO is compared with differential evolution (DE), genetic algorithm (GA), particle swarm optimization (PSO), and population-based incremental learning (PBIL) in terms of best, worst, average, and median accuracies. Furthermore, the convergence trends of all competitors are monitored and analyzed. The experiments show that ALOMLP outperforms GA, PBIL, DE, and PSO in classifying the majority of datasets, providing improved accuracy and convergence rates.
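The abstract mentions an encoding scheme and an objective function for metaheuristic MLP training. A common approach in this line of work, sketched below under assumed details (a single hidden layer with tanh activation, and classification error rate as the objective; the chapter's exact encoding and formula may differ), is to flatten all MLP weights and biases into one real-valued vector that the optimizer evolves, and to score each candidate by its misclassification rate:

```python
import numpy as np

def decode(vector, n_in, n_hidden, n_out):
    """Split a flat candidate vector into MLP weight matrices and bias vectors.

    Assumed layout: [W1 | b1 | W2 | b2] for a single-hidden-layer MLP.
    """
    i = 0
    W1 = vector[i:i + n_in * n_hidden].reshape(n_in, n_hidden)
    i += n_in * n_hidden
    b1 = vector[i:i + n_hidden]
    i += n_hidden
    W2 = vector[i:i + n_hidden * n_out].reshape(n_hidden, n_out)
    i += n_hidden * n_out
    b2 = vector[i:i + n_out]
    return W1, b1, W2, b2

def mlp_predict(vector, X, n_in, n_hidden, n_out):
    # Forward pass: tanh hidden layer, argmax over linear outputs.
    W1, b1, W2, b2 = decode(vector, n_in, n_hidden, n_out)
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2).argmax(axis=1)

def objective(vector, X, y, n_in, n_hidden, n_out):
    # Fitness to minimize: fraction of misclassified training samples.
    return np.mean(mlp_predict(vector, X, n_in, n_hidden, n_out) != y)

# Dimension of the search space for this encoding:
def dim(n_in, n_hidden, n_out):
    return n_in * n_hidden + n_hidden + n_hidden * n_out + n_out
```

Any population-based optimizer (ALO, GA, DE, PSO, PBIL) can then operate on vectors of length `dim(n_in, n_hidden, n_out)`, calling `objective` to evaluate each candidate; this is what makes the comparison between the five algorithms in the chapter directly possible.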

Original language: English
Title of host publication: Studies in Computational Intelligence
Place of publication: Switzerland
Publisher: Springer Verlag
Number of pages: 24
ISBN (Print): 978-3-030-12129-7
Publication status: Published - 1 Jan 2020
Externally published: Yes

Publication series

Name: Studies in Computational Intelligence
ISSN (Print): 1860-949X


