TY - GEN
T1 - Magnetic Optimization Algorithm for training Multi Layer Perceptron
AU - Mirjalili, Seyedali
AU - Sadiq, Ali Safa
PY - 2011/9/29
Y1 - 2011/9/29
N2 - Recently, the feedforward neural network (FNN), and in particular the Multi Layer Perceptron (MLP), has become one of the most widely used computational tools, applied across many fields. Back Propagation (BP) is the most common method for training an MLP. This learning algorithm is gradient-based and suffers from drawbacks such as entrapment in local minima and slow convergence. These weaknesses make MLPs unreliable for solving real-world problems. Using heuristic optimization algorithms is a popular approach to overcoming the drawbacks of BP. The Magnetic Optimization Algorithm (MOA) is a novel heuristic optimization algorithm inspired by magnetic field theory, and it has been shown to solve optimization problems quickly and accurately. In this paper, MOA is employed as a new training method for MLPs in order to overcome the aforementioned shortcomings. The proposed learning method is compared with PSO- and GA-based learning algorithms on 3-bit XOR and function approximation benchmark problems. The results demonstrate the high performance of this new learning algorithm for large numbers of training samples.
AB - Recently, the feedforward neural network (FNN), and in particular the Multi Layer Perceptron (MLP), has become one of the most widely used computational tools, applied across many fields. Back Propagation (BP) is the most common method for training an MLP. This learning algorithm is gradient-based and suffers from drawbacks such as entrapment in local minima and slow convergence. These weaknesses make MLPs unreliable for solving real-world problems. Using heuristic optimization algorithms is a popular approach to overcoming the drawbacks of BP. The Magnetic Optimization Algorithm (MOA) is a novel heuristic optimization algorithm inspired by magnetic field theory, and it has been shown to solve optimization problems quickly and accurately. In this paper, MOA is employed as a new training method for MLPs in order to overcome the aforementioned shortcomings. The proposed learning method is compared with PSO- and GA-based learning algorithms on 3-bit XOR and function approximation benchmark problems. The results demonstrate the high performance of this new learning algorithm for large numbers of training samples.
KW - Back Propagation algorithm
KW - BP
KW - Magnetic Optimization Algorithm
KW - MLP
KW - MOA
KW - Multi layer perceptron
UR - http://www.scopus.com/inward/record.url?scp=80053139335&partnerID=8YFLogxK
U2 - 10.1109/ICCSN.2011.6014845
DO - 10.1109/ICCSN.2011.6014845
M3 - Conference contribution
AN - SCOPUS:80053139335
SN - 9781612844855
T3 - 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011
SP - 42
EP - 46
BT - 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011
T2 - 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011
Y2 - 27 May 2011 through 29 May 2011
ER -