Recently, feedforward neural networks (FNNs), especially the Multilayer Perceptron (MLP), have become one of the most widely used computational tools, applied in many fields. Backpropagation (BP) is the most common algorithm for training MLPs. BP is a gradient-based algorithm, but it suffers from drawbacks such as entrapment in local minima and slow convergence. These weaknesses make MLPs unreliable for solving real-world problems. Using heuristic optimization algorithms is a popular approach to mitigating the drawbacks of BP. The Magnetic Optimization Algorithm (MOA) is a novel heuristic optimization algorithm inspired by magnetic field theory. It has been shown that this algorithm is capable of solving optimization problems quickly and accurately. In this paper, MOA is employed as a new training method for MLPs in order to alleviate the aforementioned shortcomings. The proposed learning method was compared with PSO- and GA-based learning algorithms on the 3-bit XOR and function approximation benchmark problems. The results demonstrate the high performance of this new learning algorithm for large numbers of training samples.
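To illustrate the general idea of training MLP weights with a population-based heuristic instead of backpropagation, the sketch below evolves a flat weight vector for a small 3-4-1 MLP on the 3-bit XOR (parity) problem. The search loop here is a deliberately simplified stand-in (a basic best-so-far mutation scheme), not the paper's MOA, PSO, or GA; the network size, mutation scale, and iteration count are illustrative assumptions.

```python
import math
import random

random.seed(0)

# 3-bit XOR (parity) dataset: 8 input patterns, target is a ^ b ^ c.
XOR3 = [([a, b, c], a ^ b ^ c)
        for a in (0, 1) for b in (0, 1) for c in (0, 1)]

N_IN, N_HID = 3, 4                       # assumed small architecture
N_W = N_HID * (N_IN + 1) + (N_HID + 1)   # hidden weights+biases, output weights+bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp(weights, x):
    """Forward pass of a 3-N_HID-1 MLP with weights packed in one flat vector."""
    idx = 0
    hidden = []
    for _ in range(N_HID):
        s = weights[idx + N_IN]          # bias of this hidden neuron
        for i in range(N_IN):
            s += weights[idx + i] * x[i]
        idx += N_IN + 1
        hidden.append(sigmoid(s))
    s = weights[-1]                      # output bias
    for j in range(N_HID):
        s += weights[idx + j] * hidden[j]
    return sigmoid(s)

def mse(weights):
    """Fitness: mean squared error over the whole XOR3 dataset."""
    return sum((mlp(weights, x) - y) ** 2 for x, y in XOR3) / len(XOR3)

# Simplified population-based search: sample candidates around the best
# weight vector found so far and keep any improvement (not the actual MOA).
best = min(([random.uniform(-1, 1) for _ in range(N_W)] for _ in range(20)),
           key=mse)
initial = mse(best)
for _ in range(300):
    candidates = [[w + random.gauss(0, 0.3) for w in best] for _ in range(20)]
    challenger = min(candidates, key=mse)
    if mse(challenger) < mse(best):
        best = challenger
final = mse(best)
print(f"MSE before search: {initial:.4f}, after search: {final:.4f}")
```

The key point is that the optimizer treats the entire weight vector as one candidate solution and needs only fitness evaluations, never gradients, which is why such trainers can sidestep the local-minima and slow-convergence issues of BP.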