http://dx.doi.org/10.6109/jicce.2015.13.2.123

Learning an Artificial Neural Network Using Dynamic Particle Swarm Optimization-Backpropagation: Empirical Evaluation and Comparison  

Devi, Swagatika (Department of Computer Science and Engineering, Siksha 'O' Anusandhan University)
Jagadev, Alok Kumar (Department of Computer Science and Engineering, Siksha 'O' Anusandhan University)
Patnaik, Srikanta (Department of Computer Science and Engineering, Siksha 'O' Anusandhan University)
Abstract
Training neural networks is a complex task of great importance in the field of supervised learning. In the training process, a set of input-output patterns is repeatedly presented to an artificial neural network (ANN), and the weights of all the interconnections between neurons are adjusted until the specified inputs yield the desired outputs. In this paper, a new hybrid algorithm is proposed for the global optimization of the connection weights of an ANN. Dynamic swarms have been shown to converge rapidly during the initial stages of a global search, but around the global optimum, the search process becomes very slow. In contrast, the gradient descent method achieves faster convergence around the global optimum, with relatively high convergence accuracy. The proposed hybrid algorithm therefore combines dynamic particle swarm optimization (DPSO) with the backpropagation (BP) algorithm, and is referred to as the DPSO-BP algorithm, to train the weights of an ANN. In this paper, we demonstrate the superiority, in both training time and solution quality, of the proposed hybrid DPSO-BP algorithm over more standard algorithms for neural network training. The algorithms are compared using two different datasets, and the simulation results are presented.
Keywords
ANN; BP algorithm; DPSO; Global optimization; Gradient descent technique
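To make the two-phase idea in the abstract concrete, below is a minimal Python sketch of a swarm-then-backpropagation hybrid on a toy XOR task. It illustrates the general DPSO-BP scheme rather than the authors' exact algorithm: the swarm here is a standard PSO whose linearly decaying inertia weight stands in for the "dynamic" swarm behavior, and the network size, swarm coefficients (1.5 cognitive/social), iteration counts, and learning rate are all arbitrary choices made for the example.

```python
# Sketch of a PSO-then-backpropagation hybrid for a 1-hidden-layer MLP.
# Illustrative only: a standard PSO with decaying inertia stands in for the
# paper's dynamic swarm, and all hyperparameters are example choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: learn XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hid, n_out = 2, 4, 1
n_w = n_in * n_hid + n_hid + n_hid * n_out + n_out  # total weight count

def unpack(w):
    """Split a flat weight vector into layer matrices and biases."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:]
    return W1, b1, W2, b2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w):
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

def mse(w):
    _, out = forward(w)
    return np.mean((out - y) ** 2)

# ---- Phase 1: particle swarm search over the flattened weight space ----
n_particles, pso_iters = 30, 200
pos = rng.uniform(-1, 1, (n_particles, n_w))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
g = pbest[np.argmin(pbest_f)].copy()          # global best weights

for t in range(pso_iters):
    inertia = 0.9 - 0.5 * t / pso_iters       # linearly decaying inertia
    r1, r2 = rng.random((2, n_particles, n_w))
    vel = inertia * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[np.argmin(pbest_f)].copy()

# ---- Phase 2: backpropagation fine-tuning from the swarm's best weights ----
w, lr = g.copy(), 0.5
for _ in range(2000):
    W1, b1, W2, b2 = unpack(w)
    h, out = forward(w)
    d_out = (out - y) * out * (1 - out) * (2.0 / len(X))   # dMSE/dz2
    d_h = (d_out @ W2.T) * h * (1 - h)                     # dMSE/dz1
    grad = np.concatenate([
        (X.T @ d_h).ravel(), d_h.sum(0),                   # W1, b1 gradients
        (h.T @ d_out).ravel(), d_out.sum(0),               # W2, b2 gradients
    ])
    w -= lr * grad

print(f"PSO best MSE: {min(pbest_f):.4f}, after BP fine-tuning: {mse(w):.4f}")
```

On this toy problem, the swarm phase typically locates a good basin quickly, and the gradient phase then drives the error down much faster than either method alone would from a random start, which is the division of labor the abstract describes.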
Citations & Related Records
연도 인용수 순위
  • Reference
1 F. Van den Bergh, and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225-239, 2004.   DOI   ScienceOn
2 K. S. Tang, C. Y. Chan, K. F. Man, and S. Kwong, “Genetic structure for NN topology and weights optimization,” in Proceedings of the 1st International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA), Sheffield, UK, pp. 250-255, 1995.
3 P. J. Angeline, G. M. Saunders, and J. B. Pollack, “An evolutionary algorithm that constructs recurrent neural networks,” IEEE Transactions on Neural Networks, vol. 5, no. 1, pp. 54-65, 1994.   DOI   ScienceOn
4 J. Kennedy and R. C. Eberhart, Swarm Intelligence. San Francisco, CA: Morgan Kaufmann, 2001.
5 M. Gori and A. Tesi, “On the problem of local minima in backpropagation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 1, pp. 76-86, 1992.   DOI   ScienceOn
6 M. K. Weir, “A method for self-determination of adaptive learning rates in back propagation,” Neural Networks, vol. 4, no. 3, pp. 371-379, 1991.   DOI   ScienceOn
7 Y. Shi and R. C. Eberhart, “A modified particle swarm optimizer,” in Proceedings of IEEE World Congress on Computational Intelligence, Anchorage, AK, pp. 69-73, 1998.
8 J. Salerno, “Using the particle swarm optimization technique to train a recurrent neural model,” in Proceedings of IEEE 9th International Conference on Tools with Artificial Intelligence, Newport Beach, CA, pp. 45-49, 1997.
9 J. Kennedy and R. C. Eberhart, “A discrete binary version of the particle swarm algorithm,” in Proceedings of IEEE International Conference on Systems, Man, and Cybernetics, Orlando, FL, pp. 4104-4108, 1997.
10 A. Abraham and B. Nath, "ALEC: an adaptive learning framework for optimizing artificial neural networks," in Computational Science-ICCS 2001. Heidelberg: Springer, pp. 171-180, 2001.
11 H. V. Gupta, K. L. Hsu, and S. Sorooshian, “Superior training of artificial neural networks using weight-space partitioning,” in Proceedings of International Conference on Neural Networks, Houston, TX, pp. 1919-1923, 1997.
12 S. Ergezinger and E. Thomsen, “An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer,” IEEE Transactions on Neural Networks, vol. 6, no. 1, pp. 31-42, 1995.   DOI   ScienceOn
13 O. L. Mangasarian, “Mathematical programming in neural networks,” ORSA Journal on Computing, vol. 5, no. 4, pp. 349-360, 1993.   DOI   ScienceOn
14 C. M. Kuan and K. Hornik, “Convergence of learning algorithms with constant learning rates,” IEEE Transactions on Neural Networks, vol. 2, no. 5, pp. 484-489, 1991.   DOI   ScienceOn