http://dx.doi.org/10.9708/jksci.2011.16.8.039

A Layer-by-Layer Learning Algorithm using Correlation Coefficient for Multilayer Perceptrons  

Kwak, Young-Tae (Division of Information Technology, Chonbuk National University)
Abstract
Ergezinger's method, one of the layer-by-layer algorithms for multilayer perceptrons, handles a single output node and can cause premature saturation of the output weights because it applies a linear least squares method in the output layer. This saturation slows learning and hinders convergence. This paper therefore extends Ergezinger's method to handle an output vector instead of a single output node and introduces a learning rate to improve learning time and convergence. The learning rate is a variable rate that reflects the correlation coefficient between the new and previous weights when updating the hidden layer's weights. To compare the proposed method with Ergezinger's, we tested both on iris recognition and nonlinear approximation. The proposed method showed better learning convergence than Ergezinger's method, and, even including the cost of computing the correlation coefficient, its CPU time was about 35% lower than that of the previous method.
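The abstract describes a variable learning rate derived from the correlation coefficient between the new and previous hidden-layer weights. A minimal sketch of that idea, assuming a Pearson correlation over the flattened weight vectors and an illustrative mapping of the coefficient onto the rate (the function names, `base_rate` parameter, and the exact mapping are assumptions, not the paper's formulation):

```python
import numpy as np

def correlation_rate(w_prev, w_new, base_rate=0.1):
    """Learning rate scaled by the correlation between consecutive weight vectors."""
    r = np.corrcoef(w_prev.ravel(), w_new.ravel())[0, 1]  # Pearson coefficient in [-1, 1]
    # Illustrative mapping: high correlation (consistent update direction)
    # keeps the rate near base_rate; low or negative correlation
    # (oscillating weights) damps the rate toward zero.
    return base_rate * (1.0 + r) / 2.0

def update_hidden(w_prev, w_candidate):
    """Blend previous weights toward the newly computed candidate weights
    using the correlation-based variable rate."""
    eta = correlation_rate(w_prev, w_candidate)
    return w_prev + eta * (w_candidate - w_prev)
```

When the candidate weights move in the same direction as the previous update, the rate stays high and the blend takes a full step; when they swing against the previous weights, the damped rate suppresses the oscillation that would otherwise slow convergence.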
Keywords
Multilayer perceptrons; Layer-by-layer learning; Least squared method; Correlation coefficient;
Citations & Related Records
Times Cited By KSCI: 2
1 S. Ergezinger and E. Thomsen, "An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer," IEEE Trans. on Neural Networks, Vol. 6, No. 1, pp. 31-42, 1995.
2 Ronald E. Miller, Optimization, John Wiley & Sons, Inc., pp. 358-362, 2000.
3 M. T. Hagan and M. Menhaj, "Training feedforward networks with the Marquardt algorithm," IEEE Trans. on Neural Networks, Vol. 5, No. 6, pp. 989-993, Nov. 1994.
4 Young-Tae Kwak, "Accelerating Levenberg-Marquardt Algorithm using Variable Damping Parameter," Journal of The Korea Society of Computer and Information, Vol. 15, No. 4, pp. 57-63, April 2010.
5 C. Charalambous, "Conjugate gradient algorithm for efficient training of artificial neural networks," IEE Proceedings, Vol. 139, No. 3, pp. 301-310, 1992.
6 Wei Chu, Chong Jin Ong, and S. S. Keerthi, "An improved conjugate gradient scheme to the solution of least squares SVM," IEEE Trans. on Neural Networks, Vol. 16, No. 2, pp. 498-501, March 2005.
7 B. Ph. van Milligen, V. Tribaldos, J. A. Jimenez, and C. Santa Cruz, "Comments on 'An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer'," IEEE Trans. on Neural Networks, Vol. 9, No. 2, pp. 339-341, March 1998.
8 Jim Y. F. Yam and Tommy W. S. Chow, "Extended least squares based algorithms for training feedforward networks," IEEE Trans. on Neural Networks, Vol. 8, No. 3, pp. 806-810, May 1997.
9 UCI Machine Learning Repository, http://archive.ics.uci.edu/ml/
10 F. Biegler-König and F. Bärmann, "A learning algorithm for multilayered neural networks based on linear least squares problems," Neural Networks, Vol. 6, pp. 127-131, 1993.
11 T. Tollenaere, "SuperSAB: Fast adaptive back propagation with good scaling properties," Neural Networks, Vol. 3, No. 5, pp. 561-573, 1990.
12 D. E. Rumelhart and J. L. McClelland, Parallel Distributed Processing, MIT Press, Cambridge, MA, pp. 318-362, 1986.
13 Kyong Ho Lee, "A Study on the Implementation of Serious Game Learning Multiplication Table using Back Propagation Neural Network on Divided Interconnection Weights Table," Journal of The Korea Society of Computer and Information, Vol. 14, No. 10, pp. 233-240, Oct. 2009.
14 T. P. Vogl, J. K. Mangis, A. K. Rigler, W. T. Zink, and D. L. Alkon, "Accelerating the convergence of the back-propagation method," Biological Cybernetics, Vol. 59, pp. 256-264, Sept. 1988.
15 M. Kordos and W. Duch, "Variable step search algorithm for feedforward networks," Neurocomputing, Vol. 71, pp. 2470-2480, April 2008.