
A Separate Learning Algorithm of Two-Layered Networks with Target Values of Hidden Nodes  

Choi, Bum-Ghi (Dept. of Computer and Information Engineering, Inha University)
Lee, Ju-Hong (Dept. of Computer and Information Engineering, Inha University)
Park, Tae-Su (Dept. of Computer and Information Engineering, Inha University)
Abstract
The backpropagation learning algorithm is known to suffer from slow and false convergence caused by plateaus and local minima. Most substitutes for backpropagation proposed so far trade off convergence speed against stability of convergence with respect to parameter settings. Here, a new algorithm is proposed that avoids some of the problems associated with conventional backpropagation, especially local minima, and gives relatively stable and fast convergence with a low storage requirement. It is a separate learning algorithm in which the upper connections (hidden-to-output) and the lower connections (input-to-hidden) are trained separately. The algorithm requires less computational work than conventional backpropagation and other improved algorithms, and it is shown on various classification problems to be relatively reliable in overall performance.
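The separate learning idea described in the abstract can be sketched as two independent single-layer training problems joined by target values for the hidden nodes. This is a minimal illustration, not the paper's algorithm: the hidden targets below are hand-fixed (the paper derives its own target values for the hidden nodes), the single-layer training is a plain delta rule, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_layer(A, T, lr=2.0, epochs=4000, seed=0):
    """Delta-rule training of one sigmoid layer: sigmoid(A @ W) -> T.
    A is expected to carry a bias column of ones."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.5, size=(A.shape[1], T.shape[1]))
    for _ in range(epochs):
        out = sigmoid(A @ W)
        # Gradient-descent step on squared error for this layer alone.
        W += lr * A.T @ ((T - out) * out * (1 - out))
    return W

# Toy data: XOR, a classic case where plain backpropagation can stall.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
Xb = np.hstack([X, np.ones((4, 1))])  # bias via appended ones column

# Hypothetical hidden targets (OR and AND of the two inputs); each makes
# its layer's subproblem linearly separable, so both phases can succeed.
H_target = np.array([[0, 0], [1, 0], [1, 0], [1, 1]], dtype=float)

# Lower phase: train input-to-hidden weights toward the hidden targets.
W1 = train_layer(Xb, H_target)
Hb = np.hstack([sigmoid(Xb @ W1), np.ones((4, 1))])

# Upper phase: train hidden-to-output weights on the hidden activations.
W2 = train_layer(Hb, y, seed=1)

pred = sigmoid(Hb @ W2)
print(np.round(pred.ravel()).astype(int))
```

Because each phase only ever adjusts one layer against fixed targets, each phase is an ordinary single-layer (perceptron-style) problem, which is the source of the stability and low storage cost the abstract claims.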
Keywords
backpropagation; separate learning; hidden nodes; local minima
References
1 Saad, D. and E. Marom., 'Learning by Choice of Internal Representations-An Energy Minimization Approach,' Complex Systems 4, 107-118, 1990
2 Saad, D. and E. Marom., 'Training Feedforward Nets with Binary Weights via a Modified CHIR Algorithm,' Complex Systems 4, 573-586, 1990
3 Krogh, A., G.I. Thorbergerson, and J.A. Hertz., 'A Cost Function for Internal Representations,' In Advances in Neural Information Processing Systems II, 1989
4 S. C. Ng and S. H. Leung, 'On Solving the Local Minima Problem of Adaptive Learning by Deterministic Weight Evolutionary Algorithm,' Proc. of Congress on Evolutionary Computation (CEC2001), Seoul, Korea, May 27-30, 2001, vol. 1, 251-255, 2001
5 Watrous, R. L., 'Learning algorithms for connectionist networks: applied gradient methods of nonlinear optimization,' in Proc. 1st Int. Conf. Neural Networks, vol. II, 619-628, 1987
6 Touretzky, D. S., 'San Mateo,' Morgan Kaufmann, 1989
7 Nicholas K. Treadgold and Tamas D. Gedeon., 'Simulated Annealing and Weight Decay in Adaptive Learning: The SARPROP Algorithm,' IEEE Trans. on Neural Networks, vol. 9, pp. 662-668, 1998
8 Grossman, T., 'The CHIR Algorithm for Feed Forward Networks with Binary Weights,' In Advances in Neural Information Processing Systems II, 1989
9 Riedmiller, M. and Braun, H., 'A direct adaptive method for faster backpropagation learning: The RPROP algorithm,' in Proc. Int. Conf. Neural Networks, vol. 1, 586-591, 1993
10 Montana D. J., Davis L., 'Training feedforward neural networks using genetic algorithms,' in Proc. Int. Joint Conf. Artificial Intelligence, Detroit, 762-767, 1989
11 Vogl, T. P., J.K. Mangis, A.K. Rigler, W.T. Zink, and D.L. Alkon., 'Accelerating the Convergence of the Back-Propagation Method,' Biological Cybernetics 59, 257-263, 1988
12 Rosenblatt, F., 'Principles of Neurodynamics,' New York: Spartan, 1962
13 Allred, L. G., Kelly, G. E., 'Supervised learning techniques for backpropagation networks,' In Proc. of IJCNN, vol. 1, 702-709, 1990
14 Fahlman, S. E., 'Fast learning variations on backpropagation: An empirical study,' in Proc. Connectionist Models Summer School, 1989
15 Parker, D.B., 'Learning Logic,' Technical Report TR-47, Center for Computational Research in Economics and Management Science, Massachusetts Institute of Technology, Cambridge, MA, 1985
16 Kolen, J. F. and Pollack, J. B., 'Back Propagation is Sensitive to Initial Conditions,' Complex Systems 4, 269-280, 1990
17 Jacobs, R. A., 'Increased Rates of Convergence Through Learning Rate Adaptation,' Neural Networks 1, 295-307, 1988
18 Minsky, M.L and Papert, S.A., 'Perceptrons,' Cambridge: MIT Press, 1969
19 McCulloch, W.S., Pitts, W., 'A Logical Calculus of the Ideas Immanent in Nervous Activity,' Bulletin of Mathematical Biophysics 5, 115-133, 1943
20 Rumelhart, D.E., G.E. Hinton, and Williams, R.J., 'Learning Internal Representations by Error Propagation,' In Parallel Distributed Processing, vol. 1, chap. 8, 1986