Adaptive Error Constrained Backpropagation Algorithm


  • Published: 2003.10.01

Abstract

In order to accelerate the convergence of the conventional backpropagation (BP) algorithm, constrained optimization techniques are applied to it. First, the noise-constrained least mean square (NCLMS) algorithm and the zero-noise-constrained LMS (ZNCLMS) algorithm are applied to the BP algorithm, yielding the NCBP and ZNCBP algorithms, respectively. These methods rest on a restrictive assumption: in the NCBP algorithm, the filter or receiver must know the noise variance exactly. By extending and generalizing these algorithms, the authors derive an adaptive error-constrained BP (AECBP) algorithm in which the error variance is estimated instead. This is achieved by modifying the error function of the conventional BP algorithm using Lagrangian multipliers. The convergence speeds of the proposed algorithms are 20 to 30 times faster than that of the conventional BP algorithm, and are faster than or almost the same as that of a conventional linear adaptive filter using the LMS algorithm.

To improve the learning speed of the conventional BP algorithm for multilayer perceptrons (MLPs), constrained optimization techniques are proposed and applied to the backpropagation (BP) algorithm. First, the noise-constrained least mean square (NCLMS) algorithm and the zero-noise-constrained LMS (ZNCLMS) algorithm are applied to the BP algorithm. These algorithms require assumptions that strongly limit their use: the NCBP algorithm, based on the NCLMS algorithm, assumes that the exact noise power is known, while the ZNCBP algorithm, based on the ZNCLMS algorithm, assumes the noise power to be zero, i.e., it ignores the noise during training. In this paper, the cost function is modified using an augmented Lagrangian multiplier. This removes the assumption about the noise and extends and generalizes the ZNCBP and NCBP algorithms, yielding the proposed adaptive error-constrained BP (AECBP) algorithm. The convergence speed of the proposed algorithms is about 30 times faster than that of the conventional BP algorithm, and nearly the same as that of a conventional linear filter.
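
Although the abstract does not give the update equations, the mechanism can be sketched. The minimal Python/NumPy illustration below assumes an NCLMS-style augmented-Lagrangian cost of the form J = E[e²] + γλ(E[e²] − σ²) − γλ², which leads to a BP update whose step size is scaled by (1 + γλ); the noise variance σ² is replaced by a running estimate of the error variance, in the spirit of the adaptive error constraint. The network, constants, and variance estimator are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of an adaptive error-constrained BP update for a one-hidden-layer MLP.
# Assumed (not from the paper's equations): NCLMS-style multiplier recursion and
# an exponential estimator of the error variance.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + observation noise.
X = rng.uniform(-np.pi, np.pi, size=(512, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

# One-hidden-layer MLP (tanh hidden units, linear output).
n_hidden = 10
W1 = 0.5 * rng.standard_normal((1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.5 * rng.standard_normal((n_hidden, 1)); b2 = np.zeros(1)

mu, gamma, beta, rho = 0.01, 1.0, 0.1, 0.99   # illustrative constants (assumed)
lam, sigma2_hat = 0.0, 1.0                    # multiplier and error-variance estimate

for _ in range(50):                           # epochs
    for x_n, y_n in zip(X, y):
        # Forward pass.
        h = np.tanh(x_n @ W1 + b1)            # (n_hidden,)
        y_hat = h @ W2 + b2                   # (1,)
        e = y_n - y_hat                       # instantaneous error
        e2 = float(e @ e)

        # Adaptive error constraint: track the error variance and drive the
        # multiplier toward (e^2 - sigma2_hat)/2 (NCLMS-style recursion, assumed).
        sigma2_hat = rho * sigma2_hat + (1.0 - rho) * e2
        lam += beta * (0.5 * (e2 - sigma2_hat) - lam)
        step = mu * max(1.0 + gamma * lam, 0.1)   # time-varying step, kept positive

        # Standard BP gradients of the squared error, applied with the scaled step.
        delta2 = -2.0 * e                         # dJ/dy_hat
        dW2, db2 = np.outer(h, delta2), delta2
        delta1 = (delta2 @ W2.T) * (1.0 - h**2)
        dW1, db1 = np.outer(x_n, delta1), delta1

        W2 -= step * dW2; b2 -= step * db2
        W1 -= step * dW1; b1 -= step * db1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

Under these assumptions, the multiplier λ grows while the instantaneous squared error exceeds its estimated variance, so the effective learning rate is enlarged early in training and shrinks as the error settles; substituting a known noise variance for the running estimate would correspond to the NCBP-style setting described in the abstract.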
