Searching a global optimum by stochastic perturbation in error back-propagation algorithm

  • Sam-Keun Kim (Department of Computer Engineering, Ansung National University)
  • Chang-Woo Min (Software Research Institute, IBM Korea)
  • Myung-Won Kim (School of Computing, Soongsil University)
  • Published: 1998.03.01

Abstract

The Error Back-Propagation (EBP) algorithm is widely used to train the multi-layer perceptron, a neural network model frequently applied to complex problems such as pattern recognition, adaptive control, and global optimization. However, since EBP is basically a gradient descent method, it may get stuck in a local minimum and fail to find the globally optimal solution. Moreover, the multi-layer perceptron lacks a systematic way of determining a network structure appropriate for a given problem; the number of hidden nodes is usually determined by trial and error. In this paper, we propose a new algorithm to train a multi-layer perceptron efficiently. Our algorithm uses stochastic perturbation in the weight space to effectively escape from local minima: when EBP learning gets stuck in a local minimum, the weights associated with hidden nodes are probabilistically re-initialized. The addition of new hidden nodes can also be viewed as a special case of stochastic perturbation, so the local minima problem and the network structure design are solved in a unified way. Experiments on several benchmark problems, including the parity problem, the two-spirals problem, and the credit-screening data, show that our algorithm is very efficient.
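The core idea can be illustrated with a minimal sketch. The following Python snippet is illustrative only, not the authors' code: it trains a one-hidden-layer perceptron on the 2-bit parity (XOR) problem with plain EBP and, when the error stalls above a target, re-initializes each hidden node's weights with some probability. The stall criterion and hyperparameters such as `patience` and `p_perturb` are assumptions for the sketch.

```python
# Sketch of EBP with stochastic weight perturbation (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

# 2-bit parity (XOR) training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_in, n_hidden, n_out = 2, 3, 1
W1 = rng.normal(0, 1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, n_out))
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, target = 0.5, 1e-3          # learning rate, error target
patience, p_perturb = 500, 0.5  # assumed stall window and perturbation probability
best, stall = np.inf, 0

for epoch in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    o = sigmoid(h @ W2 + b2)
    err = 0.5 * np.sum((o - y) ** 2)

    # Backward pass: standard EBP gradients for squared error with sigmoids
    d_o = (o - y) * o * (1 - o)
    d_h = (d_o @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_o)
    b2 -= lr * d_o.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

    if err < target:
        print(f"converged at epoch {epoch}, error {err:.5f}")
        break

    # Stall detection: no meaningful improvement for `patience` epochs
    if err < best - 1e-6:
        best, stall = err, 0
    else:
        stall += 1

    if stall >= patience:
        # Presumed local minimum: probabilistically re-initialize each
        # hidden node's incoming and outgoing weights (stochastic perturbation)
        for j in range(n_hidden):
            if rng.random() < p_perturb:
                W1[:, j] = rng.normal(0, 1, n_in)
                b1[j] = 0.0
                W2[j, :] = rng.normal(0, 1, n_out)
        best, stall = np.inf, 0
```

Adding a hidden node fits the same scheme: a freshly added node starts with randomly initialized weights, which is why the paper treats network growth as a special case of stochastic perturbation.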
