• Title/Abstract/Keywords: error back propagation algorithm

Search results: 318 (processing time: 0.021 s)

A Simple Approach of Improving Back-Propagation Algorithm

  • Zhu, H.;Eguchi, K.;Tabata, T.;Sun, N.
    • Proceedings of the IEEK Conference / 2000 ITC-CSCC -2 / pp.1041-1044 / 2000
  • The enhancement to the back-propagation algorithm presented in this paper arose from the need to extract sparsely connected networks from networks employing product terms. The enhancement works in conjunction with the back-propagation weight update, so that the actions of weight zeroing and weight stimulation reinforce each other. It is shown that the error measure can also be interpreted as a rate of weight change (as opposed to ${\Delta}W_{ij}$), and can consequently be used to determine when weights have reached a stable state. Weights judged to be stable are then compared against a zero-weight threshold; should they fall below this threshold, the weight in question is zeroed. Simulation of such a system is shown to yield improved learning rates and reduced network connection requirements, relative to the optimal network solution trained with the normal back-propagation algorithm, for Multi-Layer Perceptron (MLP), Higher Order Neural Network (HONN), and Sigma-Pi networks.

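As an illustration of the weight-zeroing idea above, the following minimal Python sketch prunes the weights of a toy single-layer sigmoid model once their updates settle below a stability threshold while their magnitudes sit under a zero-weight threshold. The task, network size, and threshold values are assumptions for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy regression: only the first two of eight inputs matter, so a good
# pruning rule should zero most of the remaining connections.
X = rng.normal(size=(200, 8))
true_W = np.zeros((1, 8))
true_W[0, :2] = [2.0, -3.0]
Y = sigmoid(X @ true_W.T)

W = rng.normal(scale=0.1, size=(1, 8))
mask = np.ones_like(W)                    # 0 marks a pruned (zeroed) connection
lr, stable_eps, zero_thresh = 0.5, 1e-4, 0.05

def mse():
    return float(((sigmoid(X @ W.T) - Y) ** 2).mean())

loss_initial = mse()
for epoch in range(500):
    out = sigmoid(X @ W.T)
    grad = ((out - Y) * out * (1 - out)).T @ X / len(X)
    dW = -lr * grad
    W += dW
    # A weight whose update rate has settled is judged stable; stable
    # weights whose magnitude is under the zero threshold are pruned.
    stable = np.abs(dW) < stable_eps
    mask[stable & (np.abs(W) < zero_thresh)] = 0.0
    W *= mask
loss_final = mse()
pruned = int((mask == 0).sum())
```

Holding pruned weights at zero through the mask mirrors the coupling of weight zeroing with the ongoing weight-update process described in the abstract.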

Traffic Sign Recognition Using Color Information and Error Back Propagation Algorithm

  • 방걸원;강대욱;조완현
    • The KIPS Transactions: Part D / Vol. 14D, No. 7 / pp.809-818 / 2007
  • This paper proposes a traffic sign recognition system that extracts traffic sign regions using color information and applies the error back-propagation learning algorithm to recognize the extracted images. The proposed method analyzes the colors of traffic signs to extract candidate sign regions from the image, exploiting the characteristics of the YUV, YIQ, and CMYK color spaces derived from the RGB color space. Shape processing then segments the region using the geometric properties of traffic signs, and recognition is performed with the trainable error back-propagation learning algorithm. Experimental results show that the proposed system extracts candidate regions and recognizes signs with excellent performance, unaffected by variations in input-image size or illumination.
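A minimal sketch of the color-based candidate extraction step in Python: RGB pixels are mapped to YUV, and a strongly positive V channel, which correlates with red content, flags possible red-sign pixels. The conversion matrix is the standard BT.601 one; the threshold is an illustrative assumption, and the paper additionally uses YIQ and CMYK cues:

```python
import numpy as np

def rgb_to_yuv(img):
    # Standard BT.601 RGB -> YUV conversion (inputs scaled to [0, 1]).
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return img @ m.T

def red_sign_mask(img, v_min=0.15):
    # A strongly positive V channel correlates with red content; the
    # threshold v_min is an illustrative assumption, not from the paper.
    yuv = rgb_to_yuv(img.astype(float) / 255.0)
    return yuv[..., 2] > v_min

# A 4x4 synthetic image with a 2x2 red patch as the sign candidate.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = [200, 20, 20]
mask = red_sign_mask(img)
```

In a full system, similar thresholds on the YIQ and CMYK channels would be combined before shape processing.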

The Constrained Least Mean Square Error Method

  • 나희승;박영진
    • Journal of the Korean Society for Noise and Vibration Engineering / Vol. 4, No. 1 / pp.59-69 / 1994
  • A new LMS algorithm, termed 'constrained LMS', is proposed for problems with a constrained structure. The conventional LMS algorithm cannot be used for these because it destroys the constrained structure of the weights or parameters. The proposed method uses error back-propagation, which is popular in training neural networks, for error minimization. Illustrative examples are given to demonstrate the applicability of the proposed algorithm.

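The flavor of a constraint-preserving LMS update can be sketched by projecting each gradient step onto the constraint surface, here the linear constraint sum(w) = 1. This is an illustrative choice of constraint; the paper's constraints and its back-propagation formulation differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Adaptive FIR filter whose taps must satisfy the linear constraint
# sum(w) = 1 (an assumed example constraint).
n_taps = 4
w = np.full(n_taps, 1.0 / n_taps)               # feasible starting point
c = np.ones(n_taps)                             # constraint normal: c @ w = 1
P = np.eye(n_taps) - np.outer(c, c) / (c @ c)   # projector onto {x : c @ x = 0}

w_true = np.array([0.5, 0.3, 0.15, 0.05])       # target, also on the constraint
mu = 0.05
for _ in range(2000):
    x = rng.normal(size=n_taps)
    d = w_true @ x                              # desired (noiseless) response
    e = d - w @ x                               # instantaneous error
    w = w + mu * e * (P @ x)                    # projected step: sum(w) stays 1
```

Because each step lies in the null space of the constraint normal, the adaptation never leaves the constraint surface, unlike a plain LMS update.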

Improving the Error Back-Propagation Algorithm of Multi-Layer Perceptrons with a Modified Error Function

  • 오상훈;이영직
    • Journal of the Institute of Electronics Engineers of Korea B / Vol. 32B, No. 6 / pp.922-931 / 1995
  • In this paper, we propose a modified error function to improve the EBP (Error Back-Propagation) algorithm for Multi-Layer Perceptrons. Using the modified error function, an output node of the MLP generates a strong error signal when the output is far from the desired value, and a weak error signal in the opposite case. This accelerates the learning speed of the EBP algorithm in the initial stage and prevents over-specialization for training patterns in the final stage. The effectiveness of our modification is verified through simulations of handwritten digit recognition.

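The effect the abstract describes can be seen by comparing output-node error signals. With the conventional MSE cost, the signal carries the factor $o(1-o)$, which vanishes when a sigmoid output saturates at the wrong extreme; a cross-entropy-style signal, shown here as a stand-in since the paper's exact modified function is not reproduced, stays strong in that case:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

t = 1.0                      # desired output
o_far = sigmoid(-4.0)        # badly wrong: output saturated near 0

# MSE error signal includes the sigmoid derivative o*(1-o), which is tiny
# at saturation, so a badly-wrong output node learns very slowly.
delta_mse = (o_far - t) * o_far * (1 - o_far)

# A cross-entropy-style signal drops the derivative factor and stays strong.
delta_ce = o_far - t
```

The modified function in the paper goes further by also weakening the signal once the output is near the target, which is what suppresses over-specialization.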

Speeding-up for error back-propagation algorithm using micro-genetic algorithms

  • 강경운;최영길;심귀보;전홍태
    • Proceedings of the ICROS Conference / 1993 Korea Automatic Control Conference (Domestic Session); Seoul National University, Seoul; 20-22 Oct. 1993 / pp.853-858 / 1993
  • The error back-propagation (BP) algorithm is widely used for finding optimum weights of multi-layer neural networks. However, its critical drawback is the slow convergence of the error. The major reason for this is premature saturation, a phenomenon in which the error of a neural network stays almost constant for some period of time during learning. An inappropriate selection of initial weights causes each neuron to be trapped in the premature saturation state, which slows the convergence of the multi-layer neural network. In this paper, to overcome this problem, micro-genetic algorithms (${\mu}$-GAs), which can find near-optimal values, are used to select the proper initial weights and the slopes of the neurons' activation functions. The effectiveness of the proposed algorithms is demonstrated by computer simulations of a two-d.o.f. planar robot manipulator.

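A rough Python sketch of using a micro-genetic algorithm to pick initial weights: a five-member population evolves by elitism and uniform crossover, restarting from fresh random genomes whenever it converges (classic micro-GAs omit mutation and rely on such restarts). The 2-2-1 XOR encoding and the fitness function are illustrative assumptions; the paper additionally evolves the slopes of the activation functions:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR task: fitness of a candidate initial-weight vector is its network error.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([0, 1, 1, 0], dtype=float)

def unpack(v):
    # A 2-2-1 network encoded as 9 parameters (an assumed encoding).
    return v[:4].reshape(2, 2), v[4:6], v[6:8], v[8]

def loss(v):
    W1, b1, W2, b2 = unpack(v)
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return float(((out - Y) ** 2).mean())

# Micro-GA: tiny population, elitism, restart on convergence, no mutation.
pop = rng.normal(scale=2.0, size=(5, 9))
first_best = min(loss(v) for v in pop)
best = None
for gen in range(200):
    fit = np.array([loss(v) for v in pop])
    best = pop[fit.argmin()].copy()
    if fit.std() < 1e-3:                  # population converged: restart
        pop = rng.normal(scale=2.0, size=(5, 9))
        pop[0] = best                     # keep the elite genome
        continue
    new = [best]                          # elitism
    for _ in range(4):
        p1, p2 = pop[rng.integers(5)], pop[rng.integers(5)]
        cross = rng.random(9) < 0.5       # uniform crossover
        new.append(np.where(cross, p1, p2))
    pop = np.array(new)

best_loss = loss(best)
```

The winning genome would then seed a standard BP run, starting it away from the flat premature-saturation regions.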

A Modified Error Function to Improve the Error Back-Propagation Algorithm for Multi-Layer Perceptrons

  • Oh, Sang-Hoon;Lee, Young-Jik
    • ETRI Journal / Vol. 17, No. 1 / pp.11-22 / 1995
  • This paper proposes a modified error function to improve the error back-propagation (EBP) algorithm for multi-layer perceptrons (MLPs), which suffers from slow learning speed. It can also suppress the over-specialization for training patterns that occurs in an algorithm based on a cross-entropy cost function, which markedly reduces learning time. Like the cross-entropy function, our new function accelerates the learning speed of the EBP algorithm by letting an output node of the MLP generate a strong error signal when the output is far from the desired value. Moreover, it prevents over-specialization of learning for training patterns by letting an output node whose value is close to the desired value generate a weak error signal. In a simulation study classifying handwritten digits in the CEDAR [1] database, the proposed method attained 100% correct classification for the training patterns after only 50 sweeps of learning, while the original EBP attained only 98.8% after 500 sweeps. Our method also shows a mean-squared error of 0.627 for the test patterns, superior to the error of 0.667 for the cross-entropy method. These results demonstrate that our new method excels others in learning speed as well as in generalization.


Evaluation for Applications of the Levenberg-Marquardt Algorithm in Geotechnical Engineering

  • 김영수;김대만
    • Journal of the Korean Geo-Environmental Society / Vol. 10, No. 5 / pp.49-57 / 2009
  • In this study, the compression index, a typical geotechnical engineering quantity, was predicted with an artificial neural network trained by the Levenberg-Marquardt (LM) algorithm, and the predictions were compared with those of a Back-Propagation (BP) neural network, which is widely used in geotechnical engineering, to evaluate the applicability of the LM algorithm to this field. The predictions of both algorithms were also checked for accuracy against values computed from previously proposed empirical equations for the compression index. The empirical equations generally showed larger errors than the predictions of both the BP and the LM neural networks. The LM predictions were comparable in accuracy to the BP predictions but converged faster, indicating that the LM algorithm is well suited to geotechnical engineering applications.

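The LM update itself blends Gauss-Newton and gradient-descent steps by solving (JᵀJ + λI)δ = Jᵀr with an adaptive damping factor λ. A self-contained Python sketch on a toy exponential fit; the model and damping schedule are illustrative, not the paper's geotechnical setup:

```python
import numpy as np

# Fit y = a * exp(b * x) by Levenberg-Marquardt. Small lam approaches
# Gauss-Newton; large lam approaches a short gradient-descent step.
x = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * x)          # noiseless synthetic data

p = np.array([1.0, 0.0])                 # initial guess [a, b]
lam = 1e-2
for _ in range(50):
    a, b = p
    r = y - a * np.exp(b * x)            # residuals
    J = np.column_stack([np.exp(b * x),          # d f / d a
                         a * x * np.exp(b * x)]) # d f / d b
    delta = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    trial = y - (a + delta[0]) * np.exp((b + delta[1]) * x)
    if (trial ** 2).sum() < (r ** 2).sum():
        p = p + delta                    # accept: move toward Gauss-Newton
        lam *= 0.5
    else:
        lam *= 2.0                       # reject: dampen toward gradient step
```

The faster convergence the abstract reports comes from the Gauss-Newton regime near the optimum, which first-order BP updates cannot match.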

Time Series Prediction Using a Multi-layer Neural Network with Low Pass Filter Characteristics

  • Min-Ho Lee
    • Journal of Advanced Marine Engineering and Technology / Vol. 21, No. 1 / pp.66-70 / 1997
  • In this paper, a new learning algorithm for curvature smoothing and improved generalization in multi-layer neural networks is proposed. To enhance the generalization ability, a constraint term on the hidden neuron activations is added to the conventional output error, which gives curvature-smoothing characteristics to multi-layer neural networks. When the total cost, consisting of the output error and the hidden error, is minimized by gradient-descent methods, the additional descent term yields not only Hebbian learning but also synaptic weight decay. The algorithm therefore incorporates error back-propagation, Hebbian learning, and weight decay, while its additional computational cost over standard error back-propagation is negligible. Computer simulations of time series prediction with the Santa Fe competition data show that the proposed learning algorithm gives much better generalization performance.

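The total cost described above can be sketched as the output error plus a penalty on hidden activations; differentiating the penalty adds a `lam * h` term to the hidden-layer error signal, giving the extra decay-like descent term mentioned. A minimal Python illustration under assumed data, layer sizes, and penalty weight:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Per-sample cost: 0.5*(out - y)^2 + 0.5*lam*||h||^2, where the second
# term penalizes hidden activations (the curvature-smoothing constraint).
X = rng.normal(size=(50, 3))
Y = (X[:, :1] > 0).astype(float)

W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
lr, lam = 0.3, 1e-2

def total_cost():
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    return float((0.5 * (out - Y) ** 2).sum(axis=1).mean()
                 + 0.5 * lam * (h ** 2).sum(axis=1).mean())

cost_before = total_cost()
for _ in range(500):
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    d_out = (out - Y) * out * (1 - out)           # standard EBP output delta
    # The penalty adds lam*h to the hidden error: the decay-like extra term.
    d_hid = (d_out @ W2.T + lam * h) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X)
    W1 -= lr * (X.T @ d_hid) / len(X)
cost_after = total_cost()
```

With `lam = 0` this reduces to plain error back-propagation, which is why the extra computational cost is negligible.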

Self-Relaxation for Multilayer Perceptron

  • Liou, Cheng-Yuan;Chen, Hwann-Txong
    • Proceedings of the Korea Fuzzy Logic and Intelligent Systems Society Conference / The Third Asian Fuzzy Systems Symposium, 1998 / pp.113-117 / 1998
  • We propose a way to show the inherent learning complexity of the multilayer perceptron. We display the solution space and the error surfaces on the input space of a single neuron with two inputs. The evolution of its weights will follow one of the two error surfaces. We observe that under the back-propagation (BP) learning algorithm (1), a weight cannot jump to the lower error surface because of the implicit continuity constraint on weight changes. The self-relaxation approach instead explicitly finds the best combination over all neurons' two error surfaces. The time complexity of training a multilayer perceptron by self-relaxation is exponential in the number of neurons.


On the set up to the Number of Hidden Node of Adaptive Back Propagation Neural Network

  • 홍봉화
    • 정보학연구 / Vol. 5, No. 2 / pp.55-67 / 2002
  • This paper proposes an adaptive back-propagation algorithm in which the learning rate is updated adaptively according to the generated error and the number of hidden nodes can be varied. The proposed algorithm is expected to escape local minima, and it can set a number of hidden nodes appropriate to the convergence conditions. The simulations used two training tasks: the XOR problem and a set of $7{\times}5$-dot English character fonts. In both simulations, the probability of settling into a local minimum decreased. Moreover, for the character-font training, the network improved learning efficiency by about 41.56%~58.28% compared with the conventional back-propagation and HNAD algorithms.

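An error-driven learning-rate update of the kind described can be sketched in a bold-driver style: raise the rate while the error falls, cut it when the error rises. The constants, the 2-4-1 XOR network, and the omission of the hidden-node adaptation are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR with a fixed 2-4-1 network; only the learning rate adapts here.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1 = rng.normal(scale=1.0, size=(2, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))
lr, prev_err = 0.5, np.inf

for _ in range(5000):
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    err = ((out - Y) ** 2).mean()
    # Error-driven rate update: grow while improving, shrink on a setback.
    lr = min(lr * 1.05, 2.0) if err < prev_err else max(lr * 0.5, 0.01)
    prev_err = err
    d_out = (out - Y) * out * (1 - out)           # standard EBP deltas
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_hid
```

The paper couples such an error-driven rate with growing or shrinking the hidden layer; the bounds on `lr` keep the rule stable.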