• Title/Summary/Keyword: Error-Back Propagation


Improving the Error Back-Propagation Algorithm of Multi-Layer Perceptrons with a Modified Error Function (역전파 학습의 오차함수 개선에 의한 다층퍼셉트론의 학습성능 향상)

  • 오상훈;이영직
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.6
    • /
    • pp.922-931
    • /
    • 1995
  • In this paper, we propose a modified error function to improve the EBP (Error Back-Propagation) algorithm for Multi-Layer Perceptrons. With the modified error function, an output node of the MLP generates a strong error signal when its output is far from the desired value and a weak error signal when it is close. This accelerates the learning speed of the EBP algorithm in the initial stage and prevents overspecialization to training patterns in the final stage. The effectiveness of our modification is verified through simulations of handwritten digit recognition.

  • PDF
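
The effect described in the abstract above can be sketched numerically. The modified signal below uses a cross-entropy-style form that drops the sigmoid slope term; this is an illustrative assumption for the general idea, not the paper's actual error function:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Output-layer error signal under standard MSE backprop for a sigmoid
# output node: delta = (t - y) * y * (1 - y). The slope term y*(1-y)
# vanishes as y approaches 0 or 1, weakening the signal exactly when
# the node is badly wrong (saturated on the wrong side).
def delta_mse(t, y):
    return (t - y) * y * (1.0 - y)

# A modified error function can remove the slope term (cross-entropy
# has this property): a node far from its target emits a strong error
# signal, a nearly correct node a weak one.
def delta_modified(t, y):
    return t - y

t, y = 1.0, 0.01          # saturated output, badly wrong
print(delta_mse(t, y))       # tiny: slope term suppresses the signal
print(delta_modified(t, y))  # near 1: strong corrective signal
```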

Improved Error Backpropagation Algorithm using Modified Activation Function Derivative (수정된 Activation Function Derivative를 이용한 오류 역전파 알고리즘의 개선)

  • 권희용;황희영
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.41 no.3
    • /
    • pp.274-280
    • /
    • 1992
  • In this paper, an improved Error Back-Propagation algorithm is introduced which avoids network paralysis, one of the problems of the error back-propagation learning rule. For this purpose, we analyzed the cause of network paralysis and modified the activation function derivative of the standard Error Back-Propagation algorithm, which is regarded as the source of the phenomenon. The characteristics of the modified activation function derivative are analyzed. Various experiments show that the modified Error Back-Propagation algorithm performs better than the standard algorithm.

  • PDF
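
Network paralysis arises because the sigmoid derivative goes to zero for saturated units, so their weights stop updating. A widely used remedy is a constant offset on the derivative (a Fahlman-style modification — an assumption here; the paper's own modification may differ):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(x):
    s = sigmoid(x)
    return s * (1.0 - s)   # -> 0 for large |x|: saturated units stop learning

# Keeping the derivative bounded away from zero lets saturated units
# keep receiving weight updates, avoiding network paralysis.
def modified_deriv(x, offset=0.1):
    return sigmoid_deriv(x) + offset
```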

Traffic Sign Recognition Using Color Information and Error Back Propagation Algorithm (컬러정보와 오류역전파 알고리즘을 이용한 교통표지판 인식)

  • Bang, Gul-Won;Kang, Dea-Wook;Cho, Wan-Hyun
    • The KIPS Transactions:PartD
    • /
    • v.14D no.7
    • /
    • pp.809-818
    • /
    • 2007
  • In this thesis, color information is used to extract the traffic sign region, and a traffic sign recognition system applying the error back-propagation algorithm is proposed for recognizing the extracted image. The proposed method analyzes the colors of traffic signs to extract and recognize candidate sign regions. Candidate regions are extracted using the characteristics of the YUV, YIQ, and CMYK color spaces derived from the RGB color space. Morphology exploits the geometric characteristics of traffic signs for image segmentation. The traffic signs themselves are recognized with the error back-propagation algorithm. Experimental results show that the proposed system extracts and recognizes candidate regions robustly, unaffected by lighting differences or by input images of various sizes.
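
The color-space step above can be sketched as an RGB-to-YUV conversion followed by a threshold on the red-difference (V) channel; the BT.601 coefficients are standard, but the threshold value is an illustrative assumption, not the paper's:

```python
import numpy as np

# Flag likely red-sign pixels: convert RGB to the YUV V channel
# (ITU-R BT.601 red-difference) and threshold it. Gray/achromatic
# pixels give V near 0; strongly red pixels give large positive V.
def red_sign_mask(rgb, v_thresh=40.0):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return v > v_thresh

img = np.array([[[255.0, 0.0, 0.0],        # pure red -> candidate
                 [128.0, 128.0, 128.0]]])  # gray -> rejected
mask = red_sign_mask(img)
```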

Time Series Prediction Using a Multi-layer Neural Network with Low Pass Filter Characteristics (저주파 필터 특성을 갖는 다층 구조 신경망을 이용한 시계열 데이터 예측)

  • Min-Ho Lee
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.21 no.1
    • /
    • pp.66-70
    • /
    • 1997
  • In this paper a new learning algorithm for curvature smoothing and improved generalization in multi-layer neural networks is proposed. To enhance the generalization ability, a constraint term on hidden neuron activations is added to the conventional output error, which gives curvature-smoothing characteristics to multi-layer neural networks. When the total cost, consisting of the output error and the hidden error, is minimized by gradient-descent methods, the additional descent term yields not only Hebbian learning but also synaptic weight decay. The algorithm therefore incorporates error back-propagation, Hebbian learning, and weight decay, and its additional computational cost over standard error back-propagation is negligible. Computer simulations of time series prediction with the Santa Fe competition data show that the proposed learning algorithm gives much better generalization performance.

  • PDF
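
The augmented cost described above can be sketched as follows; the quadratic form of the hidden-activation penalty and the symbol names are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# Total cost = output error + lambda * hidden-activation penalty.
# Penalizing hidden activations smooths the learned mapping; through
# the chain rule, its gradient contributes Hebbian-like and
# weight-decay-like terms to the weight update.
def total_cost(y, t, hidden, lam=0.01):
    output_error = 0.5 * np.sum((y - t) ** 2)
    hidden_penalty = 0.5 * lam * np.sum(hidden ** 2)
    return output_error + hidden_penalty
```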

Estimating Regression Function with $\varepsilon$-Insensitive Supervised Learning Algorithm

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society
    • /
    • v.15 no.2
    • /
    • pp.477-483
    • /
    • 2004
  • One of the major paradigms for supervised learning in the neural network community is back-propagation learning. The standard implementations of back-propagation learning are optimal under the assumption of identical and independent Gaussian noise. In this paper, for regression function estimation, we introduce an $\varepsilon$-insensitive back-propagation learning algorithm, which corresponds to minimizing the least absolute error. We compare this algorithm with the support vector machine (SVM), another $\varepsilon$-insensitive supervised learning algorithm that has been very successful in pattern recognition and function estimation problems. For the comparison, we consider a more realistic model that allows the noise variance itself to depend on the input variables.

  • PDF
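
The $\varepsilon$-insensitive loss that both methods share is small enough to state directly — errors inside the $\varepsilon$ tube cost nothing, errors outside grow linearly (least absolute error):

```python
import numpy as np

# epsilon-insensitive loss, as in SVM regression: zero inside the
# epsilon tube around the target, linear (absolute-error) outside it.
def eps_insensitive(y, t, eps=0.1):
    return np.maximum(np.abs(y - t) - eps, 0.0)
```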

A Simple Approach of Improving Back-Propagation Algorithm

  • Zhu, H.;Eguchi, K.;Tabata, T.;Sun, N.
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.1041-1044
    • /
    • 2000
  • The enhancement to the back-propagation algorithm presented in this paper resulted from the need to extract sparsely connected networks from networks employing product terms. The enhancement works in conjunction with the back-propagation weight update process, so that the actions of weight zeroing and weight stimulation reinforce each other. It is shown that the error measure can also be interpreted as a rate of weight change (as opposed to ${\Delta}W_{ij}$) and consequently used to determine when weights have reached a stable state. Weights judged to be stable are then compared to a zero-weight threshold; should a weight fall below this threshold, it is zeroed. Simulations of such a system show improved learning rates and reduced network connection requirements, relative to the optimal network solution trained with the normal back-propagation algorithm, for Multi-Layer Perceptron (MLP), Higher-Order Neural Network (HONN), and Sigma-Pi networks.

  • PDF
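
The stability-then-zeroing rule above can be sketched as follows; the smoothed rate-of-change test and both threshold values are hypothetical stand-ins for the paper's criteria:

```python
import numpy as np

# Prune weights judged stable: a weight whose (smoothed) rate of change
# is below stable_tol is considered settled; settled weights whose
# magnitude is under zero_thresh are zeroed, sparsifying the network.
def prune_weights(w, rate_of_change, stable_tol=1e-3, zero_thresh=0.05):
    stable = np.abs(rate_of_change) < stable_tol
    w = w.copy()
    w[stable & (np.abs(w) < zero_thresh)] = 0.0
    return w

w = np.array([0.01, 0.5, 0.02])
roc = np.array([1e-4, 1e-4, 1.0])   # third weight still moving
pruned = prune_weights(w, roc)      # only the first weight is zeroed
```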

Evaluation of Bearing Capacity on PHC Auger-Drilled Piles Using Artificial Neural Network (인공신경망을 이용한 PHC 매입말뚝의 지지력 평가)

  • Lee, Song;Jang, Joo-Won
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.10 no.6
    • /
    • pp.213-223
    • /
    • 2006
  • In this study, an artificial neural network is applied to the evaluation of the bearing capacity of PHC auger-drilled piles at sites of domestic decomposed granite soils. To verify the applicability of the error back-propagation neural network, a total of 168 in-situ test results for PHC auger-drilled piles are used. The results show that, after training, the estimates of the error back-propagation neural network match the pile test results well, demonstrating that neural networks can be used with confidence to evaluate the bearing capacity of piles.

Self-Relaxation for Multilayer Perceptron

  • Liou, Cheng-Yuan;Chen, Hwann-Txong
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.06a
    • /
    • pp.113-117
    • /
    • 1998
  • We propose a way to show the inherent learning complexity of the multilayer perceptron. We display the solution space and the error surfaces on the input space of a single neuron with two inputs. The evolution of its weights will follow one of the two error surfaces. We observe that under the back-propagation (BP) learning algorithm (1), the weight cannot jump to the lower error surface due to the implicit continuity constraint on weight changes. The self-relaxation approach is to explicitly find the best combination of all neurons' two error surfaces. The time complexity of training a multilayer perceptron by self-relaxation is exponential in the number of neurons.

  • PDF

Speeding-up for error back-propagation algorithm using micro-genetic algorithms (미소-유전 알고리듬을 이용한 오류 역전파 알고리듬의 학습 속도 개선 방법)

  • 강경운;최영길;심귀보;전홍태
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1993.10a
    • /
    • pp.853-858
    • /
    • 1993
  • The error back-propagation (BP) algorithm is widely used for finding optimum weights of multi-layer neural networks. However, the critical drawback of the BP algorithm is its slow convergence. A major reason for this slow convergence is premature saturation, a phenomenon in which the error of a neural network stays almost constant for some period of time during learning. An inappropriate selection of initial weights causes each neuron to be trapped in the premature saturation state, which slows the convergence of the multi-layer neural network. In this paper, to overcome this problem, micro-genetic algorithms ($\mu$-GAs), which can find near-optimal values, are used to select proper weights and slopes of the neurons' activation functions. The effectiveness of the proposed algorithm is demonstrated by computer simulations of a two-DOF planar robot manipulator.

  • PDF
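
A micro-GA keeps a tiny population (typically five individuals) and restarts it around the elite whenever it converges. The sketch below shows that restart loop in minimal form; the encoding, perturbation scheme, and fitness function are assumptions for illustration, not the paper's setup:

```python
import random

# Minimal micro-GA sketch: evolve candidate initial-weight vectors with
# a 5-member population, elitist restarts re-seeding the population
# around the current best individual. fitness maps a weight vector to
# a score to maximize (e.g., negative initial network error).
def micro_ga(fitness, dim, pop_size=5, generations=50):
    pop = [[random.uniform(-1, 1) for _ in range(dim)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # keep the elite, re-seed the rest as perturbations of it
        pop = [best] + [[g + random.gauss(0, 0.1) for g in best]
                        for _ in range(pop_size - 1)]
        best = max(pop, key=fitness)
    return best

random.seed(1)
found = micro_ga(lambda w: -sum(x * x for x in w), dim=3)
```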

Learning of multi-layer perceptrons with 8-bit data precision (8비트 데이타 정밀도를 가지는 다층퍼셉트론의 역전파 학습 알고리즘)

  • 오상훈;송윤선
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.4
    • /
    • pp.209-216
    • /
    • 1996
  • In this paper, we propose a learning method for multi-layer perceptrons (MLPs) with 8-bit data precision. The suggested method uses the cross-entropy cost function to remove the slope term from the error signal of the output layer. To decrease the possibility of overflow, we convert 16-bit weighted-sum results into 8-bit data with an appropriate range. In the forward propagation, the range for bit conversion is determined using the saturation property of the sigmoid function. In the backward propagation, the range for bit conversion is derived using the probability density function of the back-propagated signal. In a simulation study classifying handwritten digits from the CEDAR database, our method shows generalization performance similar to error back-propagation learning with 16-bit precision.

  • PDF
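
The forward-pass range conversion above can be sketched as clip-then-requantize; the scale factors and the ±8 saturation bound are illustrative assumptions based on the sigmoid's saturation property, not the paper's exact ranges:

```python
import numpy as np

# Convert a 16-bit weighted-sum accumulator to an 8-bit code:
# rescale to the real-valued net input, clip to the sigmoid's
# effective range (|x| > ~8 saturates anyway), requantize to int8.
def to_8bit(acc16, in_scale=256.0, sat=8.0):
    x = acc16 / in_scale                 # back to real-valued net input
    x = np.clip(x, -sat, sat)            # sigmoid saturation region
    return np.round(x / sat * 127).astype(np.int8)

codes = to_8bit(np.array([4096, -4096, 256], dtype=np.int32))
```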