• Title/Summary/Keyword: back-propagation learning algorithm

Search Result 386, Processing Time 0.022 seconds

Back-Propagation Algorithm through Omitting Redundant Learning (중복 학습 방지에 의한 역전파 학습 알고리듬)

  • 백준호;김유신;손경식
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.29B no.9
    • /
    • pp.68-75
    • /
    • 1992
  • In this paper, a back-propagation algorithm that omits redundant learning is proposed to improve learning speed. The algorithm has been applied to XOR, parity check, and recognition of hand-written numbers. The number of patterns still to be learned is confirmed to decrease as learning proceeds, even in the early stages. For recognition of hand-written numbers, learning speed improves by more than a factor of two across various numbers of hidden neurons, and the improvement grows as the numbers of patterns and hidden neurons increase. The recognition rate of the proposed algorithm is nearly the same as that of the conventional method.
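
The core idea — skip re-training on patterns whose error is already small — can be sketched with a single sigmoid unit. This is an illustrative reconstruction, not the paper's code: the AND task, tolerance, and learning rate are assumptions.

```python
import numpy as np

def train_skip_redundant(X, y, epochs=1000, lr=1.0, tol=0.2, seed=0):
    """Delta-rule training that omits 'redundant' learning: a pattern
    whose output error is already below `tol` is skipped that pass."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1] + 1)  # weights + bias
    skipped = 0
    for _ in range(epochs):
        for x, t in zip(X, y):
            xb = np.append(x, 1.0)                  # append bias input
            o = 1.0 / (1.0 + np.exp(-w @ xb))       # sigmoid output
            err = t - o
            if abs(err) < tol:                      # already learned:
                skipped += 1                        # omit this pattern
                continue
            w += lr * err * o * (1.0 - o) * xb      # back-prop update
    return w, skipped

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)             # AND: a separable toy
w, skipped = train_skip_redundant(X, y)
outs = 1.0 / (1.0 + np.exp(-(X @ w[:2] + w[2])))
preds = (outs > 0.5).astype(int)
```

As training proceeds, more patterns fall under the tolerance and are skipped each pass, which is where the claimed speed-up comes from.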


On the Configuration of initial weight value for the Adaptive back propagation neural network (적응 역 전파 신경회로망의 초기 연철강도 설정에 관한 연구)

  • 홍봉화
    • The Journal of Information Technology
    • /
    • v.4 no.1
    • /
    • pp.71-79
    • /
    • 2001
  • This paper presents an adaptive back-propagation algorithm that updates the learning parameter adaptively according to the generated error, and configures the range of the initial connection weights according to the difference between the maximum and minimum target values. The algorithm is expected to escape local minima and to create favorable conditions for convergence. In simulation, the algorithm was tested on three learning patterns: the 3-parity problem, a $7{\times}5$ dot alphabetic font, and handwritten primitive strokes. In all three examples, the probability of becoming trapped in a local minimum was reduced. Furthermore, for the alphabetic font and the handwritten primitive strokes, the neural network improved learning efficiency by about 27%~57.2% over standard back-propagation (SBP).
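
One way to read "updating the learning parameter by the generated error" is a bold-driver rule: grow the step size while the error falls, shrink it and reject the step when the error rises. A minimal scalar sketch — the 1-D objective and the rate factors are assumptions, not the paper's settings, and the paper's initial-weight-range configuration is not sketched here.

```python
def bold_driver(loss, grad, x, lr=0.5, up=1.05, down=0.5, steps=200):
    """Gradient descent whose learning rate adapts to the observed
    error: accepted steps grow the rate, rejected (error-increasing)
    steps shrink it and are undone."""
    prev = loss(x)
    for _ in range(steps):
        x_new = x - lr * grad(x)
        cur = loss(x_new)
        if cur <= prev:              # error fell: keep step, speed up
            x, prev, lr = x_new, cur, lr * up
        else:                        # error rose: undo step, slow down
            lr *= down
    return x

loss = lambda c: (c - 3.0) ** 2
grad = lambda c: 2.0 * (c - 3.0)
c_star = bold_driver(loss, grad, x=0.0)
```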


A neural network with local weight learning and its application to inverse kinematic robot solution (부분 학습구조의 신경회로와 로보트 역 기구학 해의 응용)

  • 이인숙;오세영
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1990.10a
    • /
    • pp.36-40
    • /
    • 1990
  • Conventional back-propagation learning is generally slow and rather inaccurate, which makes it difficult to use in control applications. A new multilayer perceptron architecture and its learning algorithm are proposed, consisting of a Kohonen front layer followed by a back-propagation network. The Kohonen layer selects a subset of the hidden-layer neurons for local tuning. The architecture has been tested on the inverse kinematic solution of a robot manipulator, demonstrating fast and accurate learning.
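
A toy version of the local-tuning idea, where a winner-take-all (Kohonen-style) front stage picks which local unit gets its weights updated; the fixed prototypes, learning rate, and piecewise-linear target are illustrative assumptions, not the paper's network.

```python
import numpy as np

# Two fixed prototypes partition the input line; each owns a local
# linear unit, and only the winner's weights are tuned per sample.
protos = np.array([-2.0, 2.0])
w = np.zeros(2)                     # slope of each local unit
b = np.zeros(2)                     # bias of each local unit

rng = np.random.default_rng(0)
for _ in range(4000):
    x = rng.uniform(-4.0, 4.0)
    t = -x if x < 0 else 2.0 * x    # piecewise-linear target function
    k = int(np.argmin(np.abs(protos - x)))  # winner-take-all selection
    y = w[k] * x + b[k]
    w[k] += 0.02 * (t - y) * x      # delta rule on the winner only
    b[k] += 0.02 * (t - y)
```

Each local unit converges to the slope of its own region (here -1 on the left, 2 on the right), which a single global linear unit could not represent.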


Fuzzy neural network modeling using hyper elliptic gaussian membership functions (초타원 가우시안 소속함수를 사용한 퍼지신경망 모델링)

  • 권오국;주영훈;박진배
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1997.10a
    • /
    • pp.442-445
    • /
    • 1997
  • We present a hybrid self-tuning method for fuzzy inference systems with hyper-elliptic Gaussian membership functions, using a genetic algorithm (GA) and the back-propagation algorithm. The proposed method has two phases: coarse tuning based on the GA, and fine tuning based on back-propagation. Because the parameters obtained by the GA are only near-optimal, the back-propagation algorithm, a neural-network learning algorithm, is then used to fine-tune them. We use the Box-Jenkins time series to evaluate the effectiveness of the proposed approach and compare it with the conventional method.
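
The two-phase coarse/fine idea can be sketched on a single parameter: a miniature GA gets near the optimum, then gradient descent (the role back-propagation plays in the paper) finishes the job. The scalar cost, population size, and mutation scale are assumptions for illustration.

```python
import random

def fitness(c):
    """Squared error of a candidate Gaussian-centre value (toy cost)."""
    return (c - 1.7) ** 2

random.seed(1)

# Phase 1 -- coarse tuning by GA: keep the fittest half, refill the
# population with Gaussian-mutated copies. Lands near the optimum.
pop = [random.uniform(-10.0, 10.0) for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness)
    pop = pop[:10] + [p + random.gauss(0.0, 0.5) for p in pop[:10]]
coarse = min(pop, key=fitness)

# Phase 2 -- fine tuning by gradient descent on the same cost.
c = coarse
for _ in range(200):
    c -= 0.1 * 2.0 * (c - 1.7)     # gradient of (c - 1.7)**2
```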


Active Control of Sound in a Duct System by Back Propagation Algorithm (역전파 알고리즘에 의한 덕트내 소음의 능동제어)

  • Shin, Joon;Kim, Heung-Seob;Oh, Jae-Eung
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.18 no.9
    • /
    • pp.2265-2271
    • /
    • 1994
  • With rising standards of living, the demand for comfortable, quiet environments has increased, and many studies on active noise reduction have sought to overcome the limits of passive control methods. In this study, active noise control is performed in a duct system using an intelligent control technique that requires neither the coefficients of a high-order filter nor a mathematical model of the system. The back-propagation algorithm is applied as the intelligent control technique, and the control system is organized to exclude the error microphone and the high-speed processor that conventional active noise control techniques require. Furthermore, learning is performed with an acoustic feedback model, and the effectiveness of the proposed technique is verified by computer simulation and by an active noise control experiment in a duct system.
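
The underlying principle — learn a secondary signal that cancels the noise at the listener — can be shown with a 2-tap adaptive filter driven by gradient (LMS) updates. The tonal disturbance, phase-shifted reference, and step size are illustrative assumptions standing in for the paper's neural controller and duct acoustics.

```python
import math

# Toy single-channel active noise cancellation: a 2-tap adaptive filter
# learns the anti-noise that cancels a tonal disturbance. The reference
# is a phase-shifted copy of the noise, standing in for an upstream
# microphone in the duct.
mu = 0.2                                 # adaptation step size
w0, w1 = 0.0, 0.0                        # anti-noise filter taps
x_prev, errs = 0.0, []
for n in range(5000):
    x = math.sin(0.3 * n - 0.5)          # upstream reference signal
    d = math.sin(0.3 * n)                # noise reaching the listener
    y = w0 * x + w1 * x_prev             # anti-noise from the speaker
    e = d - y                            # residual heard downstream
    w0 += mu * e * x                     # gradient step on each tap
    w1 += mu * e * x_prev
    errs.append(abs(e))
    x_prev = x
```

The residual amplitude decays toward zero as the filter converges, i.e. the 'duct' goes quiet.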

A Design PID Controller by Neural Network algorithm with Momentum term in Position control system (위치제어계에서 모먼텀 항을 갖는 신경망 알고리듬 의한 PID 제어기 설계)

  • 박광현;허진영;하홍곤
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2001.10a
    • /
    • pp.380-385
    • /
    • 2001
  • In this paper, to avoid the danger of becoming trapped at a local minimum point, a disadvantage of general back-propagation, while also obtaining fast learning speed, we propose PID back-propagation with a momentum term (PID-BPMT) and design a PID controller based on the neural network with a momentum term. The controller is applied to a position control system driven by a DC servo motor, and its performance is verified by computer simulation.
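
The momentum term itself is easy to isolate: a fraction of the previous weight update is carried into the next one, which speeds learning and helps the search roll past shallow local minima. A scalar sketch — the quadratic cost and the rates are assumed, not taken from the paper.

```python
def descend_with_momentum(grad, x0, lr=0.05, beta=0.9, steps=300):
    """Gradient descent with a momentum term: each update adds a
    fraction `beta` of the previous update to the new gradient step."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)    # momentum accumulates velocity
        x += v
    return x

grad = lambda x: 2.0 * (x - 5.0)       # gradient of (x - 5)**2
x_final = descend_with_momentum(grad, x0=0.0)
```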


A Simple Approach of Improving Back-Propagation Algorithm

  • Zhu, H.;Eguchi, K.;Tabata, T.;Sun, N.
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.1041-1044
    • /
    • 2000
  • The enhancement to the back-propagation algorithm presented in this paper arose from the need to extract sparsely connected networks from networks employing product terms. The enhancement works in conjunction with the back-propagation weight-update process, so that weight zeroing and weight stimulation reinforce each other. It is shown that the error measure can also be interpreted as the rate of weight change (as opposed to ${\Delta}W_{ij}$) and consequently used to determine when weights have reached a stable state. Weights judged to be stable are then compared to a zero-weight threshold, and any weight that falls below the threshold is zeroed. Simulation of such a system shows improved learning rates and reduced network connection requirements, relative to the optimal network solution trained with the normal back-propagation algorithm, for Multi-Layer Perceptron (MLP), Higher-Order Neural Network (HONN), and Sigma-Pi networks.
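
The zeroing decision described above — a weight whose change rate says it has stabilised is compared against a zero threshold — reduces to a small predicate. The values and thresholds here are illustrative, not from the paper.

```python
def prune(weights, change_rates, stable=1e-4, zero_thresh=0.05):
    """Zero every weight whose change rate shows it has stabilised and
    whose magnitude falls below the zero-weight threshold."""
    return [0.0 if (r < stable and abs(w) < zero_thresh) else w
            for w, r in zip(weights, change_rates)]

# Toy snapshot after training: two weights have settled near zero.
weights = [0.02, 1.50, -0.01, 0.80]
rates   = [5e-5, 2e-5, 8e-5, 3e-5]     # observed |dW| per step
sparse  = prune(weights, rates)
```

The zeroed weights correspond to connections that can be removed, yielding the sparsely connected network.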


A Study on Face Recognition using a Hybrid GA-BP Algorithm (혼합된 GA-BP 알고리즘을 이용한 얼굴 인식 연구)

  • Jeon, Ho-Sang;Namgung, Jae-Chan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.2
    • /
    • pp.552-557
    • /
    • 2000
  • In this paper, we propose a face recognition method using GA-BP (Genetic Algorithm-Back-Propagation Network), which optimizes initial parameters such as bias values and weights. Each pixel in the image is used as an input to the neural network. The initial weights of the neural network consist of fixed-point real values and are converted to bit strings so that they can serve as the individuals of the genetic algorithm. For the fitness value, we use the lowest error of the neural network, evaluated with a newly defined adaptive re-learning operator, to build an optimized neural network, and then carry out face recognition experiments. In terms of learning convergence, the proposed algorithm converges faster than the back-propagation algorithm alone and improves recognition performance by about 2.9%.
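
The bit-string encoding that lets fixed-point weights serve as GA individuals can be sketched as follows; the word length and scale factor are assumptions, not the paper's values.

```python
def encode(w, bits=16, scale=1024):
    """Pack a fixed-point weight into a bit string (a GA chromosome),
    using two's complement for negative values."""
    v = int(round(w * scale)) & ((1 << bits) - 1)
    return format(v, "0{}b".format(bits))

def decode(s, bits=16, scale=1024):
    """Recover the weight value from its chromosome."""
    v = int(s, 2)
    if v >= 1 << (bits - 1):
        v -= 1 << bits
    return v / scale

chrom = encode(-0.5)                  # 16-character bit string
w_back = decode(chrom)

# A GA mutation is then just a bit flip on the chromosome:
i = 3
mutated = chrom[:i] + ("1" if chrom[i] == "0" else "0") + chrom[i + 1:]
```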


Fast Learning Algorithms for Neural Network Using Tabu Search Method with Random Moves (Random Tabu 탐색법을 이용한 신경회로망의 고속학습알고리즘에 관한 연구)

  • 양보석;신광재;최원호
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.5 no.3
    • /
    • pp.83-91
    • /
    • 1995
  • A neural network with one or more hidden layers can be trained with the well-known error back-propagation algorithm, which updates the synaptic weights during training by propagating back the error between the expected output and the output provided by the network. However, error back-propagation converges slowly, requires long training times, and can in some situations become trapped in local minima. This paper presents a theoretical formulation of a new fast learning method based on tabu search. In contrast to the conventional back-propagation algorithm, which modifies the connection weights of the network solely by trial and error, the present method calculates optimal neural-network weights. Its effectiveness and versatility are verified on the XOR problem, where it surpasses the conventional fixed-value method in accuracy.
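
A minimal tabu search with random moves, applied to a scalar double-well cost instead of real network weights; the move range, tabu radius, and the cost itself are assumptions. The point is that the search moves to the best non-tabu candidate even uphill, so it can leave a local minimum where gradient-based back-propagation would stall.

```python
import random

def tabu_random_search(f, x0, iters=300, n_cand=10, radius=0.05, seed=7):
    """Tabu search with random moves: propose random neighbours, drop
    any too close to recently visited points (the tabu list), move to
    the best survivor even when it is uphill, and remember the best
    point ever seen."""
    random.seed(seed)
    x, best, tabu = x0, x0, []
    for _ in range(iters):
        cands = [x + random.uniform(-2.0, 2.0) for _ in range(n_cand)]
        cands = [c for c in cands
                 if all(abs(c - t) > radius for t in tabu[-20:])]
        if not cands:
            continue
        x = min(cands, key=f)          # may move uphill: escapes minima
        tabu.append(x)
        if f(x) < f(best):
            best = x
    return best

# Double-well cost: local minimum near x=+1, global minimum near x=-1.
f = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
best = tabu_random_search(f, x0=1.0)   # start in the wrong basin
```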


A multi-layed neural network learning procedure and generating architecture method for improving neural network learning capability (다층신경망의 학습능력 향상을 위한 학습과정 및 구조설계)

  • 이대식;이종태
    • Korean Management Science Review
    • /
    • v.18 no.2
    • /
    • pp.25-38
    • /
    • 2001
  • The well-known back-propagation algorithm for multi-layered neural networks has been applied successfully and with remarkable flexibility to pattern classification problems, and recently the multi-layered neural network has been used as a powerful data mining tool. Nevertheless, in many cases with complex classification boundaries, successful learning is not guaranteed, and long learning times and local-minimum attraction restrict field application. In this paper, an improved learning procedure for multi-layered neural networks is proposed. The procedure is based on the generalized delta rule, but is distinctive in that the network architecture is not fixed: it is enlarged during learning. That is, the number of hidden nodes or hidden layers is increased to help find the classification boundary, and this process is controlled by an entropy evaluation. The learning speed and pattern classification performance are analyzed and compared with the back-propagation algorithm.
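
The control loop — train, detect a stalled error, enlarge the network, continue — can be mocked without a real network. Here the achievable error for a given hidden-layer size is simulated, and a simple error-plateau test stands in for the paper's entropy evaluation; all numbers are assumptions.

```python
def achievable(hidden):
    """Mock error floor: a larger hidden layer can fit the boundary
    better (stands in for real back-propagation training)."""
    return 1.0 / hidden

hidden, error, history = 1, 1.0, []
for epoch in range(20):
    new_error = max(error * 0.7, achievable(hidden))  # one 'epoch'
    if error - new_error < 0.01:      # plateau: grow the architecture
        hidden += 1
        new_error = max(error * 0.7, achievable(hidden))
    history.append((hidden, new_error))
    error = new_error
```

The recorded history shows the error falling while training, stalling at each size's floor, and dropping again after every growth step.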
