• Title/Summary/Keyword: Error Backpropagation Algorithm


Performance of Adaptive Correlator using Recursive Least Square Backpropagation Neural Network in DS/SS Mobile Communication Systems (DS/SS 이동 통신에서 반복적 최소 자승 역전파 신경망을 이용한 적응 상관기)

  • Jeong, Woo-Yeol;Kim, Hwan-Yong
    • The Journal of the Acoustical Society of Korea / v.15 no.2 / pp.79-84 / 1996
  • In this paper, an adaptive correlator model using a backpropagation neural network based on a complex multilayer perceptron is presented for suppressing narrow-band interference in the direct sequence spread spectrum receiver of CDMA mobile communication systems. A recursive least square backpropagation algorithm with backpropagated error is used for fast convergence and better performance in the adaptive correlator scheme. Computer simulation results over a range of signal-to-noise ratios and transmission power ratios show that the bit error ratio of the adaptive correlator using the backpropagation neural network is improved over that of the adaptive transversal filter of the direct sequence spread spectrum receiver in the presence of co-channel and narrow-band interference. The bit error ratio of the adaptive correlator using the backpropagation neural network is about $10^{-1}$ lower than that of the adaptive transversal filter when the interference-to-signal ratio is 5 dB.
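
The core of such an adaptive correlator is the recursive least squares (RLS) update. As a rough, real-valued illustration only (the paper uses a complex multilayer perceptron, which is not reproduced here), one RLS step might look like the following; the function and variable names are assumptions, not taken from the paper.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive-least-squares (RLS) weight update.

    w   : current weight vector
    P   : inverse correlation matrix estimate
    x   : input (regressor) vector for this symbol
    d   : desired response (e.g. the known spreading-code output)
    lam : forgetting factor (0 < lam <= 1)
    """
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a-priori error
    w = w + k * e                    # weight update
    P = (P - np.outer(k, Px)) / lam  # update inverse correlation estimate
    return w, P, e
```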


Blending Process Optimization using Fuzzy Set Theory and Neural Networks (퍼지 및 신경망을 이용한 Blending Process의 최적화)

  • 황인창;김정남;주관정
    • Proceedings of the Korean Society of Precision Engineering Conference / 1993.10a / pp.488-492 / 1993
  • This paper proposes a new approach to the optimization of a blending process with a neural network. The method is based on the error backpropagation learning algorithm for neural networks. Since the neural network can model an arbitrary nonlinear mapping, it is used as a system solver. A fuzzy membership function is used in parallel with the neural network to minimize the difference between the measured value and the input value of the neural network. As a result, the reliability and stability of the blending process can be guaranteed with the help of the neural network and the fuzzy membership function.
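
The abstract does not specify the membership function or how it is combined with the network, so the sketch below is only one plausible reading: a triangular membership grade used to blend a measured value with the network's prediction. The names, the triangular shape, and the blending rule are all assumptions.

```python
def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def blend(measured, predicted, a, b, c):
    """Weight the measurement by its membership grade and fall back
    on the network prediction as the grade drops."""
    mu = triangular_membership(measured, a, b, c)
    return mu * measured + (1.0 - mu) * predicted
```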


Direct Adaptive Control Based on Neural Networks Using An Adaptive Backpropagation Algorithm (적응 역전파 학습 알고리즘을 이용한 신경회로망 제어기 설계)

  • Choi, Kyoung-Mi;Choi, Yoon-Ho;Park, Jin-Bae
    • Proceedings of the KIEE Conference / 2007.07a / pp.1730-1731 / 2007
  • In this paper, we present a direct adaptive control method using neural networks for the control of nonlinear systems. The weights of the neural networks are trained by an adaptive backpropagation algorithm based on Lyapunov stability theory. We develop the parameter update laws using the neural network input and the error between the desired output and the output of the nonlinear plant to update the weights of the neural network in the sense of Lyapunov stability theory. In addition, the output tracking error asymptotically converges to zero.
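
The paper derives its weight update laws from a Lyapunov function; those derivations are not given in the abstract, so the sketch below shows only a generic gradient-type adaptive law of the kind such analyses typically yield, with assumed names, gains, and a sigma-modification term added for boundedness.

```python
import numpy as np

def update_law(W, e, phi, gamma=0.5, sigma=0.01, dt=0.001):
    """One Euler step of a gradient-type adaptive weight law.

    W     : output-layer weight matrix of the controller network
    e     : output tracking error vector (desired output minus plant output)
    phi   : hidden-layer activation vector built from the network input
    gamma : adaptation gain
    sigma : sigma-modification term for robustness (keeps weights bounded)
    """
    W_dot = gamma * np.outer(e, phi) - sigma * W
    return W + dt * W_dot
```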


A Method on the Learning Speed Improvement of the Online Error Backpropagation Algorithm in Speech Processing (음성처리에서 온라인 오류역전파 알고리즘의 학습속도 향상방법)

  • 이태승;이백영;황병원
    • The Journal of the Acoustical Society of Korea / v.21 no.5 / pp.430-437 / 2002
  • Having a variety of good characteristics compared with other pattern recognition techniques, the multilayer perceptron (MLP) has been widely used in speech recognition and speaker recognition. However, the error backpropagation (EBP) algorithm that the MLP uses in learning is known to require a long learning time, which severely restricts applications such as speaker recognition and speaker adaptation that require real-time processing. Because the learning data for pattern recognition contain high redundancy, it is very effective for increasing the learning speed to use online-based learning methods, which update the weight vector of the MLP pattern by pattern. A typical online EBP algorithm applies a fixed learning rate for each update of the weight vector. Though a large speedup with online EBP can be obtained by choosing an appropriate fixed rate, fixing the rate leads to the problem that the algorithm cannot respond effectively as the learning phases change and the number of patterns contributing to learning decreases. To solve this problem, this paper proposes a Changing rate and Omitting patterns in Instant Learning (COIL) method that applies a variable rate and uses only the patterns necessary for the current learning phase as the phases change. In this paper, experiments are conducted for speaker verification and speech recognition, and results are presented to verify the performance of COIL.
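
A minimal skeleton of the variable-rate, pattern-omitting idea is sketched below. The thresholds, the rate rule, and the forward/backward helpers are assumptions for illustration, not the COIL formulas from the paper; the weights are treated as one flat parameter vector.

```python
import numpy as np

def online_epoch(weights, patterns, targets, forward, backward,
                 skip_threshold=0.01, rate_scale=0.5, rate_max=0.2):
    """One pass of online EBP with a per-pattern rate and pattern skipping.

    forward(weights, x)       -> network output for pattern x
    backward(weights, x, err) -> gradient of the squared error w.r.t. weights
    Patterns whose error is below skip_threshold are omitted from the update.
    """
    for x, t in zip(patterns, targets):
        err = t - forward(weights, x)
        e = float(np.mean(err ** 2))
        if e < skip_threshold:                   # pattern already learned: skip it
            continue
        rate = min(rate_scale * e, rate_max)     # error-driven variable rate
        weights = weights - rate * backward(weights, x, err)
    return weights
```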

Constructing Neural Networks Using Genetic Algorithm and Learning Neural Networks Using Various Learning Algorithms (유전알고리즘을 이용한 신경망의 구성 및 다양한 학습 알고리즘을 이용한 신경망의 학습)

  • 양영순;한상민
    • Proceedings of the Computational Structural Engineering Institute Conference / 1998.04a / pp.216-225 / 1998
  • Although an artificial neural network based on the backpropagation algorithm is an excellent system simulator, it still has unsolved problems in deciding its structure and in its learning method. That is, we cannot find a general approach to decide the structure of the neural network and cannot train it satisfactorily because of the local optimum points into which it frequently falls. In addition, although there are many successful applications using the backpropagation learning algorithm, there are few efforts to improve the learning algorithm itself. In this study, we suggest a general way to construct the hidden layer of the neural network using a binary genetic algorithm and also propose various learning methods by which the global minimum of the learning error can be obtained. An XOR problem and line heating problems are investigated as examples.
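
A hedged sketch of the binary genetic algorithm idea follows: each chromosome is a 0/1 mask over candidate hidden nodes, and the fitness function is assumed to train the network with that hidden layer and return a score. The population size, operators, and names are illustrative assumptions, not the paper's settings.

```python
import random

def genetic_hidden_layer(fitness, n_max=16, pop_size=20, generations=30,
                         p_cross=0.8, p_mut=0.05):
    """Binary GA over hidden-node masks.

    Each chromosome is a list of 0/1 flags saying which of n_max candidate
    hidden nodes are used; fitness(mask) should train the network with that
    hidden layer and return a score (higher is better, e.g. -validation_error).
    """
    pop = [[random.randint(0, 1) for _ in range(n_max)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_max)
            child = a[:cut] + b[cut:] if random.random() < p_cross else a[:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```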


Fuzzy Supervised Learning Algorithm by using Self-generation (Self-generation을 이용한 퍼지 지도 학습 알고리즘)

  • 김광백
    • Journal of Korea Multimedia Society / v.6 no.7 / pp.1312-1320 / 2003
  • In this paper, we consider a multilayer neural network with a single hidden layer. The error backpropagation learning method widely used in multilayer neural networks has a possibility of local minima due to inadequate weights and an insufficient number of hidden nodes. We therefore propose a fuzzy supervised learning algorithm using self-generation, which self-generates hidden nodes by a compound fuzzy single-layer perceptron and a modified ART1. From the input layer to the hidden layer, the modified ART1 is used to produce nodes, and a winner-take-all method is adopted for the connection weight adaptation, so that only the stored pattern corresponding to a given input pattern is updated. The proposed method has been applied to student identification card images. In simulation results, the proposed method reduces the possibility of local minima and improves learning speed and network paralysis compared with the conventional error backpropagation learning algorithm.
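
The abstract does not detail the modified ART1 or the compound fuzzy perceptron, so the following is only the generic ART1-style vigilance test that self-generation of hidden nodes relies on; the match ratio, the fixed vigilance, and the intersection update are assumptions.

```python
import numpy as np

def self_generate(patterns, vigilance=0.7):
    """ART1-style hidden-node generation for binary input patterns.

    A new hidden node (prototype) is created whenever no existing prototype
    matches the input closely enough; otherwise the winning prototype is
    updated by intersection (winner-take-all).
    """
    prototypes = []
    for x in patterns:
        x = np.asarray(x, dtype=bool)
        winner, best = None, -1.0
        for i, p in enumerate(prototypes):
            match = np.sum(p & x) / max(np.sum(x), 1)    # match ratio against the input
            if match > best:
                best, winner = match, i
        if winner is None or best < vigilance:
            prototypes.append(x.copy())                  # self-generate a new node
        else:
            prototypes[winner] = prototypes[winner] & x  # fast-learning update of the winner
    return prototypes
```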


Identification of suspension systems using error self recurrent neural network and development of sliding mode controller (오차 자기 순환 신경회로망을 이용한 현가시스템 인식과 슬라이딩 모드 제어기 개발)

  • 송광현;이창구;김성중
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1997.10a / pp.625-628 / 1997
  • In this paper, a new neural network and a sliding mode suspension controller are proposed. The neural network is an error self-recurrent neural network. For fast on-line learning, this paper uses the recursive least squares method. The new neural network converges considerably faster than the backpropagation algorithm and has the advantage of being less affected by poor initial weights and learning rates. The controller for suspension systems is designed according to the sliding mode technique based on the newly proposed neural network.
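
The abstract does not give the controller's exact form, so the sketch below is only the textbook sliding-mode law that such a design is usually built around; the identified neural-network model would normally supply an equivalent-control term, which is omitted here, and the gains and names are assumptions.

```python
import numpy as np

def sliding_mode_control(e, e_dot, c=2.0, k=5.0, boundary=0.05):
    """Basic sliding mode control law for a tracking error e.

    s = c*e + e_dot defines the sliding surface; the control pushes the
    state toward s = 0. A boundary layer (saturation instead of sign)
    is used to soften chattering.
    """
    s = c * e + e_dot
    sat = np.clip(s / boundary, -1.0, 1.0)
    return -k * sat
```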


A Fast-Learning Algorithm for MLP in Pattern Recognition (패턴인식의 MLP 고속학습 알고리즘)

  • Lee, Tae-Seung;Choi, Ho-Jin
    • Journal of KIISE: Computing Practices and Letters / v.8 no.3 / pp.344-355 / 2002
  • Having a variety of good characteristics compared with other pattern recognition techniques, the Multilayer Perceptron (MLP) has been used in a wide range of applications. However, the Error Backpropagation (EBP) algorithm which the MLP uses in learning is known to require a relatively long learning time. Because learning data in pattern recognition contain abundant redundancies, it is very effective for increasing the learning speed to use online-based learning methods, which update the parameters of the MLP pattern by pattern. A typical online EBP algorithm applies a fixed learning rate for each update of the parameters. Though a large speedup with online EBP can be obtained by choosing an appropriate fixed rate, fixing the rate leads to the problem that the algorithm cannot respond effectively as the learning phases change and the learning pattern areas vary. To solve this problem, this paper defines learning as three phases and proposes an Instant Learning by Varying Rate and Skipping (ILVRS) method that reflects only the necessary patterns as the learning phases change. The basic concept of ILVRS is as follows. To discriminate and use the necessary patterns, which change as learning proceeds, (1) ILVRS uses a variable learning rate, which is an error calculated from each pattern and is suppressed within a proper range, and (2) ILVRS bypasses unnecessary patterns in the learning phases. In this paper, an experiment is conducted for speaker verification as an application of pattern recognition, and the results are presented to verify the performance of ILVRS.

Adaptive Control of Nonlinear Systems through Improvement of Learning Speed of Neural Networks and Compensation of Control Inputs (신경망의 학습속도 개선 및 제어입력 보상을 통한 비선형 시스템의 적응제어)

  • 배병우;전기준
    • The Transactions of the Korean Institute of Electrical Engineers / v.43 no.6 / pp.991-1000 / 1994
  • To control nonlinear systems adaptively, we improve the learning speed of neural networks and present a novel control algorithm characterized by the compensation of control inputs. In an error backpropagation algorithm for training multilayer neural networks (MLNNs), the effect of the slope of the activation functions on learning performance is investigated, and the learning speed of the neural networks is improved by auto-adjusting the slope of the activation functions. The control system is composed of two MLNNs, one for control and the other for identification, with the weights initialized by off-line training. The control algorithm is modified by a control strategy which compensates for the control error induced by the identification error. Computer simulations show that the proposed control algorithm is efficient in controlling a nonlinear system with abruptly changing parameters.
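
The paper's specific slope-adjustment rule is not stated in the abstract. As a minimal sketch under that caveat, the slope of a sigmoid activation can itself be updated by gradient descent alongside the weights, as below; all names are assumptions.

```python
import numpy as np

def sigmoid(x, lam):
    """Sigmoid activation with an adjustable slope lam."""
    return 1.0 / (1.0 + np.exp(-lam * x))

def slope_gradient(x, lam, delta):
    """Gradient of the error with respect to the slope lam at one node.

    delta is the backpropagated error at that node; y*(1-y)*x is the
    derivative of sigmoid(lam*x) with respect to lam.
    """
    y = sigmoid(x, lam)
    return delta * y * (1.0 - y) * x

# One auto-adjustment step for the slope (per node, alongside the weight update):
# lam -= eta_lambda * slope_gradient(net_input, lam, delta)
```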

A Study on the Prediction of the Loaded Location of the Composite Laminated Shell by Using Neural Networks (신경회로망을 이용한 복합재료 원통쉘의 하중특성 추론에 관한 연구)

  • 명창문;이영신;류충현
    • Composites Research / v.14 no.5 / pp.26-37 / 2001
  • After impact analysis of the composite cylindrical shells was performed, the outputs obtained at 9 equally divided points of the shell were used as input patterns for the neural networks, and the impact loading characteristics were identified simultaneously. A momentum backpropagation algorithm which can modify the momentum coefficient and the learning rate was developed and applied to identify the loading characteristics. The number of hidden layers of the backpropagation network was increased from 1 to 3 during training on the loading characteristics. The developed program with a variable learning rate converged to within 1% error of the real load characteristics. Inverse engineering which identifies the impact loading characteristics can be applied to composite laminated cylindrical shells with the developed neural networks.
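
As a minimal sketch of the momentum update that such an algorithm adapts during training (how the paper actually modifies the momentum coefficient and learning rate between epochs is not described in the abstract), one weight update step might look like this; the names and default values are assumptions.

```python
import numpy as np

def momentum_step(w, grad, velocity, eta=0.1, alpha=0.9):
    """One backpropagation weight update with a momentum term.

    w        : weight array
    grad     : gradient of the error with respect to w
    velocity : previous weight change (same shape as w)
    eta      : learning rate (may be adapted between epochs)
    alpha    : momentum coefficient (may be adapted between epochs)
    """
    velocity = alpha * velocity - eta * grad
    return w + velocity, velocity
```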
