• Title/Summary/Keyword: Error Back-Propagation


The speed control of induction motor using neural networks (신경회로망을 이용한 유도전동기 속도제어)

  • 김세찬;원충연
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.45 no.1
    • /
    • pp.42-53
    • /
    • 1996
  • The paper presents a speed control system for a vector-controlled induction motor using neural networks. The main features of the proposed speed control system are a Neural Network Controller (NNC), which supplies the torque current to the induction motor, and a Neural Network Emulator (NNE), which captures the forward dynamics of the induction motor. A back-propagation training algorithm is employed to train the NNE and NNC. To determine the NNC output error, the plant (induction motor) output error is back-propagated through the NNE. The NNC and NNE for speed control of the vector-controlled induction motor are implemented on a TMS320C30 DSP with an IGBT current-regulated PWM inverter. Computer simulation and experimental results verify that the proposed speed control system is robust to load variation.

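As a rough illustration of the NNC/NNE arrangement described in the abstract above, the NumPy sketch below back-propagates a speed error through an emulator network down to its torque-current input and uses that propagated signal to update the controller network. Network sizes, signals, and gains are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's code): backpropagating the plant output
# error through an emulator network (NNE) to obtain an error signal for the
# controller network (NNC). Single hidden layer, tanh activations; all sizes
# and gains are chosen arbitrarily for illustration.
import numpy as np

rng = np.random.default_rng(0)

def init(n_in, n_hid, n_out):
    return {"W1": rng.normal(0, 0.5, (n_hid, n_in)),
            "W2": rng.normal(0, 0.5, (n_out, n_hid))}

def forward(net, x):
    h = np.tanh(net["W1"] @ x)
    return net["W2"] @ h, h

# NNE: maps (measured speed, torque-current command) -> predicted speed
# (in practice the NNE would already be trained on plant data).
nne = init(2, 8, 1)
# NNC: maps (speed reference, measured speed) -> torque-current command.
nnc = init(2, 6, 1)

w_ref, w_meas = 1.0, 0.6                            # arbitrary reference and measurement
u, h_c = forward(nnc, np.array([w_ref, w_meas]))    # controller output (torque current)
w_pred, h_e = forward(nne, np.array([w_meas, u[0]]))

speed_error = w_ref - w_pred[0]                     # plant/emulator output error

# Backpropagate the speed error through the NNE down to its *input* u,
# giving the error signal used to train the NNC (chain rule through the emulator).
delta_out = speed_error                             # output delta, up to sign/scale
delta_hid = (nne["W2"].T * delta_out).ravel() * (1 - h_e**2)
d_error_d_u = nne["W1"][:, 1] @ delta_hid           # column 1 = torque-current input

# Ordinary backprop step inside the NNC using that propagated error.
lr = 0.05
delta_c_out = d_error_d_u
nnc["W2"] += lr * delta_c_out * h_c[None, :]
delta_c_hid = (nnc["W2"].T * delta_c_out).ravel() * (1 - h_c**2)
nnc["W1"] += lr * np.outer(delta_c_hid, np.array([w_ref, w_meas]))
```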

A Modified Error Function to Improve the Error Back-Propagation Algorithm for Multi-Layer Perceptrons

  • Oh, Sang-Hoon;Lee, Young-Jik
    • ETRI Journal
    • /
    • v.17 no.1
    • /
    • pp.11-22
    • /
    • 1995
  • This paper proposes a modified error function to improve the error back-propagation (EBP) algorithm for multi-layer perceptrons (MLPs), which suffers from slow learning. It also suppresses the over-specialization to training patterns that occurs in an algorithm based on the cross-entropy cost function, which markedly reduces learning time. Like the cross-entropy function, the new function accelerates the learning speed of the EBP algorithm by allowing an output node of the MLP to generate a strong error signal when it is far from the desired value. Moreover, it prevents over-specialization to the training patterns by letting an output node whose value is close to the desired value generate a weak error signal. In a simulation study classifying handwritten digits in the CEDAR [1] database, the proposed method attained 100% correct classification of the training patterns after only 50 sweeps of learning, while the original EBP attained only 98.8% after 500 sweeps. The method also shows a mean-squared error of 0.627 on the test patterns, which is superior to the error of 0.667 of the cross-entropy method. These results demonstrate that the new method excels in learning speed as well as in generalization.

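The behaviour described above, a strong error signal far from the target and a weak one near it, can be contrasted with the standard output-node deltas in a few lines. The damped_delta function below is a hypothetical stand-in, not the paper's modified error function.

```python
# Illustrative comparison (not the paper's exact error function): output-node
# error signals for a sigmoid unit under (a) mean-squared error, whose delta
# vanishes through the y(1-y) factor, (b) cross-entropy, whose delta stays
# strong when the output is far from the target, and (c) a hypothetical
# "damped" signal that also weakens once the output is close to the target.
import numpy as np

def mse_delta(y, t):
    return (t - y) * y * (1.0 - y)        # (t - y) * sigmoid'(net)

def ce_delta(y, t):
    return t - y                          # sigmoid' cancels with the CE gradient

def damped_delta(y, t, margin=0.1):
    d = t - y
    return d if abs(d) > margin else d**3 / margin**2   # weak near the target

t = 1.0
for y in (0.05, 0.5, 0.95):
    print(f"y={y:.2f}  MSE: {mse_delta(y, t):+.4f}  "
          f"CE: {ce_delta(y, t):+.4f}  damped: {damped_delta(y, t):+.4f}")
```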

Improving Forecast Accuracy of Wind Speed Using Wavelet Transform and Neural Networks

  • Ramesh Babu, N.;Arulmozhivarman, P.
    • Journal of Electrical Engineering and Technology
    • /
    • v.8 no.3
    • /
    • pp.559-564
    • /
    • 2013
  • In this paper, a new hybrid forecast method composed of a wavelet transform and a neural network is proposed to forecast wind speed more accurately. In wind energy research, accurate wind-speed forecasting is a challenging task that influences power system scheduling and the dynamic control of wind turbines. The wind data used here are measured at 15-minute intervals. Performance is evaluated with the mean square error, mean absolute error, and sum squared error of the proposed model and compared against a back-propagation model. Simulation studies show that the proposed model outperforms the compared model on these metrics.
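
A minimal sketch of such a hybrid, assuming a one-level Haar split and a tiny back-propagation network per sub-series on synthetic data; the paper's wavelet, network sizes, and training setup are not reproduced here.

```python
# Rough sketch of a wavelet + neural-network hybrid forecaster. A one-level
# Haar decomposition splits a synthetic "15-minute wind-speed" series into a
# smooth approximation and a detail part; a small one-hidden-layer network
# forecasts each part from its last few samples, and the forecasts are summed.
import numpy as np

rng = np.random.default_rng(1)
speed = 8 + 2*np.sin(np.linspace(0, 20, 512)) + 0.5*rng.standard_normal(512)

# One-level Haar analysis (pairwise average / difference), upsampled back
# to the original length so both parts align with the original series.
avg  = np.repeat((speed[0::2] + speed[1::2]) / 2, 2)
detl = speed - avg

def make_xy(series, lags=4):
    X = np.stack([series[i:len(series)-lags+i] for i in range(lags)], axis=1)
    return X, series[lags:]

def train_mlp(X, y, hidden=6, epochs=300, lr=0.01):
    W1 = rng.normal(0, 0.3, (X.shape[1], hidden)); W2 = rng.normal(0, 0.3, hidden)
    for _ in range(epochs):                       # plain batch back-propagation
        H = np.tanh(X @ W1); err = y - H @ W2
        W2 += lr * H.T @ err / len(y)
        W1 += lr * X.T @ ((err[:, None] * W2) * (1 - H**2)) / len(y)
    return W1, W2

def predict(W1, W2, x):
    return np.tanh(x @ W1) @ W2

forecast = 0.0
for part in (avg, detl):
    mu = part.mean()
    X, y = make_xy(part - mu)
    W1, W2 = train_mlp(X[:-1], y[:-1])            # hold out the last point
    forecast += predict(W1, W2, X[-1]) + mu       # one-step forecast per sub-series

print("one-step wind-speed forecast:", forecast, " actual:", speed[-1])
```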

Adaptive Learning Rate and Limited Error Signal to Reduce the Sensitivity of Error Back-Propagation Algorithm on the n-th Order Cross-Entropy Error (오류 역전파 알고리즘의 n차 크로스-엔트로피 오차신호에 대한 민감성 제거를 위한 가변 학습률 및 제한된 오차신호)

  • 오상훈;이수영
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.6
    • /
    • pp.67-75
    • /
    • 1998
  • Although the nCE (n-th order cross-entropy) error function resolves the incorrect-saturation problem of the conventional EBP (error back-propagation) algorithm, the performance of MLPs (multilayer perceptrons) trained with the nCE function depends heavily on its order. In this paper, we propose an adaptive learning rate that makes the performance of MLPs insensitive to the order of the nCE error. Additionally, we propose a limited error signal at the output nodes to prevent unstable learning caused by the adaptive learning rate. The effectiveness of the proposed method is demonstrated in simulations of handwritten digit recognition and thyroid diagnosis tasks.

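The two generic ideas in the abstract, a clipped (limited) output error signal and a learning rate that adapts to the current error-signal magnitude, can be sketched as follows. The order-n error signal used here is a simple stand-in; the exact nCE form from the paper is not reproduced.

```python
# Illustrative sketch only: (1) clip ("limit") the output-node error signal to
# a fixed bound, and (2) adapt the learning rate to the current magnitude of
# the error signals so that changing the order n does not change the
# effective step size.
import numpy as np

def output_delta(y, t, n):
    e = t - y
    return np.sign(e) * np.abs(e) ** n        # stand-in for an order-n error signal

def limited_delta(delta, bound=0.5):
    return np.clip(delta, -bound, bound)      # "limited error signal"

def adaptive_lr(deltas, base_lr=0.1, eps=1e-8):
    return base_lr / (np.mean(np.abs(deltas)) + eps)   # step roughly independent of n

y = np.array([0.05, 0.40, 0.90])
t = np.array([1.00, 1.00, 1.00])
for n in (1, 2, 4):
    d = limited_delta(output_delta(y, t, n))
    print(f"n={n}: deltas={np.round(d, 3)}  lr={adaptive_lr(d):.3f}")
```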

Modified Error Back Propagation Algorithm using the Approximating of the Hidden Nodes in Multi-Layer Perceptron (다층퍼셉트론의 은닉노드 근사화를 이용한 개선된 오류역전파 학습)

  • Kwak, Young-Tae;Lee, Young-Gik;Kwon, Oh-Seok
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.9
    • /
    • pp.603-611
    • /
    • 2001
  • This paper proposes a novel fast layer-by-layer algorithm with better generalization capability. In the proposed algorithm, the weights of the hidden layer are updated using a target vector for the hidden layer obtained by the least squares method. This avoids the slow learning that can occur due to the small magnitude of the gradient vector in the hidden layer. The algorithm was tested on a handwritten digit recognition problem. Its learning speed was faster than those of the error back-propagation algorithm and the modified error function algorithm, and similar to those of Ooyen's method and the layer-by-layer algorithm. Moreover, the simulation results showed that the proposed algorithm had the best generalization capability among them regardless of the number of hidden nodes. The proposed algorithm thus combines the learning speed of the layer-by-layer algorithm with the generalization capability of the error back-propagation and modified error function algorithms.

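A rough sketch of the hidden-target idea, with assumed details: for fixed output weights, a least-squares hidden target is computed from the desired outputs, and the input-to-hidden weights are then fitted toward the inverse activation of that target. This is only a plausible reading of the abstract, not the paper's exact procedure.

```python
# Assumed illustration of layer-by-layer training with least-squares hidden
# targets: W2 is fitted by least squares given the current hidden outputs H,
# a hidden target H* is solved from the desired outputs, and W1 is fitted by
# least squares toward arctanh(H*) (clipped away from +/-1 so the inverse
# activation is defined).
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))                     # inputs
T = (X[:, :1] * X[:, 1:2] > 0).astype(float)          # toy binary target
W1 = rng.normal(0, 0.3, (5, 10))
W2 = rng.normal(0, 0.3, (10, 1))

for _ in range(20):
    H = np.tanh(X @ W1)
    # Output layer: ordinary least squares for W2 given the current H.
    W2, *_ = np.linalg.lstsq(H, T, rcond=None)
    # Hidden layer: least-squares hidden target H*, then fit W1 toward it.
    H_star, *_ = np.linalg.lstsq(W2.T, T.T, rcond=None)
    H_star = np.clip(H_star.T, -0.95, 0.95)
    W1, *_ = np.linalg.lstsq(X, np.arctanh(H_star), rcond=None)

pred = np.tanh(X @ W1) @ W2
print("training MSE:", float(np.mean((T - pred) ** 2)))
```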

The Constrained Least Mean Square Error Method (제한 최소 자승오차법)

  • 나희승;박영진
    • Journal of KSNVE
    • /
    • v.4 no.1
    • /
    • pp.59-69
    • /
    • 1994
  • A new LMS algorithm, called 'constrained LMS', is proposed for problems with a constrained structure. The conventional LMS algorithm cannot be used because it destroys the constrained structure of the weights or parameters. The proposed method uses error back-propagation, which is popular in training neural networks, for error minimization. Illustrative examples demonstrate the applicability of the proposed algorithm.

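The abstract does not say which constrained structures were used in the examples, so the sketch below assumes a symmetric (linear-phase) FIR filter: the free parameters are half the taps, and the error is propagated through the constraint mapping by the chain rule, in the back-propagation spirit described above.

```python
# Assumed example (not the paper's): an LMS-style adaptive FIR filter whose
# weights must stay symmetric. Adapting all taps freely would destroy that
# structure, so the free parameters are half the taps and the squared error
# is differentiated through the constraint mapping w = [p, reversed(p)].
import numpy as np

rng = np.random.default_rng(3)
N = 2000
x = rng.standard_normal(N)
true_w = np.array([0.2, -0.5, 0.8, 0.8, -0.5, 0.2])   # a symmetric "plant"
d = np.convolve(x, true_w)[:N] + 0.01 * rng.standard_normal(N)

taps, half = 6, 3
p = np.zeros(half)                      # free parameters (half the taps)
mu = 0.01
for n in range(taps, N):
    w = np.concatenate([p, p[::-1]])    # constraint mapping: symmetric taps
    u = x[n - taps + 1:n + 1][::-1]     # most recent input samples, newest first
    e = d[n] - w @ u
    # Chain rule: grad_p = grad_w[:half] + grad_w[half:][::-1], since each
    # free parameter appears in two tap positions.
    grad_w = -2 * e * u
    grad_p = grad_w[:half] + grad_w[half:][::-1]
    p -= mu * grad_p

print("adapted symmetric taps:", np.round(np.concatenate([p, p[::-1]]), 3))
```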

Classification of Premature Ventricular Contraction using Error Back-Propagation

  • Jeon, Eunkwang;Jung, Bong-Keun;Nam, Yunyoung;Lee, HwaMin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.2
    • /
    • pp.988-1001
    • /
    • 2018
  • Arrhythmia has recently emerged as one of the major causes of death among Koreans. Premature Ventricular Contraction (PVC) is the most common arrhythmia found in clinical practice, and it may be a precursor to dangerous conditions such as paroxysmal insomnia, ventricular fibrillation, and coronary artery disease. Therefore, a method is needed that can detect an abnormal heartbeat and diagnose arrhythmia early. We extracted features corresponding to the QRS pattern from the subject's ECG signal and classified the premature ventricular contraction waveform using these features. The weights and bias values were adjusted with the error back-propagation algorithm on the training data, and the normal and premature ventricular contraction signals were then classified with the adjusted weights and biases. MIT-BIH arrhythmia data sets were used for the performance tests, with RR interval, QS interval, QR amplitude, and RS amplitude as features. The hidden part of the network consists of two layers with two nodes each, so the layers run from the input layer (layer 0) to the output layer (layer 3).
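
A sketch mirroring the classifier structure described above (four beat features, two hidden layers of two nodes each, one sigmoid output, trained by batch error back-propagation), using synthetic placeholder features rather than MIT-BIH records.

```python
# Sketch with synthetic placeholder features instead of MIT-BIH data: the
# feature values below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder features [RR, QS, QR, RS]; the "PVC" class gets a shorter RR
# interval and a wider QS interval than the "normal" class.
normal = rng.normal([0.8, 0.08, 1.0, 1.2], 0.05, (100, 4))
pvc    = rng.normal([0.5, 0.14, 1.4, 1.6], 0.05, (100, 4))
X = np.vstack([normal, pvc])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Layers 0..3: 4 inputs -> 2 hidden -> 2 hidden -> 1 output (sigmoid units).
W1 = rng.normal(0, 0.5, (4, 2)); b1 = np.zeros(2)
W2 = rng.normal(0, 0.5, (2, 2)); b2 = np.zeros(2)
W3 = rng.normal(0, 0.5, 2);      b3 = 0.0

lr = 0.5
for _ in range(5000):                        # batch error back-propagation
    H1 = sigmoid(X @ W1 + b1)
    H2 = sigmoid(H1 @ W2 + b2)
    out = sigmoid(H2 @ W3 + b3)
    d3 = (y - out) * out * (1 - out)         # output-node error signal
    d2 = np.outer(d3, W3) * H2 * (1 - H2)    # back-propagated to hidden layer 2
    d1 = (d2 @ W2.T) * H1 * (1 - H1)         # back-propagated to hidden layer 1
    W3 += lr * H2.T @ d3 / len(y); b3 += lr * d3.mean()
    W2 += lr * H1.T @ d2 / len(y); b2 += lr * d2.mean(axis=0)
    W1 += lr * X.T  @ d1 / len(y); b1 += lr * d1.mean(axis=0)

print("training accuracy:", np.mean((out > 0.5) == (y == 1)))
```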

Precision Position Control of PMSM using Neural Observer and Parameter Compensator

  • Ko, Jong-Sun;Seo, Young-Ger;Kim, Hyun-Sik
    • Journal of Power Electronics
    • /
    • v.8 no.4
    • /
    • pp.354-362
    • /
    • 2008
  • This paper presents a neural load torque compensation method composed of a deadbeat load torque observer and gain compensation by a parameter estimator. As a result, the PMSM (permanent magnet synchronous motor) achieves more precise position control. To reduce the effect of noise, a post-filter is implemented as an MA (moving average) process. A parameter compensator with an RLSM (recursive least square method) parameter estimator is adopted to increase the performance of the load torque observer and the main controller, and the estimator is combined with a high-performance neural load torque observer. The neural network is trained online and consists of feedforward recall and error back-propagation training. During normal operation, the input-output response is sampled and the weights are updated several times by the error back-propagation method at each sample period to accommodate possible variations in the parameters or load torque. As a result, the proposed control system is robust and precise against load torque and parameter variations. Stability and usefulness are verified by computer simulation and experiment.
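
The online training pattern described above, several back-propagation updates on the latest input-output sample at every sample period, can be sketched as follows; the plant model and signals are toy assumptions, not the paper's.

```python
# Rough sketch of per-sample online training: at every sample period the
# latest input/output pair is used for several back-propagation updates,
# so the network can track a slowly varying load-torque disturbance.
import numpy as np

rng = np.random.default_rng(5)
W1 = rng.normal(0, 0.3, (2, 6)); W2 = rng.normal(0, 0.3, 6)

def net(x):
    h = np.tanh(x @ W1)
    return h @ W2, h

lr, updates_per_sample = 0.05, 5
for k in range(500):                                   # sample periods
    load_torque = 1.0 + 0.5 * np.sin(0.02 * k)         # slowly varying disturbance
    x = np.array([np.sin(0.05 * k), np.cos(0.05 * k)]) # toy measured signals
    target = load_torque * (0.5 + 0.3 * x[0])          # toy input-output response
    for _ in range(updates_per_sample):                # "trained several times" per sample
        y, h = net(x)
        e = target - y
        W2 += lr * e * h
        W1 += lr * np.outer(x, (e * W2) * (1 - h ** 2))

y, _ = net(x)
print(f"final sample: target={target:.3f}, network estimate={y:.3f}")
```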

Simple AI Robust Digital Position Control of PMSM using Neural Network Compensator (신경망 보상기를 이용한 PMSM의 간단한 지능형 강인 위치 제어)

  • Ko, Jong-Sun;Youn, Sung-Koo;Lee, Tae-Ho
    • The Transactions of the Korean Institute of Electrical Engineers B
    • /
    • v.49 no.8
    • /
    • pp.557-564
    • /
    • 2000
  • A very simple control approach using a neural network for the robust position control of a Permanent Magnet Synchronous Motor (PMSM) is presented. A linear quadratic controller plus a feedforward neural network is employed for the PMSM system, which is approximately linearized using the field-orientation method for an AC servo. The neural network is trained online and consists of feedforward recall and error back-propagation training. Since the total number of nodes is only eight, the system can easily be realized on a general-purpose microprocessor. During normal operation, the input-output response is sampled and the weights are updated several times by the error back-propagation method at each sample period to accommodate possible variations in the parameters or load torque. In addition, robustness is obtained without affecting the overall system response. The method is realized on a floating-point digital signal processor DS1102 board (TMS320C31).

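A toy sketch of the overall structure, a fixed LQ-type state feedback plus a very small feedforward network trained online by back-propagation on the position error; the plant model, gains, and node counts are illustrative assumptions, not the paper's values.

```python
# Assumed setup: a fixed linear-quadratic-style state feedback drives a
# double-integrator "servo", and a very small feedforward network, trained
# online by back-propagation on the position error, adds a compensation term
# that cancels a constant disturbance torque.
import numpy as np

rng = np.random.default_rng(6)
dt = 0.001
K = np.array([60.0, 12.0])                          # LQ-like state-feedback gains (assumed)
W1 = rng.normal(0, 0.2, (2, 4)); b1 = np.zeros(4)   # tiny compensator network
W2 = rng.normal(0, 0.2, 4);      b2 = 0.0

def compensator(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

state = np.zeros(2)                                 # [position, velocity]
ref, disturbance, lr = 1.0, -3.0, 5.0
for _ in range(20000):                              # 20 s of 1 kHz samples
    err = np.array([ref - state[0], -state[1]])
    u_nn, h = compensator(err)
    u = K @ err + u_nn                              # LQ feedback + neural compensation
    accel = u + disturbance                         # unit-inertia servo with a load torque
    state = state + dt * np.array([state[1], accel])
    # Online back-propagation, using the position error as the training signal.
    e = err[0]
    delta_h = (e * W2) * (1 - h ** 2)
    W2 += lr * e * h * dt;               b2 += lr * e * dt
    W1 += lr * np.outer(err, delta_h) * dt;  b1 += lr * delta_h * dt

print(f"final position: {state[0]:.4f} (reference {ref})")
```

In this sketch the compensator's output bias effectively acts as an integral term that absorbs the constant disturbance, which is one way such a small network can remove steady-state error without changing the linear feedback.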