• Title/Summary/Keyword: Error Backpropagation Learning Algorithm (오차 역전파 학습 알고리즘)


Realization for FF-PID Controlling System with Backward Propagation Algorithm (역전파 알고리즘을 이용한 FF-PID 제어 시스템 구현)

  • Ryu, Jae-Hoon; Hur, Chang-Wu; Ryu, Kwang-Ryol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.06a / pp.171-174 / 2007
  • This paper presents the realization of an FF-PID (Feed-Forward PID) control system that uses the error backpropagation algorithm together with image pattern recognition. Pattern recognition is performed by a neural network trained with backpropagation, and the FF-PID controller improves the response characteristic for moving images by using as its control value the output error of the network with respect to the target value. In the experiments, the system achieved a response time of 2.7 s, an improvement of around 15% over a conventional difference-image algorithm, and was able to control a moving object effectively.
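
The abstract above describes adding a feed-forward correction, derived from the neural network's output error, to an ordinary PID loop. A minimal sketch of such an FF-PID update is given below; the gains, time step, and error signal are illustrative assumptions, not values from the paper.

```python
import numpy as np

class FFPID:
    """PID controller augmented with a feed-forward term (illustrative sketch)."""
    def __init__(self, kp, ki, kd, kff, dt):
        self.kp, self.ki, self.kd, self.kff, self.dt = kp, ki, kd, kff, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, ff_error):
        # ff_error: output error of the pattern-recognition network w.r.t. the target
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Feedback (PID) term plus feed-forward correction from the network error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative + self.kff * ff_error)

# Example: track a target position with a hypothetical network error signal
pid = FFPID(kp=1.2, ki=0.1, kd=0.05, kff=0.5, dt=0.01)
u = pid.update(target=1.0, measured=0.8, ff_error=0.15)
print("control output:", u)
```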

Recurrent Networks for Real-time Electrical Transmission (실시간 전기정보 전송을 위한 순환망 알고리즘)

  • Kim, Jong-Man; Kim, Yeong-Min; Kim, Won-Sop; Sin, Dong-Yong
    • Proceedings of the KIEE Conference / 2008.09a / pp.255-257 / 2008
  • In the era of ultra-high-speed electrical information transmission and ubiquitous (U-) information electronics, extensive research on real-time algorithms and modeling is essential for the real-time transfer demanded by modern information devices, medical equipment, and military information systems. Transmitting large amounts of electrical and power information to remote sites in real time and without error, even under nonlinear conditions, is one of the critical problems the modern information society must solve. To this end, we propose real-time modeling based on a neural network. The conventional approach is the error backpropagation learning algorithm, but a new model is needed that is less sensitive to its parameters and, in particular, converges fast enough to make on-line recognition and control possible. In this study we present a new learning algorithm and a neural network with a new structure that remedy several shortcomings of existing neural networks. Applying the proposed algorithm to irregular system models and various sensor-modeling tasks, we carried out a range of experiments whose results demonstrate its real-time characteristics.

Fast Learning Algorithms for Neural Network Using Tabu Search Method with Random Moves (Random Tabu 탐색법을 이용한 신경회로망의 고속학습알고리즘에 관한 연구)

  • 양보석; 신광재; 최원호
    • Journal of the Korean Institute of Intelligent Systems / v.5 no.3 / pp.83-91 / 1995
  • A neural network with one or more layers of hidden units can be trained with the well-known error backpropagation algorithm, in which the synaptic weights are updated during training by propagating back the error between the expected output and the output produced by the network. However, error backpropagation converges slowly, requires long training times, and in some situations becomes trapped in local minima. This paper presents the theoretical formulation of a new fast learning method based on tabu search with random moves. In contrast to conventional backpropagation, which modifies the connection weights of the network solely by trial and error, the proposed method computes optimum weights for the neural network. Its effectiveness and versatility are verified on the XOR problem, where it achieves higher accuracy than the conventional method.
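
To illustrate the idea of replacing gradient-based weight updates with a tabu search using random moves, the sketch below trains a small XOR network by perturbing its weight vector and rejecting moves that fall back into recently visited regions; the move size, tabu radius, and iteration counts are arbitrary assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def forward(w, x):
    # 2-2-1 network: w packs hidden weights (2x3 incl. bias) and output weights (1x3)
    W1 = w[:6].reshape(2, 3)
    W2 = w[6:].reshape(1, 3)
    h = np.tanh(W1 @ np.append(x, 1.0))
    return np.tanh(W2 @ np.append(h, 1.0))[0]

def mse(w):
    return np.mean([(forward(w, x) - t) ** 2 for x, t in zip(X, y)])

# Tabu search with random moves over the 9-element weight vector
best = rng.normal(size=9)
best_err = mse(best)
tabu = []                                                # recently visited solutions
for _ in range(2000):
    cand = best + rng.normal(scale=0.3, size=9)          # random move
    if any(np.linalg.norm(cand - t) < 0.05 for t in tabu):
        continue                                         # skip moves into tabu regions
    err = mse(cand)
    if err < best_err:
        tabu.append(best.copy())                         # remember the abandoned solution
        tabu = tabu[-20:]                                # keep a bounded tabu list
        best, best_err = cand, err

print("final MSE on XOR:", best_err)
```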

On the set up to the Number of Hidden Node of Adaptive Back Propagation Neural Network (적응 역전파 신경회로망의 은닉 층 노드 수 설정에 관한 연구)

  • Hong, Bong-Wha
    • The Journal of Information Technology / v.5 no.2 / pp.55-67 / 2002
  • This paper presents an adaptive backpropagation algorithm that updates the learning parameter adaptively according to the generated error and varies the number of hidden-layer nodes. By changing the number of hidden nodes, the algorithm is expected to escape local minima and create favorable conditions for convergence. Simulations tested the algorithm on two learning patterns: exclusive-OR learning and $7{\times}5$-dot alphabetic font learning. In both examples, the probability of becoming trapped in a local minimum was reduced. Furthermore, in the alphabetic font learning task, the network improved learning efficiency by about 41.56%~58.28% compared with conventional backpropagation and the HNAD (Hidden Node Adding and Deleting) algorithm.
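
The core idea is to monitor training and change the hidden-layer size when convergence stalls. The sketch below grows the hidden layer of a small backpropagation network whenever the error stops improving; the thresholds and growth rule are assumptions for illustration and do not reproduce the paper's HNAD procedure.

```python
import numpy as np

sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_epoch(W1, W2, X, y, lr=0.5):
    """One epoch of plain backpropagation for a one-hidden-layer sigmoid network."""
    Xb = np.hstack([X, np.ones((len(X), 1))])     # inputs plus bias
    H = sig(Xb @ W1)
    Hb = np.hstack([H, np.ones((len(H), 1))])     # hidden units plus bias
    O = sig(Hb @ W2)
    dO = (O - y) * O * (1 - O)                    # output delta
    dH = (dO @ W2[:-1].T) * H * (1 - H)           # hidden delta (bias row excluded)
    W2 -= lr * Hb.T @ dO
    W1 -= lr * Xb.T @ dH
    return np.mean((O - y) ** 2)

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden = 2
W1 = rng.normal(scale=0.5, size=(3, n_hidden))    # 2 inputs + bias
W2 = rng.normal(scale=0.5, size=(n_hidden + 1, 1))

prev_err = np.inf
for epoch in range(4000):
    err = train_epoch(W1, W2, X, y)
    # Hypothetical rule: if the error barely improved over the last 500 epochs, add a node
    if epoch % 500 == 499:
        if prev_err - err < 1e-3:
            W1 = np.hstack([W1, rng.normal(scale=0.5, size=(3, 1))])
            W2 = np.vstack([W2[:-1], rng.normal(scale=0.5, size=(1, 1)), W2[-1:]])
            n_hidden += 1
        prev_err = err

print("hidden nodes:", n_hidden, "final MSE:", err)
```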

Accelerating Levenberg-Marquardt Algorithm using Variable Damping Parameter (가변 감쇠 파라미터를 이용한 Levenberg-Marquardt 알고리즘의 학습 속도 향상)

  • Kwak, Young-Tae
    • Journal of the Korea Society of Computer and Information / v.15 no.4 / pp.57-63 / 2010
  • The damping parameter of the Levenberg-Marquardt algorithm switches the update between error backpropagation (gradient descent) and Gauss-Newton learning, and therefore affects learning speed. Keeping the damping parameter fixed induces oscillation of the error and slows learning. We therefore propose a variable damping parameter that is adjusted according to the change in error: the parameter is increased when the error change is large and decreased when it is small. The damping parameter thus plays a role similar to momentum and can improve learning speed. We tested the method on iris recognition and wine recognition and found that it improved learning speed in 67% of cases on iris recognition and in 78% of cases on wine recognition. The error oscillation of the proposed method was also smaller than that of the other algorithms.
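
In Levenberg-Marquardt learning, the damping parameter blends a Gauss-Newton step with a gradient-descent (backpropagation-like) step: a small value behaves like Gauss-Newton, a large value like gradient descent. The sketch below applies a common variable-damping rule to a toy least-squares fit, raising the parameter when the error grows and lowering it when the error improves; the adjustment factors and the toy problem are assumptions, not the paper's exact scheme.

```python
import numpy as np

def residuals(p, x, y):
    a, b = p
    return y - a * np.exp(b * x)            # model: y ≈ a * exp(b * x)

def jacobian(p, x):
    a, b = p
    return np.column_stack([-np.exp(b * x), -a * x * np.exp(b * x)])

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * x) + 0.05 * rng.normal(size=x.size)

p = np.array([1.0, 1.0])                    # initial parameters
mu = 1e-2                                   # damping parameter
prev_sse = np.sum(residuals(p, x, y) ** 2)

for _ in range(100):
    r = residuals(p, x, y)
    J = jacobian(p, x)
    # Damped Gauss-Newton step: (J^T J + mu I) dp = -J^T r
    dp = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
    sse = np.sum(residuals(p + dp, x, y) ** 2)
    # Variable damping: decrease mu when the error improves (lean toward Gauss-Newton),
    # increase it when the error grows (lean toward gradient descent)
    if sse < prev_sse:
        p, prev_sse, mu = p + dp, sse, mu * 0.5
    else:
        mu *= 2.0

print("fitted parameters:", p)
```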

Pattern Recognition by Section Detection Using Speech Word (음성 단어를 이용한 구간검출에 의한 패턴인식)

  • Choi, Jae-Seung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.05a / pp.681-682 / 2016
  • This paper proposes a Japanese word pattern recognition algorithm that distinguishes the speaker pattern of input speech based on vowel-section detection and on the error backpropagation learning algorithm of a neural network, which can compensate for ambiguities of the speech signal in speaker identification. In the proposed algorithm, speech feature vectors are extracted and analyzed from words taken from a Japanese database, and pattern recognition experiments on Japanese speakers are performed using the differences between these feature vectors.

Long-term Prediction of Speech Signal Using a Neural Network (신경 회로망을 이용한 음성 신호의 장구간 예측)

  • 이기승
    • The Journal of the Acoustical Society of Korea / v.21 no.6 / pp.522-530 / 2002
  • This paper introduces a neural network (NN)-based nonlinear predictor for the LP (Linear Prediction) residual. To evaluate its effectiveness, we first compared the average prediction gain of a linear long-term predictor with that of the NN-based nonlinear long-term predictor. We then investigated the effect of quantization noise on the nonlinear prediction residuals. The new NN predictor takes into account not only the prediction error but also quantization effects. To increase robustness against quantization noise of the nonlinear prediction residual, a constrained backpropagation learning algorithm that satisfies a Kuhn-Tucker inequality condition is proposed. Experimental results indicate that the prediction gain of the proposed NN predictor was not seriously degraded even when the constrained optimization algorithm was employed.
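
As a rough illustration of a neural-network long-term predictor, the sketch below trains a tiny network with plain backpropagation to predict the current residual sample from samples roughly one pitch period in the past; the lag, network size, and synthetic residual are assumptions, and the Kuhn-Tucker-constrained learning of the paper is not reproduced.

```python
import numpy as np

sig = np.tanh
rng = np.random.default_rng(0)

# Synthetic stand-in for an LP residual with a periodic (pitch-like) component
n, T = 2000, 80                              # signal length, assumed pitch lag
r = 0.6 * np.sin(2 * np.pi * np.arange(n) / T) + 0.1 * rng.normal(size=n)

# Training pairs: predict r[t] from 3 samples centered one lag T in the past
Xs = np.array([r[t - T - 1:t - T + 2] for t in range(T + 1, n)])
ys = r[T + 1:n][:, None]

# One-hidden-layer network trained with backpropagation (gradient descent)
W1 = rng.normal(scale=0.3, size=(3, 8))
W2 = rng.normal(scale=0.3, size=(8, 1))
lr = 0.01
for _ in range(300):
    H = sig(Xs @ W1)
    P = H @ W2                               # linear output: predicted residual
    dP = (P - ys) / len(ys)
    W2 -= lr * H.T @ dP
    W1 -= lr * Xs.T @ ((dP @ W2.T) * (1 - H ** 2))

pred = sig(Xs @ W1) @ W2
gain = 10 * np.log10(np.var(ys) / np.var(ys - pred))
print("long-term prediction gain (dB):", round(float(gain), 2))
```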

Optimization of the Number of Filter in CNN Noise Attenuator (CNN 잡음감쇠기에서 필터 수의 최적화)

  • Lee, Haeng-Woo
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.16 no.4 / pp.625-632 / 2021
  • This paper studies how the number of filters in a CNN (Convolutional Neural Network) layer affects the performance of a noise attenuator. Speech is estimated from a noisy speech signal using a 64-neuron, 16-kernel CNN filter trained with the error backpropagation algorithm. To verify the performance of the noise attenuator with respect to the number of filters, a program using the Keras library was written and simulations were performed. The simulations show that the system attains the smallest MSE (Mean Squared Error) and MAE (Mean Absolute Error) values with 16 filters and performs worst with 4 filters, while with 8 or more filters the MSE and MAE values do not differ significantly with the number of filters. These results indicate that roughly 8 or more filters are needed to represent the characteristics of the speech signal.
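
The paper's attenuator is a Keras CNN whose filter count is varied. The sketch below builds a one-dimensional convolutional denoiser with 16 filters and trains it with MSE loss; the layer layout, kernel size, frame length, and synthetic data are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

FRAME = 64                                     # samples per input frame (assumed)

def build_attenuator(n_filters=16, kernel_size=16):
    """1-D CNN that maps a noisy speech frame to an estimate of the clean frame."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(FRAME, 1)),
        tf.keras.layers.Conv1D(n_filters, kernel_size, padding="same", activation="relu"),
        tf.keras.layers.Conv1D(n_filters, kernel_size, padding="same", activation="relu"),
        tf.keras.layers.Conv1D(1, kernel_size, padding="same"),   # estimated clean frame
    ])

# Synthetic stand-in for (noisy, clean) speech frames
rng = np.random.default_rng(0)
t = np.arange(FRAME) / FRAME
clean = np.stack([np.sin(2 * np.pi * f * t) for f in rng.uniform(2, 8, 512)])[..., None]
noisy = clean + 0.3 * rng.normal(size=clean.shape)

model = build_attenuator(n_filters=16)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])   # MSE/MAE as in the paper
model.fit(noisy, clean, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(noisy, clean, verbose=0))                 # [MSE, MAE]
```

Repeating the run with n_filters set to 4, 8, 16, and so on is the kind of sweep the paper reports.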

Research Trend of Cellular Automata in Brain Science Research (뇌과학 연구에서 셀룰라 오토마타의 연구 현황)

  • Kang, Hoon
    • Proceedings of the IEEK Conference / 1999.11a / pp.441-447 / 1999
  • For the analysis and modeling of complex adaptive systems, this paper adopts cellular automata, a basic paradigm of artificial life, and focuses on the design and development of a cellular neural network with an amorphous structure and transparent data-propagation characteristics. First, the irregular structure of the neural network is treated developmentally to generate an amorphous hidden layer, and Darwinian evolution is applied to design a neural network optimized through structural evolution and selection. The transparent signal-propagation model of cellular automata, in which each cell senses the states of its neighboring cells and updates its own state, is adapted to the propagation of data and the backpropagation of error, and an error backpropagation learning algorithm based on Lamarck's theory of use and disuse is derived. The learning process of this complex adaptive system is derived and its validity is demonstrated in simulations, which solve the XOR problem and the approximation of multi-input multi-output functions.
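
The cellular-automaton ingredient here is the local update rule: each cell senses its neighbors' states and modifies its own, and the same transparent propagation mechanism is reused for data and error. As background, a generic two-dimensional cellular-automaton step is sketched below; it is not the paper's cellular neural network or its Lamarckian learning rule.

```python
import numpy as np

def ca_step(grid):
    """One synchronous update: each cell reacts to the sum of its 8 neighbors."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Example rule (Conway-style): a cell turns on with exactly 3 active neighbors,
    # stays on with 2 or 3, and turns off otherwise.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

rng = np.random.default_rng(0)
grid = (rng.random((32, 32)) < 0.3).astype(int)
for _ in range(20):
    grid = ca_step(grid)
print("active cells after 20 steps:", int(grid.sum()))
```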

Development of Bond Strength Model for FRP Plates Using Back-Propagation Algorithm (역전파 학습 알고리즘을 이용한 콘크리트와 부착된 FRP 판의 부착강도 모델 개발)

  • Park, Do-Kyong
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.10 no.2 / pp.133-144 / 2006
  • To determine this bond strength, previous researchers examined the bond strength of FRP plates through experiments in which various influencing variables were varied. However, because such experiments require costly equipment, are time-consuming, and are difficult to carry out, they have been conducted only to a limited extent. This study aims to develop the most suitable artificial neural network model by applying various neural network models and algorithms to the bond-test data of previous researchers. The learning used the plate thickness, width, bonded length, elastic modulus, and tensile strength, together with the compressive strength, tensile strength, and width of the concrete, as input variables, with the bond strength as the output layer. The developed artificial neural network model uses backpropagation, and its error was trained to converge to within 0.001. In addition, the over-fitting problem was resolved and generalization improved by introducing a Bayesian technique. The developed model was verified by comparison with bond-strength values reported by other researchers that had not been used in training.
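
As a rough sketch of the modeling setup described above, a small backpropagation-trained network maps plate and concrete variables to a bond strength, with an L2 weight penalty standing in for the Bayesian regularization used against over-fitting; the feature names and synthetic data are assumptions for illustration, not the paper's experimental database.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical feature columns: plate thickness, width, bonded length,
# elastic modulus, tensile strength, and concrete compressive strength
X = rng.uniform(
    low=[0.1, 25, 50, 150, 1500, 20],
    high=[1.5, 100, 300, 300, 3500, 60],
    size=(200, 6),
)
# Synthetic stand-in target; real bond strengths would come from the bond tests
y = 0.01 * X[:, 2] * np.sqrt(X[:, 5]) + 0.001 * X[:, 0] * X[:, 4] + rng.normal(0, 0.5, 200)

# Backpropagation-trained MLP; alpha is an L2 penalty used here in place of
# the Bayesian regularization mentioned in the abstract
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-2, max_iter=5000, random_state=0),
)
model.fit(X[:150], y[:150])
# Evaluate on held-out samples, analogous to verifying against data unused in training
print("held-out R^2:", round(model.score(X[150:], y[150:]), 3))
```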