• Title/Summary/Keyword: Feedback Error Learning


Acoustic Feedback and Noise Cancellation of Hearing Aids by Deep Learning Algorithm (심층학습 알고리즘을 이용한 보청기의 음향궤환 및 잡음 제거)

  • Lee, Haeng-Woo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.6
    • /
    • pp.1249-1256
    • /
    • 2019
  • In this paper, we propose a new algorithm for removing acoustic feedback and noise in hearing aids. Instead of the conventional FIR structure, the algorithm uses a deep-learning neural-network adaptive prediction filter to improve feedback and noise reduction performance. The feedback canceller first removes the fed-back signal from the microphone signal, and the noise is then removed with a Wiener filter. The noise canceller estimates the speech from the noisy speech signal using a linear prediction model that exploits the periodicity of speech. To ensure stable convergence of the two adaptive systems operating in one loop, the coefficient updates of the feedback canceller and the noise canceller are separated and driven by the residual error signal remaining after cancellation. A simulation program was written to verify the performance of the proposed feedback and noise canceller. Experimental results show that, compared with the conventional FIR structure, the proposed deep-learning algorithm improves the signal-to-feedback ratio (SFR) by about 10 dB in the feedback canceller and the signal-to-noise-ratio enhancement (SNRE) by about 3 dB in the noise canceller.
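
The abstract describes a two-stage structure: an adaptive feedback canceller whose coefficients are driven by the residual error, followed by Wiener-filter noise reduction. The paper's deep-learning prediction filter is not reproduced here; the NumPy sketch below only illustrates that classical two-stage arrangement with an NLMS feedback canceller and a fixed Wiener-style spectral gain. The signals, the assumed feedback path and every parameter are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
n = 2 * fs                                        # 2 s of signal
w_in = rng.standard_normal(n)
speech = np.zeros(n)                              # colored-noise stand-in for speech
for k in range(1, n):
    speech[k] = 0.95 * speech[k - 1] + 0.1 * w_in[k]
noise = 0.05 * rng.standard_normal(n)

# Unknown acoustic feedback path from the receiver back to the microphone (assumed FIR).
feedback_path = np.array([0.0, 0.2, 0.1, 0.05])

taps, mu, eps = 16, 0.05, 1e-8
w = np.zeros(taps)                                # adaptive feedback-canceller coefficients
out_buf = np.zeros(taps)                          # most recent receiver (loudspeaker) samples
gain = 1.5                                        # forward hearing-aid gain
residual = np.zeros(n)

for k in range(n):
    # Microphone signal = speech + noise + fed-back receiver signal.
    mic = speech[k] + noise[k] + feedback_path @ out_buf[:len(feedback_path)]

    # Feedback canceller predicts the fed-back component and subtracts it;
    # the residual error drives the NLMS coefficient update.
    fb_hat = w @ out_buf
    e = mic - fb_hat
    w += mu * e * out_buf / (out_buf @ out_buf + eps)

    residual[k] = e
    out_buf = np.roll(out_buf, 1)
    out_buf[0] = gain * e                         # amplified output sent to the receiver

# Crude Wiener-style spectral gain on the residual (noise power assumed known here).
spec = np.fft.rfft(residual)
noise_psd = n * 0.05 ** 2                         # assumed per-bin noise power
snr = np.maximum(np.abs(spec) ** 2 / noise_psd - 1.0, 1e-3)
enhanced = np.fft.irfft(spec * snr / (1.0 + snr), n)
print("residual power:", np.mean(residual ** 2), "enhanced power:", np.mean(enhanced ** 2))
```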

The Influence of Different Quantitative Knowledge of Results on Performance Error During Lumbar Proprioceptive Sensation Training (양적 결과지식의 종류가 요추의 고유수용성감각 훈련에 미치는 영향)

  • Cynn, Won-Suk;Choi, Houng-Sik;Kim, Tack-Hoon;Roh, Jung-Suk;Yi, Jin-Bock
    • Physical Therapy Korea
    • /
    • v.11 no.3
    • /
    • pp.11-18
    • /
    • 2004
  • This study investigated the influence of different types of quantitative knowledge of results on measurement error during lumbar proprioceptive sensation training. Twenty-eight healthy adult men participated and were randomly assigned to four feedback groups (100% relative frequency with angle feedback, 50% relative frequency with angle feedback, 100% relative frequency with length feedback, 50% relative frequency with length feedback). An electrogoniometer was used to determine performance error in angle, and the Schober test with a measuring tape was used to determine performance error in length. Each subject was asked to maintain an upright position with both eyes closed and both upper limbs stabilized on the pelvis, and to hold lumbar flexion at 30° for three seconds. Different verbal knowledge of results was provided to the four groups, offered immediately after each lumbar flexion. The resting period between sessions within a block was five seconds. Training consisted of 6 blocks of 10 sessions each, with a one-minute rest between blocks and a five-minute rest between blocks 3 and 4. A retention test was performed 10 minutes and 24 hours after the training blocks, without providing knowledge of results. To determine the training effects, a two-way analysis of variance and a one-way analysis of variance were used with SPSS Ver. 10.0, with the level of significance set at .05. A significant block effect was found in the acquisition phase (p<.05), while the feedback effect in the immediate retention phase was not significant (p>.05). There was a significant feedback effect in the delayed retention phase (p<.05), and a significant block effect between the first acquisition block and the last retention block (p<.05). In conclusion, a 50% relative frequency with length feedback was the most efficient of the feedback types examined.


Iterative Learning Control Algorithm for a class of Nonlinear System with External Inputs (외부입력이 존재하는 비선형 시스템의 반복학습제어 알고리즘에 관한 연구)

  • Jang, H.S.;Lim, M.S.;Lim, J.H.
    • Proceedings of the KIEE Conference
    • /
    • 1996.07b
    • /
    • pp.1278-1280
    • /
    • 1996
  • In this paper, an iterative learning control algorithm is presented for a class of nonlinear systems with external inputs or disturbances. The acceleration of the error signal is used to update the next control signal. It is shown that the feedback gain can be determined so that the overall errors are convergent.
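
As a point of reference for the update law the abstract outlines, the following is a minimal NumPy sketch of iterative learning control in which the next trial's control input is corrected with the acceleration (second derivative) of the current trial's error, applied to an illustrative nonlinear plant with a repeating external input. The error acceleration is formed from the recorded plant acceleration to avoid numerical differentiation, and the plant, trajectory and learning gain are assumptions, not the paper's system.

```python
import numpy as np

dt, steps, trials = 0.01, 200, 25
t = np.arange(steps) * dt
y_des = 1.0 - np.cos(np.pi * t)                   # desired output (starts at rest, like the plant)
ydd_des = np.pi ** 2 * np.cos(np.pi * t)          # its analytical acceleration
d_ext = 0.3 * np.cos(2 * np.pi * t)               # external input, identical on every trial

def run_trial(u):
    """Simulate x'' = 0.3*sin(x) + u + d_ext; return the output and its acceleration."""
    x = v = 0.0
    y, acc = np.zeros(steps), np.zeros(steps)
    for k in range(steps):
        y[k] = x
        acc[k] = 0.3 * np.sin(x) + u[k] + d_ext[k]
        v += dt * acc[k]
        x += dt * v
    return y, acc

u = np.zeros(steps)
gamma = 0.6                                       # learning gain (assumed value)
for trial in range(trials):
    y, acc = run_trial(u)
    e = y_des - y
    e_acc = ydd_des - acc                         # "acceleration" of the error signal
    u = u + gamma * e_acc                         # learning update for the next trial
    if trial % 5 == 0 or trial == trials - 1:
        print(f"trial {trial:2d}  max|e| = {np.max(np.abs(e)):.5f}")
```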


EEG Signal Prediction by using State Feedback Real-Time Recurrent Neural Network (상태피드백 실시간 회귀 신경회망을 이용한 EEG 신호 예측)

  • Kim, Taek-Soo
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.51 no.1
    • /
    • pp.39-42
    • /
    • 2002
  • To model EEG signals, which have nonstationary and nonlinear dynamic characteristics, this paper proposes a state-feedback real-time recurrent neural network model. The network has a memory structure in the states of its hidden layers, so that it can represent arbitrary dynamics and handle time-varying inputs through its own temporal operation. For the model test, the Mackey-Glass time series is used as a nonlinear dynamic system, and the model is applied to the prediction of three types of EEG: alpha wave, beta wave and epileptic EEG. Experimental results show that the proposed model outperforms the other neural network models compared in this paper in terms of convergence speed during learning and normalized mean square error on the test data set.
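
A hedged sketch of the general idea: a small recurrent network whose hidden state is fed back as memory, trained for one-step prediction of the Mackey-Glass series mentioned in the abstract as the model-test benchmark. To stay short it uses a truncated one-step gradient instead of full real-time recurrent learning, and the network size, learning rate and data split are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mackey-Glass (tau = 17) series, the benchmark the paper uses for model testing.
n, tau = 3000, 17
x = np.zeros(n)
x[:tau + 1] = 1.2
for k in range(tau, n - 1):
    x[k + 1] = x[k] + 0.2 * x[k - tau] / (1.0 + x[k - tau] ** 10) - 0.1 * x[k]

H, lr = 8, 0.02
W_h = 0.1 * rng.standard_normal((H, H))   # state-feedback (recurrent) weights
w_x = 0.1 * rng.standard_normal(H)        # input weights
b_h = np.zeros(H)
w_o = 0.1 * rng.standard_normal(H)        # output weights
b_o = 0.0

def step(h, u):
    """One recurrent update: the hidden state is fed back together with the current input."""
    h_new = np.tanh(W_h @ h + w_x * u + b_h)
    return h_new, w_o @ h_new + b_o

h = np.zeros(H)
split = 2500
for k in range(n - 1):
    h_prev = h
    h, y_hat = step(h_prev, x[k])
    if k < split:                          # train on the first part only
        err = y_hat - x[k + 1]
        d_pre = err * w_o * (1.0 - h ** 2) # gradient truncated at the previous state
        w_o -= lr * err * h
        b_o -= lr * err
        W_h -= lr * np.outer(d_pre, h_prev)
        w_x -= lr * d_pre * x[k]
        b_h -= lr * d_pre

# Normalized mean-square error of one-step prediction on the held-out tail.
h = np.zeros(H)
preds = np.zeros(n - 1)
for k in range(n - 1):
    h, preds[k] = step(h, x[k])
test_err = preds[split:] - x[split + 1:]
print("test NMSE:", np.mean(test_err ** 2) / np.var(x[split + 1:]))
```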

Precise Control of a Linear Pulse Motor Using Neural Network (신경회로망을 이용한 리니어 펄스 모터의 정밀 제어)

  • Kwon, Young-Kuk;Park, Jung-Il
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.6 no.11
    • /
    • pp.987-994
    • /
    • 2000
  • A linear pulse motor (LPM) is a direct-drive motor with good accuracy, velocity and acceleration performance compared to conventional rotating systems with toothed belts and ball screws. However, since an LPM needs supporting devices to maintain a constant air gap and has strong nonlinearity caused by leakage magnetic flux, friction, cogging, etc., it is difficult to improve its accuracy with conventional control theory alone. Moreover, modeling error and load variations are usually not considered when designing the position controller of an LPM. To compensate for these components, a neural network is introduced alongside the conventional feedback controller. This neural network, of the feedback error learning type, adjusts the current commands to improve position accuracy. Experimental results show that more accurate position control is achieved than with the conventional controller alone.
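
The scheme described is the standard feedback error learning arrangement: a conventional feedback controller keeps the loop stable while a neural network learns the feedforward command, using the feedback controller's output as its teaching signal. The sketch below illustrates that mechanism on a much-simplified one-axis motor model (mass, viscous friction and a cogging-like force ripple); the plant, gains, network size and learning rate are all assumptions rather than the paper's values. If the learning behaves as intended, the feedback effort in the last second should be noticeably smaller than in the first.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.001, 8.0
steps = int(T / dt)
t = np.arange(steps) * dt

# Desired repetitive positioning trajectory (illustrative, not from the paper).
y_des = 0.02 * np.sin(2 * np.pi * t)
yd_des = np.gradient(y_des, dt)
ydd_des = np.gradient(yd_des, dt)

# Much-simplified 1-DOF stage: mass, viscous friction and a cogging-like force ripple.
m, c = 0.5, 2.0
def plant_accel(x, v, u):
    return (u - c * v - 1.0 * np.sin(200.0 * np.pi * x)) / m

# Small MLP feedforward compensator; inputs are the desired trajectory and its derivatives.
scale = np.array([np.max(np.abs(y_des)), np.max(np.abs(yd_des)), np.max(np.abs(ydd_des))])
H = 10
W1 = rng.standard_normal((H, 3))
b1 = np.zeros(H)
W2 = 0.1 * rng.standard_normal(H)
b2 = 0.0
eta = 0.0002                               # learning rate (assumed)

kp, kd = 1000.0, 60.0                      # conventional PD feedback gains (assumed)
x = v = 0.0
u_fb_hist = np.zeros(steps)
for k in range(steps):
    e, ed = y_des[k] - x, yd_des[k] - v
    u_fb = kp * e + kd * ed                # conventional feedback controller

    z = np.array([y_des[k], yd_des[k], ydd_des[k]]) / scale   # normalized inputs
    h = np.tanh(W1 @ z + b1)
    u_ff = W2 @ h + b2                     # neural-network feedforward command

    # Feedback error learning: the feedback output is the teaching signal, so the
    # network gradually takes over the effort the PD loop is supplying.
    d_h = W2 * (1.0 - h ** 2)
    W2 += eta * u_fb * h
    b2 += eta * u_fb
    W1 += eta * u_fb * np.outer(d_h, z)
    b1 += eta * u_fb * d_h

    v += dt * plant_accel(x, v, u_fb + u_ff)
    x += dt * v
    u_fb_hist[k] = u_fb

sec = int(1.0 / dt)
print("mean |u_fb|, first second:", np.mean(np.abs(u_fb_hist[:sec])))
print("mean |u_fb|, last second: ", np.mean(np.abs(u_fb_hist[-sec:])))
```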


A Learning Controller for Gait Control of Biped Walking Robot using Fourier Series Approximation

  • Lim, Dong-cheol;Kuc, Tae-yong
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.85.4-85
    • /
    • 2001
  • A learning controller is presented for the repetitive walking motion of a biped robot. The learning control scheme learns the approximate inverse-dynamics input of the biped walking robot and uses the learned input pattern to generate the input profile for a walking motion different from the one learned. In the learning controller, a PID feedback controller stabilizes the transient response of the robot dynamics, while the feedforward learning controller computes the desired actuator torques for feedforward compensation of the nonlinear dynamics in steady state. It is shown that all the error signals in the learning control system are bounded and that the robot motion trajectory converges asymptotically to the desired one. The proposed learning control scheme is ...
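
A minimal sketch of the idea, assuming a single joint modeled as a driven pendulum: a PD feedback loop stabilizes each walking cycle, while the feedforward torque is represented by a truncated Fourier series whose coefficients are updated after every cycle from the recorded feedback effort. The joint model, harmonic count, gains and learning rate are illustrative assumptions, not the paper's robot.

```python
import numpy as np

# One joint of the walking motion, modeled as a driven pendulum (illustrative only):
#   I*th'' = u - m*g*l*sin(th) - c*th'
I_j, mgl, c = 0.05, 2.0, 0.1
dt = 0.001
T_p = 1.0                                        # period of the repetitive walking motion
steps = int(T_p / dt)
t = np.arange(steps) * dt
w0 = 2.0 * np.pi / T_p

th_des = 0.3 * np.sin(w0 * t) + 0.1 * np.sin(2 * w0 * t)   # desired joint angle
thd_des = np.gradient(th_des, dt)

K = 6                                            # number of harmonics in the feedforward
basis = np.ones((2 * K + 1, steps))
for k in range(1, K + 1):
    basis[2 * k - 1] = np.cos(k * w0 * t)
    basis[2 * k] = np.sin(k * w0 * t)
coef = np.zeros(2 * K + 1)                       # learned Fourier coefficients of u_ff

kp, kd, eta = 30.0, 2.0, 0.4                     # PD gains and learning rate (assumed)
th, thd = th_des[0], thd_des[0]
for cycle in range(20):                          # repeated walking cycles
    u_ff = coef @ basis                          # feedforward torque for this cycle
    u_fb_rec, err = np.zeros(steps), np.zeros(steps)
    for k in range(steps):
        e, ed = th_des[k] - th, thd_des[k] - thd
        u_fb = kp * e + kd * ed                  # feedback keeps the transient stable
        thdd = (u_ff[k] + u_fb - mgl * np.sin(th) - c * thd) / I_j
        thd += dt * thdd
        th += dt * thd
        u_fb_rec[k], err[k] = u_fb, e
    # Fold the feedback effort, projected on the Fourier basis, into the feedforward
    # coefficients used on the next cycle (the feedforward learns the inverse dynamics).
    coef += eta * (basis @ u_fb_rec) * (2.0 / steps)
    if cycle % 5 == 0 or cycle == 19:
        print(f"cycle {cycle:2d}  max|e| = {np.max(np.abs(err)):.5f}")
```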


Iterative learning control for discrete-time feedback systems and its application to a direct drive SCARA robot (이산시간 궤환 시스템에 대한 반복학습제어 및 직접구동형 SCARA 로보트에의 응용)

  • Yeo, Seong-Won;Kim, Jae-Oh;Hwang, Gun;Kim, Sung-Hyun;Kim, Do-Hyun;Ahn, Hyun-Sik
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.34S no.7
    • /
    • pp.56-65
    • /
    • 1997
  • In this paper, we propose a reference-input modification-type iterative learning control law for a class of discrete-time nonlinear systems and prove the convergence of the output error. High-precision trajectory control can be obtained when the proposed control law is properly combined with a feedback controller, and the learning law is easier to implement than control-input modification-type learning laws. To show the validity and convergence performance of the proposed control law, we perform experiments on trajectory control and rejection of periodic disturbances with a 2-axis SCARA-type direct-drive robot.
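
A hedged sketch of reference-input modification-type learning wrapped around an existing feedback loop, using an illustrative discrete-time nonlinear plant with a periodic disturbance: the feedback law is left untouched and only the reference fed to the closed loop is corrected from trial to trial. The plant, feedback gain and learning gain are assumptions, not the paper's SCARA robot.

```python
import numpy as np

N, trials = 100, 15
k = np.arange(N)
y_des = 1.0 - np.cos(2 * np.pi * k / N)          # desired output over one trial
d = 0.3 * np.sin(4 * np.pi * k / N)              # periodic disturbance, same each trial

kp = 1.2                                          # existing feedback gain (kept fixed)
def run_trial(r):
    """Closed loop: u = kp*(r - y),  y(k+1) = 0.8*y + 0.2*sin(y) + 0.5*u + d."""
    y = np.zeros(N + 1)
    for i in range(N):
        u = kp * (r[i] - y[i])
        y[i + 1] = 0.8 * y[i] + 0.2 * np.sin(y[i]) + 0.5 * u + d[i]
    return y

L = 1.0                                           # learning gain (assumed)
r = y_des.copy()                                  # start from the unmodified reference
for trial in range(trials):
    y = run_trial(r)
    e = y_des - y[1:]                             # output error, one step ahead of r
    r = r + L * e                                 # modify the reference, not the control input
    print(f"trial {trial:2d}  max|e| = {np.max(np.abs(e)):.5f}")
```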


Implementation of Hybrid Neural Network for Improving Learning ability and Its Application to Visual Tracking Control (학습 성능의 개선을 위한 복합형 신경회로망의 구현과 이의 시각 추적 제어에의 적용)

  • 김경민;박중조;박귀태
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.12
    • /
    • pp.1652-1662
    • /
    • 1995
  • In this paper, a hybrid neural network is proposed to improve the learning ability of a neural network. Combining the characteristics of a self-organizing neural network model with those of a multi-layer perceptron model trained by backpropagation reduces both the learning error and the learning time. During learning, the proposed hybrid neural network reduces the number of nodes in the hidden layers to shorten the calculation time, and it uses fuzzy feedback values when updating the responding region of each node in the hidden layer. To show its effectiveness, the proposed hybrid neural network is tested on Boolean functions (XOR, 3-bit parity) and on the solution of inverse kinematics. Finally, it is applied to the visual tracking control of a PUMA560 robot, and the resulting data are presented.
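
The paper's exact hybrid structure and its fuzzy feedback values are not reproduced here. As a loose analogue only, the sketch below places hidden-unit centers with a self-organizing (competitive) first stage and then trains the output layer with the delta rule on the XOR benchmark mentioned in the abstract; the initialization, kernel width and learning rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training data (one of the benchmark tasks mentioned in the abstract).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.array([0.0, 1.0, 1.0, 0.0])

# Stage 1: self-organizing (competitive) layer places the hidden-unit centers.
H = 4
centers = X[rng.choice(len(X), H, replace=False)] + 0.05 * rng.standard_normal((H, 2))
for epoch in range(200):
    lr = 0.5 * (1.0 - epoch / 200)
    for x in X[rng.permutation(len(X))]:
        win = np.argmin(np.sum((centers - x) ** 2, axis=1))   # responding (winner) node
        centers[win] += lr * (x - centers[win])               # move it toward the input

# Stage 2: output layer on the hidden responses, trained with the delta rule.
sigma = 0.5
def hidden(x):
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))

w = np.zeros(H)
b = 0.0
for epoch in range(500):
    for x, t in zip(X, T):
        h = hidden(x)
        y = 1.0 / (1.0 + np.exp(-(w @ h + b)))                # sigmoid output unit
        delta = (t - y) * y * (1.0 - y)
        w += 2.0 * delta * h
        b += 2.0 * delta

for x, t in zip(X, T):
    y = 1.0 / (1.0 + np.exp(-(w @ hidden(x) + b)))
    print(x, "target", t, "output", round(float(y), 3))
```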


A Case Study of Utilizing Twitter and Moodle for Teaching of Communication Strategies (의사소통 전략 교수를 위한 트위터와 무들 활용 사례 연구)

  • Cho, In Jung
    • Journal of Korean language education
    • /
    • v.25 no.1
    • /
    • pp.203-234
    • /
    • 2014
  • This paper demonstrates how to incorporate the teaching of communication strategies into a large class of English-speaking learners of the Korean language. The method proposed here was developed to overcome the difficulty of conducting language activities involving communicative interaction among students, and between teacher and students, in a large classroom. To compensate for the minimal opportunities for interaction in the classroom, students are given the task of expressing in Korean the English translations of authentic Korean comics via Twitter, which was later replaced with the feedback feature on Moodle; their Korean expressions are then collected and projected onto a big screen. These collected expressions naturally differ from one another, helping students realize that the same message or meaning can be expressed in many different ways. The results of two separately conducted questionnaires show that this method is an effective way of giving students significantly more chances to produce 'comprehensible output' that requires them to think about how to communicate with their limited knowledge of Korean. Many students also commented that the teacher's feedback on errors gave them the opportunity to learn about common errors as well as their own.

A Reinforcement Learning Method using TD-Error in Ant Colony System (개미 집단 시스템에서 TD-오류를 이용한 강화학습 기법)

  • Lee, Seung-Gwan;Chung, Tae-Choong
    • The KIPS Transactions:PartB
    • /
    • v.11B no.1
    • /
    • pp.77-82
    • /
    • 2004
  • In reinforcement learning, an agent receives a reward for the action it selects when it makes a state transition from the present state; assigning credit for this reward to earlier decisions is the temporal credit assignment problem, an important issue in reinforcement learning. In this paper we examine Ant-Q, a learning method proposed as a new metaheuristic for hard combinatorial optimization problems such as the Traveling Salesman Problem (TSP); it is a population-based approach that uses positive feedback as well as greedy search. We then propose an Ant-TD reinforcement learning method that applies a diversification strategy to the state transitions and uses a TD-error in the value update. Experiments show that the proposed method can find an optimal solution faster than other reinforcement learning methods such as ACS and Ant-Q.
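
A minimal sketch in the spirit of Ant-Q/ACS for the TSP: ants build tours with the pseudo-random-proportional rule, each step applies a TD-style local update that bootstraps on the best value reachable from the next city, and a delayed reinforcement is applied along the best tour found so far. The instance, parameters and reward scaling are assumptions; this is not the paper's Ant-TD algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Random TSP instance (illustrative only).
n_cities = 20
pts = rng.uniform(0, 100, size=(n_cities, 2))
dist = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
np.fill_diagonal(dist, np.inf)
heur = 1.0 / dist                          # visibility (inverse distance)

n_ants, iters = 10, 100
alpha, gamma_q, beta, q0 = 0.1, 0.3, 2.0, 0.9
AQ = np.full((n_cities, n_cities), 0.01)   # Ant-Q values (pheromone-like state-action values)
best_len, best_tour = np.inf, None

for it in range(iters):
    for _ in range(n_ants):
        start = int(rng.integers(n_cities))
        tour, visited = [start], {start}
        while len(tour) < n_cities:
            r = tour[-1]
            cand = [c for c in range(n_cities) if c not in visited]
            score = AQ[r, cand] * heur[r, cand] ** beta
            if rng.random() < q0:                       # exploitation
                s = cand[int(np.argmax(score))]
            else:                                       # biased exploration
                s = int(rng.choice(cand, p=score / score.sum()))
            # TD-style local update: bootstrap on the best value from the next city.
            nxt = [c for c in range(n_cities) if c not in visited and c != s]
            boot = AQ[s, nxt].max() if nxt else 0.0
            AQ[r, s] = (1 - alpha) * AQ[r, s] + alpha * gamma_q * boot
            tour.append(s)
            visited.add(s)
        length = sum(dist[tour[i], tour[(i + 1) % n_cities]] for i in range(n_cities))
        if length < best_len:
            best_len, best_tour = length, tour
    # Delayed (global) reinforcement along the best tour found so far.
    for i in range(n_cities):
        r, s = best_tour[i], best_tour[(i + 1) % n_cities]
        AQ[r, s] = (1 - alpha) * AQ[r, s] + alpha * (10.0 / best_len)
    if it % 20 == 0 or it == iters - 1:
        print(f"iteration {it:3d}  best tour length = {best_len:.1f}")
```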