• Title/Summary/Keyword: Learning Gain

Search Result 315, Processing Time 0.026 seconds

Estimation of learning gain in iterative learning control using neural networks

  • Choi, Jin-Young;Park, Hyun-Joo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1996.10a
    • /
    • pp.91-94
    • /
    • 1996
  • This paper presents an approach to the estimation of the learning gain in iterative learning control for discrete-time affine nonlinear systems. In iterative learning control, determining a learning gain that satisfies the convergence condition requires knowledge of the system model. In the proposed method, the input-output equation of a system is identified by a neural network referred to as a Piecewise Linearly Trained Network (PLTN). The learning gain in the iterative learning law is then estimated from the input-output equation. The validity of the method is demonstrated by simulations.

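The first-order, P-type update law that such schemes build on can be sketched in a few lines; the toy affine plant, the hand-picked gain, and all names below are illustrative assumptions, not the paper's PLTN-based estimator:

```python
import numpy as np

def ilc_trial(u, y_d, plant, gamma):
    """One trial of a P-type iterative learning control law:
    u_{k+1}(t) = u_k(t) + gamma * e_k(t), where e_k = y_d - y_k."""
    y = plant(u)              # run the trial with the current input
    e = y_d - y               # tracking error over the whole trajectory
    return u + gamma * e, e   # updated feedforward input for the next trial

# Toy discrete-time affine plant y[t] = 0.5*y[t-1] + u[t] (illustrative only).
# Its direct feedthrough is b = 1, so gamma = 0.8 satisfies the usual
# convergence condition |1 - gamma*b| < 1.
def plant(u):
    y = np.zeros_like(u)
    for t in range(1, len(u)):
        y[t] = 0.5 * y[t - 1] + u[t]
    return y

y_d = np.sin(np.linspace(0.0, np.pi, 50))   # desired trajectory
u = np.zeros(50)
for k in range(100):                        # repeat the task, trial after trial
    u, e = ilc_trial(u, y_d, plant, gamma=0.8)
# the tracking error shrinks toward zero as the trials accumulate
```

The paper's point is precisely that a suitable `gamma` can be estimated from an identified model instead of assuming the feedthrough term is known.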

Gain Tuning for SMCSPO of Robot Arm with Q-Learning (Q-Learning을 사용한 로봇팔의 SMCSPO 게인 튜닝)

  • Lee, JinHyeok;Kim, JaeHyung;Lee, MinCheol
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.2
    • /
    • pp.221-229
    • /
    • 2022
  • Sliding mode control (SMC) is a robust method for controlling a robot arm with nonlinear properties. A high switching gain lets SMC achieve adequate control performance without an exact robot model containing nonlinear and uncertainty terms, but it also causes chattering. To solve this problem, SMC with a sliding perturbation observer (SMCSPO) has been researched; this method reduces chattering by compensating for the perturbation estimated by the observer, which allows a lower switching control gain. However, optimal gain tuning is still necessary to obtain better tracking performance and less chattering. This paper proposes a method in which Q-learning automatically tunes the control gains of SMCSPO through iterative operation. In this tuning method, the reward of reinforcement learning (RL) is set to the negative of the state tracking errors, and the RL action is a change of control gain chosen to maximize the reward as the number of movement iterations increases. A simple motion test for a 7-DOF robot arm was simulated in MATLAB to validate this RL tuning algorithm. The simulation showed that the method can automatically tune the control gains for SMCSPO.
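The tuning loop described above can be reduced to a minimal, bandit-style sketch: the reward is the negative accumulated tracking error of an episode, and the action selects a gain level. The first-order plant, the gain grid, and every constant below are assumptions standing in for the 7-DOF arm and SMCSPO:

```python
import random

# Toy episode: first-order plant x' = -x + K*(target - x); return total |error|.
def episode_cost(K, target=1.0, dt=0.05, steps=100):
    x, cost = 0.0, 0.0
    for _ in range(steps):
        e = target - x
        x += dt * (-x + K * e)
        cost += abs(e)
    return cost

random.seed(0)
gains = [0.5 * i for i in range(1, 21)]    # candidate gain levels 0.5 .. 10.0
Q = [0.0] * len(gains)                      # one Q-value per gain "action"
eps, alpha = 0.3, 0.2
for it in range(300):                       # iterative tuning episodes
    a = random.randrange(len(gains)) if random.random() < eps \
        else max(range(len(gains)), key=lambda i: Q[i])
    r = -episode_cost(gains[a])             # reward = minus tracking error
    Q[a] += alpha * (r - Q[a])              # bandit-style value update
best_gain = gains[max(range(len(gains)), key=lambda i: Q[i])]
```

The full method is a proper multi-state Q-learning over SMCSPO gains; this sketch keeps only the reward shape and the greedy gain selection.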

D.C. Motor Speed Control by Learning Gain Regulator (학습이득 조절기에 의한 직류 모터 속도제어)

  • Park, Wal-Seo;Lee, Sung-Su;Kim, Yong-Wook
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.19 no.6
    • /
    • pp.82-86
    • /
    • 2005
  • The PID controller is widely used in automatic equipment for industry. However, when a system's characteristics vary intermittently or continuously, deciding new parameters for accurate control is a difficult task. As a method of solving this problem, this paper presents a learning gain regulator that performs the functions of a PID controller. A proper learning gain for the system is decided by the Delta learning rule. The function of the proposed learning gain regulator is verified by simulation results for DC motor speed control.
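The idea of replacing fixed controller parameters with a gain adapted by the Delta rule can be sketched on a toy first-order motor model; the plant coefficients, step sizes, and the clamp below are illustrative assumptions, not the paper's design:

```python
# Delta-rule gain adaptation for a first-order DC-motor speed model.
# The adaptive weight w on top of a small fixed gain K0 is adjusted by
# the Delta rule dw = eta * error * input, where here the weight's input
# is the error itself, so dw = eta * e * e.
def run(ref=1.0, eta=0.05, dt=0.01, steps=2000):
    w, K0 = 0.0, 0.2
    speed = 0.0
    for _ in range(steps):
        e = ref - speed                    # speed error
        u = (K0 + w) * e                   # proportional control effort
        speed += dt * (-speed + 2.0 * u)   # motor model: tau*ds/dt = -s + b*u
        w += eta * e * e                   # Delta rule: raise gain while error persists
        w = min(w, 5.0)                    # clamp to keep the loop stable
    return speed, w

speed, w = run()
# the gain grows until the speed tracks the reference closely
```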

Competitive Learning Neural Network with Binary Reinforcement and Constant Adaptation Gain (일정적응 이득과 이진 강화함수를 갖는 경쟁 학습 신경회로망)

  • Seok, Jin-Wuk;Cho, Seong-Won;Choi, Gyung-Sam
    • Proceedings of the KIEE Conference
    • /
    • 1994.11a
    • /
    • pp.326-328
    • /
    • 1994
  • A modified version of Kohonen's simple competitive learning (SCL) algorithm is proposed, which has a binary reinforcement function and a constant adaptation gain. In contrast to the time-varying adaptation gain of the original Kohonen SCL algorithm, the proposed algorithm uses a constant adaptation gain and adds a binary reinforcement function to compensate for the lowered learning ability of SCL due to the constant gain. Since the proposed algorithm does not require complicated multiplications, its digital hardware implementation is much easier than that of the original SCL.

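A minimal sketch of this kind of update is below: the winning unit takes a fixed-size step along sign(x − w), so the constant gain never multiplies a real-valued difference. The two-cluster data, unit count, and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def scl_binary(X, n_units=2, alpha=0.1, epochs=20):
    """Simple competitive learning with a constant adaptation gain and a
    binary reinforcement term: only the winner moves, by a fixed step
    along sign(x - w) instead of a decaying-gain step along (x - w)."""
    W = rng.normal(scale=0.1, size=(n_units, X.shape[1])) + X.mean(axis=0)
    for _ in range(epochs):
        for x in X:
            win = np.argmin(np.linalg.norm(W - x, axis=1))  # competition
            W[win] += alpha * np.sign(x - W[win])           # constant-gain binary update
    return W

# Two well-separated clusters (illustrative data)
X = np.vstack([rng.normal(0.0, 0.2, (50, 2)), rng.normal(3.0, 0.2, (50, 2))])
W = scl_binary(X)
# each unit settles near one cluster center
```

The hardware appeal is visible in the update line: it needs only a comparison and a fixed-size add or subtract per weight.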

Self-Organizing Feature Map with Constant Learning Rate and Binary Reinforcement (일정 학습계수와 이진 강화함수를 가진 자기 조직화 형상지도 신경회로망)

    • Cho, Seong-Won;Seok, Jin-Wuk
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.32B no.1
    • /
    • pp.180-188
    • /
    • 1995
  • A modified version of Kohonen's self-organizing feature map (SOFM) algorithm is proposed, which has a binary reinforcement function and a constant learning rate. In contrast to the time-varying adaptation gain of the original Kohonen SOFM algorithm, the proposed algorithm uses a constant adaptation gain and adds a binary reinforcement function to compensate for the lowered learning ability of SOFM due to the constant learning rate. Since the proposed algorithm does not require complicated multiplications, its digital hardware implementation is much easier than that of the original SOFM.

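The SOFM variant differs from plain competitive learning in that the winner's map neighbors are updated too. A sketch with a 1-D map, a fixed neighborhood radius, and a constant rate (all sizes and constants are assumptions) might read:

```python
import numpy as np

rng = np.random.default_rng(1)

def sofm_binary(X, n_units=10, alpha=0.05, radius=1, epochs=30):
    """1-D self-organizing feature map with a constant learning rate and a
    binary reinforcement update: the winner and its map neighbors within
    `radius` all take a fixed-size step along sign(x - w), instead of the
    usual decaying-gain (x - w) step."""
    W = rng.uniform(0, 1, size=(n_units, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            win = int(np.argmin(np.linalg.norm(W - x, axis=1)))
            lo, hi = max(0, win - radius), min(n_units, win + radius + 1)
            W[lo:hi] += alpha * np.sign(x - W[lo:hi])  # winner + neighbors move
    return W

X = rng.uniform(0, 1, size=(300, 2))   # uniform 2-D data (illustrative)
W = sofm_binary(X)
# the units spread out so every input lands near some weight vector
```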

Behavior-based Learning Controller for Mobile Robot using Topological Map (Topological Map을 이용한 이동로봇의 행위기반 학습제어기)

  • Yi, Seok-Joo;Moon, Jung-Hyun;Han, Shin;Cho, Young-Jo;Kim, Kwang-Bae
    • Proceedings of the KIEE Conference
    • /
    • 2000.07d
    • /
    • pp.2834-2836
    • /
    • 2000
  • This paper introduces a behavior-based learning controller for a mobile robot using a topological map. When the mobile robot navigates to the goal position, it utilizes the given topological map and its own location. While navigating in an unknown environment, the robot classifies its situation using ultrasonic sensor data, calculates each motor schema multiplied by its respective gain for all behaviors, and then takes an action according to the vector sum of all the motor schemas. After each action, the robot's location in the given topological map is fed to the learning module, which adapts the weights of the neural network for gain learning. Simulation results show that the robot navigates to the goal position successfully after iterative gain learning with topological information.

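The gain-weighted vector-sum fusion of motor schemas can be written in a few lines; the schema vectors and gain values below are made-up placeholders (in the paper, the gains come from a neural network trained with topological-map feedback):

```python
import math

def fuse(schemas, gains):
    """Each schema outputs a 2-D heading vector; the robot acts on the
    gain-weighted vector sum of all active schemas."""
    vx = sum(g * sx for (sx, _), g in zip(schemas, gains))
    vy = sum(g * sy for (_, sy), g in zip(schemas, gains))
    return vx, vy

move_to_goal   = (1.0, 0.0)    # unit vector toward the goal
avoid_obstacle = (-0.5, 0.5)   # pushes away from a sensed obstacle
wander         = (0.0, 0.1)    # small exploration component

vx, vy = fuse([move_to_goal, avoid_obstacle, wander], gains=[1.0, 0.8, 0.2])
heading = math.atan2(vy, vx)   # resulting steering direction
```

Learning then reduces to adjusting the `gains` vector per situation, which is what the neural network in the paper does.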

Fuzzy Gain Scheduling of Velocity PI Controller with Intelligent Learning Algorithm for Reactor Control

  • Kim, Dong-Yun;Seong, Poong-Hyun
    • Proceedings of the Korean Nuclear Society Conference
    • /
    • 1996.11a
    • /
    • pp.73-78
    • /
    • 1996
  • In this study, we propose a fuzzy gain scheduler with an intelligent learning algorithm for reactor control. In the proposed algorithm, the gradient descent method is used to learn the rule bases of the fuzzy algorithm. These rule bases are learned toward minimizing an objective function, called the performance cost function. The objective of the fuzzy gain scheduler with the intelligent learning algorithm is the generation of adequate gains that minimize the system error. The condition of a plant generally changes over time; the initial gains obtained through system analysis are then no longer suitable for the changed plant, and new gains are needed that minimize the error stemming from the changed plant condition. In this paper, we applied this strategy to reactor control of a nuclear power plant (NPP), and the results were compared with those of a simple PI controller with fixed gains. The results show that the proposed algorithm is superior to the simple PI controller.

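The gradient-descent part of such a scheme can be sketched by collapsing the fuzzy rule bases to a single scheduled gain and descending a quadratic performance cost; the toy plant, the fixed Ki/Kp ratio, and the step sizes are assumptions, not the paper's reactor model:

```python
# Gradient-descent tuning of a PI gain against a performance cost function
# (a quadratic tracking cost on a toy first-order plant; the full method
# tunes fuzzy rule bases, which this sketch collapses to one gain K).
def cost(K, ref=1.0, dt=0.05, steps=200):
    x, J, integ = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = ref - x
        integ += e * dt
        u = K * (e + 0.5 * integ)        # PI control with a fixed Ki/Kp ratio
        x += dt * (-x + u)               # toy first-order plant
        J += e * e * dt                  # performance cost function
    return J

K, lr, h = 0.5, 0.5, 1e-3
for _ in range(60):
    grad = (cost(K + h) - cost(K - h)) / (2 * h)  # numerical gradient of J
    K -= lr * grad                                 # descend the cost surface
# K moves toward a gain with a lower performance cost than the initial one
```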

Vocabulary Acquisition of Korean Learners for Academic Purposes -Focusing on the Effects of Instruction Introductory Methods of Context Inference and Activation of Background Knowledge (학문목적 한국어 학습자의 어휘 습득 연구 -문맥 추론과 배경지식 활성화를 통한 수업 도입을 중심으로-)

  • Lee, MinWoo
    • Journal of Korean language education
    • /
    • v.29 no.4
    • /
    • pp.93-112
    • /
    • 2018
  • The purpose of this study is to examine vocabulary acquisition in Korean as a foreign language (KFL). Learners gained, on average, 43 points of vocabulary through contextual inference and class introductions that activate background knowledge. In particular, the implicit method showed the highest learning rate at 52 points, the thematic method a 41-point learning rate, and the semantic method the lowest at 25 points. There was no significant difference in the improvement rate for upper-level vocabulary learners, but for lower-level learners the difference was significant. In the post-test relative gain rate, the difference was not significant for upper-level learners but was significant for lower-level learners; in the delayed-test relative gain rate, the difference was significant in all groups. There was a correlation between vocabulary difficulty and score, but not under the thematic method, and there was no correlation between vocabulary difficulty, improvement rate, and relative gain rate in any of the three classes. However, content understanding, lexical grade, improvement rate, and relative gain rate showed significant correlations.

A general dynamic iterative learning control scheme with high-gain feedback

  • Kuc, Tae-Yong;Nam, Kwanghee
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1989.10a
    • /
    • pp.1140-1145
    • /
    • 1989
  • A general dynamic iterative learning control scheme is proposed for a class of nonlinear systems. Relying on a stabilizing high-gain feedback loop, it is possible to show that the feedforward control input errors form a Cauchy sequence over the iterations, which results in uniform convergence of the system state trajectory to the desired one.


TAG neural network model for large-sized optical implementation (대규모 광학적 구현을 위한 TAG 신경회로망 모델)

    • Lee, Hyuk-Jae
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 1991.06a
    • /
    • pp.35-40
    • /
    • 1991
  • In this paper, a new adaptive learning algorithm, Training by Adaptive Gain (TAG), has been developed for the optical implementation of large-sized neural networks, and its electro-optical implementation for 2-dimensional input and output neurons has been demonstrated. The 4-dimensional global fixed interconnections and the 2-dimensional adaptive gain controls are implemented by multi-facet computer-generated holograms and LCTV spatial light modulators, respectively. As the input signals pass through the optical system to the output classifying layer, the TAG adaptive learning algorithm is executed on a personal computer. The system classifies three 5$\times$5 input patterns correctly.
