• Title/Summary/Keyword: Learning control gain

Search results: 86 (processing time: 0.027 seconds)

Estimation of learning gain in iterative learning control using neural networks

  • Choi, Jin-Young;Park, Hyun-Joo
• Institute of Control, Robotics and Systems: Conference Proceedings / 1996.10a / pp.91-94 / 1996
  • This paper presents an approach to estimating the learning gain in iterative learning control for discrete-time affine nonlinear systems. In iterative learning control, determining a learning gain that satisfies the convergence condition requires knowledge of the system model. In the proposed method, the input-output equation of the system is identified by a neural network referred to as a Piecewise Linearly Trained Network (PLTN). The learning gain of the iterative learning law is then estimated from this input-output equation. The validity of the method is demonstrated by simulations.

  • PDF
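The convergence condition the abstract refers to can be seen in a minimal P-type iterative learning control loop. The toy plant, trajectory, and gain below are my own stand-in assumptions, not the paper's PLTN method: for a scalar system y(t+1) = a*y(t) + b*u(t), the update u_{k+1}(t) = u_k(t) + gamma*e_k(t+1) converges when |1 - gamma*b| < 1.

```python
# Sketch of a P-type iterative learning control (ILC) update on an assumed
# scalar plant, illustrating the role of the learning gain gamma.
def run_ilc(a=0.5, b=2.0, gamma=0.4, horizon=20, iterations=60):
    y_d = [t / horizon for t in range(horizon + 1)]   # desired ramp trajectory
    u = [0.0] * horizon                               # initial input guess
    y = [0.0] * (horizon + 1)
    for _ in range(iterations):
        y = [0.0]                                     # one trial of the plant
        for t in range(horizon):
            y.append(a * y[t] + b * u[t])
        e = [y_d[t] - y[t] for t in range(horizon + 1)]
        # P-type learning law with the (time-shifted) trial error
        u = [u[t] + gamma * e[t + 1] for t in range(horizon)]
    return max(abs(y_d[t] - y[t]) for t in range(horizon + 1))
```

Here |1 - gamma*b| = 0.2, so the trial error contracts across iterations; with gamma chosen so that |1 - gamma*b| >= 1 the same loop diverges, which is why the gain must be estimated from a model of the plant.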

Gain Tuning for SMCSPO of Robot Arm with Q-Learning (Q-Learning을 사용한 로봇팔의 SMCSPO 게인 튜닝)

  • Lee, JinHyeok;Kim, JaeHyung;Lee, MinCheol
• The Journal of Korea Robotics Society / v.17 no.2 / pp.221-229 / 2022
  • Sliding mode control (SMC) is a robust method for controlling a robot arm with nonlinear properties. SMC achieves adequate control performance without an exact robot model containing nonlinear and uncertainty terms by using a high switching gain, but a high switching gain causes chattering. To address this problem, SMC with a sliding perturbation observer (SMCSPO) has been studied: the observer estimates the perturbation, the controller compensates for it, and a lower switching gain can then be chosen, reducing the chattering. However, optimal gain tuning is still necessary to obtain better tracking performance and less chattering. This paper proposes a method in which Q-learning automatically tunes the control gains of SMCSPO through iterative operation. In this tuning method, the reward of reinforcement learning (RL) is set to the negative of the state tracking errors, and the RL action is a change of control gain chosen to maximize the reward as the number of movement iterations increases. A simple motion test for a 7-DOF robot arm was simulated in MATLAB to validate this RL tuning algorithm. The simulation showed that the method can automatically tune the control gains for SMCSPO.
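The tuning loop described here (reward = minus the accumulated tracking error, action = a change of control gain) can be sketched with tabular Q-learning on a toy problem. The first-order plant, the candidate gain set, and all learning rates below are my own assumptions, not the authors' SMCSPO model:

```python
import random

random.seed(0)
GAINS = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]    # candidate control gains (assumed)
ACTIONS = [-1, 0, +1]                       # lower / keep / raise the gain index

def trial_cost(k, dt=0.1, steps=50):
    """Accumulated |tracking error| of one trial under proportional gain k."""
    y, cost = 0.0, 0.0
    for _ in range(steps):
        cost += abs(1.0 - y)
        y += dt * k * (1.0 - y)             # toy servo loop y' = k*(r - y)
    return cost

Q = {(s, a): 0.0 for s in range(len(GAINS)) for a in range(len(ACTIONS))}

def step(s, a):
    s2 = min(max(s + ACTIONS[a], 0), len(GAINS) - 1)
    return s2, -trial_cost(GAINS[s2])       # reward = negative tracking cost

for _ in range(400):                        # training episodes
    s = random.randrange(len(GAINS))
    for _ in range(10):
        a = (random.randrange(3) if random.random() < 0.2
             else max(range(3), key=lambda x: Q[(s, x)]))
        s2, r = step(s, a)
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, x)] for x in range(3)) - Q[(s, a)])
        s = s2

s = 0                                       # greedy rollout from the lowest gain
for _ in range(15):
    s, _ = step(s, max(range(3), key=lambda x: Q[(s, x)]))
print("tuned gain:", GAINS[s])
```

With these numbers the learned greedy policy walks the gain up to the value that minimizes the per-trial tracking cost and then keeps it there, which is the behavior the abstract describes across movement iterations.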

Competitive Learning Neural Network with Binary Reinforcement and Constant Adaptation Gain (일정적응 이득과 이진 강화함수를 갖는 경쟁 학습 신경회로망)

  • Seok, Jin-Wuk;Cho, Seong-Won;Choi, Gyung-Sam
• Proceedings of the KIEE Conference / 1994.11a / pp.326-328 / 1994
  • A modified version of Kohonen's Simple Competitive Learning (SCL) algorithm is proposed, which has a binary reinforcement function and a constant adaptation gain. In contrast to the time-varying adaptation gain of the original Kohonen SCL algorithm, the proposed algorithm uses a constant adaptation gain and adds a binary reinforcement function to compensate for the learning ability lost by holding the gain constant. Since the proposed algorithm does not require complicated multiplications, its digital hardware implementation is much easier than that of the original SCL.

  • PDF
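A bare-bones version of simple competitive learning with a constant adaptation gain looks like the sketch below. The binary reinforcement here simply gates the update (1 for the winning unit, 0 for the rest); the paper's exact reinforcement function, data, and unit count are not specified in the abstract, so everything below is an assumed illustration:

```python
import random

random.seed(1)
ETA = 0.05                                  # constant adaptation gain
weights = [[0.0, 0.0], [1.0, 1.0]]          # two competing units (assumed)

def winner(x):
    """Index of the unit whose weight vector is closest to input x."""
    d = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in weights]
    return d.index(min(d))

for _ in range(2000):
    c = random.choice([0.0, 5.0])           # two clusters around (0,0) and (5,5)
    x = [c + random.gauss(0, 0.3), c + random.gauss(0, 0.3)]
    j = winner(x)
    for i, w in enumerate(weights):
        r = 1 if i == j else 0              # binary reinforcement function
        weights[i] = [wi + ETA * r * (xi - wi) for wi, xi in zip(w, x)]

print(weights)
```

Because ETA never decays, each unit keeps tracking its cluster mean with a fixed step size; this is the trade-off the binary reinforcement is meant to compensate for in the paper.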

D.C. Motor Speed Control by Learning Gain Regulator (학습이득 조절기에 의한 직류 모터 속도제어)

  • Park, Wal-Seo;Lee, Sung-Su;Kim, Yong-Wook
• Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.19 no.6 / pp.82-86 / 2005
  • The PID controller is widely used in industrial automation equipment. However, when a system exhibits varying intermittent or continuous characteristics, determining new parameters for accurate control is a hard task. To solve this problem, this paper presents a learning gain regulator that performs the functions of a PID controller. An appropriate learning gain for the system is determined by the Delta learning rule. The performance of the proposed learning gain regulator is verified by simulation results for a DC motor.
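The Delta learning rule the abstract relies on is the classic single-unit update w += eta * (d - y) * x. Below it adapts one scalar gain; the reference gain K_REF and the error samples are stand-in assumptions for illustration, since the paper derives its training signal from the motor's speed error instead:

```python
# Delta learning rule adapting a single controller gain toward an assumed
# reference mapping from speed error to control output.
K_REF = 2.5            # gain the regulator should discover (assumed)
eta = 0.1              # learning rate
w = 0.0                # learned gain

samples = [-1.0 + 0.1 * i for i in range(21)]    # speed-error inputs
for _ in range(200):                             # training epochs
    for x in samples:
        d = K_REF * x                            # desired controller output
        y = w * x                                # regulator output, current gain
        w += eta * (d - y) * x                   # Delta rule update

print(round(w, 4))
```

Each update moves the gain proportionally to the output error times the input, so w converges to the gain that reproduces the desired input-output mapping.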

UAS Automatic Control Parameter Tuning System using Machine Learning Module (기계학습 알고리즘을 이용한 UAS 제어계수 실시간 자동 조정 시스템)

  • Moon, Mi-Sun;Song, Kang;Song, Dong-Ho
• Journal of Advanced Navigation Technology / v.14 no.6 / pp.874-881 / 2010
  • The automatic flight control system (AFCS) of a UAS must track its target flight path accurately by adjusting its flight coefficients as the aircraft's characteristics, such as type, size, or weight, change statically or dynamically. In this paper, we propose a system that autonomously tunes the control gain in flight as the aircraft's characteristics change, by adding a Machine Learning Module (MLM) to the AFCS. The MLM is designed with a linear regression algorithm and reinforcement learning, and it includes an Evaluation Module (EvM) that evaluates the control gain learned by the MLM and verifies the system. The system was tested on the Beaver FDC simulator, and we present the analyzed results.

A general dynamic iterative learning control scheme with high-gain feedback

  • Kuc, Tae-Yong;Nam, Kwanghee
• Institute of Control, Robotics and Systems: Conference Proceedings / 1989.10a / pp.1140-1145 / 1989
  • A general dynamic iterative learning control scheme is proposed for a class of nonlinear systems. Relying on a stabilizing high-gain feedback loop, it is possible to show that the feedforward control input errors form a Cauchy sequence over the iterations, which results in uniform convergence of the system state trajectory to the desired one.

  • PDF

Fuzzy Gain Scheduling of Velocity PI Controller with Intelligent Learning Algorithm for Reactor Control

  • Kim, Dong-Yun;Seong, Poong-Hyun
• Proceedings of the Korean Nuclear Society Conference / 1996.11a / pp.73-78 / 1996
  • In this study, we propose a fuzzy gain scheduler with an intelligent learning algorithm for reactor control. The proposed algorithm uses the gradient descent method to learn the rule bases of a fuzzy algorithm. These rule bases are learned so as to minimize an objective function called the performance cost function. The objective of the fuzzy gain scheduler with the intelligent learning algorithm is to generate adequate gains that minimize the system error. The condition of a plant generally changes over time; that is, the initial gains obtained through system analysis are no longer suitable for the changed plant, and new gains are needed that minimize the error stemming from the changed plant condition. In this paper, we applied this strategy to reactor control in a nuclear power plant (NPP), and the results were compared with those of a simple PI controller with fixed gains. The results show that the proposed algorithm is superior to the simple PI controller.

  • PDF
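The core loop of this abstract, gradient descent on a performance cost to learn gains, can be sketched without the fuzzy rule base. A single PI gain pair on an assumed first-order plant stands in for the rule bases, and the numerical gradient replaces the paper's analytic one; all numbers below are my own assumptions:

```python
# Gradient descent on a performance cost function J(kp, ki), the learning
# principle the abstract describes, applied to an assumed toy plant.
def cost(kp, ki, dt=0.05, steps=100):
    """Performance cost: integral of squared tracking error for a step ref."""
    y = integ = J = 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ           # PI control law
        y += dt * (-y + u)                # first-order plant y' = -y + u
        J += e * e * dt
    return J

kp, ki, lr, h = 0.5, 0.1, 0.1, 1e-4
for _ in range(300):                      # learn gains by gradient descent
    gkp = (cost(kp + h, ki) - cost(kp - h, ki)) / (2 * h)
    gki = (cost(kp, ki + h) - cost(kp, ki - h)) / (2 * h)
    kp, ki = kp - lr * gkp, ki - lr * gki

print(cost(kp, ki), cost(0.5, 0.1))
```

The learned gains yield a strictly lower performance cost than the initial fixed gains, mirroring the paper's comparison against a fixed-gain PI controller.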

Optimal Condition Gain Estimation of PID Controller using Neural Networks (신경망을 이용한 PID 제어기의 제어 사양 최적의 이득값 추정)

  • Son, Jun-Hyeok;Seo, Bo-Hyeok
• Proceedings of the KIEE Conference / 2003.11c / pp.717-719 / 2003
  • Recently, neural network techniques have been widely used in adaptive and learning control schemes for production systems. However, learning generally takes a long time when applied to a control system, and the physical meaning of the resulting neural networks is not obvious. In practice, since it is difficult to set PID gains suitably, many studies on PID gain tuning schemes have been reported. A neural-network-based PID control scheme is proposed that extracts the skills of human experts as PID gains. The controller is designed using a three-layered neural network. The effectiveness of the proposed scheme is investigated through an application to a production control system. This control method enables a plant to operate smoothly as the plant condition varies due to unexpected accidents.

  • PDF

A Neurofuzzy Algorithm-Based Advanced Bilateral Controller for Telerobot Systems

  • Cha, Dong-hyuk;Cho, Hyung-Suck
• Transactions on Control, Automation and Systems Engineering / v.4 no.1 / pp.100-107 / 2002
  • The advanced bilateral control algorithm, which can enlarge the reflected force by combining force reflection and compliance control, greatly enhances workability in teleoperation. In this scheme, the maximum boundaries of the compliance controller and the force reflection gain that guarantee stability and good task performance depend greatly upon the characteristics of the slave arm, the master arm, and the environment. These characteristics, however, are generally unknown in teleoperation, which makes it very difficult to determine such maximum gain boundaries. This paper presents a novel method for the design of an advanced bilateral controller. The factors affecting task performance and stability in the advanced bilateral controller are analyzed, and a design guideline is presented. The neurofuzzy compliance model (NFCM)-based bilateral control proposed herein is an algorithm designed to automatically determine the suitable compliance for a given task or environment. The NFCM, composed of a fuzzy logic controller (FLC) and a rule-learning mechanism, is used as a compliance controller. The FLC generates compliant motions according to contact forces. The rule-learning mechanism, based on a reinforcement learning algorithm, trains the rule base of the FLC until the given task is completed successfully. Since the scheme allows the use of a large force reflection gain, it can ensure good task performance. Moreover, the scheme requires no a priori knowledge of the slave arm dynamics, the slave arm controller, or the environment, and thus it can easily be applied to the control of any telerobot system. The effectiveness of the proposed algorithm has been verified through a series of experiments.

Real-Time Control of DC Servo Motor with Variable Load Using PID-Learning Controller (PID 학습제어기를 이용한 가변부하 직류서보전동기의 실시간 제어)

  • Kim, Sang-Hoon;Chung, In-Suk;Kang, Young-Ho;Nam, Moon-Hyon;Kim, Lark-Kyo
• The Transactions of the Korean Institute of Electrical Engineers D / v.50 no.3 / pp.107-113 / 2001
  • This paper deals with speed control of a DC servo motor using a PID controller whose gains are tuned by a back-propagation (BP) learning algorithm. Conventionally, PID controllers have been used in industrial control, but suitable parameters must be produced for each system, and the controller's variables must be changed according to environments, disturbances, and loads. In this paper, through experiments with the BP-tuned PID controller, we investigated the speed characteristics of a DC servo motor under variable loads. The parameters of the controller are determined by a neural network operating on-line, after the neural network has been trained off-line.

  • PDF
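On-line gradient-style gain tuning of this kind can be sketched very roughly as follows. The first-order motor model, the update rates, and the common simplification of approximating the plant Jacobian dy/du by its sign (+1) are all my own assumptions; the paper trains a neural network off-line first and only the proportional and integral gains are adapted here for simplicity:

```python
# Toy comparison: fixed PI gains vs. gains adapted on-line by a
# back-propagation-style gradient update (sign of dy/du assumed +1).
def simulate(adapt, kp=0.5, ki=0.05, dt=0.05, steps=400):
    y = integ = J = 0.0
    for _ in range(steps):
        e = 1.0 - y                      # step speed reference
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (-y + u)               # assumed first-order motor model
        if adapt:                        # gradient-style update K += eta*e*x
            kp += 0.02 * e * e
            ki += 0.02 * e * integ
        J += abs(e) * dt                 # accumulated speed error
    return J

print(simulate(False), simulate(True))
```

Because the gains grow while the tracking error persists and settle as it vanishes, the adapted controller accumulates less error than the fixed-gain one on this toy model, which is the qualitative behavior the experiment reports.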