• Title/Summary/Keyword: Learning control

A Simple Learning Variable Structure Control Law for Rigid Robot Manipulators

  • Choi, Han-Ho;Kuc, Tae-Yong;Lee, Dong-Hun
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.354-359
    • /
    • 2003
  • In this paper, we consider the problem of designing a simple learning variable structure system for repeatable tracking control of robot manipulators. We combine a variable structure control law as the robust part for stabilization and a feedforward learning law as the intelligent part for nonlinearity compensation. We show that the tracking error asymptotically converges to zero. Finally, we give computer simulation results in order to show the effectiveness of our method.

  • PDF
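
The control structure this entry describes, a switching variable structure term for robust stabilization combined with a feedforward term learned over repeated runs of the same trajectory, can be illustrated roughly as below. This is a minimal single-joint sketch, not the authors' control law: the sliding surface, the gains lam, k_vsc, and gamma, and the P-type trial-to-trial update are all assumptions.

```python
import numpy as np

# Hedged sketch of a learning variable structure controller for one robot joint.
# The sliding surface, all gains, and the trial-to-trial update rule are
# illustrative assumptions, not the law proposed in the paper.

lam, k_vsc = 5.0, 20.0   # sliding-surface slope and switching gain (assumed)
gamma = 0.5              # learning gain for the feedforward term (assumed)

def vsc_term(e, de):
    """Robust variable structure (switching) part used for stabilization."""
    s = de + lam * e                     # sliding surface
    return -k_vsc * np.sign(s)

def learned_feedforward(w, t_index):
    """Feedforward input remembered from previous trials (the 'intelligent' part)."""
    return w[t_index]

def update_feedforward(w, trial_errors):
    """Between repetitions, correct the stored feedforward profile with the
    recorded tracking error so it gradually compensates the unmodeled
    nonlinear dynamics (assumed P-type update)."""
    return w + gamma * trial_errors

# Usage within one trial of N samples (e, de: tracking error and its rate):
#   tau[t] = vsc_term(e[t], de[t]) + learned_feedforward(w, t)
# and before the next repetition:
#   w = update_feedforward(w, e_trial)
```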

Control of a batch reactor using iterative learning (반복학습을 이용한 회분식 반응기의 제어)

  • 조문기;방성호;조진원;이광순
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1991.10a
    • /
    • pp.81-86
    • /
    • 1991
  • The iterative learning operation has been utilized in the temperature control of a batch reactor. A generic form of a feedback-assisted first-order learning control scheme was constructed, and various design and operation modes were then derived through convergence and robustness analysis in the frequency domain. The proposed learning control scheme was implemented on a bench-scale batch reactor in which the heat of reaction was simulated by an electric heater. The results show a marked improvement in control performance as the number of batch operations increases.

  • PDF
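
For reference, a generic feedback-assisted first-order learning update of the kind this abstract refers to, together with the usual frequency-domain contraction condition, is sketched below. The symbols (learning filter L, within-batch feedback C, plant G) and the condition are the standard textbook form, not necessarily the expressions derived in the paper.

```latex
% Generic feedback-assisted first-order learning update (assumed textbook form):
%   u_k : input profile of the k-th batch,   e_k = y_d - y_k : tracking error,
%   L   : learning filter acting between batches,   C : within-batch feedback.
\[
  u_{k+1}(t) = u_k(t) + \bigl(L\,e_k\bigr)(t) + \bigl(C\,e_{k+1}\bigr)(t)
\]
% A standard sufficient condition for batch-to-batch contraction of the error,
% stated for the pure learning part (C = 0) with plant transfer function G:
\[
  \bigl|\,1 - G(j\omega)\,L(j\omega)\,\bigr| < 1
  \qquad \text{for all } \omega \text{ of interest.}
\]
```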

Control of a batch reactor by learning operation

  • Lee, Kwang-Soon;Cho, Moon-Khi;Cho, Jin-Won
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1990.10b
    • /
    • pp.1277-1283
    • /
    • 1990
  • The iterative learning control synthesized in the frequency domain has been utilized for temperature control of a batch reactor. For this purpose, a feedback-assisted generalized learning control scheme was constructed first, and convergence and robustness analyses were conducted in the frequency domain. The feedback-assisted learning operation was then implemented on a bench-scale batch reactor in which the reaction heat was simulated using an electric heater. As a result, a progressive reduction of the temperature control error was clearly observed as the batch operation was repeated.

  • PDF

Multi-Dimensional Reinforcement Learning Using a Vector Q-Net - Application to Mobile Robots

  • Kiguchi, Kazuo;Nanayakkara, Thrishantha;Watanabe, Keigo;Fukuda, Toshio
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.1
    • /
    • pp.142-148
    • /
    • 2003
  • Reinforcement learning is considered an important tool for robotic learning in unknown/uncertain environments. In this paper, we propose an evaluation function expressed in a vector form to realize multi-dimensional reinforcement learning. The novel feature of the proposed method is that learning one behavior induces parallel learning of other behaviors even though the objectives of each behavior are different. In brief, all behaviors watch other behaviors from a critical point of view. Therefore, in the proposed method, there is cross-criticism and parallel learning that make the multi-dimensional learning process more efficient. By applying the proposed learning method, we carried out multi-dimensional evaluation (reward) and multi-dimensional learning simultaneously in one trial. A special neural network (Q-net), in which the weights and the output are represented by vectors, is proposed to realize a critic network for Q-learning. The proposed learning method is applied to behavior planning of mobile robots.
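
As a rough illustration of the vector-valued evaluation described above, the sketch below keeps one Q-vector per state-action pair with one component per behavior, so a single experience updates all behaviors in parallel. The tabular representation, the summed scalarization used for action selection, and all sizes and gains are assumptions; the paper realizes the critic with a neural "vector Q-net" rather than a table.

```python
import numpy as np

# Hedged sketch of vector-valued Q-learning: one Q-vector per (state, action),
# one component per behavior/objective, all components updated in parallel.
# Sizes, gains, and the scalarized action selection are assumptions.

n_states, n_actions, n_behaviors = 25, 4, 3
alpha, gamma_d = 0.1, 0.95
Q = np.zeros((n_states, n_actions, n_behaviors))   # vector Q-values

def select_action(s, eps=0.1):
    """Pick the action whose summed (scalarized) Q-vector is largest."""
    if np.random.rand() < eps:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[s].sum(axis=1)))

def update(s, a, reward_vec, s_next):
    """One experience updates every behavior's component at once, so learning
    one behavior also advances the others in parallel ('cross-criticism')."""
    target = np.asarray(reward_vec) + gamma_d * Q[s_next].max(axis=0)
    Q[s, a] += alpha * (target - Q[s, a])
```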

Evolutionary Learning-Rate Selection for BPNN with Window Control Scheme

  • Hoon, Jung-Sung
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1997.10a
    • /
    • pp.301-308
    • /
    • 1997
  • The learning speed of a neural network, the most important factor in applying it to real problems, depends greatly on the learning rate of the network. Three kinds of approaches, empirical, deterministic, and stochastic, have been proposed to date. We previously proposed a learning-rate selection algorithm using an evolutionary programming search scheme. Although its performance was better than that of the other methods, the time spent selecting learning rates by evolution degraded its overall performance. This was caused by using static intervals (called static windows) to update the learning rates: with static windows, the algorithm updated learning rates that were already showing good performance, or failed to update them even though the previously updated learning rates showed bad performance. This paper introduces a window control scheme to avoid such problems. With the window control scheme, the algorithm tries to update the learning rates only when the learning performance remains bad throughout a specified interval; if the previously selected learning rates show good performance, the new algorithm does not update them. This greatly reduces the time spent updating learning rates. As a result, the algorithm with the window control scheme shows better performance than the one with static windows. In this paper, we describe the previous and new algorithms and give experimental results.

  • PDF
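
The window control idea in this abstract, triggering the evolutionary re-selection of the learning rate only after performance has stayed bad for a whole window of epochs rather than at fixed intervals, might look roughly like the sketch below. The window length, the badness test, and the single-candidate mutation step are assumptions; a faithful implementation would evolve a population of candidate rates.

```python
import random

# Hedged sketch of a "window control" scheme for learning-rate re-selection.
# WINDOW, the no-improvement test, and the mutation step are all assumptions.

WINDOW = 10           # epochs of continuously bad performance before re-selection
bad_count = 0
best_error = float("inf")

def maybe_reselect_learning_rate(lr, epoch_error):
    """Return the (possibly new) learning rate after one training epoch."""
    global bad_count, best_error
    if epoch_error < best_error:          # performance is still improving
        best_error = epoch_error
        bad_count = 0
        return lr                         # keep the current learning rate
    bad_count += 1
    if bad_count < WINDOW:                # not bad for long enough yet
        return lr
    bad_count = 0
    # Evolutionary-programming style step: perturb the current rate and keep
    # the candidate (a real implementation would evaluate several candidates).
    return max(1e-5, lr * random.uniform(0.5, 2.0))
```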

DC Motor Control Using a Learning Controller (학습제어기를 이용한 직류전동기제어)

  • 홍기철;남광희
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1989.10a
    • /
    • pp.402-406
    • /
    • 1989
  • Since the control parameters of a classical PID controller are fixed over the whole control period, it is not easy to produce the desired transient behavior. We incorporate an iterative learning scheme into the linear controller so that it has more flexibility and adaptation capability, especially in the transient period. In this paper a hybrid-type learning controller is proposed in which a fixed linear controller guides learning in the beginning stage. Once learning is complete, the control action is performed by the learning controller alone. A computer simulation result demonstrates better performance during the transient period than that obtained with a linear PD controller alone.

  • PDF
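
A minimal sketch of the hybrid structure described above, a fixed linear (PD) controller that guides the plant while the learned input is still poor and then hands control over to the learning term alone, is given below. The gains, the trial-to-trial update, and the handover test are illustrative assumptions.

```python
# Hedged sketch of a hybrid PD + iterative learning controller with handover.
# Gains, the learning update, and the convergence test are assumptions.

kp, kd, gamma = 2.0, 0.1, 0.8
tol = 1e-3   # handover threshold on the per-trial tracking error (assumed)

def control(e, de, u_learned, learning_converged):
    """PD term guides learning early on; learned input acts alone afterwards."""
    if learning_converged:
        return u_learned                  # learning controller acts alone
    return kp * e + kd * de + u_learned   # linear controller guides learning

def after_trial(u_learned_profile, error_profile):
    """Iterative update of the learned input profile and the handover decision."""
    new_profile = [u + gamma * e for u, e in zip(u_learned_profile, error_profile)]
    converged = max(abs(e) for e in error_profile) < tol
    return new_profile, converged
```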

Robot learning control with fast convergence (빠른 수렴성을 갖는 로보트 학습제어)

  • 양원영;홍호선
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1988.10a
    • /
    • pp.67-71
    • /
    • 1988
  • We present an algorithm that uses trajectory-following errors to iteratively improve a feedforward command to a robot. It has been shown that when the manipulator handles an unknown object, the P-type learning algorithm can make the trajectory converge to a desired path, and also that the proposed learning control algorithm performs better than other types of learning control algorithms. A numerical simulation of a three-degree-of-freedom manipulator such as the PUMA-560 robot has been performed to illustrate the effectiveness of the proposed learning algorithm.

  • PDF

Multiple Reward Reinforcement learning control of a mobile robot in home network environment

  • Kang, Dong-Oh;Lee, Jeun-Woo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.1300-1304
    • /
    • 2003
  • This paper deals with a control problem for a mobile robot in a home network environment. The home network allows the mobile robot to communicate with sensors to obtain measurements and to adapt to changes in the environment. To obtain improved control performance for the mobile robot in spite of changes in the home network environment, we use a fuzzy inference system with multiple-reward reinforcement learning. Multiple-reward reinforcement learning enables the mobile robot to consider multiple control objectives and to adapt itself to changes in the home network environment. A multiple-reward fuzzy Q-learning method is proposed for this purpose: multiple Q-values are considered, and max-min optimization is applied to obtain the improved fuzzy rule. To show the effectiveness of the proposed method, simulation results obtained in home network environments (LAN, wireless LAN, etc.) are given.

  • PDF
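
To make the max-min idea concrete, the sketch below keeps one Q-value per control objective and picks the action whose worst objective is best. A plain table stands in for the fuzzy inference system used in the paper, and the sizes, gains, and on-policy bootstrapping in the update are assumptions.

```python
import numpy as np

# Hedged sketch of multiple-reward Q-learning with max-min action selection.
# A tabular critic replaces the paper's fuzzy inference system; all sizes,
# gains, and the update form are illustrative assumptions.

n_states, n_actions, n_objectives = 16, 5, 2   # e.g. goal-reaching vs. network load
alpha, gamma_d = 0.1, 0.9
Q = np.zeros((n_states, n_actions, n_objectives))

def select_action(s):
    """Max-min selection: maximize the minimum Q-value over all objectives."""
    worst_case = Q[s].min(axis=1)          # per-action minimum across objectives
    return int(np.argmax(worst_case))

def update(s, a, rewards, s_next):
    """Each objective's Q-value is updated with its own reward signal."""
    a_next = select_action(s_next)
    Q[s, a] += alpha * (np.asarray(rewards) + gamma_d * Q[s_next, a_next] - Q[s, a])
```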

A Learning Controller for Gate Control of Biped Walking Robot using Fourier Series Approximation

  • Lim, Dong-cheol;Kuc, Tae-yong
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2001.10a
    • /
    • pp.85.4-85
    • /
    • 2001
  • A learning controller is presented for the repetitive walking motion of a biped robot. The learning control scheme learns the approximate inverse-dynamics input of the biped walking robot and uses the learned input pattern to generate an input profile for walking motions different from the one learned. In the learning controller, the PID feedback controller takes part in stabilizing the transient response of the robot dynamics, while the feedforward learning controller plays the role of computing the desired actuator torques for feedforward nonlinear dynamics compensation in steady state. It is shown that all the error signals in the learning control system are bounded and that the robot motion trajectory converges to the desired one asymptotically. The proposed learning control scheme is ...

  • PDF
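
The Fourier series idea described above, storing the repetitive part of the actuator input as a truncated Fourier series over one walking period and correcting the coefficients from the tracking error after each repetition, might be sketched as below. The number of harmonics, the learning gain, and the pseudo-inverse projection used in the update are assumptions, not the paper's scheme.

```python
import numpy as np

# Hedged sketch of a Fourier-series feedforward learned between walking cycles.
# Period, harmonic count, gain, and the projection step are assumptions.

T, n_harm, gamma = 1.0, 5, 0.3            # gait period [s], harmonics, learning gain

def basis(t):
    """Fourier basis [1, cos, sin, ...] evaluated at time t within the cycle."""
    w = 2 * np.pi / T
    feats = [1.0]
    for k in range(1, n_harm + 1):
        feats += [np.cos(k * w * t), np.sin(k * w * t)]
    return np.array(feats)

def feedforward(coeffs, t):
    """Learned feedforward torque at time t within the gait cycle."""
    return coeffs @ basis(t)

def learn(coeffs, times, errors):
    """After one walking cycle, project the recorded tracking error onto the
    Fourier basis and correct the coefficients (least-squares-flavored update)."""
    Phi = np.array([basis(t) for t in times])          # (N, 2*n_harm + 1)
    return coeffs + gamma * np.linalg.pinv(Phi) @ np.asarray(errors)
```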

A Study on Implementation of a Real Time Learning Controller for Direct Drive Manipulator (직접 구동형 매니퓰레이터를 위한 학습 제어기의 실시간 구현에 관한 연구)

  • Jeon, Jong-Wook;An, Hyun-Sik;Lim, Mee-Seub;Kim, Kwon-Ho;Kim, Kwang-Bae;Lee, Kwae-Hi
    • Proceedings of the KIEE Conference
    • /
    • 1993.07a
    • /
    • pp.369-372
    • /
    • 1993
  • In this paper, we consider an iterative learning controller for continuous-trajectory control of a two-link direct-drive robot manipulator and carry out computer simulations and a real-time experiment. To improve control performance, we adopt an iterative learning control algorithm and derive a sufficient condition for convergence, from which an extended form of the conventional learning control algorithm is obtained; simulation results show that the extended learning control algorithm gives better performance than the conventional one. Experimental results likewise show that better performance is obtained with the extended learning algorithm.

  • PDF