• Title/Summary/Keyword: Dynamic Learning Control

Neurocontrol architecture for the dynamic control of a robot arm (로보트 팔의 동력학적제어를 위한 신경제어구조)

  • 문영주;오세영
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1991.10a
    • /
    • pp.280-285
    • /
    • 1991
  • Neural network control has many innovative potentials for fast, accurate and intelligent adaptive control. In this paper, a learning control architecture for the dynamic control of a robot manipulator is developed using an inverse dynamic neurocontroller and a linear neurocontroller. The inverse dynamic neurocontroller consists of an MLP (multi-layer perceptron), and the linear neurocontroller consists of SLPs (single-layer perceptrons). Compared with the previous type of neurocontroller, which uses an inverse dynamic neurocontroller and a fixed PD gain controller, the proposed architecture shows superior performance because the linear neurocontroller can adapt its gains according to the applied task. This superior performance is tested and verified through the control of a PUMA 560. Without any knowledge of the robot's dynamic model or its parameters (the robot is treated as a complete black box), the neurocontroller, through practice, gradually and implicitly learns the robot's dynamic properties, which is essential for fast and accurate control.

  • PDF
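The architecture above replaces a fixed PD gain controller with linear feedback gains that adapt to the task, on top of a learned inverse-dynamics feedforward term. A minimal sketch of that structure (hypothetical class and update rule, not the paper's algorithm; the MLP feedforward is stood in by a supplied `tau_ff` argument):

```python
import numpy as np

class AdaptiveGainController:
    """Hypothetical sketch: feedforward torque (standing in for the MLP
    inverse-dynamics model) plus linear feedback whose gains adapt,
    instead of a fixed PD gain controller."""

    def __init__(self, n_joints, lr=0.01):
        self.Kp = np.full(n_joints, 1.0)   # adaptive proportional gains
        self.Kd = np.full(n_joints, 0.5)   # adaptive derivative gains
        self.lr = lr

    def control(self, e, e_dot, tau_ff):
        # inverse-dynamics feedforward + adaptive linear feedback
        return tau_ff + self.Kp * e + self.Kd * e_dot

    def adapt(self, e, e_dot):
        # illustrative gradient-style rule: gains grow while error persists
        self.Kp += self.lr * e ** 2
        self.Kd += self.lr * e_dot ** 2

ctrl = AdaptiveGainController(n_joints=2)
u = ctrl.control(np.array([0.1, -0.2]), np.zeros(2), tau_ff=np.zeros(2))
ctrl.adapt(np.array([0.1, -0.2]), np.zeros(2))
```

The single-layer (linear) structure of the feedback part is what the abstract's SLP gains correspond to; any richer adaptation law could be substituted.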

Study of adaptive learning control for teleoperating system (Teleoperating system의 적응학습제어에 관한 연구)

  • 최병현;국태용;최혁렬
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1996.10b
    • /
    • pp.168-172
    • /
    • 1996
  • In a master-slave teleoperating system, it is important that the system has good maneuverability. This paper addresses an adaptive learning control method applicable to the master-slave system. This control scheme has the ability to estimate the uncertain dynamic parameters intrinsic to the system and to achieve the desired performance without cumbersome matrix operations. The proposed method is applied to a master-slave teleoperating system composed of two SCARA robots and verified experimentally.

  • PDF
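The core idea of estimating uncertain dynamic parameters online without matrix inversion can be illustrated with a standard gradient adaptation law; this is a generic sketch, not the paper's scheme, and the plant and regressor are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic sketch: online parameter estimation with a gradient adaptation
# law -- only elementwise updates, no matrix inversion.
theta_true = np.array([2.0, -0.5])    # unknown plant parameters
theta_hat = np.zeros(2)               # online estimate
lr = 0.1
for t in range(2000):
    phi = rng.uniform(-1.0, 1.0, 2)   # measured regressor vector
    y = phi @ theta_true              # measured plant output
    e = y - phi @ theta_hat           # prediction error
    theta_hat += lr * e * phi         # gradient update, no matrix inverse
```

With a persistently exciting regressor the estimate converges to the true parameters; the update cost is linear in the number of parameters.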

A second-order iterative learning control method

  • Bien, Zeungnam;Huh, Kyung-Moo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1988.10b
    • /
    • pp.734-739
    • /
    • 1988
  • For the trajectory control of dynamic systems with unidentified parameters, a second-order iterative learning control method is presented. In contrast to other known methods, the proposed learning control scheme can utilize more than one error history contained in the trajectories generated at prior iterations. A convergence proof is given, and it is also shown that the convergence speed can be improved compared to conventional methods. Examples are provided to show the effectiveness of the algorithm, and, via simulation, it is demonstrated that the method yields good performance even in the presence of disturbances.

  • PDF
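A second-order update of the kind described above feeds back the time-shifted error histories of the two most recent trials. A toy sketch on a scalar discrete-time plant (the plant and the gains g1, g2 are illustrative choices, not from the paper):

```python
import numpy as np

# Toy plant: scalar discrete-time system x[t+1] = a*x[t] + b*u[t].
a, b, T = 0.5, 1.0, 20
y_des = np.sin(np.linspace(0.0, np.pi, T + 1))   # desired output trajectory

def run_trial(u):
    x = np.zeros(T + 1)
    for t in range(T):
        x[t + 1] = a * x[t] + b * u[t]
    return x

# Second-order ILC: the input update uses the error histories of the
# two most recent trials, weighted by gains g1 and g2.
g1, g2 = 0.8, 0.15
u_prev = np.zeros(T)
e_prev = y_des[1:] - run_trial(u_prev)[1:]
u_curr = u_prev + (g1 + g2) * e_prev        # first trial: one history only
errors = []
for k in range(50):
    e_curr = y_des[1:] - run_trial(u_curr)[1:]
    errors.append(np.abs(e_curr).max())
    u_next = u_curr + g1 * e_curr + g2 * e_prev   # second-order update
    u_prev, e_prev, u_curr = u_curr, e_curr, u_next
```

For this lifted iteration the asymptotic rate is set by the roots of λ² − (1 − g1·b)λ + g2·b = 0, so the error contracts across trials even though the plant parameters never enter the update.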

Design of an iterative learning controller for a class of linear dynamic systems with time-delay (시간 지연이 있는 선형 시스템에 대한 반복 학습 제어기의 설계)

  • Park, Kwang-Hyun;Bien, Zeung-Nam;Hwang, Dong-Hwan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.4 no.3
    • /
    • pp.295-300
    • /
    • 1998
  • In this paper, we point out the possibility of divergence of the control input caused by the estimation error of the delay-time when general iterative learning algorithms are applied to a class of linear dynamic systems with time-delay in which the delay-time is not exactly measurable, and then propose a new type of iterative learning algorithm to solve this problem. To resolve the uncertainty of the delay-time, we propose an algorithm using a holding mechanism, as used in digital and/or discrete-time control systems. The control input is held constant over a time interval whose size equals that of the delay-time uncertainty. The output of the system tracks a given desired trajectory at discrete points spaced according to the size of the delay-time uncertainty, with robustness to the estimation error of the delay-time. Several numerical examples are given to illustrate the efficiency of the proposed algorithm.

  • PDF
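The holding mechanism can be illustrated on a pure-delay plant whose delay is only known to within a band: holding the input constant over intervals as long as the uncertainty makes the output at suitably chosen sample points independent of the true delay, so the learning update at those points is robust to the delay estimate. A toy sketch (the pure-delay plant and all numbers are illustrative):

```python
import numpy as np

# Pure-delay plant y[t] = u[t - d]; the delay d is only known to lie
# in [d_min, d_max].
d_true, d_min, d_max, T = 3, 2, 4, 30
h = d_max - d_min + 1              # hold-interval length covers the band

def run(u):
    y = np.zeros(T)
    y[d_true:] = u[:T - d_true]
    return y

n = (T - d_max) // h                      # number of usable hold intervals
samples = d_max + h * np.arange(n)        # sample points where y equals the
                                          # held value for ANY admissible d
y_des = np.sin(2 * np.pi * np.arange(T) / T)

c = np.zeros(n)                           # held input values, one per interval
for k in range(5):                        # iterative learning trials
    u = np.zeros(T)
    for j in range(n):
        u[j * h:(j + 1) * h] = c[j]       # holding mechanism
    e = y_des[samples] - run(u)[samples]  # error at the robust sample points
    c += e                                # learning update (gain 1)
```

Because each sample index s_j = d_max + j·h satisfies s_j − d ∈ [j·h, (j+1)·h) for every d in the band, the tracked points are insensitive to the delay estimation error, which is the paper's central idea.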

Barycentric Approximator for Reinforcement Learning Control

  • Whang Cho
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.3 no.1
    • /
    • pp.33-42
    • /
    • 2002
  • Recently, various experiments applying reinforcement learning to the self-learning intelligent control of continuous dynamic systems have been reported in the machine learning research community. The reports have produced mixed results of some successes and some failures, and show that the success of reinforcement learning in application to the intelligent control of continuous control systems depends on the ability to combine a proper function approximation method with temporal difference methods such as Q-learning and value iteration. One of the difficulties in using a function approximation method in connection with a temporal difference method is the absence of a guarantee for the convergence of the algorithm. This paper provides a proof of convergence for a particular function approximation method based on the "barycentric interpolator", which is known to be computationally more efficient than multilinear interpolation.
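A key property behind convergence results of this kind is that a barycentric interpolator evaluates a convex combination of vertex values, which makes it a max-norm non-expansion. A minimal sketch of the interpolator itself (a generic triangle version, not the paper's construction):

```python
import numpy as np

def barycentric_weights(p, tri):
    """Barycentric coordinates of point p in triangle tri (3x2 array of
    vertices). Inside the triangle the weights are non-negative and sum
    to 1, so interpolation is a convex combination of vertex values --
    a max-norm non-expansion, the property convergence arguments for
    interpolated value iteration rely on."""
    A = np.vstack([tri.T, np.ones(3)])    # 3x3 linear system
    return np.linalg.solve(A, np.append(p, 1.0))

def interpolate(p, tri, values):
    # value estimate at p from the values stored at the triangle vertices
    return barycentric_weights(p, tri) @ values

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
v = interpolate(np.array([0.25, 0.25]), tri, np.array([0.0, 1.0, 2.0]))
```

In a temporal-difference scheme, `values` would hold the value estimates at the grid vertices and `interpolate` would supply the value of any continuous state inside the cell.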

DYNAMIC ROUTE PLANNING BY Q-LEARNING - Cellular Automaton Based Simulator and Control

  • Sano, Masaki;Jung, Si
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.24.2-24
    • /
    • 2001
  • In this paper, the authors present a new dynamic route planning method based on Q-learning. The proposed algorithm is executed in a cellular automaton based traffic simulator, which is also newly created. In the Vehicle Information and Communication System (VICS), an active field of Intelligent Transport Systems (ITS), information about traffic congestion is sent to each vehicle in real time. However, a centralized navigation system is not realistic for guiding millions of vehicles in a megalopolis. Autonomous distributed systems should be more flexible and scalable, and also have a chance to focus on each vehicle's demand. In such systems, each vehicle can search for its own optimal route. We employ Q-learning, a reinforcement learning method, to search for an optimal or sub-optimal route along which drivers can avoid traffic congestion. We find some applications of reinforcement learning in the "static" environment, but there are ...

  • PDF
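The abstract's idea of each vehicle learning its own congestion-avoiding route can be sketched as tabular Q-learning on a toy grid in which congested cells carry extra travel cost (the grid, rewards, and hyperparameters are all illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy road grid: each cell is an intersection; two cells are congested
# (extra travel cost). The vehicle learns a route from (0,0) to the goal.
N = 4
goal = (3, 3)
congested = {(1, 1), (1, 2)}
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(s, a):
    r, c = s[0] + a[0], s[1] + a[1]
    if not (0 <= r < N and 0 <= c < N):
        return s, -1.0                      # off-grid move wastes a step
    s2 = (r, c)
    if s2 == goal:
        return s2, 10.0                     # reached the destination
    return s2, -5.0 if s2 in congested else -1.0

Q = np.zeros((N, N, 4))
alpha, gamma, eps = 0.5, 0.95, 0.3
for _ in range(3000):                       # training episodes
    s = (0, 0)
    for _ in range(50):
        a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, actions[a])
        Q[s][a] += alpha * (r + gamma * Q[s2].max() - Q[s][a])
        s = s2
        if s == goal:
            break
```

Following the greedy policy after training yields a route that detours around the congested cells, since their extra cost outweighs the longer path.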

Design of DNP Controller for Robust Control Auto-Systems (DNP에 의한 자동화 시스템의 강인제어기 설계)

  • 김종옥;조용민;민병조;송용화;조현섭
    • Proceedings of the Korean Institute of Illuminating and Electrical Installation Engineers Conference
    • /
    • 1999.11a
    • /
    • pp.121-126
    • /
    • 1999
  • In this paper, to achieve robust and accurate control of auto-equipment systems in which disturbances, system parameter variations, uncertainty, and so forth exist, a neural network controller called the dynamic neural processor (DNP) is designed. In order to perform elaborate tasks such as the assembly and manufacturing of components, tracking control of the force trajectory at contact with a target, as well as tracking control of the end-effector's motion trajectory, is indispensable. Also, the learning architecture to compute inverse kinematic coordinate transformations in the manipulator of auto-equipment systems is developed, and an example in which the DNP can be used is explained. The architecture and learning algorithm of the proposed dynamic neural network, the DNP, are described, and computer simulations are provided to demonstrate the effectiveness of the proposed learning method using the DNP.

  • PDF

Design of DNP Controller for Robust Control of Auto-Equipment Systems (자동화 설비시스템의 강인제어를 위한 DNP 제어기 설계)

  • ;趙賢燮
    • The Proceedings of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.13 no.2
    • /
    • pp.187-187
    • /
    • 1999
  • In order to perform elaborate tasks such as the assembly and manufacturing of components, tracking control of the force trajectory at contact with a target, as well as tracking control of the end-effector's motion trajectory, is indispensable. In this paper, to achieve robust and accurate control of auto-equipment systems in which disturbances, system parameter variations, uncertainty, and so forth exist, a neural network controller called the dynamic neural processor (DNP) is designed. Also, the learning architecture to compute inverse kinematic coordinate transformations in the manipulator of auto-equipment systems is developed, and an example in which the DNP can be used is explained. The architecture and learning algorithm of the proposed dynamic neural network, the DNP, are described, and computer simulations are provided to demonstrate the effectiveness of the proposed learning method using the DNP.

A Study on Design of Neuro- Fuzzy Controller for Attitude Control of Helicopter (헬리콥터 자세제어를 위한 뉴로 퍼지 제어기의 설계에 관한 연구)

  • Choi, Yong-Sun;Lim, Tae-Woo;Jang, Gung-Won;Ahn, Tae-Chon
    • Proceedings of the KIEE Conference
    • /
    • 2001.07d
    • /
    • pp.2283-2285
    • /
    • 2001
  • This paper proposes a neural network based fuzzy control (neuro-fuzzy control) technique for the attitude control of a helicopter with strong dynamic nonlinearities, and derives the helicopter's aerodynamic torque equation and force balance equation. A neuro-fuzzy system is a feedforward network that employs a back-propagation algorithm for learning, and it is used here to identify nonlinear dynamic systems. Hence, this paper presents methods for the design of a neural network (NN) based fuzzy controller (that is, neuro-fuzzy control) for a helicopter as a nonlinear MIMO system. In the proposed neuro-fuzzy control, the input-output membership functions are determined by fuzzy control, and neural networks are constructed to improve, through learning, the membership functions so determined.

  • PDF
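The closing idea, membership functions fixed by fuzzy design and then refined by network-style learning, can be sketched with a zero-order fuzzy system whose rule consequents are trained by gradient descent. Everything here (membership shapes, target map, learning rate) is a generic illustration, not the authors' controller:

```python
import numpy as np

# Gaussian membership functions chosen up-front by "fuzzy design".
centers = np.array([-1.0, 0.0, 1.0])
widths = np.full(3, 0.7)
weights = np.zeros(3)              # rule consequents, tuned by learning

def fuzzy_out(x):
    mu = np.exp(-((x - centers) / widths) ** 2)   # rule firing strengths
    mu = mu / mu.sum()                            # normalized
    return mu @ weights, mu

def train_step(x, y, lr=0.2):
    y_hat, mu = fuzzy_out(x)
    weights[:] += lr * (y - y_hat) * mu           # gradient (back-prop) step
    return abs(y - y_hat)

# learn the toy map y = x on [-1, 1]
xs = np.linspace(-1.0, 1.0, 21)
for _ in range(300):
    for x in xs:
        train_step(x, x)
```

Because the output is linear in the consequents, this stage of learning is a plain LMS iteration; a fuller neuro-fuzzy design would also adapt the centers and widths by back-propagation.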

A Study on Intelligent Control of Real-Time Working Motion Generation of Biped Robot (2족 보행로봇의 실시간 작업동작 생성을 위한 지능제어에 관한 연구)

  • Kim, Min-Seong;Jo, Sang-Young;Koo, Young-Mok;Jeong, Yang-Gun;Han, Sung-Hyun
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.19 no.1
    • /
    • pp.1-9
    • /
    • 2016
  • In this paper, we propose a new learning control scheme for the control of various walking motions of a biped robot with the same learning base, using a neural network. We show that a learning control algorithm based on a neural network is a significantly more attractive intelligent controller design than previous traditional forms of control systems. A multi-layer back-propagation neural network identification is simulated to obtain a dynamic model of the biped robot. Once the neural network has learned, another neural network controller is designed for various trajectory tracking control tasks with the same learning base. Biped robots have received increased attention due to several properties, such as their human-like mobility and their high-order dynamic equations. These properties enable biped robots to perform dangerous work instead of human beings. Thus, stable walking control of biped robots is a fundamental issue and has been studied by many researchers. However, because of their legged locomotion, biped robots are difficult to control. Besides, unlike a robot manipulator, a biped robot has an uncontrollable degree of freedom playing a dominant role in the stability of its locomotion. From the simulations and experiments, the reliability of the iterative learning control is illustrated.
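The identification stage described above, a back-propagation network trained to reproduce the robot's one-step dynamics, can be sketched on a toy scalar system. The network size, learning rate, and the stand-in dynamics are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "robot dynamics" x[t+1] = f(x[t], u[t]) to be identified.
def f_true(x, u):
    return 0.8 * np.sin(x) + 0.2 * u

# One hidden layer, trained by plain back-propagation (SGD).
W1 = rng.normal(0, 0.5, (8, 2)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0

def predict(x, u):
    h = np.tanh(W1 @ np.array([x, u]) + b1)
    return W2 @ h + b2, h

lr = 0.05
for _ in range(3000):
    x, u = rng.uniform(-1, 1, 2)          # sampled state and input
    y = f_true(x, u)                      # observed next state
    y_hat, h = predict(x, u)
    err = y_hat - y
    # back-propagate the squared prediction error
    gh = err * W2 * (1 - h ** 2)
    W2 -= lr * err * h; b2 -= lr * err
    W1 -= lr * np.outer(gh, [x, u]); b1 -= lr * gh
```

Once such a model is identified, a second network can be trained against it for trajectory tracking, which is the two-stage pattern the abstract describes.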