• Title/Summary/Keyword: Dynamic Learning Control


Adaptive Control of Non-linearity Dynamic System using DNU (DNU에 의한 비선형 동적시스템의 적응제어)

  • Cho, Hyeon-Seob;Kim, Hee-Sook
    • Proceedings of the KIEE Conference / 1998.11b / pp.533-536 / 1998
  • The intent of this paper is to describe a neural network structure called the dynamic neural processor (DNP), and to examine how it can be used in developing a learning scheme for computing robot inverse kinematic transformations. The architecture and learning algorithm of the proposed dynamic neural network structure, the DNP, are described. Computer simulations are provided to demonstrate the effectiveness of the proposed learning scheme using the DNP.

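Gupta and Rao's dynamic neural unit is only named in the entry above, so the following is a minimal sketch assuming its usual form: a second-order linear difference equation on an internal state followed by a sigmoidal activation. The coefficients `a`, `b`, and the gain are illustrative, not values from the paper.

```python
import numpy as np

class DynamicNeuralUnit:
    """Toy dynamic neuron: a second-order difference equation (internal
    state) followed by a squashing nonlinearity. Coefficients are
    illustrative assumptions, not values from the paper."""

    def __init__(self, a=(0.3, 0.1), b=(0.6, 0.2), gain=1.0):
        self.a = a            # feedback (autoregressive) coefficients
        self.b = b            # feedforward (input) coefficients
        self.gain = gain      # slope of the sigmoidal activation
        self.v = [0.0, 0.0]   # past internal states v(k-1), v(k-2)
        self.u_prev = 0.0     # past input u(k-1)

    def step(self, u_k):
        # v(k) = -a1 v(k-1) - a2 v(k-2) + b1 u(k) + b2 u(k-1)
        v_k = (-self.a[0] * self.v[0] - self.a[1] * self.v[1]
               + self.b[0] * u_k + self.b[1] * self.u_prev)
        self.v = [v_k, self.v[0]]
        self.u_prev = u_k
        # Static nonlinearity applied to the internal state.
        return np.tanh(self.gain * v_k)

# Drive the unit with a step input and observe its transient response.
dnu = DynamicNeuralUnit()
response = [dnu.step(1.0) for _ in range(20)]
print(np.round(response, 3))
```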

Design of Multi-Dynamic Neural Network Controller using Nonlinear Control Systems (비선형 제어 시스템을 이용한 다단동적 신경망 제어기 설계)

  • Rho, Yong-Gi;Kim, Won-Jung;Cho, Hyun-Seob
    • Proceedings of the KAIS Fall Conference / 2006.11a / pp.122-128 / 2006
  • The intent of this paper is to describe a neural network structure called the multi-dynamic neural network (MDNN), and to examine how it can be used in developing a learning scheme for computing robot inverse kinematic transformations. The architecture and learning algorithm of the proposed dynamic neural network structure, the MDNN, are described. Computer simulations demonstrate the effectiveness of the proposed learning scheme using the MDNN.

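The MDNN architecture is not detailed in this listing, so the sketch below illustrates only the surrounding learning scheme the entry refers to: learning the inverse kinematics of an assumed 2-link planar arm, with a plain one-hidden-layer network trained by back-propagation standing in for the MDNN. Link lengths, network size, and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.8                         # assumed link lengths of a 2-link planar arm

def forward_kinematics(theta):
    """End-effector position for joint angles theta = (q1, q2)."""
    q1, q2 = theta[..., 0], theta[..., 1]
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.stack([x, y], axis=-1)

# Training data: sample joint angles, compute the Cartesian targets they reach.
Q = rng.uniform([0.1, 0.1], [1.4, 1.4], size=(2000, 2))
X = forward_kinematics(Q)

# One hidden layer with tanh activation, trained to approximate X -> Q.
W1 = rng.normal(0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 2)); b2 = np.zeros(2)
lr = 0.1
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)
    Q_hat = H @ W2 + b2
    err = Q_hat - Q                       # prediction error in joint space
    # Back-propagation of the squared-error loss (full batch).
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

# Check: the predicted joint angles should roughly reproduce the Cartesian target.
test_q = np.array([[0.7, 0.9]])
target_xy = forward_kinematics(test_q)
pred_q = np.tanh(target_xy @ W1 + b1) @ W2 + b2
print("target:", target_xy, "reached:", forward_kinematics(pred_q))
```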

Development of a Neural-Fuzzy Control Algorithm for Dynamic Control of a Track Vehicle (궤도차량의 동적 제어를 위한 퍼지-뉴런 제어 알고리즘 개발)

  • 서운학
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 1999.10a / pp.142-147 / 1999
  • This paper presents a new approach to dynamic control of a track vehicle system using a neural network-fuzzy control method. The proposed control scheme uses a Gaussian function as the unit function in the neural network-fuzzy controller, and a back-propagation algorithm to train it within the framework of the specialized learning architecture. A learning controller is proposed that consists of two neural network-fuzzy modules based on independent reasoning and a connection net with fixed weights to simplify the neural network-fuzzy structure. The performance of the proposed controller is shown by simulation of trajectory tracking of the speed and azimuth of a track vehicle.

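As a rough illustration of the kind of controller the entry describes, the sketch below uses Gaussian functions as rule units, normalizes their firing strengths, and adapts the consequent weights online from the tracking error in a specialized-learning style. The plant model, the positive-plant-gain assumption baked into the update, and all gains are illustrative, not taken from the paper.

```python
import numpy as np

class GaussianFuzzyNeuralController:
    """Toy neuro-fuzzy controller: rule firing strengths are Gaussian
    membership values of the tracking error, and the control is their
    weighted average. Consequent weights are tuned online from the error;
    all numbers are illustrative."""

    def __init__(self, centers, sigma=0.5, lr=0.1):
        self.centers = np.asarray(centers, float)   # rule centers over the error
        self.sigma = sigma
        self.weights = np.zeros_like(self.centers)  # consequent singletons
        self.lr = lr

    def control(self, error):
        phi = np.exp(-((error - self.centers) ** 2) / (2 * self.sigma ** 2))
        self._phi = phi / (phi.sum() + 1e-9)        # normalized firing strengths
        return float(self._phi @ self.weights)

    def adapt(self, error):
        # Gradient-style update that assumes a positive plant gain (a common
        # specialized-learning simplification, not something stated in the paper).
        self.weights += self.lr * error * self._phi

# Track a constant speed reference on a crude first-order "vehicle" model.
ctrl = GaussianFuzzyNeuralController(centers=np.linspace(-2, 2, 9))
y, ref = 0.0, 1.0
for k in range(100):
    e = ref - y
    u = ctrl.control(e)
    ctrl.adapt(e)
    y = 0.9 * y + 0.1 * u        # assumed plant: simple lag dynamics
print(round(y, 3))               # output should move toward the reference
```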

Reinforcement Learning Control using Self-Organizing Map and Multi-layer Feed-Forward Neural Network

  • Lee, Jae-Kang;Kim, Il-Hwan
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2003.10a / pp.142-145 / 2003
  • Many control applications using neural networks need a priori information about the objective system, but it is impossible to get exact information about the objective system in the real world. To solve this problem, several control methods have been proposed; reinforcement learning control using a neural network is one of them. Basically, reinforcement learning control does not need a priori information about the objective system. This method uses a reinforcement signal obtained from the interaction of the objective system and its environment, together with the observable states of the objective system, as input data. However, many methods take too much time to be applied to the real world, so we focus on faster learning in order to apply reinforcement learning control to real-world problems. Two data types are used for reinforcement learning. One is the reinforcement signal, which has only two fixed scalar values assigned to success and failure states. The other is the observable state data. There are infinitely many states in a real-world system, so the number of observable state data is also infinite, which requires too much learning time for real-world application. We therefore try to reduce the number of observable states by classifying states with a Self-Organizing Map. We also use neural dynamic programming for the controller design. An inverted pendulum on a cart system is simulated. A failure signal is used as the reinforcement signal; it occurs when the pendulum angle or cart position deviates from the defined control range. The control objective is to maintain the balanced pole and centered cart. Four states, namely the position and velocity of the cart and the angle and angular velocity of the pole, are used as the state signal. The learning controller is composed of a serial connection of a Self-Organizing Map and two multi-layer feed-forward neural networks.

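The entry quantizes the continuous cart-pole state with a Self-Organizing Map and then applies neural dynamic programming with two feed-forward networks. The sketch below keeps only the first idea: a small SOM trained as a state quantizer, with a plain Q-table over SOM units standing in for the two networks. State ranges, SOM size, learning schedules, and the reward assignment are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Self-Organizing Map used purely as a state quantizer ------------------
# 4-D cart-pole-like states: (cart pos, cart vel, pole angle, pole ang. vel).
n_units = 50
som = rng.uniform(-1, 1, size=(n_units, 4))

def bmu(state):
    """Index of the best-matching unit (nearest SOM prototype)."""
    return int(np.argmin(np.linalg.norm(som - state, axis=1)))

# Train the SOM on states sampled from the assumed operating range.
for t in range(5000):
    s = rng.uniform(-1, 1, size=4)
    w = bmu(s)
    lr = 0.5 * (1 - t / 5000)                  # decaying learning rate
    dist = np.abs(np.arange(n_units) - w)      # 1-D neighborhood on unit index
    h = np.exp(-dist ** 2 / (2 * (1 + 5 * (1 - t / 5000)) ** 2))
    som += lr * h[:, None] * (s - som)

# --- Reinforcement learning over the quantized states ----------------------
# The paper trains two feed-forward networks (neural dynamic programming);
# here a plain Q-table over SOM units stands in for them.
Q = np.zeros((n_units, 2))                     # two actions: push left / push right

def q_update(s, a, failed, s_next, alpha=0.1, gamma=0.95):
    r = -1.0 if failed else 0.0                # failure signal as the only reward
    target = r if failed else r + gamma * Q[bmu(s_next)].max()
    Q[bmu(s), a] += alpha * (target - Q[bmu(s), a])

# Example update for one observed transition (states are placeholders).
q_update(rng.uniform(-1, 1, 4), a=0, failed=False, s_next=rng.uniform(-1, 1, 4))
```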

Dynamic GBFCM(Gradient Based FCM) Algorithm (동적 GBFCM(Gradient Based FCM) 알고리즘)

  • Kim, Myoung-Ho;Park, Dong-C.
    • Proceedings of the KIEE Conference / 1996.07b / pp.1371-1373 / 1996
  • A clustering algorithm with dynamic adjustment of the learning rate for GBFCM (Gradient Based FCM) is proposed in this paper. This algorithm combines two ideas from the dynamic K-means algorithm and GBFCM: learning-rate variation based on an entropy concept, and continuous membership grades. To evaluate the dynamic GBFCM, we made comparisons with Kohonen's Self-Organizing Map over several tutorial examples and an image compression task. The results show that DGBFCM (Dynamic GBFCM) gives superior performance over Kohonen's algorithm in terms of signal-to-noise ratio.

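The exact GBFCM update and the entropy-based rate schedule are not given in this listing; the sketch below uses standard fuzzy c-means membership grades, a gradient-style prototype update, and a learning rate scaled by the mean normalized membership entropy as a stand-in for the paper's dynamic adjustment. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def fcm_memberships(X, centers, m=2.0):
    """Standard fuzzy c-means membership grades (continuous in [0, 1])."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def dynamic_gbfcm(X, n_clusters=3, m=2.0, base_lr=0.05, iters=100):
    centers = X[rng.choice(len(X), n_clusters, replace=False)].copy()
    for _ in range(iters):
        U = fcm_memberships(X, centers, m)
        # Entropy of each sample's membership distribution, normalized to [0, 1].
        H = -(U * np.log(U + 1e-12)).sum(axis=1) / np.log(n_clusters)
        # Dynamic learning rate: move faster while assignments are still fuzzy,
        # slow down as clusters sharpen (an assumed stand-in for the paper's rule).
        lr = base_lr * (0.1 + H.mean())
        # Gradient-style prototype update weighted by the fuzzy memberships.
        for j in range(n_clusters):
            w = U[:, j] ** m
            grad = (w[:, None] * (centers[j] - X)).sum(axis=0) / (w.sum() + 1e-12)
            centers[j] -= lr * grad
    return centers, U

# Three well-separated 2-D blobs as a toy data set.
X = np.vstack([rng.normal(c, 0.2, size=(100, 2)) for c in ([0, 0], [3, 0], [0, 3])])
centers, U = dynamic_gbfcm(X)
print(np.round(centers, 2))
```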

Dynamic System Identification Using a Recurrent Compensatory Fuzzy Neural Network

  • Lee, Chi-Yung;Lin, Cheng-Jian;Chen, Cheng-Hung;Chang, Chun-Lung
    • International Journal of Control, Automation, and Systems / v.6 no.5 / pp.755-766 / 2008
  • This study presents a recurrent compensatory fuzzy neural network (RCFNN) for dynamic system identification. The proposed RCFNN uses a compensatory fuzzy reasoning method and has feedback connections added to the rule layer. The compensatory fuzzy reasoning method makes the fuzzy logic system more effective, and the additional feedback connections solve temporal problems as well. Moreover, an online learning algorithm is demonstrated that automatically constructs the RCFNN. The RCFNN initially contains no rules; rules are created and adapted as online learning proceeds via simultaneous structure and parameter learning. Structure learning is based on a degree measure, and parameter learning is based on the gradient descent algorithm. The simulation results for identifying dynamic systems demonstrate that the convergence speed of the proposed method exceeds that of conventional methods, and that the number of adjustable parameters is smaller than in other recurrent methods.
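
A rough sketch of the forward pass of a recurrent compensatory rule layer follows, assuming Gaussian memberships, a compensatory mix of product and mean aggregation governed by a degree gamma, and self-feedback on each rule's previous firing strength. These parameterizations, and the omission of the paper's online structure and parameter learning, are simplifying assumptions.

```python
import numpy as np

class RecurrentCompensatoryRuleLayer:
    """Forward pass of a toy recurrent compensatory rule layer.
    Gaussian memberships per input, compensatory aggregation between a
    product (AND-like) operator and a mean (relaxed) operator, and a
    self-feedback term on each rule's previous firing strength. These are
    illustrative assumptions, not the paper's exact equations."""

    def __init__(self, n_rules, n_inputs, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.uniform(-1, 1, (n_rules, n_inputs))
        self.sigmas = np.full((n_rules, n_inputs), 0.7)
        self.gamma = np.full(n_rules, 0.5)        # compensatory degrees in [0, 1]
        self.feedback = np.full(n_rules, 0.3)     # recurrent weights on past firing
        self.consequents = rng.uniform(-1, 1, n_rules)
        self.prev_firing = np.zeros(n_rules)

    def forward(self, x):
        mu = np.exp(-((x - self.centers) ** 2) / (2 * self.sigmas ** 2))
        pessimistic = mu.prod(axis=1)             # product t-norm
        optimistic = mu.mean(axis=1)              # relaxed aggregation
        comp = pessimistic ** (1 - self.gamma) * optimistic ** self.gamma
        # Rule-layer feedback: mix in each rule's previous firing strength.
        firing = comp + self.feedback * self.prev_firing
        self.prev_firing = firing
        # Weighted-average defuzzification with singleton consequents.
        return float(firing @ self.consequents / (firing.sum() + 1e-9))

layer = RecurrentCompensatoryRuleLayer(n_rules=5, n_inputs=2)
u = [layer.forward(np.array([np.sin(0.1 * k), np.cos(0.1 * k)])) for k in range(10)]
print(np.round(u, 3))
```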

Implementation of an Intelligent Learning Controller for Gait Control of Biped Walking Robot (이족보행로봇의 걸음새 제어를 위한 지능형 학습 제어기의 구현)

  • Lim, Dong-Cheol;Kuc, Tae-Yong
    • The Transactions of the Korean Institute of Electrical Engineers P / v.59 no.1 / pp.29-34 / 2010
  • This paper presents an intelligent learning controller for the repetitive walking motion of a biped walking robot. The proposed learning controller consists of an iterative learning controller and a direct learning controller. In the iterative learning controller, the PID feedback controller takes part in stabilizing the learning control system, while the feedforward learning controller plays a role in compensating for the nonlinearity of the uncertain biped walking robot. In the direct learning controller, the desired learning input for new joint trajectories with time scales different from the learned ones is generated directly from the previously learned input profiles obtained in the iterative learning process. The effectiveness and tracking performance of the proposed learning controller applied to biped robotic motion are shown by mathematical analysis and computer simulation with a 12-DOF biped walking robot.
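
A minimal sketch of the iterative part of such a learning controller on a toy first-order "joint" follows (a 12-DOF biped is out of scope here): proportional feedback stabilizes each trial, the applied input of one trial is absorbed into the feedforward of the next (P-type iterative learning), and a crude time-rescaling stands in for the direct learning step. The plant, gains, and trajectory are assumptions.

```python
import numpy as np

T, dt = 400, 0.01
t = np.arange(T) * dt
y_des = np.sin(2 * np.pi * t)                      # repetitive desired trajectory

def run_trial(u_ff, kp=4.0):
    """One trial: toy first-order 'joint' driven by feedforward + P feedback."""
    y, u = np.zeros(T), np.zeros(T)
    for k in range(T - 1):
        u[k] = u_ff[k] + kp * (y_des[k] - y[k])    # PID reduced to P for brevity
        y[k + 1] = y[k] + dt * (-2.0 * y[k] + 3.0 * u[k])
    return y, u

u_ff = np.zeros(T)                                 # feedforward learning input
for trial in range(20):
    y, u = run_trial(u_ff)
    u_ff = u.copy()      # P-type iterative learning: absorb last trial's
                         # feedback correction into the next feedforward
print("peak tracking error:", round(np.abs(y_des - y).max(), 4))

# Crude stand-in for "direct learning": reuse the learned profile for the
# same path traversed twice as fast by resampling it in time.
u_fast = np.interp(np.linspace(0, T - 1, T // 2), np.arange(T), u_ff)
```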

Reinforcement learning for multi mobile robot control in the dynamic environments (동적 환경에서 강화학습을 이용한 다중이동로봇의 제어)

  • 김도윤;정명진
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1996.10b / pp.944-947 / 1996
  • Realization of autonomous agents that organize their own internal structure in order to behave adequately with respect to their goals and the world is the ultimate goal of AI and robotics. Reinforcement learning has recently been receiving increased attention as a method for robot learning with little or no a priori knowledge and a higher capability of reactive and adaptive behaviors. In this paper, we present a reinforcement learning method by which multiple robots learn to move to a goal. The results of computer simulations are given.

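The abstract gives no algorithmic detail beyond reinforcement learning for multiple mobile robots moving to a goal, so the sketch below is a generic tabular Q-learning loop with two independently learning robots on a small grid. The grid, rewards, and hyperparameters are assumptions, and no inter-robot coordination is modeled.

```python
import numpy as np

rng = np.random.default_rng(3)
GRID, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]       # up, down, left, right

def step(pos, a):
    r, c = pos[0] + ACTIONS[a][0], pos[1] + ACTIONS[a][1]
    r, c = min(max(r, 0), GRID - 1), min(max(c, 0), GRID - 1)
    reward = 1.0 if (r, c) == GOAL else -0.01      # assumed reward shaping
    return (r, c), reward, (r, c) == GOAL

# One independent Q-table per robot (no coordination modeled here).
Q = [np.zeros((GRID, GRID, len(ACTIONS))) for _ in range(2)]
alpha, gamma, eps = 0.2, 0.95, 0.1

for episode in range(2000):
    pos = [(0, 0), (0, 4)]                         # start corners for the two robots
    done = [False, False]
    for _ in range(50):
        for i in range(2):
            if done[i]:
                continue
            s = pos[i]
            a = rng.integers(4) if rng.random() < eps else int(Q[i][s].argmax())
            s2, r, done[i] = step(s, a)
            # Standard Q-learning update on this robot's own table.
            Q[i][s][a] += alpha * (r + gamma * Q[i][s2].max() * (not done[i]) - Q[i][s][a])
            pos[i] = s2
        if all(done):
            break

print("greedy action from (0,0) for robot 0:", int(Q[0][(0, 0)].argmax()))
```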

DESIGN OF CONTROLLER FOR NONLINEAR SYSTEM USING DYNAMIC NEURAL NETWORKS

  • Park, Seong-Wook;Seo, Bo-Hyeok
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1995.10a / pp.60-64 / 1995
  • The conventional neural network models are a parody of biological neural structures and have very slow learning. In order to emulate some dynamic functions, such as learning and adaptation, and to better reflect the dynamics of biological neurons, M.M. Gupta and D.H. Rao developed the 'dynamic neural unit' (DNU). The proposed neural unit model introduces dynamics into the neuron transfer function, such that the neuron activity depends on internal states. Integrating a dynamic elementary processor within the neuron allows the neuron to produce a dynamic response. Numerical examples are presented for a model system, and these case studies show that the proposed DNU is useful in a practical sense.

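This entry concerns the same Gupta-and-Rao-style dynamic neural unit as the first result above; as a rough sketch, the loop below places such a unit (second-order internal dynamics driven by the tracking error, followed by a tanh nonlinearity) in feedback around an assumed nonlinear plant. The coefficients, plant, and reference are illustrative, and no adaptation of the unit is included.

```python
import numpy as np

# Dynamic neural unit used as a controller: internal second-order linear
# dynamics on the tracking error followed by a tanh nonlinearity, closed
# around a toy nonlinear plant. All coefficients are illustrative.
a1, a2, b1, b2, gain = 0.3, 0.1, 2.0, 0.5, 1.0
v1 = v2 = e_prev = 0.0
y, ref = 0.0, 0.8

for k in range(100):
    e = ref - y
    v = -a1 * v1 - a2 * v2 + b1 * e + b2 * e_prev   # internal neuron state
    u = np.tanh(gain * v)                           # bounded control action
    v2, v1, e_prev = v1, v, e
    y = 0.8 * y + 0.2 * u + 0.05 * np.sin(y)        # assumed nonlinear plant

# The loop settles to a constant output below the reference: the bare unit is a
# nonlinear dynamic compensator with no integral action or parameter adaptation.
print(round(y, 3))
```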

On learning control of robot manipulator including the bounded input torque (제한 입력을 고려한 로보트 매니플레이터의 학습제어에 관한 연구)

  • 성호진;조현찬;전홍태
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1988.10a / pp.58-62 / 1988
  • Recently, many adaptive control schemes for industrial robot manipulators have been developed. In particular, learning control that utilizes the repetitive motion of the robot and is based on iterative signal synthesis has attracted much interest. However, since most of these approaches exclude the boundedness of the input torque supplied to the manipulator, their effectiveness may be limited and the full dynamic capacity of the robot manipulator cannot be utilized. To overcome these difficulties and meet the desired performance, we propose in this paper an approach that yields effective learning control schemes. In this study, some stability conditions derived by applying Lyapunov theory to the discrete linear time-varying dynamic system are established, and an optimization scheme considering the bounded input torque is introduced. These results are simulated on a digital computer using a three-joint revolute manipulator to show their effectiveness.

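A minimal sketch of learning control under a torque bound follows, assuming a P/D-type iterative learning update clipped to the limit and a toy one-joint model; the paper's Lyapunov-based stability conditions, optimization scheme, and three-joint manipulator are not reproduced, and all gains and limits are assumptions.

```python
import numpy as np

T, dt, U_MAX = 300, 0.01, 2.0                  # horizon, step, assumed torque bound
t = np.arange(T) * dt
q_des = 0.5 * (1 - np.cos(np.pi * t / t[-1]))  # rest-to-rest desired joint motion

def simulate(u):
    """Toy one-joint model: q_ddot = (u - b*q_dot) / J, integrated by Euler."""
    J, b = 0.5, 0.2
    q, qd = np.zeros(T), np.zeros(T)
    for k in range(T - 1):
        qdd = (u[k] - b * qd[k]) / J
        qd[k + 1] = qd[k] + dt * qdd
        q[k + 1] = q[k] + dt * qd[k]
    return q

u = np.zeros(T)
for trial in range(100):
    q = simulate(u)
    e = q_des - q
    de = np.gradient(e, dt)
    # P/D-type learning update, saturated so the torque bound is respected.
    u = np.clip(u + 10.0 * e + 5.0 * de, -U_MAX, U_MAX)
print("peak tracking error after learning:", round(np.abs(e).max(), 4))
```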