• Title/Summary/Keyword: adaptive repetitive control

Error elimination for systems with periodic disturbances using adaptive neural-network technique (주기적 외란을 수반하는 시스템의 적응 신경망 회로 기법에 의한 오차 제거)

  • Kim, Han-Joong;Park, Jong-Koo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.5 no.8
    • /
    • pp.898-906
    • /
    • 1999
  • A control structure is introduced for rejecting periodic (or repetitive) disturbances acting on a tracking system. The objective of the proposed structure is to drive the output of the system to the reference input, achieving perfect tracking without changing the inner configuration of the system. The structure includes an adaptation block which learns the dynamics of the periodic disturbance and reduces the interference that the disturbance causes at the output of the system. Since the control structure acquires the dynamics of the disturbance by on-line adaptation, it can generate control signals that reject any slowly varying time-periodic disturbance, provided that its amplitude is bounded. An artificial neural network is adopted as the adaptation block, and the adaptation is performed on line. For this, the real-time recurrent learning (RTRL) algorithm is applied to train the network.

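As a rough illustration of the adaptation scheme described in the abstract above, the sketch below trains a small recurrent network on line with real-time recurrent learning (RTRL) to reproduce, and thereby cancel, a periodic disturbance that is assumed to add directly at the system output. The network size, learning rate, the sin/cos "clock" input, and the output-additive disturbance model are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical setup: a periodic disturbance d adds directly at the plant
# output, and a small recurrent net driven by a sin/cos "clock" of the known
# disturbance period is trained on line (RTRL) so that its output y cancels d.
rng = np.random.default_rng(0)
n_in, n_h = 2, 6
W_in  = 0.3 * rng.standard_normal((n_h, n_in))
W_rec = 0.3 * rng.standard_normal((n_h, n_h))
b     = np.zeros(n_h)
w_out = np.zeros(n_h)

# RTRL sensitivities: P_*[i, ...] = d h[i] / d(parameter ...)
P_in  = np.zeros((n_h, n_h, n_in))
P_rec = np.zeros((n_h, n_h, n_h))
P_b   = np.zeros((n_h, n_h))

h, lr = np.zeros(n_h), 0.02
dt, T = 0.01, 1.0                                      # sample time, disturbance period

for k in range(20000):
    phase = 2 * np.pi * (k * dt) / T
    x = np.array([np.sin(phase), np.cos(phase)])       # clock input
    d = 0.8 * np.sin(phase) + 0.3 * np.sin(3 * phase)  # periodic disturbance

    h_prev = h
    h = np.tanh(W_rec @ h_prev + W_in @ x + b)
    y = w_out @ h                                      # compensation signal
    e = d - y                                          # residual seen at the output

    # RTRL recursion for the sensitivities, then one gradient step on 0.5*e^2
    g = 1.0 - h ** 2
    P_rec = g[:, None, None] * (np.einsum('il,ljk->ijk', W_rec, P_rec)
                                + np.eye(n_h)[:, :, None] * h_prev[None, None, :])
    P_in  = g[:, None, None] * (np.einsum('il,ljk->ijk', W_rec, P_in)
                                + np.eye(n_h)[:, :, None] * x[None, None, :])
    P_b   = g[:, None] * (W_rec @ P_b + np.eye(n_h))

    W_rec += lr * e * np.einsum('i,ijk->jk', w_out, P_rec)
    W_in  += lr * e * np.einsum('i,ijk->jk', w_out, P_in)
    b     += lr * e * (w_out @ P_b)
    w_out += lr * e * h

print("residual disturbance after adaptation:", round(float(abs(e)), 4))
```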

On learning control of robot manipulator including the bounded input torque (제한 입력을 고려한 로보트 매니플레이터의 학습제어에 관한 연구)

  • 성호진;조현찬;전홍태
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1988.10a
    • /
    • pp.58-62
    • /
    • 1988
  • Recently, many adaptive control schemes for industrial robot manipulators have been developed. In particular, learning control that exploits the repetitive motion of the robot and is based on iterative signal synthesis has attracted much interest. However, since most of these approaches ignore the boundedness of the input torque supplied to the manipulator, their effectiveness may be limited and the full dynamic capacity of the robot manipulator cannot be utilized. To overcome these difficulties and meet the desired performance, this paper proposes an approach that yields effective learning control schemes. Stability conditions are established by applying Lyapunov theory to the discrete linear time-varying dynamic system, and an optimization scheme that accounts for the bounded input torque is introduced. The results are simulated on a digital computer with a three-joint revolute manipulator to show their effectiveness.

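The abstract above combines learning over repeated trials with an explicit torque bound. The sketch below is a generic P-type iterative learning control loop with input saturation on a simple first-order discrete joint model; it is only a hedged illustration of the general idea, not the paper's Lyapunov-based conditions or its optimization scheme, and the model, gain, and bound are made-up values.

```python
import numpy as np

# Hypothetical first-order discrete joint model y(k+1) = a*y(k) + b*u(k),
# repeated trials of the same trajectory, torque clipped to |u| <= u_max.
N = 100
t = np.arange(N)
y_ref = np.sin(np.pi * t / N)          # repeated reference trajectory

a, b = 0.9, 0.5                        # simple joint model (illustrative)
u_max = 0.5                            # input (torque) bound
gamma = 1.0                            # learning gain, needs |1 - gamma*b| < 1

u = np.zeros(N)
for trial in range(15):
    y = np.zeros(N)
    yk = 0.0
    for k in range(N):                 # run one trial of the repeated task
        y[k] = yk
        yk = a * yk + b * u[k]
    e = y_ref - y
    # P-type update with a one-step lead, then saturate to respect the bound
    u[:-1] = np.clip(u[:-1] + gamma * e[1:], -u_max, u_max)
    print(f"trial {trial:2d}  max|e| = {np.max(np.abs(e)):.4f}")
```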

Stabilization Position Control of a Ball-Beam System Using Neural Networks Controller (신경회로망 제어기을 이용한 볼-빔 시스템의 안정화 위치제어)

  • 탁한호;추연규
    • Journal of the Korean Institute of Navigation
    • /
    • v.23 no.3
    • /
    • pp.35-44
    • /
    • 1999
  • This research aims at active control of ball-beam position stability using neural networks whose layers are given bias weights. The controller consists of an LQR (linear quadratic regulator) controller and a neural network controller in parallel. The latter is used to improve the responses of the established LQR control system, especially when the system contains nonlinear factors or modelling errors. For the learning of this control system, the feedback-error learning algorithm is utilized: while the neural network controller learns repetitive trajectories on line, the feedback errors are back-propagated through the network, and convergence is reached as the network learns the inverse of the plant and takes over control. The goals of learning are to expand the working range of the adaptive control system and to compensate for errors due to nonlinearity by adjusting the parameters against external disturbances and changes of the nonlinear plant. The motion equation of the ball-beam system is derived from Newton's law. Because the system is strongly nonlinear, many researchers have relied on classical methods to control it; applications of this kind of position control are found in aircraft, ships, automobiles, and so on, whereas research based on neural-network control is comparatively recent. The paper compares and analyzes simulation results obtained with the LQR controller and with the neural network controller in order to demonstrate the efficiency of the neural network control algorithm for this nonlinear system.

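To make the parallel LQR / neural-network structure and the feedback-error learning rule concrete, the sketch below runs an LQR state feedback on a crudely linearized ball-beam model alongside a small network feedforward term whose weights are adjusted in the direction of the feedback signal. The linearized model, network size, reference, and gains are illustrative assumptions; the paper's full nonlinear model and controller are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical linearized ball-beam: state x = [ball position, ball velocity],
# input = beam angle; a rolling ball sees an acceleration of roughly -(5/7)*g*angle.
dt, g_eff = 0.02, 7.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [-g_eff * dt]])

Q, R = np.diag([10.0, 1.0]), np.array([[1.0]])
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)    # LQR gain, computed offline

# small network: feedforward beam angle from reference position/velocity/acceleration
rng = np.random.default_rng(1)
W1, b1, w2, lr = 0.1 * rng.standard_normal((8, 3)), np.zeros(8), np.zeros(8), 0.05

x = np.zeros(2)
for k in range(20000):
    t = k * dt
    r, rd, rdd = (0.2 * np.sin(0.5 * t),
                  0.1 * np.cos(0.5 * t),
                  -0.05 * np.sin(0.5 * t))            # reference and its derivatives

    feat = np.array([r, rd, rdd])
    h = np.tanh(W1 @ feat + b1)
    u_ff = w2 @ h                                     # network feedforward
    u_fb = float(-K @ (x - np.array([r, rd])))        # LQR feedback on the error
    u = u_ff + u_fb

    # feedback-error learning: the feedback signal itself serves as the training
    # error, so the network output is nudged in the direction that shrinks u_fb.
    grad_hidden = w2 * (1.0 - h ** 2)
    w2 += lr * u_fb * h
    W1 += lr * u_fb * np.outer(grad_hidden, feat)
    b1 += lr * u_fb * grad_hidden

    x = A @ x + B[:, 0] * u                           # simulate one step

print("feedback effort near the end:", round(abs(u_fb), 4))
```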

Indirect Decentralized Learning Control for the Multiple Systems (복합시스템을 위한 간접분산학습제어)

  • Lee, Soo-Cheol
    • Proceedings of the Korea Association of Information Systems Conference
    • /
    • 1996.11a
    • /
    • pp.217-227
    • /
    • 1996
  • The new field of learning control develops controllers that learn to improve their performance at executing a given task, based on experience performing this specific task. In a previous work [6], the authors presented a theory of indirect learning control based on the use of indirect adaptive control concepts employing simultaneous identification and control. This paper develops improved indirect learning control algorithms and studies the use of such controllers in decentralized systems. The original motivation of the learning control field was learning in robots doing repetitive tasks, such as on an assembly line. The paper starts with decentralized discrete-time systems and progresses to the robot application, modeling the robot as a time-varying linear system in the neighborhood of the nominal trajectory and using the usual robot controllers that are decentralized, treating each link as if it were independent of any coupling with the other links. The basic result of the paper is to show that stability of the indirect learning controllers for all subsystems, when the coupling between subsystems is turned off, assures convergence to zero tracking error of the decentralized indirect learning control of the coupled system, provided that the sample time in the digital learning controller is sufficiently short.

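A minimal numerical picture of the idea, under simplifying assumptions that are not the paper's algorithm: two weakly coupled first-order subsystems repeat the same task, each one re-identifies a purely local model from its own trial data by least squares while ignoring the coupling, and each corrects its feedforward input through the inverse of that local model. Repetition drives both tracking errors down even though the coupling is never modeled.

```python
import numpy as np

# Hypothetical plant: two weakly coupled first-order subsystems. Each local
# controller only ever sees its own data, fits a local model by least squares,
# and updates its feedforward through the inverse of that local model.
rng = np.random.default_rng(2)
N, trials = 60, 12
k_axis = np.arange(N)
refs = [np.sin(np.pi * k_axis / N),                  # desired output, subsystem 1
        0.5 - 0.5 * np.cos(np.pi * k_axis / N)]      # desired output, subsystem 2

a_true, b_true = [0.8, 0.7], [0.4, 0.6]              # unknown to the controllers
c = [[0.0, 0.05], [0.08, 0.0]]                       # weak cross-coupling

u = [np.zeros(N), np.zeros(N)]
for trial in range(trials):
    # a small, shrinking probing signal keeps the local regressions well conditioned
    u_applied = [u[i] + 0.02 / (trial + 1) * rng.standard_normal(N) for i in range(2)]

    x, y = [0.0, 0.0], [np.zeros(N), np.zeros(N)]
    for k in range(N):                               # run one trial of the coupled system
        y[0][k], y[1][k] = x
        x = [a_true[i] * x[i] + b_true[i] * u_applied[i][k] + c[i][1 - i] * x[1 - i]
             for i in range(2)]

    for i in range(2):
        # local identification: fit y_i(k+1) ~ a_hat*y_i(k) + b_hat*u_i(k)
        Phi = np.column_stack([y[i][:-1], u_applied[i][:-1]])
        a_hat, b_hat = np.linalg.lstsq(Phi, y[i][1:], rcond=None)[0]
        # indirect learning update: correct the feedforward through the inverse model
        e = refs[i] - y[i]
        u[i][:-1] += e[1:] / b_hat
        print(f"trial {trial:2d}  subsystem {i + 1}  max|e| = {np.max(np.abs(e)):.4f}")
```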

Indirect Decentralized Learning Control for the Multiple Systems (복합시스템을 위한 간접분산학습제어)

  • Lee, Soo-Cheol
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 1996.10a
    • /
    • pp.217-227
    • /
    • 1996
  • The new field of learning control develops controllers that learn to improve their performance at executing a given task, based on experience performing this specific task. In a previous work [6], the authors presented a theory of indirect learning control based on the use of indirect adaptive control concepts employing simultaneous identification and control. This paper develops improved indirect learning control algorithms and studies the use of such controllers in decentralized systems. The original motivation of the learning control field was learning in robots doing repetitive tasks, such as on an assembly line. The paper starts with decentralized discrete-time systems and progresses to the robot application, modeling the robot as a time-varying linear system in the neighborhood of the nominal trajectory and using the usual robot controllers that are decentralized, treating each link as if it were independent of any coupling with the other links. The result of the paper is to show that stability of the indirect learning controllers for all subsystems, when the coupling between subsystems is turned off, assures convergence to zero tracking error of the decentralized indirect learning control of the coupled system, provided that the sample time in the digital learning controller is sufficiently short.