• Title/Abstract/Keyword: LQ control

Search results: 233 items (processing time: 0.039 s)

A controller design using modal decomposition of matrix pencil

  • Shibasato, Koki;Shiotsuki, Tetsuo;Kawaji, Shigeyasu
    • 제어로봇시스템학회: 학술대회논문집 / 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp.492-492 / 2000
  • This paper proposes an LQ optimal controller design method based on modal decomposition. The design problem for linear time-invariant systems is considered using the pencil model. The mathematical model based on a matrix pencil is one of the most general representations of a system; by adding conditions, the model can be reduced to traditional system models. In the pencil model, state feedback is regarded as an algebraic constraint between the state variable and the control input variable. This algebraic constraint is called the purely static mode and is included in the infinite mode. Therefore, the information of the constant-gain controller is included in the purely static mode of the augmented system, which consists of the plant and the control conditions. Focusing on the coordinate transformation matrix, the LQ optimal controller is derived from the algebraic constraint on the internal variable. The proposed method is applied to numerical examples, and the results are verified.

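The augmented-pencil construction in this abstract can be sketched numerically. In the minimal check below (toy `A`, `B`, `K` chosen purely for illustration; the paper's LQ derivation of the gain from the coordinate transformation is not reproduced), the feedback law is appended as the algebraic constraint 0 = Kx − u, which makes the pencil's E-matrix singular, yet the pencil's finite eigenstructure is exactly that of the closed loop A + BK:

```python
import numpy as np

# Plant x' = A x + B u, with a constant feedback u = K x appended as an
# algebraic constraint (toy matrices, not from the paper).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -0.5]])

n, m = 2, 1
# Augmented pencil s*E_hat - A_hat: the zero block in E_hat is the
# purely static (infinite) mode carrying the controller information.
E_hat = np.block([[np.eye(n), np.zeros((n, m))],
                  [np.zeros((m, n)), np.zeros((m, m))]])
A_hat = np.block([[A, B],
                  [K, -np.eye(m)]])       # last block row: 0 = K x - u

# Eliminating u = K x recovers the closed loop; the pencil's
# characteristic polynomial det(s*E_hat - A_hat) agrees with it
# (Schur complement of the identity block in the last row):
A_cl = A + B @ K
s = 1.7                                    # arbitrary test point
lhs = np.linalg.det(s * E_hat - A_hat)
rhs = np.linalg.det(s * np.eye(n) - A_cl)
print(abs(lhs - rhs))                      # ~0: same finite modes
```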

도립형 로봇의 강건한 인간추적을 위한 선형화 모델기반 LQ제어 (LQ control by linear model of Inverted Pendulum Robot for Robust Human Tracking)

  • 진태석
    • 한국산업융합학회 논문집 / Vol.23 No.1 / pp.49-55 / 2020
  • This paper presents the system modeling, analysis, and controller design and implementation for an inverted pendulum system, in order to test a Linear Quadratic control based robust algorithm for an inverted pendulum robot. Balancing an inverted pendulum robot by moving it, like a Segway, along a horizontal track is a classic problem in control. This paper describes two methods to swing a pendulum attached to a cart from an initial downward position to the upright position and maintain that state. The results of real experiments show that the proposed control system has superior performance in following a reference command from certain initial conditions.
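As a concrete illustration of the LQ design step described in this abstract, here is a minimal discrete-time LQR for a linearized cart-pole model. All parameters are hypothetical (the abstract gives neither the robot model nor the gains); the Riccati equation is solved by plain backward iteration so the sketch needs only numpy:

```python
import numpy as np

# Linearized cart-pole about the upright equilibrium, acceleration input
# (hypothetical parameters; not the paper's robot model).
g, l, dt = 9.81, 0.5, 0.01
# State: [cart position, cart velocity, pole angle, pole angular velocity]
A_c = np.array([[0, 1, 0,     0],
                [0, 0, 0,     0],
                [0, 0, 0,     1],
                [0, 0, g / l, 0]])
B_c = np.array([[0.0], [1.0], [0.0], [-1.0 / l]])
A = np.eye(4) + dt * A_c            # Euler discretization
B = dt * B_c

Q = np.diag([1.0, 0.1, 10.0, 0.1])  # state weights (tuning choices)
R = np.array([[0.01]])              # control weight

# Solve the discrete algebraic Riccati equation by backward iteration.
P = Q.copy()
for _ in range(5000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
eigs = np.linalg.eigvals(A - B @ K)
print("closed-loop |eig| =", np.abs(eigs))   # all < 1: stabilized
```

The open-loop A has an eigenvalue outside the unit circle (the falling pendulum mode); the LQ gain u = −Kx pulls every closed-loop eigenvalue inside it.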

섭동계의 강인한 제어기 설계와 흡인형 자기부상계 제어 (Robust Controller Design for Perturbed Systems and Control of an Attractive Type Magnetic Levitation System)

  • 김상봉;김환성;정남수
    • 대한기계학회논문집 / Vol.16 No.2 / pp.226-235 / 1992
  • This paper is concerned with the robust control of LQ state-feedback regulators with poles in a specified region in the presence of system uncertainty. Robust stability results for constant and nonlinear time-varying perturbations are derived in terms of bounds on the perturbed system matrices and the weighting matrices in the performance index of the LQ problem. The theoretical results are applied to the gap control problem of an attractive-type magnetic levitation system, and their effectiveness is demonstrated by implementing digital control on a 16-bit microcomputer.

정상상태 추적편차를 고려한 가중행렬의 선택 (A method for deciding weighting matrices by considering a steady-state deviation in a LQ tracking problem)

  • 이진익;전기준
    • 제어로봇시스템학회: 학술대회논문집 / 제어로봇시스템학회 1989년도 한국자동제어학술회의논문집; Seoul, Korea; 27-28 Oct. 1989 / pp.473-476 / 1989
  • Quadratic weighting matrices affect both the transient and steady-state responses in an LQ tracking problem. They are usually decided by trial and error in order to get a good response. In this paper, a method is presented which calculates the steady-state deviation without solving the Riccati equation. Using this method, a new procedure for selecting the weighting matrices is proposed when a tolerance on the steady-state deviation is given.

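The key point of this abstract — that the steady-state deviation of an LQ tracker can be computed from a static optimization, with no Riccati equation, and then used to pick the weights against a tolerance — can be sketched for a scalar plant. This is one plausible scalar reading of the idea, not the paper's actual derivation:

```python
import numpy as np

# Scalar plant x[k+1] = a x[k] + b u[k], output y = x, constant reference r.
# In steady state x_ss = a x_ss + b u_ss, so x_ss = c u_ss with c = b/(1-a).
# Minimizing the stationary cost q (x_ss - r)^2 + R u_ss^2 over u_ss gives
#     e_ss = r - x_ss = r R / (q c^2 + R)
# in closed form -- no Riccati equation needed.
a, b, R, r = 0.9, 0.1, 1.0, 1.0
c = b / (1.0 - a)

def steady_state_deviation(q):
    return r * R / (q * c**2 + R)

# Weight-selection loop: grow the output weight q until the deviation
# meets the given tolerance (the trial-and-error step, made systematic).
tol, q = 0.05, 1.0
while steady_state_deviation(q) > tol:
    q *= 2.0
print(q, steady_state_deviation(q))   # q = 32.0, e_ss ~ 0.0303
```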

이산 시간 스위칭 선형 시스템의 적응 LQ 준최적 제어를 위한 Q-학습법 (Q-learning for Adaptive LQ Suboptimal Control of Discrete-time Switched Linear System)

  • 전태윤;최윤호;박진배
    • 대한전기학회: 학술대회논문집 / 대한전기학회 2011년도 제42회 하계학술대회 / pp.1874-1875 / 2011
  • This paper proposes a Q-learning algorithm for adaptive LQ suboptimal control of switched linear systems. The proposed control algorithm is based on an existing Q-learning method whose stability has been proven, and achieves suboptimal control even when the parameters of the switched system model are unknown. Based on this algorithm, we address the uncertainty of the individual subsystems and the optimal adaptive control problem, which had not previously been considered for switched systems, and verify the performance of the proposed algorithm through computer simulations.

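Model-free LQ control via Q-learning, as used in this abstract, can be sketched for a single (non-switched) scalar plant with the classic Bradtke-style policy iteration on the Q-function: fit Q(x,u) = hxx·x² + 2·hxu·x·u + huu·u² from transition data by least squares, then improve the gain greedily. The plant parameters below are hypothetical and are used only to generate data; the learner never reads them directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown-to-the-learner plant x[k+1] = a x + b u, stage cost x^2 + u^2.
a, b = 1.1, 1.0
k = 0.5                 # initial stabilizing gain: |a - b k| = 0.6 < 1

def phi(x, u):
    # quadratic features of Q(x, u) = hxx x^2 + 2 hxu x u + huu u^2
    return np.array([x * x, 2 * x * u, u * u])

for _ in range(8):      # policy-iteration sweeps
    X, y = [], []
    for _ in range(200):
        x = rng.uniform(-1, 1)
        u = -k * x + 0.1 * rng.standard_normal()   # exploration noise
        xn = a * x + b * u                          # observed transition
        # Bellman identity: phi(x,u).h - phi(xn, -k xn).h = x^2 + u^2
        X.append(phi(x, u) - phi(xn, -k * xn))
        y.append(x * x + u * u)
    h, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    k = h[1] / h[2]     # greedy gain: argmin_u Q(x,u) gives u = -(hxu/huu) x
print(k)                # ~ 0.7034, the LQ-optimal gain for this plant
```

For this plant the LQ-optimal gain is K* = abP/(R + b²P) with P the positive root of P² − 1.21P − 1 = 0, i.e. K* ≈ 0.7034, which the data-driven iteration recovers without ever forming A or B.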

LQ제어 기법을 활용한 자기부상열차 부상제어기 설계에 관한 연구 (Study of Design for Maglev Levitation Controller based on LQ theory)

  • 이남진;한형석;양방섭;김철근
    • 한국철도학회: 학술대회논문집 / 한국철도학회 2007년도 추계학술대회 논문집 / pp.865-871 / 2007
  • The levitation system of a Maglev vehicle is composed of electromagnets, a power supply, a controller, and sensors. The complex interactions among these subcomponents define the characteristics of the vehicle's electromagnetic suspension. In this study, to understand the influence of the controller on the running performance of the Maglev vehicle, a new controller based on LQ theory is designed and simulated with a simplified vehicle model. The influence of the controller on the characteristics of the electromagnetic suspension is then reviewed through comparison with the existing control algorithm of our prototype vehicle.

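The reason an electromagnetic suspension needs an active levitation controller at all can be shown in a few lines. Linearizing the attractive-magnet dynamics m·z″ = m·g − k·i²/z² about an equilibrium (z0, i0) gives δz″ = (2g/z0)·δz − (2g/i0)·δi, with an open-loop pole in the right half-plane. The numbers and the stabilizing gain below are illustrative, not the paper's:

```python
import numpy as np

# Linearized attractive levitation: state [gap deviation, its rate],
# input = current deviation (hypothetical parameters).
m, g, z0, i0 = 1.0, 9.81, 0.01, 1.0
A = np.array([[0.0, 1.0], [2 * g / z0, 0.0]])
B = np.array([[0.0], [-2 * g / i0]])
print(np.linalg.eigvals(A))        # one pole at +sqrt(2g/z0) ~ +44.3

# A stabilizing state-feedback gain (e.g. from an LQ design, as in the
# paper) must move that pole into the left half-plane:
K = np.array([[-200.0, -5.0]])     # illustrative gain, not the paper's
print(np.linalg.eigvals(A - B @ K))   # both real parts negative
```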

LQ 조절기의 안정도 영역에 관한 연구 : 시간 영역에서의 해석 (A Study on the Stability Margin of the LQ Regulator : Time Domain Analysis)

  • 김상우;권욱현;이상정
    • 제어로봇시스템학회: 학술대회논문집 / 제어로봇시스템학회 1987년도 한국자동제어학술회의논문집; 한국과학기술대학, 충남; 16-17 Oct. 1987 / pp.125-129 / 1987
  • The stability margin of the LQ regulator is investigated in the time domain. It is shown that the same guaranteed gain margin as in the frequency-domain analysis can be obtained with simple assumptions for continuous-time systems. It is also shown that the allowable modelling error bound can be expressed in terms of the system matrices and the Riccati equation solution. The guaranteed gain margin and the allowable modelling error bound for discrete-time systems are obtained by similar procedures. In this case, through some examples, the gain margin is shown to be less conservative than the frequency-domain result.

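The guaranteed gain margin the abstract refers to is the classical LQ result that scaling the optimal gain by any β ∈ [1/2, ∞) preserves closed-loop stability. For a scalar continuous-time plant the Riccati equation has a closed form, so the margin can be checked directly (scalar toy example, not the paper's analysis):

```python
import numpy as np

# Plant x' = a x + b u, cost q x^2 + r u^2.  The scalar Riccati equation
#   2 a P - b^2 P^2 / r + q = 0
# has the stabilizing solution below; K = b P / r is the LQ gain, and the
# closed loop a - beta*b*K stays stable for every scaling beta >= 1/2.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
P = r * (a + np.sqrt(a * a + q * b * b / r)) / (b * b)
K = b * P / r                       # K = 1 + sqrt(2) ~ 2.414

for beta in (0.5, 1.0, 10.0, 100.0):
    assert a - beta * b * K < 0     # inside the guaranteed margin: stable
print(a - 0.3 * b * K)              # beta below 1/2 can destabilize: +0.276
```

For this plant the actual lower margin is 1/K ≈ 0.414, i.e. strictly better than the guaranteed 1/2 — the kind of conservatism gap the paper's time-domain analysis quantifies.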

Labeling Q-Learning for Maze Problems with Partially Observable States

  • Lee, Hae-Yeon;Hiroyuki Kamaya;Kenichi Abe
    • 제어로봇시스템학회: 학술대회논문집 / 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp.489-489 / 2000
  • Recently, Reinforcement Learning (RL) methods have been used for learning problems in Partially Observable Markov Decision Process (POMDP) environments. Conventional RL methods, however, have limited applicability to POMDPs. To overcome the partial observability, several algorithms have been proposed [5], [7]. The aim of this paper is to extend our previous algorithm for POMDPs, called Labeling Q-learning (LQ-learning), which reinforces incomplete perceptual information with labeling. Namely, in LQ-learning the agent perceives the current state as a pair of an observation and its label, so the agent can more exactly distinguish states that look the same. Labeling is carried out by a hash-like function, which we call the Labeling Function (LF). Numerous labeling functions can be considered, but in this paper we introduce several labeling functions based on only the 2 or 3 immediately preceding observations. We briefly introduce the basic idea of LQ-learning, apply it to maze problems in simple POMDP environments, and show with empirical results that it performs better than conventional RL algorithms.

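The perception mechanism this abstract describes — the agent's state key is the pair (observation, label), with the label computed from the last few observations — can be sketched in a few lines. The paper uses a hash-like Labeling Function; a lossless tuple of the last two observations is the simplest deterministic stand-in here, and the maze observations are invented for illustration:

```python
from collections import deque

def labeling_function(history):
    # stand-in LF: label = the raw recent history (the paper hashes it)
    return tuple(history)

history = deque(maxlen=2)          # the 2 most recent past observations

def perceive(obs):
    label = labeling_function(history)
    history.append(obs)
    return (obs, label)            # the LQ-learning perception pair

# Two maze cells that yield the same observation are distinguished
# once their recent histories differ:
history.clear(); history.extend(["start", "door"])
s1 = perceive("corridor")
history.clear(); history.extend(["hall", "window"])
s2 = perceive("corridor")
print(s1[0] == s2[0], s1 != s2)    # True True: same obs, distinct states

# Ordinary tabular Q-learning then runs on these (obs, label) keys:
q, alpha, reward = {}, 0.1, 1.0
key = (s1, "forward")
q[key] = (1 - alpha) * q.get(key, 0.0) + alpha * reward
```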

Fuzzy Modeling and Control of Wheeled Mobile Robot

  • Kang, Jin-Shik
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol.3 No.1 / pp.58-65 / 2003
  • In this paper, a new model for a mobile robot, a Takagi-Sugeno fuzzy model, is presented. A controller consisting of two loops is suggested: the inner loop is a state-feedback loop designed for stability, and the outer loop is a PI controller designed for tracking the reference input. Because the robot dynamics is nonlinear, the controller is required to be insensitive to the nonlinear term. To achieve this objective, the model is developed with the well-known T-S fuzzy model. The design algorithm for the inner state-feedback loop is regional pole placement; the regions in which the poles of the inner loop must lie are formulated as LMIs, and by solving these LMIs we obtain the state-feedback gains for the T-S fuzzy system. This paper also shows that the PI controller is equivalent to state feedback and that the cost function for reference tracking is equivalent to the LQ (linear quadratic) cost. Using these properties, it is shown that the PI controller can be obtained by solving the LQ problem.
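The PI-as-state-feedback equivalence claimed in this abstract can be sketched on a scalar plant: augment the state with the integral of the tracking error and solve an LQ problem on the augmented system; the resulting gain K = [kx, kz] is exactly a PI law u = −kx·x − kz·z. The plant and weights below are hypothetical toys (the paper works with a T-S fuzzy robot model and LMI regions):

```python
import numpy as np

a, b = 0.9, 1.0                          # plant x[k+1] = a x + b u
A = np.array([[a, 0.0], [-1.0, 1.0]])    # z[k+1] = z[k] + (r - x[k])
B = np.array([[b], [0.0]])
Q, R = np.eye(2), np.array([[0.1]])      # LQ weights on [x, z] and u

P = Q.copy()
for _ in range(2000):                    # discrete Riccati iteration
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

x, z, r = 0.0, 0.0, 1.0
for _ in range(2000):                    # closed-loop step response
    u = float(-K @ np.array([x, z]))     # the PI law in disguise
    x, z = a * x + b * u, z + (r - x)
print(x)                                 # -> 1.0: no steady-state offset
```

Because the LQ gain on the integrator state z is the integral action, the closed loop tracks the constant reference with zero offset whenever it is stable.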

Flexible Labeling Mechanism in LQ-learning for Maze Problems

  • Lee, Haeyeon;Hiroyuki Kamaya;Kenichi Abe
    • 제어로봇시스템학회: 학술대회논문집 / 제어로봇시스템학회 2001년도 ICCAS / pp.22.2-22 / 2001
  • Recently, Reinforcement Learning (RL) methods for MDPs have been extended and applied to POMDP problems. Currently, hierarchical RL methods are widely studied. However, they have the drawback that learning time and memory are exhausted just to maintain the hierarchical structure, even when it isn't necessary. On the other hand, our "Labeling Q-learning" (LQ-learning), proposed previously, has no hierarchical structure but adopts a characteristic internal memory mechanism. Namely, the LQ-learning agent perceives the state as a pair of an observation and its label, so the agent can more exactly distinguish states that look the same but are actually different. So to speak, at each step $t$ we define a new type of perception of the environment, $\tilde{o}_t = (o_t, \theta_t)$, where $o_t$ is the conventional observation and $\theta_t$ is the label attached to the observation. Then the conventional ...
