• Title/Summary/Keyword: Control equation

Search results: 2,846

Computational Solution of a H-J-B equation arising from Stochastic Optimal Control Problem

  • Park, Wan-Sik
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 1998.10a
    • /
    • pp.440-444
    • /
    • 1998
  • In this paper, we consider the numerical solution of a Hamilton-Jacobi-Bellman (H-J-B) equation of elliptic type arising from a stochastic control problem. For the numerical solution of the equation, we take an approach involving a contraction mapping and a finite difference approximation. We choose an Itô-type stochastic differential equation as the dynamic system concerned. The numerical method of solution is validated computationally using a constructed test case. A map of optimal controls is obtained through the numerical solution process of the equation. We also show how the method applies by taking a simple example of nonlinear spacecraft control.

  • PDF
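
The abstract above describes solving an elliptic H-J-B equation by combining a finite difference discretization with a contraction-mapping (fixed-point) iteration. Below is a minimal sketch of that idea for a one-dimensional controlled diffusion with discounted quadratic cost; the dynamics, cost weights, grid, and boundary treatment are illustrative assumptions and do not reproduce the paper's test case.

```python
# Sketch: finite-difference discretization + fixed-point (contraction-style)
# sweeps for a 1-D discounted HJB equation
#   rho*V = min_u [ x^2 + r*u^2 + (a*x + b*u)*V' + 0.5*sigma^2*V'' ].
# All numerical values are assumptions for illustration only.
import numpy as np

a, b, sigma = -0.5, 1.0, 0.3       # assumed drift, input gain, noise intensity
rho, r = 0.2, 1.0                  # discount rate, control weight
x = np.linspace(-2.0, 2.0, 201)
h = x[1] - x[0]

V = x**2                           # initial guess; end points act as fixed boundary values
for _ in range(20000):
    Vp = (V[2:] - V[:-2]) / (2 * h)            # central difference for V'
    Vside = (V[2:] + V[:-2]) / h**2            # neighbour part of V''
    u = -b * Vp / (2 * r)                      # minimiser of r*u^2 + b*u*V'
    rhs = (x[1:-1]**2 + r * u**2
           + (a * x[1:-1] + b * u) * Vp
           + 0.5 * sigma**2 * Vside)
    V_new = V.copy()
    V_new[1:-1] = rhs / (rho + sigma**2 / h**2)  # solve each node equation for V_i
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

u_map = -b * (V[2:] - V[:-2]) / (2 * h) / (2 * r)  # map of optimal controls on the grid
```

Each sweep solves the discretized node equation for V_i with the neighbouring values held fixed, which is the contraction-mapping step; a Gauss-Seidel ordering or an upwind difference for the drift term would make the iteration faster and more robust.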

Attitude Maneuver Control of Flexible Spacecraft by Observer-based Tracking Control

  • Hyochoong Bang;Oh, Choong-Seok
    • Journal of Mechanical Science and Technology
    • /
    • v.18 no.1
    • /
    • pp.122-131
    • /
    • 2004
  • A constraint equation-based control law design for large-angle attitude maneuvers of flexible spacecraft is addressed in this paper. The tip displacement of the flexible spacecraft model is prescribed in the form of a constraint equation. The controller is designed so that the constraint equation is satisfied throughout the maneuver. The constraint equation leads to a two-point boundary value problem, which needs backward and forward solution techniques to satisfy the terminal constraints. An observer-based tracking control law takes the constraint equation as the input to a dynamic observer. The observer state is used in conjunction with a state feedback control law so that the actual system follows the observer dynamics. The observer-based tracking control law eventually turns into a stabilized system with the inherent robustness and disturbance rejection of LQR-type control laws.
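
The observer-based tracking structure described above can be summarized schematically: the constraint equation supplies the observer input, and a state feedback law drives the plant to follow the observer. The matrices below are generic placeholders, not the paper's flexible-spacecraft model:

$$
\dot{\hat{x}} = A\hat{x} + Bu + L\,\bigl(z - C\hat{x}\bigr), \qquad u = -K\hat{x},
$$

where $z$ is the signal generated from the prescribed tip-displacement constraint, $L$ is the observer gain, and $K$ is an LQR-type state feedback gain.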

OPTIMAL CONTROL OF THE VISCOUS WEAKLY DISPERSIVE BENJAMIN-BONA-MAHONY EQUATION

  • ZHANG, LEI;LIU, BIN
    • Bulletin of the Korean Mathematical Society
    • /
    • v.52 no.4
    • /
    • pp.1185-1199
    • /
    • 2015
  • This paper is concerned with the optimal control problem for the viscous weakly dispersive Benjamin-Bona-Mahony (BBM) equation. We prove the existence and uniqueness of a weak solution to the equation. The optimal control problem for the viscous weakly dispersive BBM equation is introduced, and the existence of an optimal control for the problem is then proved.
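
For reference, a commonly used form of the viscous, weakly dispersive BBM equation with a distributed control $f$ is shown below; the coefficients and the precise control setting in the paper may differ, so this is only a schematic statement of the problem class:

$$
u_t - \alpha\, u_{xxt} - \nu\, u_{xx} + u_x + u\,u_x = f, \qquad
J(f) = \tfrac{1}{2}\,\lVert u - u_d\rVert^2 + \tfrac{\beta}{2}\,\lVert f\rVert^2,
$$

with small dispersion coefficient $\alpha > 0$, viscosity $\nu > 0$, target state $u_d$, and regularization weight $\beta > 0$.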

Speed control of an induction motor system using a digital control method (유도전동기의 디지탈 속도 제어)

  • 이충환;김상봉;하주식
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 1992.10a
    • /
    • pp.987-992
    • /
    • 1992
  • In recent years, induction motors have been applied to many industrial actuating parts in place of direct-current motors because of their robust construction, low cost, and maintenance-free operation, and because speed control has become practical with the development of power electronics and microprocessor techniques. In this paper, a microprocessor-based digital control approach for speed control of an induction motor system is presented, using a simple modelling equation as the system expression of the induction motor together with self-tuning control and torque feedforward control methods. As the model of the induction motor system, we use a well-known second-order differential equation, derived from a control-theory standpoint, that can describe the motor system connected with the inverter, generator, and load. The effectiveness of the control system composed by the above design concept is illustrated by experimental results in the presence of step reference changes and generator load variations.

  • PDF
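
A minimal digital speed-control loop in the spirit of the abstract is sketched below: a second-order model from torque command to speed, a discrete PI law, and a feedforward term. The model parameters, gains, and load step are assumptions, and the self-tuning estimator used in the paper is not reproduced here.

```python
# Sketch: digital speed control of a second-order motor model with a
# discrete PI controller plus feedforward.  All values are illustrative.
import numpy as np

zeta, wn, K = 0.7, 20.0, 1.0       # assumed damping ratio, natural frequency, gain
Ts = 1e-3                          # sampling period [s]
Kp, Ki = 5.0, 40.0                 # PI gains (assumed)

w, dw, integ = 0.0, 0.0, 0.0       # speed, speed derivative, integrator state
w_ref = 100.0                      # step speed reference [rad/s]
speeds = []
for k in range(4000):
    dist = 20.0 if k >= 2000 else 0.0          # step load disturbance
    err = w_ref - w
    integ += Ki * err * Ts
    u = Kp * err + integ + w_ref / K           # PI + steady-state feedforward
    # second-order plant:  w'' + 2*zeta*wn*w' + wn^2*w = K*wn^2*u - dist
    ddw = K * wn**2 * u - 2 * zeta * wn * dw - wn**2 * w - dist
    dw += ddw * Ts                             # forward-Euler plant update
    w += dw * Ts
    speeds.append(w)
```

The integral action recovers the reference after both the step change and the constant load disturbance; replacing the fixed PI gains with recursively estimated ones is where the self-tuning element described in the paper would enter.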

A New Linearized Equation for Modelling a Servovalve in Hydraulic Control Systems (유압 제어계에서 서보밸브 모델링을 위한 새로운 선형화 방정식의 제안)

  • Kim, Tae-Hyung;Lee, Ill-Young
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.27 no.5
    • /
    • pp.789-797
    • /
    • 2003
  • In the procedure of hydraulic control system design, a linearized approximate equation described by the first-order terms of a Taylor series has been widely used. Such a linearized equation is effective only near the operating point; however, pressure and flow rate in actual hydraulic systems are usually not confined near an operating point. This study suggests a new linearized flow equation for a servovalve as a modified form of the conventional linearized flow equation. Subsequently, a procedure to determine the effective operating point for the new linearized equation is proposed. From evaluations of time responses and frequency responses obtained from simulations of a hydraulic control system, the effectiveness of the new linearized equation and of the procedure to determine the effective operating point is confirmed.
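
For context, the conventional linearization referred to above is the first-order Taylor expansion of the turbulent orifice flow equation about an operating point $(x_{v0}, P_{L0})$ (textbook form; the modified equation proposed in the paper is not reproduced here):

$$
Q_L = C_d\, w\, x_v \sqrt{\frac{P_s - \operatorname{sgn}(x_v)\,P_L}{\rho}}, \qquad
\Delta Q_L \approx K_q\,\Delta x_v - K_c\,\Delta P_L,
$$

with flow gain $K_q = \partial Q_L/\partial x_v$ and flow-pressure coefficient $K_c = -\,\partial Q_L/\partial P_L$, both evaluated at the operating point.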

RICCATI EQUATION IN QUADRATIC OPTIMAL CONTROL PROBLEM OF DAMPED SECOND ORDER SYSTEM

  • Ha, Junhong;Nakagiri, Shin-Ichi
    • Journal of the Korean Mathematical Society
    • /
    • v.50 no.1
    • /
    • pp.173-187
    • /
    • 2013
  • This paper studies the properties of solutions of the Riccati equation arising from the quadratic optimal control problem of the general damped second-order system. Using semigroup theory, we establish the weak differential characterization of the Riccati equation for a general class of second-order distributed systems with arbitrary damping terms.
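
A finite-dimensional analogue of the Riccati equation studied in the paper can be computed directly; the sketch below uses a lumped mass-spring-damper in place of the distributed second-order system treated via semigroup theory, and all matrices are illustrative.

```python
# Sketch: algebraic Riccati equation for LQR control of a damped
# second-order (mass-spring-damper) system.  Values are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

m, c, k = 1.0, 0.4, 2.0                      # assumed mass, damping, stiffness
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])             # state x = [position, velocity]
B = np.array([[0.0],
              [1.0 / m]])
Q = np.diag([10.0, 1.0])                     # state weighting (assumed)
R = np.array([[0.1]])                        # control weighting (assumed)

P = solve_continuous_are(A, B, Q, R)         # A'P + P A - P B R^{-1} B'P + Q = 0
K_gain = np.linalg.solve(R, B.T @ P)         # optimal feedback u = -K_gain @ x
print(P)
print(K_gain)
```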

AN OPTIMAL CONTROL FOR THE WAVE EQUATION WITH A LOCALIZED NONLINEAR DISSIPATION

  • Kang, Yong-Han
    • East Asian mathematical journal
    • /
    • v.22 no.2
    • /
    • pp.171-188
    • /
    • 2006
  • We consider an optimal control problem for the wave equation with a localized nonlinear dissipation. An optimal control is used to bring the state solutions close to a desired profile under a quadratic cost of control. We establish the existence of solutions of the underlying initial boundary value problem and of an optimal control that minimizes the cost functional. We derive an optimality system by formally differentiating the cost functional with respect to the control and evaluating the result at an optimal control.

  • PDF
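
Schematically, the optimal control problem described above pairs a quadratic cost with the damped wave equation; the functional below is only a generic instance of that setup (the paper's state equation, control region, and weights may differ):

$$
J(u) = \frac{1}{2}\int_0^T\!\!\int_\Omega \lvert y - y_d\rvert^2 \,dx\,dt
      + \frac{\beta}{2}\int_0^T\!\!\int_\Omega \lvert u\rvert^2 \,dx\,dt,
$$

and the optimality system follows from formally differentiating $J$ with respect to $u$, expressing the derivative through an adjoint state, and setting it to zero at the optimal control.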

Control of a stochastic nonlinear system by the method of dynamic programming

  • Choi, Wan-Sik
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 1994.10a
    • /
    • pp.156-161
    • /
    • 1994
  • In this paper, we consider an optimal control problem for a nonlinear stochastic system. A dynamic programming approach is employed to formulate the stochastic optimal control problem. As an optimality condition, the dynamic programming equation, the so-called Bellman equation, is obtained; it seldom yields an analytical solution and is very difficult to solve numerically. We obtain the numerical solution of the Bellman equation using an algorithm based on finite difference approximation and the contraction mapping method. Optimal controls are constructed through the solution process of the Bellman equation. We also construct a test case in order to investigate the actual performance of the algorithm.

  • PDF
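
For a controlled Itô diffusion $dx = f(x,u)\,dt + \sigma(x)\,dW$ with running cost $\ell(x,u)$ and discount rate $\rho$, the Bellman (dynamic programming) equation mentioned above takes the stationary HJB form shown here; the paper's exact formulation (horizon, discounting, boundary conditions) may differ:

$$
\rho\,V(x) = \min_{u \in U}\Bigl[\, \ell(x,u) + f(x,u)^{\top}\nabla V(x)
  + \tfrac{1}{2}\operatorname{tr}\bigl(\sigma(x)\sigma(x)^{\top}\nabla^{2}V(x)\bigr) \Bigr],
$$

and the optimal control is recovered pointwise as the minimizing $u$, which is what the finite-difference/contraction-mapping algorithm approximates on a grid (see also the sketch after the first entry above).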

An Experimental Study on the Stochastic Control of a Flexible Structural System (유연한 구조물의 확률론적 제어에 대한 실험적 연구)

  • Kim, Dae-Jung;Heo, Hoon
    • Journal of KSNVE
    • /
    • v.9 no.3
    • /
    • pp.502-508
    • /
    • 1999
  • A newly developed control methodology applied to a dynamic system under random disturbance is investigated and its performance is verified experimentally. A flexible cantilever beam fitted with a piezofilm sensor and a piezoceramic actuator is modelled in the physical domain. The dynamic moment equation for the system is derived via Itô's stochastic differential equation and the F-P-K (Fokker-Planck-Kolmogorov) equation, and the system's characteristics in the stochastic domain are analyzed at the same time. An LQG controller is designed and used in both the physical and stochastic domains. It is shown experimentally that the randomly excited beam on the base is controlled effectively by the LQG controller designed in the physical domain. By comparing the result with that of the LQG controller designed in the stochastic domain, it is shown that the new control method, called the "Heo-stochastic controller design technique", performs better than conventional controllers.

  • PDF
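
The moment equations referred to above follow, for a linear model driven by white noise, directly from the Itô SDE and the F-P-K equation; writing $m = \mathrm{E}[x]$ and $P = \mathrm{Cov}(x)$ for a model $dx = (Ax + Bu)\,dt + G\,dW$ gives the standard first- and second-moment dynamics (the specific "Heo-stochastic" design procedure itself is not reproduced here):

$$
\dot{m} = A\,m + B\,u, \qquad
\dot{P} = A\,P + P\,A^{\top} + G\,W\,G^{\top},
$$

where $W$ is the intensity of the white-noise disturbance; designing the controller on these moment dynamics rather than on the physical states is the sense in which a controller can be designed in the stochastic domain.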

Adaptive Immersion and Invariance Control of the Van der Pol Equation

  • Khovidhungij, Watcharapong;Santhanapipatkul, Ponesit
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.706-709
    • /
    • 2005
  • We study the adaptive stabilization of the Van der Pol equation. A parameter update law is designed by the immersion and invariance method and is used in conjunction with both feedback linearization and backstepping control laws. Simulation results show that the responses obtained in the adaptive case are very similar to those in the known-parameter case, and the parameter estimate converges to the true value.

  • PDF
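
For reference, the controlled Van der Pol equation with unknown parameter $\mu$ and a certainty-equivalence feedback-linearizing law can be written as below; the immersion-and-invariance update law for $\hat{\mu}$ and the backstepping variant used in the paper are not reproduced here:

$$
\ddot{x} = \mu\,(1 - x^{2})\,\dot{x} - x + u, \qquad
u = x - \hat{\mu}\,(1 - x^{2})\,\dot{x} - k_{1}x - k_{2}\dot{x},
$$

so that the closed loop reduces to $\ddot{x} + k_{2}\dot{x} + k_{1}x = (\mu - \hat{\mu})(1 - x^{2})\dot{x}$, which is stabilized once the parameter estimate converges to the true value.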