• Title/Summary/Keyword: stochastic dynamic programming

49 search results

The Minimum-cost Network Selection Scheme to Guarantee the Periodic Transmission Opportunity in the Multi-band Maritime Communication System (멀티밴드 해양통신망에서 전송주기를 보장하는 최소 비용의 망 선택 기법)

  • Cho, Ku-Min;Yun, Chang-Ho;Kang, Chung-G
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.2A / pp.139-148 / 2011
  • This paper presents a minimum-cost network selection scheme that determines the transmission instances in a multi-band maritime communication system, so that shipment-related real-time information can be transmitted within the maximum allowed period. The transmission instances and the corresponding network selection process are modeled as a Markov Decision Process (MDP), with the channel modeled as a 2-state Markov chain, which can be solved by stochastic dynamic programming. The resulting minimum-cost network selection rule reduces the network cost significantly compared with a straightforward scheme that transmits periodically.
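
The backward-induction structure described in this abstract can be sketched as a small finite-horizon MDP. Everything below is hypothetical (costs, transition probabilities, the deadline D, the horizon H) and only illustrates solving such a model by stochastic dynamic programming, not the paper's actual parameters.

```python
# Hedged sketch of a finite-horizon MDP solved by stochastic dynamic
# programming (backward induction). All numbers are hypothetical: a 2-state
# Markov channel gates a cheap band, an expensive band is always available,
# and a message must be (re)sent at most every D slots.
P = {"good": {"good": 0.8, "bad": 0.2},   # channel transition probabilities
     "bad":  {"good": 0.4, "bad": 0.6}}
COST = {"cheap": 1.0, "expensive": 5.0}   # per-transmission network cost
D = 3                                     # maximum allowed transmission period
H = 20                                    # planning horizon (slots)

def solve():
    # V[t][(s, d)] = minimum expected cost from slot t onward, where s is the
    # channel state and d counts slots since the last transmission.
    V = [{} for _ in range(H + 1)]
    policy = [{} for _ in range(H)]
    for s in ("good", "bad"):
        for d in range(D + 1):
            V[H][(s, d)] = 0.0            # no cost beyond the horizon
    for t in reversed(range(H)):
        for s in ("good", "bad"):
            for d in range(D + 1):
                def exp_next(d2):
                    return sum(P[s][s2] * V[t + 1][(s2, d2)]
                               for s2 in ("good", "bad"))
                acts = {"expensive": COST["expensive"] + exp_next(0)}
                if s == "good":           # cheap band usable only on a good channel
                    acts["cheap"] = COST["cheap"] + exp_next(0)
                if d < D:                 # waiting allowed only before the deadline
                    acts["wait"] = exp_next(d + 1)
                best = min(acts, key=acts.get)
                V[t][(s, d)] = acts[best]
                policy[t][(s, d)] = best
    return V, policy

V, policy = solve()
```

At the deadline the rule must transmit on whichever band is available, while before the deadline it can wait for a good channel state; this is the cost saving over a rigidly periodic transmission.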

Deriving Robust Reservoir Operation Policy under Changing Climate: Use of Robust Optimization with Stochastic Dynamic Programming

  • Kim, Gi Joo;Kim, Young-Oh
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.171-171 / 2020
  • Decision making strategies should consider both adaptiveness and robustness in order to deal with two main characteristics of climate change: non-stationarity and deep uncertainty. In particular, robust strategies differ from traditional optimal strategies in that they remain satisfactory over a wider range of uncertainty and may therefore be key when confronting climate change. In this study, a new framework named Robust Stochastic Dynamic Programming (R-SDP) is proposed, which couples previously developed robust optimization (RO) into the objective function and constraints of SDP. Two main approaches of RO, feasibility robustness and solution robustness, are considered in the optimization algorithm, and consequently three models are developed and tested: conventional SDP (CSDP), R-SDP-Feasibility (RSDP-F), and R-SDP-Solution (RSDP-S). The developed models were used to derive optimal monthly release rules for a single reservoir, and multiple simulations of the derived monthly policy were carried out under inflow scenarios with varying means and standard deviations. Simulation results were then evaluated with a wide range of metrics, from reliability, resiliency, and vulnerability to additional robustness measures, and finally visualized with the advanced visualization tools used in the multi-objective robust decision making (MORDM) framework. As a result, the RSDP-F and RSDP-S models yielded more risk-averse, or conservative, results than the CSDP model, and a trade-off relationship between traditional and robustness metrics was discovered.
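
One simple way to robustify an SDP backup is to take the worst-case expected cost over a small set of candidate inflow distributions. The sketch below is only that illustrative min-max variant with hypothetical numbers, not the paper's actual R-SDP formulation (which couples Mulvey-style feasibility and solution robustness into the objective and constraints); the conservatism it produces is analogous.

```python
# Illustrative robust SDP backup for a toy reservoir (all numbers hypothetical;
# worst case over an uncertainty set of inflow distributions, not the R-SDP
# formulation of the paper). Shortages against demand cost 1 per unit.
GAMMA = 0.9
SMAX = 4                                   # storage levels 0..SMAX
DEMAND = 2
NOMINAL = {0: 0.3, 1: 0.4, 2: 0.3}         # nominal inflow distribution
DRY = {0: 0.6, 1: 0.3, 2: 0.1}             # pessimistic (dry) distribution

def backup(V, dists):
    """One Bellman backup; the worst case is taken over the given distributions."""
    newV, policy = [0.0] * (SMAX + 1), [0] * (SMAX + 1)
    for s in range(SMAX + 1):
        best = None
        for r in range(s + 1):             # release bounded by current storage
            shortage = max(0, DEMAND - r)
            worst = max(
                sum(p * (shortage + GAMMA * V[min(s - r + q, SMAX)])
                    for q, p in dist.items())
                for dist in dists)
            if best is None or worst < best:
                best, policy[s] = worst, r
        newV[s] = best
    return newV, policy

def solve(dists, iters=300):
    V = [0.0] * (SMAX + 1)
    for _ in range(iters):
        V, policy = backup(V, dists)
    return V, policy

V_nom, _ = solve([NOMINAL])
V_rob, _ = solve([NOMINAL, DRY])
```

Because the robust backup maximizes over a set containing the nominal distribution, the robust value function dominates the nominal one at every state, mirroring the more risk-averse behavior reported for RSDP-F and RSDP-S.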


OPTIMIZATION MODEL AND ALGORITHM OF THE TRAJECTORY OF HORIZONTAL WELL WITH PERTURBATION

  • LI AN;FENG ENMIN
    • Journal of Applied Mathematics & Informatics / v.20 no.1_2 / pp.391-399 / 2006
  • In order to solve the optimization problem of designing the trajectory of a three-dimensional horizontal well, we establish a multi-phase, nonlinear, stochastic dynamic system for the trajectory of the horizontal well, taking the precision of hitting the target and the total length of the trajectory as the performance index. By integrating the state equation, this model can be transformed into a nonlinear stochastic programming problem. We discuss the necessary conditions under which a local solution exists and depends continuously on the parameter (perturbation). Based on these properties, we propose a revised Hooke-Jeeves algorithm and develop corresponding software to calculate the local solution of the nonlinear stochastic programming problem and the expectation of the performance index. The numerical results illustrate the validity of the proposed model and algorithm.
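
The Hooke-Jeeves method this abstract relies on is a derivative-free pattern search. A generic sketch (applied to a hypothetical quadratic test objective, not the well-trajectory model, and without the paper's revisions) looks like:

```python
# Generic Hooke-Jeeves pattern search: exploratory coordinate moves plus
# pattern moves, with the step length halved whenever exploration fails to
# improve the current base point.
def explore(f, x, fx, step):
    y, fy = list(x), fx
    for i in range(len(y)):
        for d in (step, -step):           # try +step, then -step, per coordinate
            trial = list(y)
            trial[i] += d
            ft = f(trial)
            if ft < fy:
                y, fy = trial, ft
                break
    return y, fy

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    x, fx = list(x0), f(list(x0))
    while step > tol:
        y, fy = explore(f, x, fx, step)
        if fy < fx:                       # exploration succeeded: pattern moves
            while True:
                xp = [2 * yi - xi for xi, yi in zip(x, y)]
                z, fz = explore(f, xp, f(xp), step)
                if fz < fy:
                    x, fx, y, fy = y, fy, z, fz
                else:
                    break
            x, fx = y, fy
        else:                             # exploration failed: shrink the step
            step *= shrink
    return x, fx

# Hypothetical smooth test objective with minimum at (1, -0.5).
xbest, fbest = hooke_jeeves(lambda v: (v[0] - 1) ** 2 + 2 * (v[1] + 0.5) ** 2,
                            [0.0, 0.0])
```

In the paper's setting, the objective evaluated at each trial point would be the expectation of the performance index under the stochastic trajectory model rather than this toy quadratic.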

Numerical Solution of an Elliptic Type H-J-B Equation Arising from Stochastic Optimal Control Problem (확률 최적 제어문제에서 발생되는 Elliptic Type H-J-B 방정식의 수치해)

  • Wan Sik Choi
    • Journal of Institute of Control, Robotics and Systems / v.4 no.6 / pp.703-706 / 1998
  • In this paper, a numerical solution is obtained for the elliptic-type H-J-B (Hamilton-Jacobi-Bellman) equation arising from a stochastic optimal control problem. A contraction mapping and the finite difference method are used to compute the solution, with the system taken as a stochastic differential equation of Itô type. The numerical solution is verified against a mathematical test case, and the optimal control map is obtained simultaneously while solving the equation.
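
A contraction-mapping/finite-difference treatment of a discounted HJB equation can be sketched via a Markov-chain approximation on a grid. Everything below is a hypothetical stand-in for the paper's test case: the dynamics dx = u dt + σ dW, the running cost x² + u², the grid, and the discount rate ρ.

```python
import math

# Sketch: value iteration (a sup-norm contraction) on a finite-difference /
# Markov-chain approximation of the discounted HJB equation
#   rho*V = min_u [ x^2 + u^2 + u*V'(x) + (sigma^2/2)*V''(x) ]
# for dx = u dt + sigma dW on x in [-2, 2]. All parameters are hypothetical.
RHO, SIGMA, UMAX = 0.5, 1.0, 1.0
N, H = 41, 0.1                            # grid points and spacing
Q = SIGMA ** 2 + H * UMAX                 # normalizer (Kushner-Dupuis scheme)
DT = H ** 2 / Q                           # interpolation interval
GAMMA = math.exp(-RHO * DT)               # contraction modulus < 1

def backup(V):
    newV = V[:]
    for i in range(N):
        x = -2.0 + i * H
        up, dn = min(i + 1, N - 1), max(i - 1, 0)   # reflecting boundaries
        best = float("inf")
        for u in (-UMAX, 0.0, UMAX):
            pu = (SIGMA ** 2 / 2 + H * max(u, 0.0)) / Q   # upwind probabilities
            pd = (SIGMA ** 2 / 2 + H * max(-u, 0.0)) / Q
            ps = 1.0 - pu - pd
            val = (x * x + u * u) * DT + GAMMA * (
                pu * V[up] + pd * V[dn] + ps * V[i])
            best = min(best, val)
        newV[i] = best
    return newV

V = [0.0] * N
diffs = []
for _ in range(6000):
    newV = backup(V)
    diffs.append(max(abs(a - b) for a, b in zip(newV, V)))
    V = newV
    if diffs[-1] < 1e-8:
        break
```

Because the discretized Bellman operator is a contraction with modulus e^(-ρΔt), the sup-norm change between sweeps shrinks geometrically, which is the fixed-point argument behind the contraction-mapping approach; the optimal control map can be read off from the minimizing u at each grid point.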


A Stochastic LP Model for a Multi-stage Production System with Random Yields (수율을 고려한 다단계 생산라인의 Stochastic LP 모형)

  • 최인찬;박광태
    • Journal of the Korean Operations Research and Management Science Society / v.22 no.1 / pp.51-58 / 1997
  • In this paper, we propose a stochastic LP model for determining an optimal input quantity in a single-product multi-stage production system with random yields. Due to the random yields in our model, each stage of the production system can result in defective items, which can be re-processed or scrapped at certain costs. We assume that the random yield at each stage follows an independent discrete empirical distribution. Compared to dynamic programming models that prevail in the literature, our model can easily handle problems of larger sizes.
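
The paper formulates this as a stochastic LP; as a brute-force illustration of the same expected-cost trade-off (all numbers hypothetical, and scrap/rework costs collapsed into a single shortage penalty), one can enumerate the discrete yield outcomes of each stage and grid-search the input quantity:

```python
import itertools

# Brute-force sketch of the expected-cost objective a stochastic LP would
# optimize: a single product passes through two stages, each with an
# independent discrete empirical yield distribution; unmet demand is
# penalized. All numbers are hypothetical, not taken from the paper.
YIELDS = [  # per-stage yield distributions: (good fraction, probability)
    [(0.8, 0.5), (1.0, 0.5)],
    [(0.8, 0.5), (1.0, 0.5)],
]
DEMAND = 100.0
MATERIAL_COST = 1.0      # per unit of input released into the line
SHORTAGE_COST = 50.0     # per unit of unmet demand

def expected_cost(q):
    total = 0.0
    for combo in itertools.product(*YIELDS):   # enumerate yield scenarios
        prob, out = 1.0, q
        for frac, p in combo:
            out *= frac                        # good units surviving the stage
            prob *= p
        total += prob * (MATERIAL_COST * q
                         + SHORTAGE_COST * max(0.0, DEMAND - out))
    return total

candidates = range(100, 161, 5)
best_q = min(candidates, key=expected_cost)
```

An LP formulation scales to many stages and scenarios where this enumeration would not, which is the advantage over dynamic programming models that the abstract points to; here the grid search merely makes the objective visible.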


Optimal Control of Stochastic Bilinear Systems (확률적 이선형시스템의 최적제)

  • Hwang, Chun-Sik
    • The Transactions of the Korean Institute of Electrical Engineers / v.31 no.7 / pp.18-24 / 1982
  • We derive an optimal control for stochastic bilinear systems. To that end, we first formulate the stochastic bilinear system and estimate its state when the state is not directly observable. The optimal control problem for this system is reviewed in light of three optimization techniques. An optimal control is derived using the Hamilton-Jacobi-Bellman equation via the dynamic programming method; it consists of a combination of linear and quadratic forms in the state. This negative feedback control also makes the system stable as long as the value function is chosen to be a Lyapunov function. Several other properties of this control are discussed.
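
The stabilizing behavior of such a feedback can be illustrated with a noise-free sketch: hypothetical coefficients, a scalar bilinear system, and a control combining linear and quadratic terms in the state, none of which are taken from the paper's derivation.

```python
# Noise-free sketch (hypothetical coefficients, not the paper's derivation):
# scalar bilinear system  dx/dt = a*x + (b + x)*u  under the negative
# feedback  u = -(k1*x + k2*x^2), a combination of linear and quadratic
# terms in the state, integrated with forward Euler.
a, b = 0.5, 1.0
k1, k2 = 2.0, 1.0
dt, steps = 0.01, 2000

x = 1.0
for _ in range(steps):
    u = -(k1 * x + k2 * x ** 2)      # linear + quadratic state feedback
    x += dt * (a * x + (b + x) * u)  # Euler step of the bilinear dynamics

# Closed loop: dx/dt = -1.5*x - 3*x^2 - x^3, which drives x toward 0.
```

With V(x) = x² as a Lyapunov function, the closed-loop derivative -1.5x - 3x² - x³ gives dV/dt < 0 for x > 0, matching the abstract's stability argument.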


A Study on Optimal Economic Operation of Hydro-reservoir System by Stochastic Dynamic Programming with Weekly Interval (주간 단위로한 확률론적 년간 최적 저수지 경제 운용에 관한 연구)

  • Song, Gil-Yong;Kim, Yeong-Tae;Han, Byeong-Yul
    • Proceedings of the KIEE Conference / 1987.11a / pp.106-108 / 1987
  • Until now, inflow has been handled as an independent log-normal random variable in the problem of planning the long-term operation of a multi-reservoir hydrothermal electric power generation system. This paper presents a detailed study of constructing a rule curve by applying a weekly time interval to handle inflows. The hydro system model consists of a set of reservoirs and ponds, and the thermal units are modeled by one equivalent thermal unit. The objective is to minimize the total cost, namely the summation of the fuel cost of the equivalent thermal unit over the time intervals. For optimization, a stochastic dynamic programming (SDP) algorithm using successive approximations is used.
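
Successive approximations for a discounted SDP amount to repeating the Bellman backup until the value function stops changing. Here is a toy stationary sketch (discrete storage, a discrete inflow distribution, one equivalent thermal unit filling any demand shortfall); the numbers are hypothetical, and the paper's weekly, multi-reservoir detail is collapsed into a single reservoir with one period type.

```python
# Toy reservoir SDP solved by successive approximations (all numbers
# hypothetical): hydro release offsets demand, and an equivalent thermal
# unit covers the remainder at a fuel cost.
GAMMA = 0.95
SMAX = 5                                  # discrete storage levels 0..SMAX
DEMAND = 2                                # energy demand per interval
INFLOW = {0: 0.2, 1: 0.5, 2: 0.3}         # discrete inflow distribution
FUEL = 1.0                                # fuel cost per unit of thermal output

def backup(V):
    """One successive-approximation sweep of the Bellman operator."""
    newV = [0.0] * (SMAX + 1)
    for s in range(SMAX + 1):
        best = float("inf")
        for r in range(s + 1):            # hydro release bounded by storage
            thermal = max(0, DEMAND - r)  # equivalent thermal unit fills the gap
            val = sum(p * (FUEL * thermal + GAMMA * V[min(s - r + q, SMAX)])
                      for q, p in INFLOW.items())
            best = min(best, val)
        newV[s] = best
    return newV

V, diffs = [0.0] * (SMAX + 1), []
for _ in range(500):
    newV = backup(V)
    diffs.append(max(abs(a - b) for a, b in zip(newV, V)))
    V = newV
    if diffs[-1] < 1e-9:
        break
```

Since the backup is a γ-contraction, the sup-norm change per sweep decays geometrically, which is what makes successive approximations a practical solution method; the converged argmin releases per storage level are the rule curve analogue in this toy.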


Incorporating Climate Change Scenarios into Water Resources Management (기후 변화를 고려한 수자원 관리 기법)

  • Kim, Yeong-O
    • Journal of Korea Water Resources Association / v.31 no.4 / pp.407-413 / 1998
  • This study reviewed recent studies of the climate change impact on water resource systems and applied one of the techniques to a real reservoir system, the Skagit hydropower system in the U.S.A. The technique assumed that climate change results in a ±5% change in the monthly average and/or standard deviation of the observed inflows for the Skagit system. For each case of the altered average and standard deviation, an optimal operating policy was derived using an SDP (Stochastic Dynamic Programming) model and compared with the operating policy for the non-climate-change case. The results showed that the operating policy of the Skagit system is more sensitive to a change in the streamflow average than to one in the streamflow standard deviation. The derived operating policies were also simulated using synthetic streamflow scenarios, and their average annual gains were compared as a performance index. To choose the best operating policy among the derived policies, a Bayesian decision strategy was also presented with an example. Keywords: climate change, reservoir operating policy, stochastic dynamic programming, Bayesian decision theory.


Application of Recent Approximate Dynamic Programming Methods for Navigation Problems (주행문제를 위한 최신 근사적 동적계획법의 적용)

  • Min, Dae-Hong;Jung, Keun-Woo;Kwon, Ki-Young;Park, Joo-Young
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.6 / pp.737-742 / 2011
  • Navigation problems include the task of determining the control input under various constraints for systems such as mobile robots subject to uncertain disturbances. Such tasks can be modeled as constrained stochastic control problems. To solve these control problems, one may try to utilize dynamic programming (DP) methods, which rely on the concept of the optimal value function. However, in most real-world problems this approach runs into many difficulties: for example, the exact system model may not be known, computing the optimal control policy may be intractable, and a huge amount of computing resources may be needed. As a strategy to overcome these difficulties, one can utilize ADP (approximate dynamic programming) methods, which find suboptimal control policies by resorting to approximate value functions. In this paper, we apply recently proposed ADP methods to a class of navigation problems with complex constraints and examine the resulting performance characteristics.
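
The ADP idea — replacing the exact value function with a parametric approximation — can be sketched with fitted value iteration on a toy one-dimensional navigation task. The feature choice, dynamics, and parameters below are hypothetical and much simpler than the methods the paper applies; they only show the approximate-value-function mechanism.

```python
# Sketch of approximate dynamic programming (fitted value iteration) for a
# toy navigation task: positions 0..10, goal at 0, actions move +/-1 at unit
# cost. The cost-to-go is approximated as V(s) ~ w0 + w1*s and refit by
# simple linear regression after each Bellman backup. All details hypothetical.
GAMMA = 0.95
STATES = list(range(11))

def v_hat(w, s):
    return w[0] + w[1] * s

def bellman_target(w, s):
    if s == 0:
        return 0.0                         # goal is absorbing and cost-free
    return min(1.0 + GAMMA * v_hat(w, max(0, min(10, s + a)))
               for a in (-1, 1))

def fit(xs, ys):
    """Ordinary least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    denom = sum((x - mx) ** 2 for x in xs)
    w1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / denom
    return (my - w1 * mx, w1)

w = (0.0, 0.0)
for _ in range(100):                       # fitted value iteration sweeps
    targets = [bellman_target(w, s) for s in STATES]
    w = fit(STATES, targets)

def greedy_action(w, s):
    return min((-1, 1),
               key=lambda a: 1.0 + GAMMA * v_hat(w, max(0, min(10, s + a))))
```

The fitted slope is positive, so the greedy policy with respect to the approximate value function moves toward the goal — a suboptimal-but-usable policy obtained without tabulating the exact optimal value function, which is the point of ADP.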