• Title/Summary/Keyword: Stochastic Dynamic Programming (SDP)

Deriving Robust Reservoir Operation Policy under Changing Climate: Use of Robust Optimization with Stochastic Dynamic Programming

  • Kim, Gi Joo;Kim, Young-Oh
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.171-171
    • /
    • 2020
  • Decision making strategies should consider both adaptiveness and robustness in order to deal with the two main characteristics of climate change: non-stationarity and deep uncertainty. Robust strategies differ from traditional optimal strategies in that they remain satisfactory over a wider range of uncertainty and may therefore be key when confronting climate change. In this study, a new framework named Robust Stochastic Dynamic Programming (R-SDP) is proposed, which couples previously developed robust optimization (RO) into the objective function and constraints of SDP. Two main approaches of RO, feasibility robustness and solution robustness, are considered in the optimization algorithm, and consequently three models are developed and tested: conventional SDP (CSDP), R-SDP-Feasibility (RSDP-F), and R-SDP-Solution (RSDP-S). The developed models were used to derive optimal monthly release rules for a single reservoir, and the derived monthly policies were simulated repeatedly under inflow scenarios with varying means and standard deviations. Simulation results were then evaluated with a wide range of metrics, from reliability, resiliency, and vulnerability to additional robustness measures, and the evaluation results were visualized with advanced visualization tools used in the multi-objective robust decision making (MORDM) framework. The RSDP-F and RSDP-S models yielded more risk-averse, or conservative, results than the CSDP model, and a trade-off between traditional and robustness metrics was discovered.

  • PDF
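The solution-robustness idea in the abstract above, an objective that trades expected performance against sensitivity across inflow scenarios, can be sketched as follows. The variance-penalty form and the weight `lam` are illustrative assumptions in the spirit of Mulvey-style robust optimization, not the paper's actual formulation:

```python
import numpy as np

def robust_stage_value(costs, probs, lam=0.5):
    """Solution-robustness objective: expected cost plus a penalty on
    the spread of cost across inflow scenarios. `lam` trades optimality
    against robustness (hypothetical value)."""
    costs = np.asarray(costs, dtype=float)
    probs = np.asarray(probs, dtype=float)
    expected = probs @ costs
    spread = probs @ (costs - expected) ** 2   # variance across scenarios
    return expected + lam * spread
```

With `lam = 0`, this reduces to the conventional expected-value objective of CSDP; increasing `lam` favors releases whose cost is stable across scenarios, which is consistent with the more conservative behavior the abstract reports.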

A Stochastic Dynamic Programming Model to Derive Monthly Operating Policy of a Multi-Reservoir System (댐 군 월별 운영 정책의 도출을 위한 추계적 동적 계획 모형)

  • Lim, Dong-Gyu;Kim, Jae-Hee;Kim, Sheung-Kown
    • Korean Management Science Review
    • /
    • v.29 no.1
    • /
    • pp.1-14
    • /
    • 2012
  • The goal of multi-reservoir operation planning is to provide an optimal release plan that maximizes the reservoir storage and hydropower generation while minimizing spillage. However, reservoir operation is difficult due to the uncertainty associated with inflows. In order to consider the uncertain inflows in the reservoir operating problem, we present a Stochastic Dynamic Programming (SDP) model based on the Markov Decision Process (MDP). The objective of the model is to maximize the expected value of the system performance, defined as the weighted sum of all expected objective values. With the SDP model, a multi-reservoir operating rule can be derived, and the model also generates the steady-state probabilities of reservoir storage and inflow as output. We applied the model to the Geum-river basin in Korea and could generate a multi-reservoir monthly operating plan that accounts for the uncertainty of inflow.
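The SDP/MDP formulation this abstract describes, with storage and inflow-class states, Markov inflow transitions, and expected-value-maximizing releases, can be sketched as a toy backward recursion. The state sizes, transition matrix, and linear benefit function below are invented for illustration and are not the paper's calibration:

```python
import numpy as np

# Toy SDP for one reservoir: state = (storage level, inflow class).
# Inflow classes follow a Markov chain P; releases are chosen to
# maximize immediate benefit plus the expected value of the next state.

n_s, n_q = 4, 2                    # storage levels, inflow classes
P = np.array([[0.7, 0.3],          # inflow-class transition matrix
              [0.4, 0.6]])
inflow = np.array([1, 2])          # inflow volume per class

def benefit(release):
    return float(release)          # stand-in for hydropower benefit

def sdp_policy(n_months=12):
    V = np.zeros((n_s, n_q))       # terminal value
    policy = np.zeros((n_months, n_s, n_q), dtype=int)
    for t in reversed(range(n_months)):
        V_new = np.full((n_s, n_q), -np.inf)
        for s in range(n_s):
            for q in range(n_q):
                avail = s + inflow[q]
                for r in range(avail + 1):           # feasible releases
                    s_next = min(avail - r, n_s - 1) # excess is spilled
                    val = benefit(r) + P[q] @ V[s_next]
                    if val > V_new[s, q]:
                        V_new[s, q] = val
                        policy[t, s, q] = r
        V = V_new
    return policy, V

policy, V = sdp_policy()
```

The recursion yields, for every month and every (storage, inflow-class) pair, a release decision, which is exactly the form of operating rule the abstract refers to; a multi-reservoir version enlarges the state and decision spaces but follows the same scheme.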

Multiobjective R&D Investment Planning under Uncertainty (불확실한 상황하에서의 다목적 R & D 투자계획수립에 관한 연구-최적화 기법과 계층화 분석과정의 통합적 접근방안을 중심으로-)

  • Lee, Young-Chan;Min, Jae-Hyung
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.20 no.2
    • /
    • pp.39-60
    • /
    • 1995
  • In this paper, an integration of stochastic dynamic programming (SDP), integer goal programming (IGP), and the analytic hierarchy process (AHP) is proposed to handle multiobjective-multicriteria sequential decision making problems under the uncertainty inherent in R & D investment planning. SDP is capable of handling problems that are sequential and stochastic. In the SDP model, the probabilities of the funding levels in any time period are generated using a subjective model which employs functional relationships among interrelated parameters, scenarios of future budget availability, and subjective inputs elicited from a group of decision makers. The SDP model primarily yields an optimal investment planning policy that considers the possibility that actual funding received may be less than anticipated, in which case projects selected under the anticipated budget would be interrupted. IGP is used to handle multiobjective issues such as the trade-off between economic benefit and technology accumulation level. Other managerial concerns related to determining the optimal project portfolio within each stage of the SDP model, including project selection, project scheduling, and annual budget allocation, are also resolved by the IGP. AHP is proposed for generating scenario-based transition probabilities under budgetary uncertainty and for quantifying the environmental risk to be considered.

  • PDF
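The AHP step mentioned in the abstract above derives priority weights from pairwise comparisons. A minimal sketch using Saaty's principal-eigenvector method follows; the 3x3 comparison matrix of budget scenarios is hypothetical, not taken from the paper:

```python
import numpy as np

# AHP priority weights from a pairwise comparison matrix via the
# principal eigenvector (Saaty's method). The matrix below compares
# three hypothetical budget scenarios and is illustrative only.

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)             # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                         # normalized priority weights

# consistency check: CR < 0.1 is conventionally acceptable (RI = 0.58 for n = 3)
ci = (eigvals[k].real - 3) / (3 - 1)
cr = ci / 0.58
```

The normalized eigenvector `w` gives the scenario priorities that could then feed the SDP model's transition probabilities, and the consistency ratio `cr` flags incoherent judgments.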

IMPROVING THE ESP ACCURACY WITH COMBINATION OF PROBABILISTIC FORECASTS

  • Yu, Seung-Oh;Kim, Young-Oh
    • Water Engineering Research
    • /
    • v.5 no.2
    • /
    • pp.101-109
    • /
    • 2004
  • Aggregating information by combining forecasts from two or more forecasting methods is an alternative to using forecasts from just a single method to improve forecast accuracy. This paper describes the development and use of a monthly inflow forecast model based on an optimal linear combination (OLC) of naive, persistence, and Ensemble Streamflow Prediction (ESP) forecasts. Using the cross-validation technique, the OLC model made 1-month-ahead probabilistic forecasts of the Chungju multi-purpose dam inflows for 15 years. For most of the verification months, the skill of the OLC forecast was superior to that of the individual forecast techniques. This study therefore demonstrates that OLC can improve the accuracy of the ESP forecast, especially during the dry season. This study also examined the value of the OLC forecasts in reservoir operations: Stochastic Dynamic Programming (SDP) derived the optimal operating policy for the Chungju multi-purpose dam, and the derived policy was simulated using the 15-year observed inflows. The simulation results showed that the SDP model that updated its probabilities with the new OLC forecasts provided more efficient operation decisions than the conventional SDP model.

  • PDF
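An optimal linear combination of forecasts, as described above, amounts to choosing weights that minimize squared forecast error subject to the weights summing to one. A minimal sketch follows; the KKT-system solution method is a standard way to solve this constrained least-squares problem and is an assumption here, not a detail confirmed by the paper:

```python
import numpy as np

def olc_weights(forecasts, obs):
    """Optimal linear combination of forecasts.

    forecasts: array of shape (n_obs, n_models), one column per method
    obs:       observed values, shape (n_obs,)
    Returns weights summing to 1 that minimize squared error."""
    F = np.asarray(forecasts, dtype=float)
    y = np.asarray(obs, dtype=float)
    n = F.shape[1]
    # Solve min ||F w - y||^2 subject to sum(w) = 1 via the KKT system:
    # [F'F  1] [w ]   [F'y]
    # [1'   0] [mu] = [ 1 ]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = F.T @ F
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(F.T @ y, 1.0)
    return np.linalg.solve(A, b)[:n]
```

If one member forecast tracks the observations much better than the others, its weight approaches one, so the combination can never be worse in-sample than the best individual method, which is the intuition behind the accuracy gains the abstract reports.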

Deriving a Reservoir Operating Rule Using ENSO Information (ENSO 정보를 이용한 저수지 운영룰의 산출)

  • Kim, Yeong-O
    • Journal of Korea Water Resources Association
    • /
    • v.33 no.5
    • /
    • pp.593-601
    • /
    • 2000
  • Analyzing monthly inflows of the Chung-Ju Dam associated with the El Nino Southern Oscillation (ENSO), Kim and Lee (2000) reported that the fall and winter inflows in El Nino years tended to be low, while those in La Nina years tended to be high. This study proposes a methodology for employing such a teleconnection between ENSO and inflow in reservoir operations. The ENSO information is used as a hydrologic state variable in stochastic dynamic programming (SDP) to derive a monthly optimal rule for operating the Chung-Ju Dam. An alternative operating rule is also derived with SDP using no hydrologic state variable. Both SDP operating rules are simulated and compared to examine the value of using the ENSO information in operating the Chung-Ju Dam. The simulation results show that the operating rule using the ENSO information increases energy generation and the reliability of water supply, and also reduces spill.

  • PDF

The Minimum-cost Network Selection Scheme to Guarantee the Periodic Transmission Opportunity in the Multi-band Maritime Communication System (멀티밴드 해양통신망에서 전송주기를 보장하는 최소 비용의 망 선택 기법)

  • Cho, Ku-Min;Yun, Chang-Ho;Kang, Chung-G
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.2A
    • /
    • pp.139-148
    • /
    • 2011
  • This paper presents a minimum-cost network selection scheme that determines the transmission instances in a multi-band maritime communication system, so that shipment-related real-time information can be transmitted within the maximum allowed period. The transmission instances and the corresponding network selection process are modeled as a Markov Decision Process (MDP), with the channel modeled as a two-state Markov chain, which can be solved by stochastic dynamic programming. The derived minimum-cost network selection rule can reduce the network cost significantly compared with a straightforward scheme using periodic transmission.
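The MDP outlined in this abstract, with a two-state Markov channel, a transmission deadline, and a transmit-or-defer decision, can be sketched with discounted value iteration. The costs, transition matrix, deadline, and discount factor below are invented for illustration and do not come from the paper:

```python
import numpy as np

# Toy MDP for minimum-cost network selection: the state is (channel
# state, slots since the last transmission); the action is either
# "transmit now" or "defer", with a hard deadline of T_MAX slots.

T_MAX = 4
GAMMA = 0.9                       # discount factor (assumed)
P = np.array([[0.8, 0.2],         # good -> (good, bad)
              [0.3, 0.7]])        # bad  -> (good, bad)
tx_cost = np.array([1.0, 5.0])    # transmit cost in good / bad channel

def solve(n_iter=300):
    """Discounted value iteration over (channel, slots-since-tx)."""
    V = np.zeros((2, T_MAX + 1))
    for _ in range(n_iter):
        V_new = np.zeros_like(V)
        for c in range(2):
            for t in range(T_MAX + 1):
                # transmitting resets the slot counter to 1
                transmit = tx_cost[c] + GAMMA * (P[c] @ V[:, 1])
                if t < T_MAX:                  # deferring is allowed
                    wait = GAMMA * (P[c] @ V[:, t + 1])
                    V_new[c, t] = min(transmit, wait)
                else:                          # deadline: must transmit
                    V_new[c, t] = transmit
        V = V_new
    return V

V = solve()
```

The resulting policy defers transmission in a bad channel while the deadline allows, hoping the channel recovers, which is the mechanism by which such a rule undercuts a fixed periodic-transmission scheme.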

A Study on Optimal Economic Operation of Hydro-reservoir System by Stochastic Dynamic Programming with Weekly Interval (주간 단위로한 확률론적 년간 최적 저수지 경제 운용에 관한 연구)

  • Song, Gil-Yong;Kim, Yeong-Tae;Han, Byeong-Yul
    • Proceedings of the KIEE Conference
    • /
    • 1987.11a
    • /
    • pp.106-108
    • /
    • 1987
  • Until now, inflow has been handled as an independent log-normal random variable in planning the long-term operation of a multi-reservoir hydrothermal electric power generation system. This paper presents a detailed study on constructing a rule curve by handling inflows at a weekly time interval. The hydro system model consists of a set of reservoirs and ponds; the thermal units are modeled by one equivalent thermal unit. The objective is to minimize the total cost, i.e., the sum of the fuel costs of the equivalent thermal unit over all time intervals. For optimization, a stochastic dynamic programming (SDP) algorithm using successive approximations is used.

  • PDF

Development of Robust-SDP for improving dam operation to cope with non-stationarity of climate change (기후변화의 비정상성 대비 댐 운영 개선을 위한 Robust-SDP의 개발)

  • Yoon, Hae Na;Seo, Seung Beom;Kim, Young-Oh
    • Journal of Korea Water Resources Association
    • /
    • v.51 no.spc
    • /
    • pp.1135-1148
    • /
    • 2018
  • Previous studies on reservoir operation have assumed that the climate in the future would be similar to that in the past. In the presence of climate non-stationarity, however, Robust Optimization (RO), which finds feasible solutions under broader uncertainty, is necessary. RO improves on existing optimization methods by adding a robust term to the objective function that controls the uncertainty arising from input data instability. This study proposed Robust-SDP, which combines Stochastic Dynamic Programming (SDP) and RO to estimate dam operation rules that cope with climate non-stationarity. Future inflow series reflecting climate non-stationarity were synthetically generated. We then evaluated the capacity of the dam operation rules obtained from the past inflow series based on six evaluation indicators and two decision support schemes. Although Robust-SDP was successful in reducing the incidence of extreme water scarcity events under climate non-stationarity, there was a trade-off between the number of extreme water scarcity events and the water scarcity ratio. Thus, it is proposed that decision-makers choose their optimal rules in reference to the evaluation results and the decision support illustrations.

Incorporating Climate Change Scenarios into Water Resources Management (기후 변화를 고려한 수자원 관리 기법)

  • Kim, Yeong-O
    • Journal of Korea Water Resources Association
    • /
    • v.31 no.4
    • /
    • pp.407-413
    • /
    • 1998
  • This study reviewed the recent studies on the climate change impact on water resource systems and applied one of the techniques to a real reservoir system, the Skagit hydropower system in the U.S.A. The technique assumed that climate change results in a ±5% change in the monthly average and/or standard deviation of the observed inflows for the Skagit system. For each case of altered average and standard deviation, an optimal operating policy was derived using an SDP (Stochastic Dynamic Programming) model and compared with the operating policy for the non-climate-change case. The results showed that the operating policy of the Skagit system is more sensitive to changes in the streamflow average than to changes in the streamflow standard deviation. The derived operating policies were also simulated using synthetic streamflow scenarios, and their average annual gains were compared as a performance index. To choose the best operating policy among the derived policies, a Bayesian decision strategy was also presented with an example. Keywords: climate change, reservoir operating policy, stochastic dynamic programming, Bayesian decision theory.

  • PDF
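The Bayesian decision strategy mentioned at the end of this abstract, choosing among the derived operating policies, can be sketched as picking the policy with the highest prior-weighted expected gain. The gain table and scenario priors below are invented numbers, not the paper's results:

```python
import numpy as np

# Bayesian choice among derived operating policies: pick the policy
# whose prior-weighted expected annual gain is largest.

gains = np.array([[100.0, 90.0,  80.0],   # policy A under 3 climate scenarios
                  [ 95.0, 95.0,  95.0],   # policy B
                  [ 80.0, 90.0, 100.0]])  # policy C
prior = np.array([0.2, 0.5, 0.3])         # prior scenario probabilities

expected = gains @ prior                  # expected gain of each policy
best = int(np.argmax(expected))           # index of the preferred policy
```

Updating `prior` as climate evidence accumulates and re-running the comparison is the essence of the Bayesian strategy: the preferred policy can shift as a scenario becomes more credible.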