• Title/Summary/Keyword: Markov decision-making


A Study on Techniques for Determining the Number of Duplicate Copies in Academic Libraries (대학도서관의 복본수 결정기법에 관한 연구)

  • 양재한
    • Journal of Korean Library and Information Science Society
    • /
    • v.13
    • /
    • pp.131-166
    • /
    • 1986
  • This study reviews methods for deciding the number of duplicate copies in the academic library. The thesis surveys queueing and Markov models, statistical models, and simulation models. The contents of the study can be summarized as follows: 1) The queueing and Markov model is one duplicate-copy decision-making method, suggested by Leimkuler, Morse, and Chen, among others. Leimkuler proposed a growth model, a storage model, and an availability model using systems analysis, with queueing theory applied to his availability model. Morse and Chen applied queueing and Markov models in their theories, using queueing theory to measure satisfaction levels and a Markov model to predict user demand. 2) Another duplicate-copy decision-making method is the statistical model, suggested by Grant and by Sohn, Jung Pyo. Grant suggested a model with a formula to satisfy user demand at a level above 95%; Sohn, Jung Pyo suggested a model with two formulas, one for duplicate-copy decision-making using the standard deviation and the other for predicting duplicate copies using the coefficient of variation. 3) The simulation model is a third duplicate-copy decision-making method, suggested by Buckland and Arms. Buckland's simulation model considered the loan period and the number of duplicate copies simultaneously, while Arms suggested a computer-simulation model. These methods can help improve the efficiency of collection development and address problems (space, staff, budget, etc.) of Korean academic libraries today.
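
As a rough illustration of the queueing idea surveyed in this abstract (the Erlang loss formula, the demand and loan rates, and the 95% target below are placeholder assumptions for this sketch, not figures from the thesis), the number of copies needed to reach a target availability can be computed by treating each title as a loss system in which a request finding all copies on loan goes unsatisfied:

```python
from math import factorial

def erlang_b(c, a):
    """Blocking probability of an M/M/c/c loss system with offered load a = lambda/mu."""
    num = a**c / factorial(c)
    den = sum(a**k / factorial(k) for k in range(c + 1))
    return num / den

def copies_needed(demand_rate, loan_rate, target=0.95):
    """Smallest number of copies c whose availability 1 - B(c, a) meets the target."""
    a = demand_rate / loan_rate   # offered load: requests per mean loan period
    c = 1
    while 1 - erlang_b(c, a) < target:
        c += 1
    return c

# e.g. 2 requests/week against an average loan of 1 week (offered load a = 2)
print(copies_needed(2.0, 1.0))
```

In this toy setting, raising the satisfaction target or slowing the loan return rate directly drives up the copy count, which is the trade-off the surveyed models quantify.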


Markov Decision Process-based Potential Field Technique for UAV Planning

  • MOON, CHAEHWAN;AHN, JAEMYUNG
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.25 no.4
    • /
    • pp.149-161
    • /
    • 2021
  • This study proposes a methodology for mission/path planning of an unmanned aerial vehicle (UAV) using an artificial potential field with the Markov Decision Process (MDP). The planning problem is formulated as an MDP. A low-resolution solution of the MDP is obtained and used to define an artificial potential field, which provides a continuous UAV mission plan. A numerical case study is conducted to demonstrate the validity of the proposed technique.
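
A minimal sketch of the idea described here, assuming a simple grid discretization (the grid size, rewards, and obstacle placement are illustrative, not the paper's case study): solve a low-resolution MDP by value iteration, then treat the negated value function as an artificial potential field that the UAV descends toward the goal.

```python
import numpy as np

def grid_mdp_values(n=10, goal=(9, 9), obstacles=(), gamma=0.95, eps=1e-6):
    """Value iteration on an n x n grid MDP: four deterministic moves,
    -1 step cost, absorbing goal, large penalty for entering an obstacle."""
    V = np.zeros((n, n))
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while True:
        V_new = V.copy()
        for i in range(n):
            for j in range(n):
                if (i, j) == goal:
                    continue
                best = -np.inf
                for di, dj in moves:
                    ni = min(max(i + di, 0), n - 1)   # clamp to the grid
                    nj = min(max(j + dj, 0), n - 1)
                    r = -100.0 if (ni, nj) in obstacles else -1.0
                    best = max(best, r + gamma * V[ni, nj])
                V_new[i, j] = best
        if np.max(np.abs(V_new - V)) < eps:
            return V_new
        V = V_new

# The artificial potential field is the negated value function: the UAV
# descends the potential (ascends value) toward the goal, steering around
# the penalized obstacle cell.
V = grid_mdp_values(obstacles={(5, 5)})
potential = -V
```

Interpolating this coarse potential between grid cells is what turns the discrete MDP solution into the continuous mission plan the abstract mentions.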

Markov Decision Process for Curling Strategies (MDP에 의한 컬링 전략 선정)

  • Bae, Kiwook;Park, Dong Hyun;Kim, Dong Hyun;Shin, Hayong
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.42 no.1
    • /
    • pp.65-72
    • /
    • 2016
  • Curling is often compared to chess because of the variety and importance of its strategies. To win a curling game, it is important to select the optimal strategy at each decision-making point; however, there is a lack of research on optimal curling strategies. 'Aggressive' and 'Conservative' are common curling strategies, yet even these two have not been studied before. In this study, a Markov Decision Process is applied to curling strategy analysis, with the two strategies defined as the actions of the process. By solving the model, the optimal strategy can be found for any in-game state.
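
The setup described here can be sketched as a finite-horizon MDP solved by backward induction (the outcome distributions below are invented placeholders, not the paper's estimates): the state is (ends remaining, score lead), the two strategies are the actions, and the value of a state is the win probability.

```python
# Toy curling MDP: "aggressive" has a high-variance score swing per end,
# "conservative" a low-variance one. Distributions are illustrative only.
OUTCOMES = {
    "aggressive":   {+2: 0.30, +1: 0.20, 0: 0.10, -1: 0.20, -2: 0.20},
    "conservative": {+1: 0.40, 0: 0.40, -1: 0.20},
}
LEADS = range(-10, 11)   # score lead clamped to [-10, 10]

def optimal_policy(n_ends=10):
    """Backward induction; terminal states with a positive lead count as wins."""
    V = {lead: 1.0 if lead > 0 else 0.0 for lead in LEADS}
    policy = {}
    for end in range(1, n_ends + 1):      # ends remaining
        V_next, V = V, {}
        for lead in LEADS:
            best_a, best_v = None, -1.0
            for action, dist in OUTCOMES.items():
                v = sum(p * V_next[max(-10, min(10, lead + d))]
                        for d, p in dist.items())
                if v > best_v:
                    best_a, best_v = action, v
            policy[(end, lead)] = best_a
            V[lead] = best_v
    return policy

pol = optimal_policy()
```

Even with made-up numbers the solution reproduces the intuitive pattern: a trailing team should gamble on the high-variance action, while a leading team should protect its lead.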

The Decision Making Strategy for Determining the Optimal Production Time : A Stochastic Process and NPV Approach (최적생산시기 결정을 위한 의사결정전략 : 추계적 과정과 순현재가치 접근)

  • Choi, Jong-Du
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.32 no.1
    • /
    • pp.147-160
    • /
    • 2007
  • In this paper, the optimal decision-making strategy for resource management is viewed as a combined strategy of planting and production timing. A model that can be used to determine the optimal management strategy is developed, focusing on how to design the operation of a Markov chain so as to optimize its performance. The study estimates a dynamic stochastic model to compare alternative production schedules and uses the net present value of returns to evaluate the scenarios. Managers may be able to increase economic returns by delaying production in order to market larger, more valuable commodities.
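
The trade-off described here, letting the stock grow through a Markov chain versus discounting delayed revenue, can be sketched as follows (the transition matrix, class values, and discount rate are illustrative assumptions, not the paper's estimates):

```python
import numpy as np

# Illustrative Markov growth model: size classes small/medium/large with a
# per-period transition matrix P and a market value per class.
P = np.array([[0.6, 0.4, 0.0],     # small  -> stays small or grows to medium
              [0.0, 0.7, 0.3],     # medium -> stays medium or grows to large
              [0.0, 0.0, 1.0]])    # large is absorbing
value = np.array([1.0, 3.0, 6.0])  # revenue per unit by size class
r = 0.10                           # discount rate per period

def npv_of_harvest(t, start=np.array([1.0, 0.0, 0.0])):
    """Expected discounted revenue if the whole stock is sold at period t."""
    dist = start @ np.linalg.matrix_power(P, t)   # class distribution at t
    return (dist @ value) / (1.0 + r) ** t

best_t = max(range(0, 21), key=npv_of_harvest)
```

Early sale forfeits growth into the high-value class, while waiting too long lets discounting erode the gain, so the NPV curve peaks at an interior optimal production time.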

Seamless Mobility of Heterogeneous Networks Based on Markov Decision Process

  • Preethi, G.A.;Chandrasekar, C.
    • Journal of Information Processing Systems
    • /
    • v.11 no.4
    • /
    • pp.616-629
    • /
    • 2015
  • A mobile terminal can expect a number of handoffs within a call's duration. When a mobile node moves from one cell to another during a call, it should connect to another access point within its range; if its own network cannot provide support, it must change over to another base station. When moving to another network, quality-of-service parameters need to be considered. In our study we use the Markov decision process approach for seamless handoff, as it gives optimal results for selecting a network compared with other multiple-attribute decision-making processes. We use a network cost function for selecting the handoff network and a connection reward function based on the values of the quality-of-service parameters, and we also examine the constant bit rate and the transmission control protocol packet delivery ratio. The policy iteration algorithm is used to determine the optimal policy. Our enhanced handoff algorithm outperforms previous multiple-attribute decision-making methods.
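
The policy iteration step mentioned here can be illustrated on a deliberately tiny network-selection MDP (the QoS scores, availability probabilities, and handoff cost below are invented for the sketch, not taken from the paper):

```python
import numpy as np

# Toy network-selection MDP: states are the current network; an action picks
# the network to use next; reward = QoS score of the chosen network minus a
# handoff cost when switching. All numbers are illustrative placeholders.
networks = ["wlan", "cellular"]
qos_reward = np.array([5.0, 3.0])    # aggregate QoS score per network
stay_prob = np.array([0.70, 0.95])   # chance the chosen network keeps serving
handoff_cost = 2.0
gamma = 0.9

def policy_iteration():
    n = len(networks)
    policy = np.zeros(n, dtype=int)   # initial policy: always pick WLAN
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
        R = np.empty(n)
        P = np.zeros((n, n))
        for s in range(n):
            a = policy[s]
            R[s] = qos_reward[a] - (handoff_cost if a != s else 0.0)
            P[s, a] += stay_prob[a]          # chosen network keeps serving
            P[s, 1 - a] += 1.0 - stay_prob[a]  # forced onto the other network
        V = np.linalg.solve(np.eye(n) - gamma * P, R)
        # Policy improvement: greedy one-step lookahead
        new_policy = np.array([
            int(np.argmax([qos_reward[a] - (handoff_cost if a != s else 0.0)
                           + gamma * (stay_prob[a] * V[a]
                                      + (1.0 - stay_prob[a]) * V[1 - a])
                           for a in range(n)]))
            for s in range(n)])
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

policy, V = policy_iteration()
```

The exact-evaluation-then-greedy-improvement loop is what distinguishes policy iteration from the one-sweep updates of value iteration, and it converges in very few iterations on small state spaces like this.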

A MARKOV DECISION PROCESSES FORMULATION FOR THE LINEAR SEARCH PROBLEM

  • Balkhi, Z.T.;Benkherouf, L.
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.19 no.1
    • /
    • pp.201-206
    • /
    • 1994
  • The linear search problem is concerned with finding a hidden target on the real line R, where the position of the target is governed by some probability distribution and it is desired to find the target in the least expected search time. This problem has been formulated as an optimization problem by a number of authors without making use of Markov Decision Process (MDP) theory. The aim of this paper is to give an MDP formulation of the search problem that we feel is both natural and easy to follow.
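
A discretized toy version of the problem makes the objective concrete (the target positions and probabilities below are arbitrary examples, and brute force over visit orders stands in for the paper's MDP machinery): a searcher starting at the origin walks at unit speed, and the cost of a plan is the probability-weighted travel distance at which each candidate point is reached.

```python
from itertools import permutations

# Discrete linear search sketch: the target sits at one of a few points on
# the line with known probabilities; the searcher starts at 0.
points = [-3, -1, 2, 5]
probs = [0.2, 0.3, 0.4, 0.1]

def expected_time(order):
    """Expected search time of visiting the points in the given order."""
    pos, travelled, total = 0.0, 0.0, 0.0
    for x in order:
        travelled += abs(x - pos)
        pos = x
        total += probs[points.index(x)] * travelled
    return total

best = min(permutations(points), key=expected_time)
```

The optimal plan zigzags, alternately extending the leftmost and rightmost frontier, and an MDP formulation captures exactly this structure with the two frontiers as the state.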


Optimal Network Defense Strategy Selection Based on Markov Bayesian Game

  • Wang, Zengguang;Lu, Yu;Li, Xi;Nie, Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.11
    • /
    • pp.5631-5652
    • /
    • 2019
  • Existing game-theoretic defense strategy selection methods generally select the optimal defense strategy in the form of a mixed strategy, which is hard for network managers to understand and implement. To address this problem, we constructed an incomplete-information stochastic game model that dynamically predicts the multi-stage attack-defense process by combining Bayesian game theory with the Markov decision-making method. The payoffs are quantified from the impact values of attack and defense actions. On this basis, we designed an optimal defense strategy selection method that uses defense effectiveness as its criterion. The proposed method is verified as feasible via a representative experiment. Compared with classical game-theoretic strategy selection methods, the proposed method selects the optimal strategy for the multi-stage attack-defense process in the form of a pure strategy, which proves more operable than the compared methods.
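
The pure-strategy selection criterion can be illustrated in its simplest single-stage form (this maximin sketch over an invented effectiveness matrix is a drastic simplification of the paper's multi-stage Markov Bayesian game): rather than randomizing, the defender picks the single strategy whose worst-case defense effectiveness is best.

```python
import numpy as np

# Toy pure-strategy selection: rows are defense strategies D1..D3, columns
# are attack strategies A1..A3, entries are defense-effectiveness values.
# The matrix is an invented example, not data from the paper.
effectiveness = np.array([
    [0.6, 0.2, 0.5],   # D1
    [0.4, 0.5, 0.4],   # D2
    [0.7, 0.1, 0.3],   # D3
])
worst_case = effectiveness.min(axis=1)      # each defense's worst attack
best_defense = int(np.argmax(worst_case))   # maximin pure strategy
```

Here D2 wins despite never having the single best entry, which is exactly why a pure-strategy recommendation is easier for an administrator to act on than a probability mix over D1-D3.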

Development of Design Alternative Analysis Program Considering RAM Parameter and Cost (RAM 파라미터와 비용을 고려한 설계대안 분석 프로그램 개발)

  • Kim, Han-sol;Choi, Seong-Dae;Hur, Jang-wook
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.18 no.6
    • /
    • pp.1-8
    • /
    • 2019
  • Modern weapon systems are multifunctional, with capabilities for executing complex missions, but they are also required to be highly reliable, which increases their total cost of ownership. Because the best results must be produced within a limited budget, interest in development, acquisition, and maintenance costs is growing, and tools are needed that calculate the lifecycle costs of weapon systems to facilitate decision making. In this study, we propose a cost calculation function based on the Markov process simulator, a reliability, availability, and maintainability (RAM) analysis tool developed by applying the Markov-Monte Carlo method, to facilitate decision-making in systems development.
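
The Markov-Monte Carlo idea behind such a tool can be sketched on a single up/down component (the failure and repair rates and the cost figures are placeholder assumptions, not values from the program described): simulate exponential dwell times between state changes, then tally availability and lifecycle cost across many mission replications.

```python
import random

# Two-state Markov availability model with a simple cost tally.
FAILURE_RATE = 1 / 500.0    # failures per hour (MTBF = 500 h), illustrative
REPAIR_RATE = 1 / 20.0      # repairs per hour  (MTTR = 20 h),  illustrative
REPAIR_COST = 1_000.0       # cost per repair event
DOWNTIME_COST = 50.0        # cost per hour of downtime

def simulate(mission_hours=10_000.0, runs=2_000, seed=1):
    rng = random.Random(seed)
    total_up, total_cost = 0.0, 0.0
    for _ in range(runs):
        t, up = 0.0, True
        while t < mission_hours:
            rate = FAILURE_RATE if up else REPAIR_RATE
            dwell = min(rng.expovariate(rate), mission_hours - t)
            if up:
                total_up += dwell
            else:
                total_cost += DOWNTIME_COST * dwell
            t += dwell
            if t < mission_hours:          # a state change occurs
                if up:
                    total_cost += REPAIR_COST  # failure event: repair billed
                up = not up
    return total_up / (mission_hours * runs), total_cost / runs

avail, mean_cost = simulate()
# Sanity check: steady-state availability is MTBF/(MTBF+MTTR) = 500/520
```

Coupling the cost tally to the same simulated state trajectory is what lets a design alternative be judged on RAM parameters and lifecycle cost at once.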

Bayesian Model for Cost Estimation of Construction Projects

  • Kim, Sang-Yon
    • Journal of the Korea Institute of Building Construction
    • /
    • v.11 no.1
    • /
    • pp.91-99
    • /
    • 2011
  • A Bayesian network is a form of probabilistic graphical model. It incorporates human reasoning to deal with sparse data and to determine the probabilities of uncertain cases. In this research, a Bayesian network is adopted to model construction project cost. General information, time, cost, and material, the four main factors dominating the characteristics of construction costs, are incorporated into the model. The research presents and verifies a model that illustrates the functionality and application of a decision support system for predicting costs. The Markov Chain Monte Carlo (MCMC) method is applied to estimate the parameter distributions, and it is shown that not all the parameters are normally distributed. In addition, cost estimation based on the Gibbs output is performed. The model can enhance the decision-making process.
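
The MCMC parameter-estimation step can be reduced to a minimal sketch (a Metropolis sampler for the mean of synthetic cost data under a normal likelihood with known spread and a flat prior; this stands in for, and is much simpler than, the paper's Bayesian network and Gibbs machinery):

```python
import math
import random

random.seed(42)
costs = [random.gauss(100.0, 10.0) for _ in range(50)]  # synthetic cost data
sigma = 10.0                                            # assumed known spread

def log_lik(mu):
    """Log-likelihood of the cost data under Normal(mu, sigma), up to a constant."""
    return sum(-(c - mu) ** 2 / (2 * sigma ** 2) for c in costs)

mu, samples = 90.0, []
for step in range(5000):
    prop = mu + random.gauss(0.0, 2.0)          # random-walk proposal
    if math.log(random.random()) < log_lik(prop) - log_lik(mu):
        mu = prop                               # Metropolis accept
    if step >= 1000:                            # discard burn-in
        samples.append(mu)

posterior_mean = sum(samples) / len(samples)
```

The retained chain approximates the posterior, so summaries such as the posterior mean or credible intervals for the cost parameter come straight from `samples`, which is the role the Gibbs output plays in the full model.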

Optimal Bayesian MCMC based fire brigade non-suppression probability model considering uncertainty of parameters

  • Kim, Sunghyun;Lee, Sungsu
    • Nuclear Engineering and Technology
    • /
    • v.54 no.8
    • /
    • pp.2941-2959
    • /
    • 2022
  • The fire brigade non-suppression probability model is a major factor in evaluating fire-induced risk through fire probabilistic risk assessment (PRA), and uncertainty is a critical consideration in support of risk-informed, performance-based (RIPB) fire protection decision-making. This study developed an optimal integrated probabilistic fire brigade non-suppression model that accounts for parameter uncertainty, based on the Bayesian Markov Chain Monte Carlo (MCMC) approach, for electrical fires, one of the most risk-significant contributors. The results show that a log-normal probability model with a location parameter (µ) of 2.063 and a scale parameter (σ) of 1.879 best fits the actual fire experience data, giving optimal model adequacy with a Bayesian information criterion (BIC) of -1601.766, a residual sum of squares (RSS) of 2.51E-04, and a mean squared error (MSE) of 2.08E-06. This optimal log-normal model outperforms the exponential probability model suggested in the current fire PRA methodology, with a decrease of 17.3% in BIC, 85.3% in RSS, and 85.3% in MSE. The outcomes of this study are expected to contribute to improving and securing the realism of fire PRA in support of decision-making for RIPB fire protection programs.
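
The model-comparison step reported here can be sketched with maximum-likelihood fits and BIC on synthetic data (the data below are drawn from a log-normal seeded with the paper's posterior parameters purely for illustration; the paper's actual fit is Bayesian MCMC on real fire experience data):

```python
import math
import random

# Synthetic suppression times seeded with the paper's parameters.
random.seed(0)
data = [random.lognormvariate(2.063, 1.879) for _ in range(500)]
n = len(data)

# Log-normal MLE: mu, sigma are the mean and std of the log-times.
logs = [math.log(x) for x in data]
mu = sum(logs) / n
sd = math.sqrt(sum((l - mu) ** 2 for l in logs) / n)
ll_ln = sum(-math.log(x * sd * math.sqrt(2 * math.pi))
            - (math.log(x) - mu) ** 2 / (2 * sd ** 2) for x in data)

# Exponential MLE: rate = 1 / sample mean.
lam = n / sum(data)
ll_exp = sum(math.log(lam) - lam * x for x in data)

# BIC = k * ln(n) - 2 * ln(L); lower is better.
bic_ln = 2 * math.log(n) - 2 * ll_ln    # k = 2 parameters
bic_exp = 1 * math.log(n) - 2 * ll_exp  # k = 1 parameter
```

Because BIC penalizes the log-normal's extra parameter, its lower score here reflects a genuinely better fit, mirroring the paper's conclusion that the log-normal outperforms the exponential model of the current fire PRA methodology.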