• Title/Summary/Keyword: two-state Markov

System Replacement Policy for A Partially Observable Markov Decision Process Model

  • Kim, Chang-Eun
    • Journal of Korean Institute of Industrial Engineers / v.16 no.2 / pp.1-9 / 1990
  • The control of deterioration processes for which only incomplete state information is available is examined in this study. When the deterioration is governed by a Markov process, such processes are known as Partially Observable Markov Decision Processes (POMDPs), which eliminate the assumption that the state or level of deterioration of the system is known exactly. This research investigates a two-state partially observable Markov chain in which only deterioration can occur and in which the only possible actions are to replace the system or to leave it alone. The goal of this research is to develop a new jump algorithm with the potential to solve system problems involving continuous-state-space Markov chains.
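The replace-or-leave-alone model above can be sketched as a belief update over the two states. This is a minimal illustration, not the paper's jump algorithm; the deterioration probability `q`, observation accuracy `acc`, and starting belief are illustrative assumptions.

```python
# Two-state POMDP sketch: state 0 = good, state 1 = deteriorated.
# Only deterioration can occur (no self-repair); observations are noisy.
q = 0.1      # assumed per-period probability of deteriorating
acc = 0.8    # assumed probability an observation matches the true state

def predict(b):
    """One-step belief propagation: good can become deteriorated, not back."""
    return b + (1.0 - b) * q   # P(deteriorated after the transition)

def update(b, obs):
    """Bayes update of the belief given a noisy observation (0 or 1)."""
    like_bad = acc if obs == 1 else 1.0 - acc     # P(obs | deteriorated)
    like_good = 1.0 - acc if obs == 1 else acc    # P(obs | good)
    num = like_bad * b
    return num / (num + like_good * (1.0 - b))

b = 0.0                          # start certain the system is good
b = update(predict(b), obs=1)    # propagate one period, then see a "bad" signal
```

A replacement policy would then compare this belief against a threshold: replace (resetting `b` to 0) when the belief of deterioration is high enough.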

Estimation of Parameters of a Two-State Markov Process by Interval Sampling

  • Jang, Joong-Soon; Bai, Do-Sun
    • Journal of the Korean Operations Research and Management Science Society / v.6 no.2 / pp.57-64 / 1981
  • This paper develops a method of modifying the usual maximum likelihood estimators of the parameters of a two-state Markov process when the trajectory of the process can only be observed at regular epochs. The method utilizes the limiting behavior of the process and the properties of state transition counts. An efficient adaptive strategy to be used together with the modified estimator is also proposed. The properties of the new estimators and the adaptive strategy are investigated using Monte Carlo simulation.
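The "usual" estimator that the paper modifies can be sketched from transition counts: for a two-state chain sampled at regular epochs, the maximum likelihood estimate of each transition probability is the observed fraction of exits from that state. The sample path below is illustrative, not data from the paper.

```python
# Illustrative trajectory of a two-state Markov chain observed at regular epochs.
path = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0]

# Count one-step transitions n[i][j] between consecutive epochs.
n = [[0, 0], [0, 0]]
for a, b in zip(path, path[1:]):
    n[a][b] += 1

# MLE: p_ij = n_ij / (row sum), i.e. the fraction of moves out of state i
# that went to state j.
p01 = n[0][1] / (n[0][0] + n[0][1])
p10 = n[1][0] / (n[1][0] + n[1][1])
```

The paper's point is that when the sampling interval is coarse relative to the chain's dynamics, these raw counts miss intermediate transitions, which is what the modified estimators correct for.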

A Repair Process with Embedded Markov Chain

  • Lee, Eui-Yong; Munsup Seoh
    • Journal of the Korean Statistical Society / v.28 no.4 / pp.515-522 / 1999
  • A repair process for a system involving both perfect repairs and minimal repairs is introduced. The type of repair performed when the system fails is determined by an embedded two-state Markov chain. We study several stochastic properties of the process, including the preservation of ageing properties and the monotonicity of the times between successive repairs. After assigning repair costs to the process, we also show that an optimal repair policy uniquely exists if the underlying life distribution of the system has a decreasing mean residual life (DMRL).
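The repair-type mechanism can be sketched directly: at each failure the next repair type is drawn from an embedded two-state chain on {perfect, minimal}. The transition probabilities below are illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)

# Embedded two-state chain over repair types; probabilities are illustrative.
P = {"perfect": {"perfect": 0.3, "minimal": 0.7},
     "minimal": {"perfect": 0.6, "minimal": 0.4}}

def next_repair(current):
    """Draw the next repair type from the embedded Markov chain."""
    return "perfect" if random.random() < P[current]["perfect"] else "minimal"

repairs = ["perfect"]
for _ in range(1000):
    repairs.append(next_repair(repairs[-1]))

# Long-run fraction of perfect repairs; for this chain the stationary value
# is 0.6 / (0.7 + 0.6) ≈ 0.46.
frac_perfect = repairs.count("perfect") / len(repairs)
```

In the full model, a perfect repair would reset the system's effective age to zero while a minimal repair leaves it unchanged, which is what drives the cost analysis.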

Performance Evaluation of the WiMAX Network Based on Combining the 2D Markov Chain and MMPP Traffic Model

  • Saha, Tonmoy; Shufean, Md. Abu; Alam, Mahbubul; Islam, Md. Imdadul
    • Journal of Information Processing Systems / v.7 no.4 / pp.653-678 / 2011
  • WiMAX is intended for fourth-generation wireless mobile communications, where a group of users is provided with a connection and a fixed-length queue. In the present literature, the traffic of such a network is analyzed based on the generator matrix of the Markov Arrival Process (MAP). In this paper, a simple analytical technique based on a two-dimensional Markov chain is used to obtain the trajectory of the congestion of the network as a function of a traffic parameter. Finally, a two-state phase-dependent arrival process is considered to evaluate the state probabilities. The entire analysis is kept independent of the modulation and coding schemes.
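A two-state phase-dependent arrival process of the kind mentioned above can be sketched as a Markov-modulated Poisson process: the arrival rate in each slot depends on a hidden phase that switches according to a two-state chain. All rates and switching probabilities below are illustrative assumptions.

```python
import math
import random

random.seed(1)
rates = [2.0, 8.0]   # assumed mean arrivals per slot in phases 0 and 1
switch = [0.1, 0.3]  # assumed P(leaving phase 0), P(leaving phase 1)

def poisson(lam):
    """Knuth's method: draw a Poisson variate without external libraries."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

phase, arrivals = 0, []
for _ in range(20000):
    arrivals.append(poisson(rates[phase]))
    if random.random() < switch[phase]:
        phase = 1 - phase

mean_rate = sum(arrivals) / len(arrivals)
# Long-run phase probabilities: pi0 = 0.3/(0.1+0.3) = 0.75, pi1 = 0.25,
# so the expected mean rate is 0.75*2 + 0.25*8 = 3.5.
```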

Daily Rainfall Simulation by Rainfall Frequency and State Model of Markov Chain (강우 빈도와 마코프 연쇄의 상태모형에 의한 일 강우량 모의)

  • Jung, Young-Hun; Kim, Buyng-Sik; Kim, Hung Soo; Shim, Myung-Pil
    • Journal of Wetlands Research / v.5 no.2 / pp.1-13 / 2003
  • In Korea, most rainfall is concentrated in the flood season, so flood studies have received more attention than low-flow analysis. One reason low-flow analysis has received less attention is the lack of required data, such as daily rainfall, and stochastic processes such as pulse noise, the exponential distribution, and the state model of a Markov chain have therefore been used for short-term (e.g., daily) rainfall simulation. This study focuses on the state model of the Markov chain. Previous work performed the simulation with the state model without distinguishing flood and non-flood periods and without considering the frequency of rainfall for the period of a state. This study therefore considers these two cases and compares the results with the known state model. The RMSEs of the suggested and known models are similar; however, the relative percentage error (PRE) shows that the suggested model gives better results.
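The basic state model underlying this line of work can be sketched as a two-state (dry/wet) Markov chain for daily occurrence, with a depth drawn on wet days. The transition probabilities and the exponential wet-day depth below are illustrative choices, not the study's fitted values.

```python
import random

random.seed(2)
p_dry_to_wet = 0.25   # assumed P(wet tomorrow | dry today)
p_wet_to_wet = 0.60   # assumed P(wet tomorrow | wet today)

def simulate(days):
    """Return a list of daily rainfall depths in mm (0.0 on dry days)."""
    state, rain = "dry", []
    for _ in range(days):
        p_wet = p_wet_to_wet if state == "wet" else p_dry_to_wet
        state = "wet" if random.random() < p_wet else "dry"
        # Exponential depths with mean 10 mm on wet days, a common simple choice.
        rain.append(random.expovariate(1 / 10.0) if state == "wet" else 0.0)
    return rain

series = simulate(10000)
wet_frac = sum(r > 0 for r in series) / len(series)
# Stationary wet-day fraction: 0.25 / (0.25 + 0.40) ≈ 0.385.
```

The study's refinement is to let such parameters differ between flood and non-flood periods and to account for the rainfall frequency within each state.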

Two-Dimensional POMDP-Based Opportunistic Spectrum Access in Time-Varying Environment with Fading Channels

  • Wang, Yumeng; Xu, Yuhua; Shen, Liang; Xu, Chenglong; Cheng, Yunpeng
    • Journal of Communications and Networks / v.16 no.2 / pp.217-226 / 2014
  • In this research, we study the problem of opportunistic spectrum access (OSA) in a time-varying environment with fading channels, where the channel state is characterized by both the channel quality and the occupancy of primary users (PUs). First, a finite-state Markov channel model is introduced to represent a fading channel. Second, by jointly probing the channel quality and exploring the activities of PUs, a two-dimensional partially observable Markov decision process framework is proposed for OSA. In addition, a greedy strategy is designed, in which a secondary user (SU) selects the channel with the best expected data transmission rate to maximize the instantaneous reward in the current slot. Compared with the optimal strategy, which considers future reward, the greedy strategy brings low complexity and relatively good performance. Meanwhile, the spectrum sensing error that causes collisions between a PU and an SU is also discussed. Furthermore, we analyze the multiuser situation in which every SU adopts the proposed single-user strategy. Simulation results show that the proposed strategy attains a larger throughput than previous works under various parameter configurations.
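The slot-by-slot decision of the greedy strategy can be sketched as a one-line maximization: weight each channel's achievable rate by the current belief that its PU is idle, and pick the largest product. The belief and rate values below are illustrative assumptions, not the paper's parameters.

```python
# Per-channel beliefs that the PU is idle (from the POMDP belief state)
# and achievable rates given the current fading quality -- all illustrative.
beliefs = [0.9, 0.4, 0.7]
rates   = [1.0, 3.0, 2.0]

# Greedy choice: maximize the expected instantaneous rate, belief * rate.
expected = [b * r for b, r in zip(beliefs, rates)]
best = max(range(len(expected)), key=expected.__getitem__)
# Here channel 2 wins: 0.7 * 2.0 = 1.4 beats 0.9 * 1.0 and 0.4 * 3.0.
```

The optimal POMDP policy would instead trade off this instantaneous reward against the information the chosen channel's outcome provides for future slots.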

A computation method of reliability for preprocessing filters in the fire control system using Markov process and state transition probability matrix (Markov process 및 상태천이확률 행렬 계산을 통한 사격통제장치 전처리필터 신뢰성 산출 기법)

  • Kim, Jae-Hun; Lyou, Joon
    • Journal of the Korea Institute of Military Science and Technology / v.2 no.2 / pp.131-139 / 1999
  • An easy and efficient method is proposed for computing the reliability of the preprocessing filters in a fire control system when the sensor data are frequently unreliable, depending on the operating environment. The method models the filter states as a Markov process, computes the state transition probability matrix, and computes the false-alarm and detection probabilities of each filter state under a given sensor failure probability. We show that two important indices, the distributed state probability and the error variance, can be derived easily for a reliability assessment of the given sensor fusion system.
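The core matrix step can be sketched in a few lines: given a state transition probability matrix over the filter states, repeatedly applying it to an initial distribution yields the long-run (distributed) state probabilities. The 3-state matrix below is an illustrative stand-in, not the paper's filter model.

```python
# Illustrative state transition probability matrix over three filter states;
# each row sums to 1.
P = [[0.90, 0.08, 0.02],
     [0.50, 0.40, 0.10],
     [0.20, 0.30, 0.50]]

def step(dist, P):
    """One application of the transition matrix: dist' = dist @ P."""
    return [sum(dist[i] * P[i][j] for i in range(len(P)))
            for j in range(len(P))]

dist = [1.0, 0.0, 0.0]        # start in the nominal state
for _ in range(200):          # power iteration converges geometrically
    dist = step(dist, P)
# dist is now (numerically) the stationary distributed state probability.
```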

Energy Harvesting in Multi-relay Multiuser Networks based on Two-step Selection Scheme

  • Guo, Weidong; Tian, Houyuan; Wang, Qing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4180-4196 / 2017
  • In this paper, we analyze the average capacity of an amplify-and-forward (AF) cooperative communication system in multi-relay multiuser networks. In contrast to conventional cooperative networks, the relays in the considered network have no embedded energy supply; they must rely on the energy harvested from the signals broadcast by the source for their cooperative information transmission. Based on this structure, a two-step selection scheme is proposed that considers both the channel state information (CSI) and the battery status of the relays. Assuming each relay has infinite or finite energy storage for accumulating energy, we use an infinite or finite Markov chain to capture the evolution of the relay batteries, with certain simplifying assumptions to reduce the computational complexity of the Markov chain analysis. Approximate closed-form expressions for the average capacity of the proposed scheme are derived, and all theoretical results are validated by numerical simulations. The impacts of the system parameters, such as the number of relays or users, the energy harvesting threshold, and the battery size, on the capacity performance are extensively investigated. Results show that although the performance of our scheme is inferior to the optimal joint selection scheme, it is still a practical scheme because its complexity is much lower than that of the optimal scheme.
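The finite battery chain can be sketched as a simple simulation: each slot a relay either spends a transmission threshold of energy (if selected and sufficiently charged) or may harvest one unit, capped at the battery size. Every number here (capacity, threshold, harvest and selection probabilities) is an illustrative assumption, not a parameter from the paper.

```python
import random

random.seed(3)
B, THRESH = 5, 2               # assumed battery capacity and energy per relay slot
p_harvest, p_selected = 0.6, 0.5

level, transmissions = 0, 0
for _ in range(10000):
    if random.random() < p_selected and level >= THRESH:
        level -= THRESH        # relay is selected and transmits, draining energy
        transmissions += 1
    elif random.random() < p_harvest:
        level = min(B, level + 1)   # otherwise harvest one unit, capped at B

frac_active = transmissions / 10000  # long-run fraction of slots the relay serves
```

The paper's Markov chain analysis replaces such simulation with the stationary distribution of the battery-level chain, which is what feeds the closed-form capacity expressions.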

Oil Price Forecasting : A Markov Switching Approach with Unobserved Component Model

  • Nam, Si-Kyung; Sohn, Young-Woo
    • Management Science and Financial Engineering / v.14 no.2 / pp.105-118 / 2008
  • There are many debates on the relationship between oil prices and economic growth. Through repeated confirmations and refutations on the subject, two main issues have developed: one is how to define and derive oil shocks from oil prices, and the other is how to specify an econometric model that reflects the asymmetric relations between oil prices and output growth. This study therefore introduces the unobserved component model to pick up the oil shocks and a first-order Markov switching model to reflect the asymmetric features. We finally employ unique oil shock variables from the stochastic trend components of oil prices and adopt a four-lag mean-growth Markov switching model. The results indicate that oil shocks exert more impact in the recessionary state than in the expansionary state, and that supply-side oil shocks are more persistent and significant than demand-side shocks.
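The asymmetry mechanism of a first-order Markov switching mean model can be sketched by simulation: growth has a regime-dependent mean, and the regime (expansion/recession) follows a two-state Markov chain. All parameter values below are illustrative, not estimates from the paper.

```python
import random

random.seed(4)
mu = {"expansion": 0.8, "recession": -0.5}    # assumed mean growth per regime
stay = {"expansion": 0.9, "recession": 0.75}  # assumed P(staying in the regime)

regime, growth = "expansion", []
for _ in range(5000):
    growth.append(mu[regime] + random.gauss(0.0, 0.3))  # regime mean + noise
    if random.random() > stay[regime]:
        regime = "recession" if regime == "expansion" else "expansion"

mean_growth = sum(growth) / len(growth)
# Stationary regime probabilities: expansion 0.25/(0.25+0.10) ≈ 0.714,
# so the long-run mean is about 0.714*0.8 + 0.286*(-0.5) ≈ 0.43.
```

Estimation runs this logic in reverse, inferring the regime probabilities and transition parameters from observed growth; the asymmetry shows up as regime-dependent responses to the oil shock variables.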

Queueing System Operating in Random Environment as a Model of a Cell Operation

  • Kim, Chesoong; Dudin, Alexander; Dudina, Olga; Kim, Jiseung
    • Industrial Engineering and Management Systems / v.15 no.2 / pp.131-142 / 2016
  • We consider a multi-server queueing system without a buffer and with two types of customers as a model of the operation of a mobile network cell. Customers arrive at the system in a marked Markovian arrival flow. The service times of customers are exponentially distributed with parameters depending on the type of customer. A part of the available servers is reserved exclusively for the service of first-type customers. Customers who do not receive service upon arrival can make repeated attempts. The system operation is influenced by random factors that lead to changes of the system parameters, including the total number of servers and the number of reserved servers. The behavior of the system is described by a multi-dimensional Markov chain. The generator of this Markov chain is constructed and the ergodicity condition is derived. Formulas for computing the main performance measures of the system based on the stationary distribution of the Markov chain are derived, and numerical examples are presented.
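The kind of computation the abstract describes, from chain to stationary distribution to performance measure, can be sketched on a drastically simplified stand-in: a buffer-less M/M/c loss system, whose birth-death stationary distribution follows from detailed balance and whose blocking probability is a typical performance measure. The rates and server count below are illustrative assumptions; the paper's model (marked arrivals, reservation, retrials, random environment) is far richer.

```python
import math

lam, mu_rate, c = 4.0, 1.0, 3    # assumed arrival rate, per-server rate, servers

# Detailed balance for the birth-death chain of busy servers:
# pi[n+1] * (n+1) * mu = pi[n] * lam, so pi[n] is proportional to
# (lam/mu)^n / n! for n = 0..c.
w = [(lam / mu_rate) ** n / math.factorial(n) for n in range(c + 1)]
pi = [x / sum(w) for x in w]

blocking = pi[c]   # Erlang B: probability an arrival finds all servers busy
```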