• Title/Summary/Keyword: Markov Process

Performance Evaluation of the WiMAX Network Based on Combining the 2D Markov Chain and MMPP Traffic Model

  • Saha, Tonmoy;Shufean, Md. Abu;Alam, Mahbubul;Islam, Md. Imdadul
    • Journal of Information Processing Systems
    • /
    • v.7 no.4
    • /
    • pp.653-678
    • /
    • 2011
  • WiMAX is intended for fourth generation wireless mobile communications, where a group of users are provided with a connection and a fixed-length queue. In the present literature, the traffic of such networks is analyzed based on the generator matrix of the Markov Arrival Process (MAP). In this paper, a simple analytical technique based on the two-dimensional Markov chain is used to obtain the trajectory of network congestion as a function of a traffic parameter. Finally, a two-state phase-dependent arrival process is considered to evaluate the state probabilities. The entire analysis is kept independent of the modulation and coding schemes.
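The kind of model the abstract describes can be sketched numerically. The following is an illustrative toy instance of mine, not the paper's model: a finite queue of capacity N whose arrival rate is modulated by a two-state phase process (an MMPP-2), with the stationary distribution of the two-dimensional chain (queue level, phase) solved directly and the congestion (blocking) probability read off from the full states. All parameter values are assumptions for illustration.

```python
import numpy as np

N = 10                       # queue capacity (assumed)
lam = np.array([0.8, 2.0])   # phase-dependent arrival rates (assumed)
mu = 1.5                     # service rate (assumed)
r01, r10 = 0.3, 0.5          # phase switching rates (assumed)

# State (n, p) -> index n*2 + p; build the CTMC generator.
S = (N + 1) * 2
Q = np.zeros((S, S))
for n in range(N + 1):
    for p in range(2):
        i = n * 2 + p
        if n < N:                       # arrival in phase p
            Q[i, (n + 1) * 2 + p] += lam[p]
        if n > 0:                       # service completion
            Q[i, (n - 1) * 2 + p] += mu
        Q[i, n * 2 + (1 - p)] += r01 if p == 0 else r10  # phase switch
np.fill_diagonal(Q, Q.diagonal() - Q.sum(axis=1))

# Stationary distribution: pi Q = 0 together with pi 1 = 1.
A = np.vstack([Q.T, np.ones(S)])
b = np.zeros(S + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Blocking probability: arrival-rate-weighted mass in the full states
# (PASTA does not hold under MMPP arrivals, hence the weighting).
blocking = (pi[N * 2:] * lam).sum() / (pi.reshape(-1, 2) * lam).sum()
print(f"blocking probability ~ {blocking:.4f}")
```

Sweeping a traffic parameter (e.g. scaling `lam`) and re-solving traces the congestion trajectory the abstract refers to.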

Bounding Methods for Markov Processes Based on Stochastic Monotonicity and Convexity (확률적 단조성과 콘벡스성을 이용한 마코프 프로세스에서의 범위한정 기법)

  • Yoon, Bok-Sik
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.17 no.1
    • /
    • pp.117-126
    • /
    • 1991
  • When {X(t), t ≥ 0} is a Markov process representing time-varying system states, we develop efficient bounding methods for some time-dependent performance measures. We use the discretization technique for stochastically monotone Markov processes, and a combination of discretization and uniformization for Markov processes with the stochastic convexity (concavity) property. Sufficient conditions for stochastic monotonicity and stochastic convexity of a Markov process are also mentioned. A simple example is given to demonstrate the validity of the bounding methods.

  • PDF
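Uniformization, one of the two techniques the abstract combines, can be illustrated in a few lines. This is a minimal sketch of mine (the generator and time point are invented, not from the paper): the CTMC generator Q is randomized into the stochastic matrix P = I + Q/Λ, and the transient distribution is a Poisson-weighted sum of DTMC powers.

```python
import numpy as np

# Assumed 3-state generator and horizon, for illustration only.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])
t = 0.7
p0 = np.array([1.0, 0.0, 0.0])    # start in state 0

Lam = max(-Q.diagonal())          # uniformization rate
P = np.eye(3) + Q / Lam           # embedded stochastic matrix

# Poisson-weighted sum, truncated once the remaining mass is negligible.
pt = np.zeros(3)
term = np.exp(-Lam * t)           # Poisson(Lam * t) pmf at k = 0
vec = p0.copy()                   # p0 P^k
k, acc = 0, 0.0
while acc < 1.0 - 1e-12:
    pt += term * vec
    acc += term
    k += 1
    term *= Lam * t / k
    vec = vec @ P
print(pt)                         # equals p0 @ expm(Q * t) up to truncation
```

Because every term is nonnegative and the Poisson tail is explicit, the truncated sum gives a lower bound with a computable error, which is what makes uniformization attractive for bounding transient measures.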

Equivalent Transformations of Undiscounted Nonhomogeneous Markov Decision Processes

  • Park, Yun-Sun
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.17 no.2
    • /
    • pp.131-144
    • /
    • 1992
  • Even though nonhomogeneous Markov Decision Processes subsume homogeneous Markov Decision Processes and are more practical in the real world, there are few results for them. In this paper we address the nonhomogeneous Markov Decision Process with the objective of maximizing average reward. By extending the work of Ross [17] in the homogeneous case and adopting the result of Bean and Smith [3] for the discounted deterministic problem, we first transform the original problem into a discounted nonhomogeneous Markov Decision Process, and then transform that into a discounted deterministic problem. This approach not only shows the interrelationships between the various problems but also suggests a solution method for the undiscounted nonhomogeneous Markov Decision Process.

  • PDF
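The target of the paper's first transformation is a discounted MDP. As a point of reference only (this is my toy example, not the paper's construction, and it shows the stationary special case), a small discounted MDP can be solved by standard value iteration:

```python
import numpy as np

# Assumed 2-state, 2-action MDP: P[a] is the transition matrix under
# action a, R[a, s] the one-step reward, beta the discount factor.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.5, 0.5], [0.6, 0.4]]])   # action 1
R = np.array([[1.0, 0.0],                  # rewards under action 0
              [2.0, 0.5]])                 # rewards under action 1
beta = 0.9

V = np.zeros(2)
for _ in range(500):
    Qsa = R + beta * (P @ V)       # Q[a, s] = r(s, a) + beta * E[V(s')]
    V_new = Qsa.max(axis=0)        # Bellman optimality backup
    if np.abs(V_new - V).max() < 1e-10:
        break
    V = V_new
policy = Qsa.argmax(axis=0)        # greedy policy per state
print(V, policy)
```

In the nonhomogeneous setting the data (P, R) vary with time, which is exactly why the paper's equivalent transformations to the discounted and then deterministic problems are needed.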

Implementation of Markov Chain: Review and New Application (관리도에서 Markov연쇄의 적용: 복습 및 새로운 응용)

  • Park, Chang-Soon
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.4
    • /
    • pp.657-676
    • /
    • 2011
  • Properties of statistical process control procedures may not be derived analytically in many cases; however, the application of a Markov chain can solve such problems. This article shows how to derive the properties of the process control procedures using the generated Markov chains when the control statistic satisfies the Markov property. Markov chain approaches that appear in the literature (such as the statistical design and economic design of the control chart as well as the variable sampling rate design) are reviewed along with the introduction of research results for application to a new control procedure and reset chart. The joint application of a Markov chain approach and analytical solutions (when available) can guarantee the correct derivation of the properties. A Markov chain approach is recommended over simulation studies due to its precise derivation of properties and short calculation times.
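One standard instance of the approach this article reviews can be sketched concretely. The example below is my own toy case, not taken from the article: a Shewhart chart with a 2-of-2 warning-zone supplementary run rule is modeled as an absorbing Markov chain, and the in-control average run length (ARL) follows from the fundamental-matrix identity ARL = (I - R)^(-1) 1 over the transient states.

```python
import numpy as np
from math import erf, sqrt

# Standard normal cdf via the error function.
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Zone probabilities for an in-control N(0, 1) statistic (assumed limits:
# warning at +-2 sigma, action at +-3 sigma).
p_action = 2 * (1 - Phi(3.0))          # beyond 3 sigma: immediate signal
p_warn = 2 * (Phi(3.0) - Phi(2.0))     # in the 2..3 sigma warning zone
p_center = 1 - p_action - p_warn

# Transient states: 0 = last point central, 1 = last point in warning zone.
# Two consecutive warning-zone points, or any action-zone point, signal.
R = np.array([[p_center, p_warn],
              [p_center, 0.0]])
arl = np.linalg.solve(np.eye(2) - R, np.ones(2))
print(f"in-control ARL from the central state: {arl[0]:.1f}")
```

This is the sense in which a Markov chain yields exact run-length properties with a short calculation, in contrast to simulation.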

System Availability Analysis using Markov Process (Markov Process를 활용한 시스템 가용도 분석 연구)

  • Kim, Han Sol;Kim, Bo Hyeon;Hur, Jang Wook
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.14 no.1
    • /
    • pp.36-43
    • /
    • 2018
  • The availability of a weapon system can be analyzed through state modeling and simulation using the Markov process. In this paper, we show how to analyze the availability of a weapon system and how the Markov process can be used to analyze the RAM of the system in the transient state as well as in the steady state. As a result of the availability analysis of tracked vehicles, the difference was 2.6% for the inherent availability and 1.2% for the operational availability. The validity criterion was defined as a difference within 3%, and thus the analysis was judged to be valid. We identified the faulty items through graphs of the number of visits per state among the results obtained through the MPS, and these can be used to provide design alternatives.
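The simplest Markov availability model of the kind used in such analyses is the two-state (up/down) repairable system. The sketch below is illustrative only, with assumed rates rather than the paper's tracked-vehicle data: steady-state availability comes from the balance equations, and transient availability from the closed-form CTMC solution.

```python
import numpy as np

lam, mu = 0.01, 0.5          # failure and repair rates per hour (assumed)

# Steady-state availability of the up/down CTMC.
A_inf = mu / (lam + mu)

# Transient availability A(t), starting from the up state.
def A(t):
    return mu / (lam + mu) + (lam / (lam + mu)) * np.exp(-(lam + mu) * t)

print(f"steady-state availability: {A_inf:.4f}")
print(f"availability after 10 h:   {A(10.0):.4f}")
```

Larger models with multiple failure modes follow the same pattern, just with a bigger generator matrix, and the expected number of visits per state (used in the paper to identify faulty items) is likewise computable from the chain.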

Stochastic convexity in Markov additive processes and its applications (마코프 누적 프로세스에서의 확률적 콘벡스성과 그 응용)

  • Yoon, Bok-Sik
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.16 no.1
    • /
    • pp.76-88
    • /
    • 1991
  • Stochastic convexity (concavity) of a stochastic process is a very useful concept for various stochastic optimization problems. In this study we first establish the stochastic convexity of a certain class of Markov additive processes through a probabilistic construction based on the sample path approach. A Markov additive process is obtained by integrating a functional of the underlying Markov process with respect to time, and its stochastic convexity can be utilized to provide efficient methods for optimal design or optimal operation schedules for a wide range of stochastic systems. We also clarify the conditions for stochastic monotonicity of the Markov process. From the result it is shown that stochastic convexity can be used for the analysis of probabilistic models based on birth and death processes, which have a very wide application area. Finally we demonstrate the validity and usefulness of the theoretical results by developing efficient methods for optimal replacement scheduling based on the stochastic convexity property.

  • PDF
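The object defined in the abstract, the integral of a functional of a Markov process over time, is easy to exhibit by simulation. The following is my own minimal illustration (an alternating two-state CTMC with assumed rates, not the paper's construction): each sample path accumulates the integral of f along the path, and Monte Carlo averages estimate its expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

q = np.array([0.4, 1.0])      # exit rate of each state (assumed)
f = np.array([2.0, 0.5])      # reward rate in each state (assumed)
T = 50.0                      # time horizon (assumed)

def additive_path(rng):
    """One sample of the Markov additive process: integral of f(X(t)) dt."""
    s, t, total = 0, 0.0, 0.0
    while t < T:
        hold = rng.exponential(1.0 / q[s])   # exponential holding time
        dt = min(hold, T - t)
        total += f[s] * dt                   # integrate f along the path
        t += dt
        s = 1 - s                            # two states: jump to the other
    return total

est = np.mean([additive_path(rng) for _ in range(2000)])
print(f"estimated E[integral of f(X(t)) dt over [0, {T}]] ~ {est:.2f}")
```

The sample-path construction in the paper argues convexity of such integrals in a model parameter by coupling paths like these, rather than by simulation.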

Application of GTH-like algorithm to Markov modulated Brownian motion with jumps

  • Hong, Sung-Chul;Ahn, Soohan
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.5
    • /
    • pp.477-491
    • /
    • 2021
  • The Markov modulated Brownian motion (MMBM) is a substantial generalization of the classical Brownian motion. On the other hand, the Markovian arrival process (MAP) is a point process whose family is dense in the class of stochastic point processes and is used to approximate complex stochastic counting processes. In this paper, we consider a superposition of the MMBM and a Markovian arrival process of jumps distributed as the bilateral ph-type distribution, whose class is also dense in the space of distribution functions defined on the whole real line. In the model, we assume that the inter-arrival times of the MAP depend on the underlying Markov process of the MMBM. One subject of this paper is to show how to obtain the first passage probabilities of the superposed process using a stochastic doubling algorithm designed to find the minimal solution of a nonsymmetric algebraic Riccati equation. The other is to provide eigenvalue and eigenvector results on the superposed process that make it possible to apply the GTH-like algorithm, which improves the accuracy of the doubling algorithm.
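For orientation only, here is the degenerate single-phase, no-jump case of the first-passage problem, which is mine and not the paper's algorithm: with one phase the MMBM reduces to Brownian motion with drift, whose two-sided first-passage probability has a closed form via the scale function, so no Riccati equation is needed.

```python
import numpy as np

mu, sigma = 0.5, 1.0          # drift and volatility (assumed)
a, b = -1.0, 2.0              # lower and upper barriers, start at 0 (assumed)

# Scale function shape for Brownian motion with drift.
S = lambda x: np.exp(-2.0 * mu * x / sigma**2)

# P(hit b before a | X(0) = 0) via the scale-function identity.
p_up = (S(0.0) - S(a)) / (S(b) - S(a))
print(f"P(hit {b} before {a}) = {p_up:.4f}")
```

In the multi-phase case with jumps, these scalar exponentials are replaced by matrix quantities tied to the underlying Markov process, and the minimal Riccati solution the paper computes plays the role of the scale-function exponent.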