• Title/Summary/Keyword: Markov Process


A Markov Chain Representation of Statistical Process Monitoring Procedure under an ARIMA(0,1,1) Model

  • Park, Changsoon
    • The Korean Journal of Applied Statistics, v.16 no.1, pp.71-85, 2003
  • In the economic design of process control procedures, where quality is measured at discrete time intervals, the properties of the procedure are difficult to derive because of the discreteness of the measurement intervals. In this paper, a Markov chain representation of the process monitoring procedure is developed and used to derive its properties when the process follows an ARIMA(0,1,1) model, chosen to describe the effects of noise and special causes over the process cycle. The properties of the Markov chain depend on the transition matrix, which is determined by the control procedure and the process distribution. The derived Markov chain representation can be adapted to most types of control procedures and process distributions by obtaining the corresponding transition matrix.
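
The transition-matrix machinery in this abstract has a compact computational core: once the in-control region of the chart is discretized into transient states with sub-stochastic transition matrix R, the expected run length from each starting state solves (I - R)v = 1. A minimal sketch follows; the 3-state matrix is invented for illustration and is not taken from the paper.

```python
import numpy as np

def arl_from_transition_matrix(R):
    """Expected run length from each in-control (transient) state.

    R is the sub-stochastic transition matrix restricted to the
    in-control states; the missing row mass is the probability of
    signalling. The ARL vector solves (I - R) v = 1."""
    n = R.shape[0]
    return np.linalg.solve(np.eye(n) - R, np.ones(n))

# Invented 3-state example (each row sums to less than 1).
R = np.array([[0.80, 0.10, 0.05],
              [0.10, 0.75, 0.10],
              [0.05, 0.10, 0.70]])
print(arl_from_transition_matrix(R))  # expected runs until signal, per start state
```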

Average run length calculation of the EWMA control chart using the first passage time of the Markov process

  • Park, Changsoon
    • The Korean Journal of Applied Statistics, v.30 no.1, pp.1-12, 2017
  • Many stochastic processes satisfy the Markov property exactly, or at least approximately. A property of interest in a Markov process is the first passage time. Since Wald's sequential analysis, approximations of the first passage time have been studied extensively, and statistical computing techniques made possible by high-speed computers now allow values of such properties to be calculated close to their true values. This article introduces the exponentially weighted moving average (EWMA) control chart as an example of a Markov process and studies how to calculate its average run length, pointing out issues that must be handled with caution for a correct calculation. The results derived here for approximating the first passage time apply to any Markov process. In particular, the approximation of a continuous-time Markov process by a discrete-time Markov chain is useful for studying the properties of stochastic processes and makes computational approaches easy.
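
For the EWMA chart specifically, the standard way to build such a transition matrix is to cut the in-control interval (-h, h) into m cells and treat each cell midpoint as a state, in the style of Brook and Evans. The sketch below assumes i.i.d. N(mu, sigma^2) observations, which is the textbook case rather than necessarily the paper's exact setting; the parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def ewma_arl(lam, h, m=101, mu=0.0, sigma=1.0):
    """Markov chain approximation of the EWMA average run length.

    The in-control region (-h, h) is cut into m intervals; each interval
    midpoint is one state. Z_t = (1 - lam) Z_{t-1} + lam X_t with
    X_t ~ N(mu, sigma^2), so Z_t | Z_{t-1}=s ~ N((1-lam)s + lam*mu, (lam*sigma)^2)."""
    w = 2.0 * h / m                      # width of one interval
    s = -h + w * (np.arange(m) + 0.5)    # state midpoints
    mean = (1.0 - lam) * s[:, None] + lam * mu
    # R[i, j] = P(next EWMA value falls in interval j | current state s[i])
    upper = norm.cdf((s[None, :] + w / 2 - mean) / (lam * sigma))
    lower = norm.cdf((s[None, :] - w / 2 - mean) / (lam * sigma))
    R = upper - lower
    arl = np.linalg.solve(np.eye(m) - R, np.ones(m))
    return arl[m // 2]                   # start the chart at zero

print(ewma_arl(lam=0.1, h=0.6))  # in-control ARL for these illustrative limits
```

Refining m trades computation for accuracy; the approximation converges to the continuous-state value as m grows.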

Parametric Sensitivity Analysis of Markov Process Based RAM Model

  • Kim, Yeong Seok; Hur, Jang Wook
    • Journal of the Korean Society of Systems Engineering, v.14 no.1, pp.44-51, 2018
  • The purpose of RAM analysis in weapon systems is to reduce life-cycle costs and to improve combat readiness by meeting RAM target values. Using a Markov process based model (MPS, Markov Process Simulation) developed for RAM analysis, we analyzed the sensitivity of the availability of the 81mm mortar to the RAM parameters (MTBF, MTTR, and ALDT). The time required for the model to reach steady state is about 15,000 hours, roughly two years, and availability is most sensitive to ALDT. To improve combat readiness, continuous improvement of ALDT is therefore needed.
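
The steady-state availability driving such a sensitivity analysis can be reproduced with a small continuous-time Markov chain. The three-state cycle and hour values below are assumptions for illustration only, not the paper's MPS model or its 81mm mortar data.

```python
import numpy as np

# Illustrative values in hours (not the paper's data).
MTBF, MTTR, ALDT = 1500.0, 10.0, 50.0

# States: 0 = operating, 1 = logistics delay, 2 = repair.
# A failure sends the system into a logistics delay, then into repair.
Q = np.array([
    [-1/MTBF,  1/MTBF,  0.0   ],
    [ 0.0,    -1/ALDT,  1/ALDT],
    [ 1/MTTR,  0.0,    -1/MTTR],
])

# Solve pi Q = 0 with sum(pi) = 1 by appending the normalization row.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state availability:", pi[0])
print("closed-form Ao:", MTBF / (MTBF + MTTR + ALDT))
```

For this simple cycle the chain reproduces the familiar operational availability formula Ao = MTBF / (MTBF + MTTR + ALDT), which also makes the dominant role of ALDT visible: at these numbers, cutting ALDT yields far more availability per hour saved than cutting MTTR.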

SOME LIMIT THEOREMS FOR POSITIVE RECURRENT AGE-DEPENDENT BRANCHING PROCESSES

  • Kang, Hye-Jeong
    • Journal of the Korean Mathematical Society, v.38 no.1, pp.25-35, 2001
  • In this paper we consider an age-dependent branching process whose particles move according to a Markov process with continuous state space. The Markov process is assumed to be stationary with independent increments and positive recurrent. We find some sufficient conditions on the Markov motion process such that the empirical distribution of the positions converges to the limiting distribution of the motion process.

Waiting Times in Polling Systems with Markov-Modulated Poisson Process Arrival

  • Kim, D. W.; Ryu, W.; Jun, K. P.; Park, B. U.; Bae, H. D.
    • Journal of the Korean Statistical Society, v.26 no.3, pp.355-363, 1997
  • In queueing theory, polling systems have been widely studied as a way of serving several stations in cyclic order. In this paper we consider the Markov-modulated Poisson process, which is useful for approximating a superposition of heterogeneous arrivals. We derive the mean waiting time of each station in a polling system whose arrival processes are modeled by a Markov-modulated Poisson process.
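
A small simulation makes the arrival model concrete: while a modulating continuous-time Markov chain with generator Q sits in phase z, arrivals occur as a Poisson process with rate lam[z]. The two-phase rates below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmpp_arrivals(Q, lam, t_end, z0=0):
    """Simulate arrival times of a Markov-modulated Poisson process."""
    t, z, arrivals = 0.0, z0, []
    n_phases = len(lam)
    while t < t_end:
        hold = rng.exponential(1.0 / -Q[z, z])       # sojourn in phase z
        t_next = min(t + hold, t_end)
        # Poisson arrivals during the sojourn, placed uniformly in time.
        n = rng.poisson(lam[z] * (t_next - t))
        arrivals.extend(np.sort(rng.uniform(t, t_next, n)))
        # Jump to a new phase with probability proportional to Q[z, j].
        others = np.delete(np.arange(n_phases), z)
        probs = np.delete(Q[z], z) / -Q[z, z]
        z = others[rng.choice(len(others), p=probs)]
        t = t_next
    return np.array(arrivals)

# Two-phase example: a quiet phase and a bursty phase (illustrative rates).
Q = np.array([[-0.1, 0.1], [0.5, -0.5]])
lam = np.array([1.0, 8.0])
print(len(mmpp_arrivals(Q, lam, t_end=100.0)), "arrivals in 100 time units")
```

The long-run arrival rate is the phase-stationary mixture of the lam values, which is what makes the MMPP a convenient stand-in for a superposition of heterogeneous sources.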

Stochastic convexity in Markov additive processes

  • Yoon, Bok-Sik
    • Proceedings of the Korean Operations and Management Science Society Conference, 1991.10a, pp.147-159, 1991
  • Stochastic convexity (concavity) of a stochastic process is a very useful concept for various stochastic optimization problems. In this study we first establish the stochastic convexity of a certain class of Markov additive processes through a probabilistic construction based on the sample path approach. A Markov additive process is obtained by integrating a functional of the underlying Markov process with respect to time, and its stochastic convexity can be utilized to provide efficient methods for the optimal design or optimal operation schedule of a wide range of stochastic systems. We also clarify the conditions for stochastic monotonicity of the Markov process, which is required for stochastic convexity of the Markov additive process. This result shows that stochastic convexity can be used for the analysis of probabilistic models based on birth-and-death processes, which have a very wide application area. Finally, we demonstrate the validity and usefulness of the theoretical results by developing efficient methods for optimal replacement scheduling based on the stochastic convexity property.
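
A Markov additive process of the kind described, A(T) = integral of f(X_s) ds over [0, T] for an underlying birth-and-death chain X, is straightforward to simulate by accumulating f over each exponential sojourn. The M/M/1-style rates and the functional below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def markov_additive_integral(birth, death, f, x0, t_end):
    """Simulate A(T) = integral_0^T f(X_s) ds for a birth-death CTMC X.

    The additive part grows at rate f(x) during each sojourn in state x."""
    x, t, acc = x0, 0.0, 0.0
    while t < t_end:
        b, d = birth(x), death(x)
        rate = b + d
        hold = rng.exponential(1.0 / rate) if rate > 0 else t_end - t
        dt = min(hold, t_end - t)
        acc += f(x) * dt                  # accumulate the functional
        t += dt
        if t < t_end:
            x += 1 if rng.random() < b / rate else -1
    return acc

# Illustrative M/M/1-type chain: arrivals at rate 0.8, services at rate 1.0;
# f(x) = x integrates the queue length over time.
birth = lambda x: 0.8
death = lambda x: 1.0 if x > 0 else 0.0
samples = [markov_additive_integral(birth, death, f=lambda x: x, x0=0, t_end=100.0)
           for _ in range(200)]
print("mean integrated queue length:", np.mean(samples))
```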

On The Mathematical Structure of Markov Process and Markovian Sequential Decision Process

  • Kim, Yu-Song
    • Journal of Korean Society for Quality Management, v.11 no.2, pp.2-9, 1983
  • This paper investigates the mathematical structure of the Markov process and the Markovian sequential decision process (the policy improvement iteration method), and analyzes the logic and behavioral characteristics of the mathematical model of the Markov process. First, in studying the mathematical structure of the Markov process, it distinguishes the forward and backward forms of the Chapman-Kolmogorov equation and of the Kolmogorov differential equations, and then surveys the logic of these equation systems, including the existence and uniqueness of their solutions. Second, for the Markovian sequential decision process, it treats the discrete-time and continuous-time parameter cases separately, and then explores the logical structure of the behavioral characteristics, the value-determination operation, and the policy-improvement routine.
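
The value-determination operation and policy-improvement routine named here are the two halves of Howard's policy iteration. A minimal discounted, discrete-time sketch follows; the transition matrices and rewards are made up for illustration.

```python
import numpy as np

def policy_iteration(P, r, gamma=0.95):
    """Howard's policy iteration for a finite MDP.

    P[a] is the transition matrix under action a; r[a] the reward vector.
    Alternates value determination with greedy policy improvement."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Value determination: solve (I - gamma * P_pi) v = r_pi.
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([r[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to v.
        q = np.array([r[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# Tiny 2-state, 2-action example with invented numbers.
P = [np.array([[0.9, 0.1], [0.4, 0.6]]),   # action 0
     np.array([[0.2, 0.8], [0.1, 0.9]])]   # action 1
r = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
print(policy_iteration(P, r))
```

Each improvement step can only raise the value vector, and with finitely many policies the loop terminates at an optimal stationary policy, which is the logic the abstract's "policy improvement routine" refers to.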

ANALYZING THE DURATION OF SUCCESS AND FAILURE IN MARKOV-MODULATED BERNOULLI PROCESSES

  • Kim, Yoora
    • Journal of the Korean Mathematical Society, v.61 no.4, pp.693-711, 2024
  • A Markov-modulated Bernoulli process is a generalization of a Bernoulli process in which the success probability evolves over time according to a Markov chain. It has been widely applied in various disciplines for the modeling and analysis of systems in random environments. This paper focuses on providing analytical characterizations of the Markov-modulated Bernoulli process by introducing key metrics, including the success period, failure period, and cycle. We derive expressions for the distributions and moments of these metrics in terms of the model parameters.
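
The metrics are easy to estimate by simulation, which also makes the definitions concrete: a success period is a maximal run of successes while the success probability p[z] is driven by the modulating chain. The two-state parameters below are invented, and this sketch estimates by Monte Carlo what the paper derives analytically.

```python
import numpy as np

rng = np.random.default_rng(2)

def success_period_lengths(P, p, n_steps, z0=0):
    """Simulate a Markov-modulated Bernoulli process and collect the
    lengths of maximal runs of successes (success periods).

    p[z] is the success probability while the modulating chain, with
    transition matrix P, is in state z."""
    z, run, runs = z0, 0, []
    for _ in range(n_steps):
        if rng.random() < p[z]:           # Bernoulli trial at current p[z]
            run += 1
        elif run > 0:                     # a failure ends the success period
            runs.append(run)
            run = 0
        z = rng.choice(len(p), p=P[z])    # modulating chain moves on
    return runs

# Two-state environment: a good state and a bad state (illustrative values).
P = np.array([[0.95, 0.05], [0.10, 0.90]])
p = np.array([0.9, 0.2])
runs = success_period_lengths(P, p, n_steps=100_000)
print("mean success period:", np.mean(runs))
```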

An Improved Reinforcement Learning Technique for Mission Completion

  • Kwon, Woo-Young; Lee, Sang-Hoon; Suh, Il-Hong
    • The Transactions of the Korean Institute of Electrical Engineers D, v.52 no.9, pp.533-539, 2003
  • Reinforcement learning (RL) has been widely used as a learning mechanism for artificial life systems. However, RL usually suffers from slow convergence to the optimal state-action sequence, that is, a sequence of stimulus-response (SR) behaviors, and may not work correctly in non-Markov processes. In this paper, first, to cope with the slow-convergence problem, state-action pairs that act as disturbances to the optimal sequence are eliminated from long-term memory (LTM), where such disturbances are found by a shortest-path-finding algorithm. This process is shown to give the system an enhanced learning speed. Second, to partly solve the non-Markov problem, a stimulus that is frequently met during the search process is classified as a sequential percept for a non-Markov hidden state, so that a correct behavior for the hidden state can be learned as in a Markov environment. To show the validity of the proposed learning techniques, several simulation results are presented.
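
For reference, the baseline such work builds on is tabular learning of state-action values. The sketch below is plain Q-learning on an invented toy corridor task; it is not the authors' LTM/shortest-path method, only the standard mechanism whose slow convergence they set out to improve.

```python
import numpy as np

rng = np.random.default_rng(3)

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.95, eps=0.1, max_steps=200):
    """Plain tabular Q-learning baseline; step(s, a) is the environment
    and returns (next_state, reward, done). Ties are broken randomly so
    the agent explores before any reward has been seen."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done, t = 0, False, 0
        while not done and t < max_steps:
            explore = rng.random() < eps or Q[s].max() == Q[s].min()
            a = rng.integers(n_actions) if explore else int(Q[s].argmax())
            s2, reward, done = step(s, a)
            target = reward + gamma * Q[s2].max() * (not done)
            Q[s, a] += alpha * (target - Q[s, a])   # temporal-difference update
            s, t = s2, t + 1
    return Q

# A 6-state corridor as a toy task: action 1 moves right, action 0 left;
# the only reward sits at the right end.
def step(s, a):
    s2 = min(s + 1, 5) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == 5), s2 == 5

Q = q_learning(6, 2, step)
print(Q.argmax(axis=1))  # greedy policy per state; 1 (move right) where it matters
```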