• Title/Summary/Keyword: Markov Process


LIMITING PROPERTIES FOR A MARKOV PROCESS GENERATED BY NONDECREASING CONCAVE FUNCTIONS ON $R_{n}^{+}$

  • Lee, Oe-Sook
    • Communications of the Korean Mathematical Society
    • /
    • v.9 no.3
    • /
    • pp.701-710
    • /
    • 1994
  • Suppose $\{X_n\}$ is a Markov process taking values in some arbitrary space $(S, \varphi)$ with $n$-step transition probability $$P^{(n)}(x, B) = \mathrm{Prob}(X_n \in B \mid X_0 = x), \quad x \in S,\ B \in \varphi.$$ We call a Markov process with transition probabilities $P^{(n)}(x, B)$ $\phi$-irreducible for some non-trivial $\sigma$-finite measure $\phi$ on $\varphi$ if, whenever $\phi(B) > 0$, $$\sum^{\infty}_{n=1} 2^{-n} P^{(n)}(x, B) > 0 \quad \text{for every } x \in S.$$ A non-trivial $\sigma$-finite measure $\pi$ on $\varphi$ is called invariant for $\{X_n\}$ if $$\int P(x, B)\,\pi(dx) = \pi(B), \quad B \in \varphi.$$ (A finite-state sketch of these two conditions follows this entry.)

  • PDF
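
For a finite state space, the two conditions above become matrix statements: $\pi$ is invariant when $\pi P = \pi$, and irreducibility amounts to positivity of $\sum_{n} 2^{-n} P^{(n)}(x, B)$. A minimal numerical sketch, with an invented 3-state matrix that is not from the paper:

```python
import numpy as np

# Hypothetical 3-state transition matrix P (each row sums to 1); the paper
# works on a general state space, so this is only a finite-state analogue.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
])

# An invariant measure pi satisfies pi P = pi: pi is a left eigenvector of P
# for eigenvalue 1, normalized to a probability vector.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()
print("invariant pi:", pi)
print("pi P        :", pi @ P)   # matches pi up to rounding

# Finite-state stand-in for phi-irreducibility: sum_n 2^{-n} P^(n) > 0,
# here checked entrywise over all starting states and target states.
K = sum(2.0 ** -n * np.linalg.matrix_power(P, n) for n in range(1, 30))
print("all entries positive:", bool(np.all(K > 0)))
```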

Economic Adjustment Design For $\bar{X}$ Control Chart: A Markov Chain Approach

  • Yang, Su-Fen
    • International Journal of Quality Innovation
    • /
    • v.2 no.2
    • /
    • pp.136-144
    • /
    • 2001
  • The Markov chain approach is used to develop an economic adjustment model of a process whose quality can be affected by a single special cause, resulting in changes of the process mean through incorrect adjustment of the process while it is operating according to its capability. The $\bar{X}$ control chart is used to signal the special cause. It is demonstrated that the expressions for the expected cycle time and the expected cycle cost are easier to obtain by the proposed approach than by that of Collani, Saniga and Weigang (1994). Furthermore, the approach is easily extended to derive the expected cycle cost and cycle time for the case of multiple special causes or multiple control charts. A numerical example illustrates the proposed method and its application (a toy expected-cycle-time sketch follows this entry).

  • PDF
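
As a rough illustration of why expected cycle quantities are convenient to obtain from a Markov chain formulation, the sketch below uses an absorbing chain with invented transition probabilities and costs; the fundamental matrix $N = (I - Q)^{-1}$ gives the expected time and cost until the chart signals. This is only the standard mechanics, not the paper's model.

```python
import numpy as np

# Invented per-sampling-period transition matrix over three states:
# 0 = in control, 1 = out of control (special cause present, not yet signalled),
# 2 = chart signals (treated as absorbing within one cycle).
P = np.array([
    [0.97, 0.02, 0.01],
    [0.00, 0.80, 0.20],
    [0.00, 0.00, 1.00],
])

Q = P[:2, :2]                          # transitions among transient states
N = np.linalg.inv(np.eye(2) - Q)       # fundamental matrix: expected visits

expected_cycle_time = N.sum(axis=1)[0]      # periods until a signal, from state 0
cost_per_period = np.array([1.0, 5.0])      # assumed in-control / out-of-control cost
expected_cycle_cost = (N @ cost_per_period)[0]

print("expected cycle time (periods):", expected_cycle_time)
print("expected cycle cost          :", expected_cycle_cost)
```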

A computation method of reliability for preprocessing filters in the fire control system using Markov process and state transition probability matrix (Markov process 및 상태천이확률 행렬 계산을 통한 사격통제장치 전처리필터 신뢰성 산출 기법)

  • Kim, Jae-Hun;Lyou, Joon
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.2 no.2
    • /
    • pp.131-139
    • /
    • 1999
  • An easy and efficient method is proposed for computing the reliability of preprocessing filters in the fire control system when the sensor data are frequently unreliable, depending on the operating environment. The filter states are modeled as a Markov process, the false-alarm and detection probabilities of each filter state are computed under the given sensor failure probability, and the state transition probability matrix is then constructed. It is shown that two important indices, the distributed state probability and the error variance, can be derived easily for a reliability assessment of the given sensor fusion system (a toy propagation of these indices follows this entry).

  • PDF
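
A hedged sketch of the two indices: the filter-state chain, its transition probabilities, and the per-state variances below are invented, not the paper's model. The idea is simply to propagate the state probabilities through the transition matrix and weight per-state error variances.

```python
import numpy as np

# Invented filter states: 0 = valid data accepted, 1 = faulty data rejected
# (correct detection), 2 = faulty data accepted (missed detection).
P = np.array([
    [0.95, 0.04, 0.01],
    [0.30, 0.63, 0.07],   # a detected fault may persist for a while
    [0.30, 0.07, 0.63],   # a missed fault may also persist undetected
])

# "Distributed state probability": the state distribution after k steps.
p = np.array([1.0, 0.0, 0.0])        # start with valid sensor data
for _ in range(50):
    p = p @ P
print("distributed state probability:", np.round(p, 4))

# Error variance as a probability-weighted mix of per-state variances (assumed).
state_var = np.array([1.0, 1.0, 25.0])   # missed faults inflate the filter error
print("error variance:", float(p @ state_var))
```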

Bayesian Texture Segmentation Using Multi-layer Perceptron and Markov Random Field Model (다층 퍼셉트론과 마코프 랜덤 필드 모델을 이용한 베이지안 결 분할)

  • Kim, Tae-Hyung;Eom, Il-Kyu;Kim, Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.1
    • /
    • pp.40-48
    • /
    • 2007
  • This paper presents a novel texture segmentation method using multilayer perceptron (MLP) networks and Markov random fields in a multiscale Bayesian framework. Multiscale wavelet coefficients are used as input for the neural networks, and the output of each network is modeled as a posterior probability. Texture classification at each scale is performed from the posterior probabilities of the MLP networks by MAP (maximum a posteriori) classification. Then, to obtain an improved segmentation result at the finest scale, the proposed method fuses the multiscale MAP classifications sequentially from coarse to fine scales. This is done by computing the MAP classification given the classification at one scale and a priori contextual information extracted from the adjacent coarser-scale classification. In this fusion process, an MRF (Markov random field) prior distribution and a Gibbs sampler are used, where the MRF model serves as the smoothness constraint and the Gibbs sampler acts as the MAP classifier (a toy Gibbs-smoothing sketch follows this entry). The proposed segmentation method shows better performance than texture segmentation using the HMT (hidden Markov tree) model and HMTseg.
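
A toy stand-in for the fusion step, not the authors' implementation: given per-pixel class posteriors standing in for the MLP output, a few Gibbs sweeps under a Potts-type MRF prior act as a smoothing MAP classifier. The image size, the smoothness strength β, and the synthetic posteriors are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 32, 32, 2          # toy image size and number of texture classes
beta = 1.0                   # assumed Potts smoothness strength

# Stand-in for MLP posteriors: a noisy preference for class 0 on the left half
# of the image and class 1 on the right half.
post = np.full((H, W, K), 0.5)
post[:, : W // 2, 0] += 0.2
post[:, W // 2 :, 1] += 0.2
post += rng.uniform(0.0, 0.1, post.shape)
post /= post.sum(axis=2, keepdims=True)

labels = post.argmax(axis=2)          # initial labels, no spatial smoothing

def neighbors(i, j):
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < H and 0 <= nj < W:
            yield ni, nj

# Gibbs sweeps: resample each label from P(label | posterior, neighbours),
# combining the network posterior with the Potts (MRF) smoothness prior.
for _ in range(5):
    for i in range(H):
        for j in range(W):
            energy = np.array([
                -np.log(post[i, j, k])
                - beta * sum(labels[ni, nj] == k for ni, nj in neighbors(i, j))
                for k in range(K)
            ])
            prob = np.exp(-(energy - energy.min()))
            labels[i, j] = rng.choice(K, p=prob / prob.sum())

print("mean label, left half :", labels[:, : W // 2].mean())
print("mean label, right half:", labels[:, W // 2 :].mean())
```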

A Combined Process Control Procedure by Monitoring and Repeated Adjustment

  • Park, Changsoon
    • Communications for Statistical Applications and Methods
    • /
    • v.7 no.3
    • /
    • pp.773-788
    • /
    • 2000
  • Statistical process control (SPC) and engineering process control (EPC) are based on different strategies for process quality improvement. SPC reduces process variability by detecting and eliminating special causes of process variation, while EPC reduces process variability by adjusting compensatory variables to keep the quality variable close to target. Recently there has been a need for a process control procedure which combines the two strategies. This paper considers a combined scheme which simultaneously applies SPC and EPC techniques to reduce the variation of a process. The process model under consideration is an integrated moving average (IMA) process with a step shift. The EPC part of the scheme adjusts the process back to target at fixed monitoring intervals, which is referred to as a repeated adjustment scheme. The SPC part of the scheme uses an exponentially weighted moving average (EWMA) of the observed deviations from target to detect special causes (a toy monitoring-and-adjustment sketch follows this entry). A Markov chain model is developed to relate the scheme's expected cost per unit time to the design parameters of the combined control scheme. The expected cost per unit time is composed of off-target cost, adjustment cost, monitoring cost, and false alarm cost.

  • PDF
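
Purely to illustrate the monitoring/adjustment split, the sketch below runs an EWMA of the deviations (SPC part) over a crudely simulated drifting disturbance with a step shift, and re-centres the process at fixed intervals (EPC part). The weight, signal limit, interval, and shift size are invented, and the disturbance is only a rough stand-in for the IMA model.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, limit, interval = 0.2, 0.8, 10   # assumed EWMA weight, signal limit, interval
target = 0.0

# Simulated drifting deviations from target with a step shift at t = 60
# (a crude stand-in for the IMA disturbance plus a special cause).
dev = 0.05 * rng.normal(0.0, 1.0, 120).cumsum()
dev[60:] += 1.5

z, signal_at = 0.0, None
for t in range(len(dev)):
    d = dev[t]
    z = lam * (d - target) + (1 - lam) * z        # SPC part: EWMA of deviations
    if signal_at is None and abs(z) > limit:
        signal_at = t                             # special-cause signal
    if (t + 1) % interval == 0:                   # EPC part: repeated adjustment
        dev[t + 1:] -= d - target                 # pull the process back to target

print("first EWMA signal at period:", signal_at)
```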

Application Markov State Model for the RCM of Combustion turbine Generating Unit (Markov State Model을 이용한 복합화력 발전설비의 최적의 유지보수계획 수립)

  • Shin, Jun-Seok;Lee, Seung-Hyuk;Kim, Jin-O
    • Proceedings of the KIEE Conference
    • /
    • 2006.11a
    • /
    • pp.357-359
    • /
    • 2006
  • Traditional time-based preventive maintenance uses a constant maintenance interval over the equipment life. To take economic aspects into account, preventive maintenance is instead scheduled by RCM (Reliability-Centered Maintenance) evaluation, and a Markov state model is utilized to capture the stochastic nature of the equipment condition within RCM. In this paper, a Markov state model that can be used for the scheduling and optimization of maintenance is presented. The deterioration process of the system condition is modeled by a Markov model (an invented deterioration-chain sketch follows this entry). In the case study, the RCM simulation results are applied to the real historical data of combustion turbine generating units in Korean power systems.

  • PDF
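
A hedged sketch of the deterioration-chain idea; the states, transition probabilities, and costs below are assumptions, not plant data. The sketch propagates the condition-state probabilities per maintenance interval and tracks the expected per-interval cost.

```python
import numpy as np

# Invented condition states: 0 = good, 1 = minor deterioration,
# 2 = major deterioration, 3 = failed. Transitions back to state 0 stand in
# for maintenance and repair actions.
P = np.array([
    [0.92, 0.06, 0.02, 0.00],
    [0.10, 0.80, 0.08, 0.02],   # minor wear: occasionally maintained back to good
    [0.30, 0.00, 0.60, 0.10],   # major wear: maintained more aggressively
    [0.90, 0.00, 0.00, 0.10],   # failure: repaired back to good
])
cost = np.array([0.0, 1.0, 5.0, 50.0])   # assumed cost per interval in each state

p0 = np.array([1.0, 0.0, 0.0, 0.0])      # start in the good state
for t in (1, 5, 10, 20, 40):
    p = p0 @ np.linalg.matrix_power(P, t)
    print(f"t={t:2d}  state probs={np.round(p, 3)}  expected cost={p @ cost:.2f}")
```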

A Reliability Model of a Digital Switching System Using the Markov Process (Markov 과정을 이용한 디지탈 교환기의 신뢰도 모형)

  • Sin, Seong-Mun;Choe, Tae-Gu;Lee, Dae-Gi
    • ETRI Journal
    • /
    • v.5 no.2
    • /
    • pp.3-8
    • /
    • 1983
  • This paper derives a Markov model to calculate the reliability of the Digital Switching System being developed by KETRI. Using the failure states extracted from the system in the course of the modelling, we calculated the reliability of both the service grade and the function of the system. In particular, by including the repair rate in the model, we took full advantage of the Markov process and resolved the computational difficulties by reducing the number of states of the system (a two-state availability sketch follows this entry).

  • PDF
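
To show where a repair rate enters such a model, here is the classic two-state (up/down) Markov availability calculation; the failure rate λ and repair rate μ are assumptions, not the KETRI system's values.

```python
import numpy as np

lam, mu = 0.001, 0.1        # assumed failure and repair rates (per hour)

# Continuous-time generator for states 0 = up, 1 = down.
Q = np.array([
    [-lam,  lam],
    [  mu,  -mu],
])

# Steady-state availability: solve pi Q = 0 with pi summing to 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady-state availability:", pi[0])        # equals mu / (lam + mu)

# Transient availability A(t) via a truncated matrix-exponential series.
def expm(M, terms=60):
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

for t in (1.0, 10.0, 100.0):
    print(f"A({t:g}) =", expm(Q * t)[0, 0])
```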

A study of guiding probability applied markov-chain (Markov 연쇄를 적용한 확률지도연구)

  • Lee Tae-Gyu
    • The Mathematical Education
    • /
    • v.25 no.1
    • /
    • pp.1-8
    • /
    • 1986
  • It is commonly said that the Markov chain is a special case of a stochastic process; that is, it is a time-homogeneous Markov process whose transition probabilities are given in discrete time. There are two ways to present the transition probabilities in matrix theory: the first arranges them in a rectangular array, the transition probability matrix; the second draws them as a transition circle (diagram) with directed, labelled arrows. In this essay, I examine the transition probability matrix of a Markov chain, both as the basis for the study of chains and as a tool applicable to problems involving flow changes and statistical facts, such as a model of air expansion in physics (the matrix-power sketch after this entry shows the first representation).

  • PDF
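
Both representations encode the same transition matrix; n-step transition probabilities then follow from matrix powers, as in this toy two-state sketch (the numbers are invented):

```python
import numpy as np

# A two-state chain written as a rectangular array (the transition matrix);
# the same information can be drawn as a transition diagram with two
# labelled arrows out of each state.
P = np.array([
    [0.7, 0.3],
    [0.4, 0.6],
])

# n-step transition probabilities are the entries of P^n.
for n in (1, 2, 5, 20):
    print(f"P^{n} =\n{np.linalg.matrix_power(P, n)}\n")
# For large n the rows converge to the same vector: the limiting distribution.
```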

An Efficient Simulation of Discrete Time Queueing Systems with Markov-modulated Arrival Processes (MMAP 이산시간 큐잉 시스템의 속산 시뮬레이션)

  • Kook Kwang-Ho;Kang Sungyeol
    • Journal of the Korea Society for Simulation
    • /
    • v.13 no.3
    • /
    • pp.1-10
    • /
    • 2004
  • The cell loss probability required in the ATM network is in the range of $10^{-9}$ to $10^{-12}$. If Monte Carlo simulation is used to analyze the performance of an ATM node, an enormous amount of computer time is required. To obtain large speed-up factors, importance sampling may be used. Since Markov-modulated processes have been used to model various high-speed network traffic sources, we consider discrete-time single-server queueing systems with Markov-modulated arrival processes, which can be used to model an ATM node. We apply importance sampling based on the Large Deviation Theory to the performance evaluation of MMBP/D/1/K, ∑MMBP/D/1/K, and two-stage tandem queueing networks with Markov-modulated arrival processes and deterministic service times. The simulation results show that the buffer overflow probabilities obtained by importance sampling are very close to those obtained by Monte Carlo simulation, while the computer time is reduced drastically (a much-simplified tilting sketch follows this entry).

  • PDF
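
The sketch below is a much-simplified stand-in for the queueing problem, not the paper's MMBP model: it estimates a rare tail probability by exponential tilting (the kind of change of measure suggested by large deviation theory), corrects with the likelihood ratio, and compares with plain Monte Carlo, which almost never observes the event at this sample size.

```python
import numpy as np

rng = np.random.default_rng(2)

# Rare-event toy problem: probability that a Binomial(n, p) count exceeds a
# high threshold. All parameters are invented for illustration.
n, p, threshold = 200, 0.1, 50
samples = 20000

# Tilted parameter chosen so the tilted mean sits at the threshold.
p_star = threshold / n

def is_estimate():
    x = rng.binomial(1, p_star, size=(samples, n))
    counts = x.sum(axis=1)
    # Likelihood ratio of the original measure w.r.t. the tilted one.
    lr = (p / p_star) ** counts * ((1 - p) / (1 - p_star)) ** (n - counts)
    return np.mean((counts >= threshold) * lr)

def mc_estimate():
    counts = rng.binomial(n, p, size=samples)
    return np.mean(counts >= threshold)

print("importance sampling:", is_estimate())
print("plain Monte Carlo  :", mc_estimate())   # almost always 0 at this size
```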

Application of Markov Chains and Monte Carlo Simulations for Pavement Construction Engineering

  • Nega, Ainalem;Gedafa, Daba
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1043-1050
    • /
    • 2022
  • Markov chains and Monte Carlo simulation were applied to account for the probabilistic nature of pavement deterioration over time using data collected in the field. The primary purpose of this study was to evaluate the pavement network performance of Western Australia (WA) by applying the existing pavement management tools relevant to WA road construction networks. Two approaches were used to analyze the pavement networks: evaluating current pavement performance data to assess WA State Road networks, and predicting future states using past and current pavement data. The Markov chain process and Monte Carlo simulation methods were used to predict future conditions (a toy deterioration simulation follows this entry). The results indicated that the Markov chain and Monte Carlo simulation prediction models perform well compared to pavement performance data from the last four decades. The results also revealed the impact of design, traffic demand, climate, and construction standards on urban pavement performance. This study recommends an appropriate and effective pavement engineering management system for proper pavement design and analysis, preliminary planning, future pavement maintenance and rehabilitation, service life, and sustainable pavement construction functionality.

  • PDF
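
As a hedged illustration of combining the two tools, the sketch below pairs an invented condition-state transition matrix (not the WA network data) with a Monte Carlo simulation of section trajectories, then compares the result with the analytical Markov-chain prediction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented pavement condition states: 0 = excellent ... 4 = failed.
# Yearly deterioration probabilities; no rehabilitation in this sketch.
P = np.array([
    [0.85, 0.13, 0.02, 0.00, 0.00],
    [0.00, 0.80, 0.16, 0.04, 0.00],
    [0.00, 0.00, 0.75, 0.20, 0.05],
    [0.00, 0.00, 0.00, 0.70, 0.30],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

years, sections = 30, 10000
state = np.zeros(sections, dtype=int)          # all sections start as excellent
for _ in range(years):
    # Monte Carlo step: each section moves according to its row of P.
    u = rng.random(sections)
    cdf = P[state].cumsum(axis=1)
    state = (u[:, None] > cdf).sum(axis=1)

# Compare with the analytical Markov-chain prediction p0 P^years.
analytic = np.linalg.matrix_power(P, years)[0]
simulated = np.bincount(state, minlength=5) / sections
print("analytic  distribution after 30 years:", np.round(analytic, 3))
print("simulated distribution after 30 years:", np.round(simulated, 3))
```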