• Title/Summary/Keyword: Markov chain


A Markov Chain Representation of Statistical Process Monitoring Procedure under an ARIMA(0,1,1) Model (ARIMA(0,1,1)모형에서 통계적 공정탐색절차의 MARKOV연쇄 표현)

  • Park, Chang-Soon (박창순)
    • The Korean Journal of Applied Statistics
    • /
    • v.16 no.1
    • /
    • pp.71-85
    • /
    • 2003
  • In the economic design of process control procedures, where quality is measured at certain time intervals, properties are difficult to derive because of the discreteness of the measurement intervals. In this paper a Markov chain representation of the process monitoring procedure is developed and used to derive its properties when the process follows an ARIMA(0,1,1) model, which is designed to describe the effect of the noise and of the special cause in the process cycle. The properties of the Markov chain depend on the transition matrix, which is determined by the control procedure and the process distribution. The derived representation can be adapted to many different types of control procedures and process distributions by obtaining the corresponding transition matrix.
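
As an illustration of how such a transition matrix yields run-length properties, the sketch below computes the average run length (ARL) of a monitoring scheme from the transient part of a transition matrix. The states and probabilities are invented for illustration and are not the paper's actual control procedure.

```python
import numpy as np

# Hypothetical 2-region discretization of a control statistic: states 0-1
# are in-control regions; the out-of-control signal is an absorbing state.
# Q holds transition probabilities among the transient (non-signal) states.
Q = np.array([[0.90, 0.07],
              [0.20, 0.70]])

# Expected number of steps until a signal (the average run length, ARL),
# starting from each transient state: solve (I - Q) m = 1.
m = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(m[0])  # ARL when monitoring starts in state 0
```

The same linear system works for any monitoring procedure once its transition matrix has been written down, which is the adaptability the abstract refers to.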

A Study of Image Target Tracking Using ITS in an Occluding Environment (표적이 일시적으로 가려지는 환경에서 ITS 기법을 이용한 영상 표적 추적 알고리듬 연구)

  • Kim, Yong;Song, Taek-Lyul
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.4
    • /
    • pp.306-314
    • /
    • 2013
  • Automatic tracking in a cluttered environment requires the initiation and maintenance of tracks, and the existence probability of a true track is maintained by the Markov Chain Two model of target-existence propagation. Unlike the Markov Chain One model, the Markov Chain Two model comprises three hypotheses about the target-existence event: the target exists and is detectable; the target exists but is non-detectable because of occlusion; and the target does not exist and is therefore non-detectable. In this paper we present a multi-scan single-target tracking algorithm based on target existence, called the Integrated Track Splitting (ITS) algorithm with the Markov Chain Two model, for an imaging sensor.
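
The three-hypothesis propagation step can be sketched as one matrix-vector product; the transition probabilities below are invented for illustration, not the paper's values.

```python
import numpy as np

# Sketch of a "Markov Chain Two" style target-existence prediction step.
# Three hypotheses: 0 = exists & detectable, 1 = exists & occluded,
# 2 = does not exist.  Pi is an assumed row-stochastic transition matrix.
Pi = np.array([[0.90, 0.08, 0.02],
               [0.30, 0.65, 0.05],
               [0.00, 0.00, 1.00]])

p = np.array([0.8, 0.1, 0.1])   # prior hypothesis probabilities
p = p @ Pi                       # one-scan prediction step
print(p, p.sum())                # probabilities still sum to 1
```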

A study of guiding probability applied markov-chain (Markov 연쇄를 적용한 확률지도연구)

  • Lee Tae-Gyu
    • The Mathematical Education
    • /
    • v.25 no.1
    • /
    • pp.1-8
    • /
    • 1986
  • It is commonly said that a Markov chain is a special case of a stochastic process; that is, it is a process in which the transition probabilities over discrete time steps remain unchanged. There are two ways to present the transition probabilities in matrix theory: the first is an arrangement as a rectangular array (the transition matrix); the second is a transition diagram, in which states are drawn as circles and transitions as directed arcs labeled with their probabilities. In this essay I investigate the existence of the transition-probability matrix for an applied Markov chain. This is not only basic to the study of chains, but can also be applied to irregular problems involving changing flows and statistical facts, and is expected to be usable, for example, as a model of air expansion in physics.

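
The two representations mentioned above, the rectangular array and the transition diagram, encode the same information; the sketch below uses an invented two-state chain and obtains n-step probabilities from matrix powers.

```python
import numpy as np

# A hypothetical two-state chain: state 0 = dry, state 1 = wet.
# P is the rectangular-array form; the transition diagram would show
# two circles with directed arcs carrying these same probabilities.
P = np.array([[0.8, 0.2],
              [0.4, 0.6]])

# n-step transition probabilities come from matrix powers: P^n.
P2 = np.linalg.matrix_power(P, 2)
print(P2[0, 1])  # probability dry -> wet in exactly two steps
```
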
Sensitivity of Conditions for Lumping Finite Markov Chains

  • Suh, Moon-Taek
    • Journal of the military operations research society of Korea
    • /
    • v.11 no.1
    • /
    • pp.111-129
    • /
    • 1985
  • Markov chains with large transition probability matrices occur in many applications such as manpower models. Under certain conditions the state space of a stationary discrete parameter finite Markov chain may be partitioned into subsets, each of which may be treated as a single state of a smaller chain that retains the Markov property. Such a chain is said to be 'lumpable' and the resulting lumped chain is a special case of more general functions of Markov chains. There are several reasons why one might wish to lump. First, there may be analytical benefits, including relative simplicity of the reduced model and development of a new model which inherits known or assumed strong properties of the original model (the Markov property). Second, there may be statistical benefits, such as increased robustness of the smaller chain as well as improved estimates of transition probabilities. Finally, the identification of lumps may provide new insights about the process under investigation.

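
The lumpability condition the abstract refers to can be checked mechanically; the sketch below implements the standard Kemeny-Snell criterion on an invented 3-state chain (the states and partition are illustrative, not from the paper).

```python
import numpy as np

def is_lumpable(P, partition):
    """Kemeny-Snell condition: a chain is lumpable w.r.t. a partition iff,
    for every pair of blocks (A, B), the total probability of moving from
    a state in A into block B is the same for every state in A."""
    for A in partition:
        for B in partition:
            sums = [P[i, B].sum() for i in A]
            if not np.allclose(sums, sums[0]):
                return False
    return True

# Hypothetical 3-state chain, lumpable by merging states 1 and 2:
# rows 1 and 2 both send probability 0.1 back to state 0.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.1, 0.2, 0.7]])
print(is_lumpable(P, [[0], [1, 2]]))  # True
print(is_lumpable(P, [[0, 1], [2]]))  # False: rows 0, 1 differ into block {2}
```
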
Study on Demand Estimation of Agricultural Machinery by Using Logistic Curve Function and Markov Chain Model (로지스틱함수법 및 Markov 전이모형법을 이용한 농업기계의 수요예측에 관한 연구)

  • Yun Y. D.
    • Journal of Biosystems Engineering
    • /
    • v.29 no.5 s.106
    • /
    • pp.441-450
    • /
    • 2004
  • This study was performed to estimate mid- and long-term demands for a tractor, a rice transplanter, a combine and a grain dryer by using the logistic curve function and a Markov chain model. A field survey was done to decide some parameters for the logistic curve function and the Markov chain model. Ceiling values of tractor and combine for the logistic curve function analysis were 209,280 and 85,607 respectively. Based on the logistic curve function analysis, the total number of tractors increased slightly during the period analysed. New demand for combines was found to be zero. The Markov chain analysis was carried out with 2 scenarios. With scenario 1 (rice price 10% down and current government support policy), new demand for tractors decreased gradually to 700 units in the year 2012. For combines, new demand was zero. Regardless of scenario, the replacement demand increased slightly after 2003 and then decreased after a certain time. The two analyses, logistic curve function and Markov chain model, showed similar trends of increase and decrease in the total numbers of tractors and combines. However, the difference in the numbers of tractors and combines between the results of the two analyses grew as time passed.
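
The logistic-curve side of such a demand projection can be sketched in a few lines. The ceiling value K below is the tractor ceiling quoted in the abstract, but the growth parameters a and b are invented for illustration, not the paper's fitted estimates.

```python
import numpy as np

# Logistic stock projection: stock(t) = K / (1 + a * exp(-b * t)).
K, a, b = 209_280, 0.5, 0.15   # K from the abstract; a, b hypothetical

def stock(t):
    return K / (1.0 + a * np.exp(-b * t))

# New demand in year t is the year-on-year increase in the stock;
# past the curve's inflection point these increments shrink each year.
years = np.arange(0, 10)
new_demand = np.diff(stock(years))
print(new_demand.round(0))
```
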

THE QUEUE LENGTH DISTRIBUTION OF PHASE TYPE

  • Lim, Jong-Seul;Ahn, Seong-Joon
    • Journal of applied mathematics & informatics
    • /
    • v.24 no.1_2
    • /
    • pp.505-511
    • /
    • 2007
  • In this paper, we examine the Markov chain $\{(X_k,\;N_k);\;k=0,\;1,\;\ldots\}$. We show that the marginal steady-state distribution of $X_k$ is discrete phase type. The implication of this result is that the queue length distribution is of phase type for a large number of examples in which this Markov chain is applicable; we also show a queueing application by matrix-geometric methods.
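
A discrete phase-type distribution is the law of the absorption time of a finite Markov chain with transient sub-matrix T and initial vector alpha; the sketch below evaluates its pmf for illustrative parameters (not taken from the paper).

```python
import numpy as np

# DPH(alpha, T): P(absorption at step k) = alpha T^(k-1) t, k >= 1,
# where t collects each transient state's one-step absorption probability.
alpha = np.array([0.6, 0.4])        # initial distribution over transient states
T = np.array([[0.3, 0.2],
              [0.1, 0.5]])          # transient-to-transient transitions
t = 1.0 - T.sum(axis=1)             # absorption probability from each state

def pmf(k):
    return alpha @ np.linalg.matrix_power(T, k - 1) @ t

total = sum(pmf(k) for k in range(1, 200))
print(pmf(1), total)  # total is ~1 since absorption is certain
```
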

On Weak Convergence of Some Rescaled Transition Probabilities of a Higher Order Stationary Markov Chain

  • Yun, Seok-Hoon
    • Journal of the Korean Statistical Society
    • /
    • v.25 no.3
    • /
    • pp.313-336
    • /
    • 1996
  • In this paper we consider weak convergence of some rescaled transition probabilities of a real-valued, k-th order (k $\geq$ 1) stationary Markov chain. Under the assumption that the joint distribution of k + 1 consecutive variables belongs to the domain of attraction of a multivariate extreme value distribution, the paper gives a sufficient condition for the weak convergence and characterizes the limiting distribution via the multivariate extreme value distribution.

Prediction of Mobile Phone Menu Selection with Markov Chains (Markov Chain을 이용한 핸드폰 메뉴 선택 예측)

  • Lee, Suk Won;Myung, Rohae
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.33 no.4
    • /
    • pp.402-409
    • /
    • 2007
  • Markov chains have proven to be effective in predicting human behavior in the areas of web site access, multimedia educational systems, and driving environments. In order to extend the application area of predicting human behavior using Markov chains, this study investigated whether Markov chains could be used to predict human behavior in selecting mobile phone menu items. Compared to the aforementioned application areas, this study differs in its use of Markov chains: an m-order 1-step Markov model and the concept of the Power Law of Learning. The results showed that human behavior in mobile phone menu selection was well fitted by the m-order 1-step Markov model with Power Law of Learning weights on the history path vector. In other words, prediction with Markov chains was capable of matching the user's actual menu selections.
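
The paper's m-order model with learning-weighted history is not reproduced here; the sketch below shows only the underlying transition-count idea, with a first-order chain and invented menu paths.

```python
from collections import Counter, defaultdict

# Hypothetical observed navigation paths through a phone menu.
paths = [["Menu", "Messages", "Inbox"],
         ["Menu", "Messages", "Write"],
         ["Menu", "Settings", "Sound"],
         ["Menu", "Messages", "Inbox"]]

# Count one-step transitions between consecutive menu items.
counts = defaultdict(Counter)
for path in paths:
    for cur, nxt in zip(path, path[1:]):
        counts[cur][nxt] += 1

def predict(item):
    """Most frequently observed successor of the given menu item."""
    return counts[item].most_common(1)[0][0]

print(predict("Menu"))      # Messages (3 of 4 observed transitions)
print(predict("Messages"))  # Inbox (2 of 3 observed transitions)
```
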

A generalized regime-switching integer-valued GARCH(1, 1) model and its volatility forecasting

  • Lee, Jiyoung;Hwang, Eunju
    • Communications for Statistical Applications and Methods
    • /
    • v.25 no.1
    • /
    • pp.29-42
    • /
    • 2018
  • We combine the integer-valued GARCH(1, 1) model with a generalized regime-switching model to propose a dynamic count time series model. Our model adopts Markov chains with time-varying dependent transition probabilities to model dynamic count time series, called the generalized regime-switching integer-valued GARCH(1, 1) (GRS-INGARCH(1, 1)) model. We derive a recursive formula for the conditional probability of the regime in the Markov chain given the past information, in terms of the transition probabilities of the Markov chain and the Poisson parameters of the INGARCH(1, 1) process. In addition, we study the forecasting of the Poisson parameter as well as the cumulative impulse response function of the model, which is a measure of the persistence of volatility. A Monte Carlo simulation is conducted to assess the performance of volatility forecasting, the behavior of the cumulative impulse response coefficients, and conditional maximum likelihood estimation; finally, a real-data application is given.
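
A recursion of the kind described, for the conditional regime probability given past counts, can be sketched in simplified form. The snippet below is a Hamilton-filter-style stand-in with constant Poisson means and constant transition probabilities, whereas the paper's model has time-varying transition probabilities and INGARCH-driven means; all numbers are invented.

```python
import math
import numpy as np

P = np.array([[0.95, 0.05],
              [0.10, 0.90]])     # assumed regime transition probabilities
lam = np.array([2.0, 8.0])       # assumed Poisson mean in each regime

def pois_pmf(y, mu):
    return math.exp(-mu) * mu**y / math.factorial(y)

counts = [1, 2, 9, 10, 8, 1]     # toy count series
p = np.array([0.5, 0.5])         # initial regime probabilities
for y in counts:
    pred = p @ P                                   # predict next regime
    lik = np.array([pois_pmf(y, m) for m in lam])  # regime likelihoods
    p = pred * lik / (pred * lik).sum()            # Bayes update
print(p.round(3))  # the final low count pulls probability back to regime 0
```
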

A Generalized Markov Chain Model for IEEE 802.11 Distributed Coordination Function

  • Zhong, Ping;Shi, Jianghong;Zhuang, Yuxiang;Chen, Huihuang;Hong, Xuemin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.2
    • /
    • pp.664-682
    • /
    • 2012
  • To improve the accuracy and enhance the applicability of existing models, this paper proposes a generalized Markov chain model for the IEEE 802.11 Distributed Coordination Function (DCF) under the widely adopted assumption of an ideal transmission channel. The IEEE 802.11 DCF is modeled by a two-dimensional Markov chain, which takes into account unsaturated traffic, backoff freezing, retry limits, the difference between the maximum retransmission count and the maximum backoff exponent, and limited buffer size based on the M/G/1/K queuing model. We show that existing models can be treated as special cases of the proposed generalized model. Furthermore, simulation results validate the accuracy of the proposed model.