• Title/Summary/Keyword: Markov


Bayesian Conjugate Analysis for Transition Probabilities of Non-Homogeneous Markov Chain: A Survey

  • Sung, Minje
    • Communications for Statistical Applications and Methods
    • /
    • v.21 no.2
    • /
    • pp.135-145
    • /
    • 2014
  • The present study surveys a Bayesian modeling structure for inference about the transition probabilities of a Markov chain. The motivation for the study came from data showing the transitional behaviors of emotionally disturbed children undergoing a residential treatment program. A Dirichlet distribution was used as the prior for the multinomial distribution. The analysis with real data was implemented in the WinBUGS programming environment. The performance of the model was compared with that of alternative approaches.
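
The conjugate structure this survey describes can be sketched in a few lines: each row of the transition matrix gets a Dirichlet prior, observed transition counts are the multinomial data, so each row's posterior is again Dirichlet. The two-state toy sequence and symmetric Dirichlet(1) prior below are illustrative assumptions, not the paper's data or model.

```python
# Minimal sketch of Dirichlet-multinomial conjugacy for Markov chain
# transition probabilities. Each row of the transition matrix gets an
# independent symmetric Dirichlet(alpha) prior; the posterior mean of
# row i is (alpha + n_ij) / (K*alpha + n_i.).

def transition_counts(sequence, n_states):
    """Count i -> j transitions in an observed state sequence."""
    counts = [[0] * n_states for _ in range(n_states)]
    for s, t in zip(sequence, sequence[1:]):
        counts[s][t] += 1
    return counts

def dirichlet_posterior_mean(counts, alpha=1.0):
    """Posterior mean of each transition row under Dirichlet(alpha)."""
    k = len(counts)
    post = []
    for row in counts:
        total = sum(row) + k * alpha
        post.append([(alpha + c) / total for c in row])
    return post

seq = [0, 0, 1, 0, 1, 1, 1, 0, 0, 1]   # toy 2-state observation sequence
counts = transition_counts(seq, 2)
post = dirichlet_posterior_mean(counts, alpha=1.0)
for row in post:
    assert abs(sum(row) - 1.0) < 1e-12  # each row is a valid distribution
print(post)
```

A full analysis, as in the paper, would sample from the Dirichlet posteriors (e.g. in WinBUGS) rather than report only posterior means.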

A Study on the Autocorrelation function for Markov Modulated Gaussian Process (마코프 조정 가우시안과정의 자기상관함수에 관한 연구)

  • 이혜연;장중순;신용백
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.25 no.6
    • /
    • pp.1-6
    • /
    • 2002
  • Most process data control schemes have been designed under the assumption that the observed data are independent. However, it has been difficult to apply traditional methods to real-time data, because such data are autocorrelated, are not normally distributed, and, moreover, have fluctuating means. A control method for such data based on the Markov Modulated Gaussian Process (MMGP) has already been proposed. This study therefore examines the traits of the MMGP, with particular attention to its autocorrelation function.
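
The autocorrelation the abstract refers to can be seen empirically: simulate a two-state Markov-modulated Gaussian process (a hidden chain switches the mean of Gaussian observations) and estimate its sample autocorrelation function. The transition probability, state means and noise level below are arbitrary toy values, not the paper's model.

```python
# Illustrative MMGP simulation: a persistent two-state Markov chain
# modulates the mean of Gaussian observations, producing autocorrelated,
# non-normal data with a fluctuating mean, as described above.
import random

def simulate_mmgp(n, p_stay=0.9, means=(0.0, 3.0), sigma=1.0, seed=42):
    rng = random.Random(seed)
    state, xs = 0, []
    for _ in range(n):
        if rng.random() > p_stay:          # leave the current state
            state = 1 - state
        xs.append(rng.gauss(means[state], sigma))
    return xs

def autocorrelation(xs, max_lag):
    """Sample ACF: rho(k) = sum (x_t - m)(x_{t+k} - m) / sum (x_t - m)^2."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs)
    acf = []
    for k in range(max_lag + 1):
        cov = sum((xs[t] - m) * (xs[t + k] - m) for t in range(len(xs) - k))
        acf.append(cov / var)
    return acf

acf = autocorrelation(simulate_mmgp(20000), max_lag=5)
print([round(r, 3) for r in acf])
```

Because the modulating chain is persistent (p_stay = 0.9), the ACF decays slowly rather than dropping to zero at lag 1 as i.i.d. data would, which is exactly why independence-based control charts break down here.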

Markov Decision Process-based Potential Field Technique for UAV Planning

  • MOON, CHAEHWAN;AHN, JAEMYUNG
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.25 no.4
    • /
    • pp.149-161
    • /
    • 2021
  • This study proposes a methodology for mission/path planning of an unmanned aerial vehicle (UAV) using an artificial potential field with the Markov Decision Process (MDP). The planning problem is formulated as an MDP. A low-resolution solution of the MDP is obtained and used to define an artificial potential field, which provides a continuous UAV mission plan. A numerical case study is conducted to demonstrate the validity of the proposed technique.
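
The idea of deriving a potential field from a coarse plan can be illustrated as follows. Here a BFS distance-to-goal on a small grid stands in for the paper's low-resolution MDP solution; the vehicle then follows steepest descent of the potential to the goal. The grid, obstacle and goal are invented toy inputs, not the paper's UAV scenario.

```python
# Toy potential-field planner: a coarse "value" over grid cells (BFS
# distance to the goal, used here as a stand-in for a low-resolution MDP
# solution) defines a potential, and the vehicle descends it greedily.
from collections import deque

GRID_W, GRID_H = 6, 5
OBSTACLE = {(2, 1), (2, 2), (2, 3)}
GOAL = (5, 2)

def potential_field():
    """BFS distance to goal over free cells; obstacles keep +inf potential."""
    pot = {(x, y): float("inf") for x in range(GRID_W) for y in range(GRID_H)}
    pot[GOAL] = 0
    queue = deque([GOAL])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in pot and nxt not in OBSTACLE \
                    and pot[nxt] == float("inf"):
                pot[nxt] = pot[(x, y)] + 1
                queue.append(nxt)
    return pot

def descend(start, pot):
    """Follow steepest descent of the potential (assumes goal is reachable)."""
    path, cell = [start], start
    while cell != GOAL:
        x, y = cell
        cell = min(((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)),
                   key=lambda c: pot.get(c, float("inf")))
        path.append(cell)
    return path

path = descend((0, 2), potential_field())
print(path)
```

In the paper the potential comes from an MDP value function rather than a plain distance, which lets the field encode stochastic dynamics and mission rewards, but the descent step works the same way.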

Optimal control of stochastic continuous discrete systems applied to FMS

  • Boukas, E.K.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1989.10a
    • /
    • pp.733-743
    • /
    • 1989
  • This paper deals with the control of systems with controlled jump Markov disturbances. Such a formulation was used by Boukas to model the production and maintenance planning of a flexible manufacturing system (FMS) with failure-prone machines. The optimal control problem for systems with a controlled jump Markov process is addressed; this problem describes the production planning and preventive maintenance of production systems. The optimality conditions are derived for both the finite and infinite horizon cases. A numerical example is presented to validate the proposed results.


Evaluating the ANSS and ATS Values of the Multivariate EWMA Control Charts with Markov Chain Method

  • Chang, Duk-Joon
    • Journal of Integrative Natural Science
    • /
    • v.7 no.3
    • /
    • pp.200-207
    • /
    • 2014
  • The average number of samples to signal (ANSS) and the average time to signal (ATS) are the most widely used criteria for comparing the efficiency of quality control charts. In this study, a method of evaluating the ANSS and ATS values of multivariate exponentially weighted moving average (EWMA) control charts with a Markov chain approach is presented for both the in-control and out-of-control states of the production process. Numerical results show that when the number of transient states r is less than 50, the calculated ANSS and ATS values are unstable, whereas ATS(r) tends to stabilize when r is greater than 100; in addition, when the properties of a multivariate EWMA control chart are evaluated with the Markov chain method, the number of transient states r must be larger as the smoothing constant λ becomes smaller.
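
The Markov chain method mentioned above reduces run-length evaluation to an absorbing-chain calculation: discretize the EWMA statistic into r transient in-control states plus one absorbing signal state, and the ANSS from a given starting state is the corresponding entry of (I - Q)^{-1} 1, where Q holds the transient-to-transient transition probabilities. The 3-state matrix below is a toy example, not a chart from the paper.

```python
# ANSS via the absorbing Markov chain method: expected number of steps to
# absorption is mu = (I - Q)^{-1} 1, solved here by Gauss-Jordan elimination.

def solve(a, b):
    """Solve the linear system a x = b by Gauss-Jordan elimination."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def anss(q, start=0):
    """Expected samples to signal starting from transient state `start`."""
    n = len(q)
    i_minus_q = [[(1.0 if i == j else 0.0) - q[i][j] for j in range(n)]
                 for i in range(n)]
    mu = solve(i_minus_q, [1.0] * n)
    return mu[start]

q = [[0.80, 0.15, 0.02],   # rows sum to < 1; the deficit is the
     [0.10, 0.75, 0.10],   # probability of absorbing into the signal state
     [0.02, 0.20, 0.70]]
print(anss(q, start=0))    # about 21.75 samples for this toy chain
```

Under fixed-interval sampling, ATS is simply ANSS times the sampling interval; with variable sampling intervals, the vector of ones is replaced by the vector of interval lengths. The instability the paper reports for small r corresponds to the discretization of the EWMA statistic being too coarse.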

Oil Price Forecasting : A Markov Switching Approach with Unobserved Component Model

  • Nam, Si-Kyung;Sohn, Young-Woo
    • Management Science and Financial Engineering
    • /
    • v.14 no.2
    • /
    • pp.105-118
    • /
    • 2008
  • There are many debates on the relationship between oil prices and economic growth. Through repeated confirmation and contradiction on the subject, two main issues have developed: one is how to define and derive oil shocks from oil prices, and the other is how to specify an econometric model that reflects the asymmetric relation between oil prices and output growth. This study therefore introduces an unobserved component model to extract the oil shocks and a first-order Markov switching model to capture the asymmetric features. We derive oil shock variables from the stochastic trend components of oil prices and adopt a four-lag mean-growth Markov switching model. The results indicate that oil shocks exert more impact in the recessionary state than in the expansionary state, and that supply-side oil shocks are more persistent and significant than demand-side shocks.
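
The filtering step behind a first-order Markov switching model can be sketched as follows: with known regime means, noise level and transition matrix, a Hamilton-style filter yields the probability of being in each regime at each time. All parameter values and the toy series here are invented, not the paper's oil-price estimates.

```python
# Hamilton-style filter for a two-regime Markov switching model with
# Gaussian observations: predict with the transition matrix, then update
# with each regime's likelihood of the new observation.
import math

def normal_pdf(y, mu, sigma):
    z = (y - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def hamilton_filter(ys, mus, sigma, trans, prior=(0.5, 0.5)):
    """Filtered regime probabilities P(s_t = j | y_1..y_t), two regimes."""
    probs, p = [], list(prior)
    for y in ys:
        # predict: one step of the regime Markov chain
        pred = [sum(p[i] * trans[i][j] for i in range(2)) for j in range(2)]
        # update: weight by each regime's likelihood of y
        lik = [pred[j] * normal_pdf(y, mus[j], sigma) for j in range(2)]
        total = sum(lik)
        p = [l / total for l in lik]
        probs.append(p)
    return probs

# regime 0: expansion (mean +1), regime 1: recession (mean -1)
ys = [1.2, 0.9, 1.1, -0.8, -1.1, -0.9]
probs = hamilton_filter(ys, mus=(1.0, -1.0), sigma=0.5,
                        trans=[[0.9, 0.1], [0.2, 0.8]])
print([round(p[1], 3) for p in probs])
```

In the paper the observation is output growth with four lags and the regime-dependent mean captures the asymmetry; estimation would maximize the likelihood accumulated by this same filter rather than assume the parameters.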

SOME LIMIT PROPERTIES OF RANDOM TRANSITION PROBABILITY FOR SECOND-ORDER NONHOMOGENEOUS MARKOV CHAINS ON GENERALIZED GAMBLING SYSTEM INDEXED BY A DOUBLE ROOTED TREE

  • Wang, Kangkang;Zong, Decai
    • Journal of applied mathematics & informatics
    • /
    • v.30 no.3_4
    • /
    • pp.541-553
    • /
    • 2012
  • In this paper, we study some limit properties of the harmonic mean of the random transition probability for a second-order nonhomogeneous Markov chain on the generalized gambling system indexed by a tree, by constructing a nonnegative martingale. As a corollary, we obtain the corresponding properties of the harmonic mean and the arithmetic mean of the random transition probability for a second-order nonhomogeneous Markov chain indexed by a double rooted tree.

Markov Decision Process for Curling Strategies (MDP에 의한 컬링 전략 선정)

  • Bae, Kiwook;Park, Dong Hyun;Kim, Dong Hyun;Shin, Hayong
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.42 no.1
    • /
    • pp.65-72
    • /
    • 2016
  • Curling is often compared to chess because of the variety and importance of its strategies. To win a curling game, selecting optimal strategies at decision-making points is important; however, there is a lack of research on optimal curling strategies. 'Aggressive' and 'Conservative' strategies are common in curling; nevertheless, even these two strategies have never been studied before. In this study, a Markov Decision Process is applied to curling strategy analysis, with the two strategies defined as its actions. By solving the model, the optimal strategy can be found for any in-game state.
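
The "two strategies as actions" formulation can be sketched with value iteration on a tiny MDP: each in-game state offers an "aggressive" or "conservative" action with stochastic outcomes. The states, transition probabilities and rewards are invented toy numbers, not the paper's curling model.

```python
# Value iteration over a toy two-action MDP. mdp[state][action] is a list
# of (probability, next_state, reward) outcomes; state 2 is terminal.
MDP = {
    0: {"aggressive":   [(0.5, 1, 4.0), (0.5, 2, -1.0)],
        "conservative": [(0.9, 1, 0.5), (0.1, 2, 0.0)]},
    1: {"aggressive":   [(0.4, 2, 3.0), (0.6, 2, -1.0)],
        "conservative": [(1.0, 2, 0.8)]},
}
GAMMA = 0.95

def value_iteration(mdp, gamma, tol=1e-9):
    v = {s: 0.0 for s in mdp}
    v[2] = 0.0                       # terminal state has zero value
    while True:
        delta = 0.0
        for s, actions in mdp.items():
            best = max(
                sum(p * (r + gamma * v[ns]) for p, ns, r in outcomes)
                for outcomes in actions.values())
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            break
    policy = {
        s: max(actions, key=lambda a: sum(
            p * (r + gamma * v[ns]) for p, ns, r in actions[a]))
        for s, actions in mdp.items()}
    return v, policy

values, policy = value_iteration(MDP, GAMMA)
print(policy)
```

With these toy numbers the optimal policy mixes the two strategies (aggressive in state 0, conservative in state 1), illustrating how solving the MDP can recommend a different strategy at each decision point.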

A study on the relation between stationarity and synthesized images for GMRF (GMRF 모델의 안정성과 합성 영상과의 관계에 관한 연구)

  • 김성이;최윤식
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.34S no.2
    • /
    • pp.71-78
    • /
    • 1997
  • Markov random field models have been used extensively in applications such as image segmentation and image restoration. In this paper, we consider the relation between the stationarity of the parameters and the synthesized images for the Gauss-Markov random field (GMRF), the most popular among the many MRF models. The GMRF model, which is both wide-sense and strict-sense Markov, has AR representations and is also a kind of Gibbs distribution; therefore, we may approach it from the viewpoint of both AR models and Gibbs models. We show the relation between the stationarity of the parameters and the images synthesized by the two approaches, and derive the stationary regions of the parameters for the first-order and isotropic second-order cases.
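
The stationary region in the first-order case can be checked numerically. For a first-order GMRF with horizontal and vertical interaction parameters beta_h and beta_v, the power spectrum 1 - 2·beta_h·cos(w1) - 2·beta_v·cos(w2) must stay positive, which reduces to |beta_h| + |beta_v| < 1/2; the parameter values below are illustrative, and this is a standard condition rather than the paper's specific derivation.

```python
# Check the first-order GMRF stationarity condition by testing positivity
# of the spectrum 1 - 2*beta_h*cos(w1) - 2*beta_v*cos(w2) on a frequency grid.
import math

def gmrf_first_order_stationary(beta_h, beta_v, grid=64):
    """True if the first-order GMRF spectrum is positive on the grid."""
    for i in range(grid):
        for j in range(grid):
            w1 = 2 * math.pi * i / grid
            w2 = 2 * math.pi * j / grid
            if 1 - 2 * beta_h * math.cos(w1) - 2 * beta_v * math.cos(w2) <= 0:
                return False
    return True

print(gmrf_first_order_stationary(0.2, 0.2))    # inside |bh| + |bv| < 1/2
print(gmrf_first_order_stationary(0.3, 0.25))   # outside: spectrum hits zero
```

Outside this region the implied AR synthesis diverges, which is the kind of parameter-versus-synthesized-image relation the paper investigates.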


The Realization of Artificial Life to Adapt The Environment by Using The Markov Model

  • Kim, Do-Wan;Park, Wong-Hun;Chung, Jin-Wook;Kang, Hoon
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.513-516
    • /
    • 2003
  • In this paper, we designed an Artificial Life (AL) that performs appropriate actions according to the user's actions, the environment, and the AL's feelings. To realize this AL, we used the Markov model: the chromosome is constructed from the Markov model, and the appropriate actions are obtained by a Genetic Algorithm.
