• Title/Summary/Keyword: Markov process model


Non-Simultaneous Sampling Deactivation during the Parameter Approximation of a Topic Model

  • Jeong, Young-Seob;Jin, Sou-Young;Choi, Ho-Jin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.1 / pp.81-98 / 2013
  • Since Probabilistic Latent Semantic Analysis (PLSA) and Latent Dirichlet Allocation (LDA) were introduced, many revised or extended topic models have appeared. Because the likelihood of these models is intractable, training any topic model requires an approximation algorithm such as variational approximation, Laplace approximation, or Markov chain Monte Carlo (MCMC). Although these approximation algorithms perform well, training a topic model remains computationally expensive given the large amount of data required. In this paper, we propose a new method, called non-simultaneous sampling deactivation, for efficient approximation of the parameters of a topic model. Whereas traditional approximation algorithms sample or obtain every random variable over a single predefined burn-in period, our method is based on the observation that the random variable nodes of a topic model converge at different rates. During the iterative approximation process, the proposed method deactivates each random variable node as soon as it has converged. Compared to the traditional approach, in which every node is usually deactivated concurrently, the proposed method therefore improves inference efficiency in both time and memory. We do not propose a new approximation algorithm, but a new process applicable to existing approximation algorithms. Through experiments, we demonstrate the time and memory efficiency of the method, and discuss the tradeoff between the efficiency of the approximation process and parameter consistency.
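The deactivation idea can be sketched in a few lines: each sampled node tracks its own running estimate and is frozen once that estimate stabilizes, instead of every node sharing one burn-in period. This is a toy illustration, not the paper's algorithm: Gaussian draws stand in for the model's conditional distributions, and the stopping rule (running mean barely moving) is an assumed convergence criterion.

```python
import random

def gibbs_with_deactivation(n_vars=3, max_iters=2000, tol=1e-3, seed=0):
    """Toy sketch: each 'node' keeps drawing samples until its own
    running mean stabilizes, then is deactivated (frozen), instead of
    every node sharing one predefined burn-in period."""
    rng = random.Random(seed)
    means = [0.0] * n_vars            # running estimate per node
    counts = [0] * n_vars
    active = [True] * n_vars
    stopped_at = [None] * n_vars      # iteration at which each node froze
    for it in range(1, max_iters + 1):
        for v in range(n_vars):
            if not active[v]:
                continue              # converged node: no more sampling work
            x = rng.gauss(v + 1.0, 1.0)   # stand-in for a conditional draw
            counts[v] += 1
            old = means[v]
            means[v] += (x - old) / counts[v]
            # deactivate once the running mean barely moves (assumed rule)
            if counts[v] > 50 and abs(means[v] - old) < tol:
                active[v] = False
                stopped_at[v] = it
        if not any(active):
            break
    return means, stopped_at
```

Nodes that converge early stop consuming sampling time and memory, which is the efficiency the abstract describes.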

Approximate Dynamic Programming Based Interceptor Fire Control and Effectiveness Analysis for M-To-M Engagement (근사적 동적계획을 활용한 요격통제 및 동시교전 효과분석)

  • Lee, Changseok;Kim, Ju-Hyun;Choi, Bong Wan;Kim, Kyeongtaek
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.50 no.4 / pp.287-295 / 2022
  • As the threat of low-altitude long-range artillery has grown, development of an anti-artillery interception system to protect assets against such attacks is set to begin. We view the defense against long-range artillery attacks as a typical dynamic weapon-target assignment (DWTA) problem. DWTA is a sequential decision process in which decisions made under uncertainty about future attacks affect subsequent decisions and their results; these are the defining characteristics of a Markov decision process (MDP). We formulate the problem as an MDP to examine the assignment policy for the defender. The proximity of the South Korean capital to the North Korean border limits the computation time for a solution to a few seconds, and within that interval it is impossible to compute the exact optimal solution. We therefore apply an approximate dynamic programming (ADP) approach and check whether it can solve the MDP within the processing time limit. We employ a Shoot-Shoot-Look policy as a baseline strategy and compare it with the ADP approach on three scenarios. Simulation results show that the ADP approach provides better solutions than the baseline strategy.
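The MDP formulation can be made concrete with a deliberately tiny engagement model. The states, actions, rewards, and probabilities below are invented for illustration (one threat; fire vs. hold), and plain value iteration stands in for the ADP machinery, which only approximates this computation on realistically large state spaces.

```python
def q_value(P, R, V, s, a, gamma):
    """One-step lookahead value of action a in state s."""
    return R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Exact value iteration on a small MDP.
    P[s][a] is a list of (probability, next_state); R[s][a] is the reward."""
    V = [0.0] * len(P)
    while True:
        V_new = [max(q_value(P, R, V, s, a, gamma) for a in range(len(P[s])))
                 for s in range(len(P))]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            break
        V = V_new
    policy = [max(range(len(P[s])), key=lambda a: q_value(P, R, V_new, s, a, gamma))
              for s in range(len(P))]
    return V_new, policy

# Toy engagement: state 0 = threat active, state 1 = neutralized (absorbing).
# In state 0: action 0 = fire (cost -1, kill prob 0.7), action 1 = hold (-2 leakage).
P = [[[(0.7, 1), (0.3, 0)], [(1.0, 0)]],
     [[(1.0, 1)]]]
R = [[-1.0, -2.0], [0.0]]
V, policy = value_iteration(P, R)
```

On this toy instance the greedy policy correctly chooses to fire; an ADP method would approximate the same value function when the state space is too large for exact iteration.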

An Adaptive Moving Average (A-MA) Control Chart with Variable Sampling Intervals (VSI) (가변 샘플링 간격(VSI)을 갖는 적응형 이동평균 (A-MA) 관리도)

  • Lim, Tae-Jin
    • Journal of Korean Institute of Industrial Engineers / v.33 no.4 / pp.457-468 / 2007
  • This paper proposes an adaptive moving average (A-MA) control chart with variable sampling intervals (VSI) for detecting shifts in the process mean. The basic idea of the VSI A-MA chart is to adjust the sampling interval and to accumulate previous samples selectively in order to increase sensitivity. The chart employs a threshold limit to decide whether to increase the sampling rate and accumulate previous samples. If a standardized control statistic falls outside the threshold limit, the next sample is taken at the higher sampling rate and is accumulated into the next control statistic; if it falls within the threshold limit, the next sample is taken at the lower sampling rate and is used alone to compute the control statistic. The VSI A-MA chart signals 'out-of-control' either when a control statistic falls outside the control limit or when L consecutive control statistics fall outside the threshold limit. The control length L is introduced to prevent small mean shifts from going undetected for a long period. A Markov chain model is employed to analyze the VSI A-MA sampling process, and closed-form formulae for the steady-state average time to signal (ATS) in the in-control and out-of-control states are derived. A statistical design procedure for the VSI A-MA chart is proposed. Comparative studies show that the proposed VSI A-MA chart is uniformly superior to the adaptive cumulative sum (CUSUM) chart and to the exponentially weighted moving average (EWMA) chart, and is comparable to the variable sample size (VSS) VSI EWMA chart, with respect to ATS performance.
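The threshold-limit logic can be sketched directly from the description above. The constants (threshold w, control limit c, control length L, the two interval lengths) are illustrative placeholders rather than the paper's designed values, and the accumulated statistic is simply the standardized mean of the retained samples.

```python
def vsi_ama(samples, w=0.25, c=3.0, L=4, short=0.5, long_=2.0):
    """Signal time of the threshold-rule chart: returns the cumulative
    sampling time at which the out-of-control signal fires, or None."""
    window = []           # samples accumulated into the current statistic
    run = 0               # consecutive statistics beyond the threshold limit
    t = 0.0               # cumulative sampling time
    for z in samples:     # z: standardized sample values
        window.append(z)
        stat = sum(window) / len(window) ** 0.5   # standardized accumulated mean
        if abs(stat) > w:
            run += 1
            interval = short      # beyond threshold: sample faster and
        else:                     # keep accumulating (window retained)
            run = 0
            window = []           # inside threshold: drop history, next
            interval = long_      # statistic uses the next sample alone
        if abs(stat) > c or run >= L:
            return t              # out-of-control signal
        t += interval
    return None
```

A persistent small shift trips the run-length rule (run >= L), while a large shift trips the control limit immediately; an in-control stream never signals.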

A Selectively Cumulative Sum (S-CUSUM) Control Chart with Variable Sampling Intervals (VSI) (가변 샘플링 간격(VSI)을 갖는 선택적 누적합 (S-CUSUM) 관리도)

  • Im, Tae-Jin
    • Proceedings of the Korean Operations and Management Science Society Conference / 2006.11a / pp.560-570 / 2006
  • This paper proposes a selectively cumulative sum (S-CUSUM) control chart with variable sampling intervals (VSI) for detecting shifts in the process mean. The basic idea of the VSI S-CUSUM chart is to adjust the sampling interval and to accumulate previous samples selectively in order to increase sensitivity. The chart employs a threshold limit to decide whether to increase the sampling rate and accumulate previous samples. If a standardized control statistic falls outside the threshold limit, the next sample is taken at the higher sampling rate and is accumulated into the next control statistic; if it falls within the threshold limit, the next sample is taken at the lower sampling rate and is used alone to compute the control statistic. The VSI S-CUSUM chart signals 'out-of-control' either when a control statistic falls outside the control limit or when L consecutive control statistics fall outside the threshold limit. The number L is a decision variable called the 'control length'. A Markov chain model is employed to describe the VSI S-CUSUM sampling process, and useful closed-form formulae for the steady-state average time to signal (ATS) in the in-control and out-of-control states are derived. A statistical design procedure for the VSI S-CUSUM chart is proposed. Comparative studies show that the proposed VSI S-CUSUM chart is uniformly superior to the VSI CUSUM chart and to the exponentially weighted moving average (EWMA) chart with respect to ATS performance.
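The Markov-chain view of such a sampling process leads to a standard linear system: if Q holds the transition probabilities among the transient (no-signal) chart states and d[i] is the sampling interval spent in state i, the expected times to signal satisfy (I - Q)t = d. The solver below is a generic sketch of that computation, not the paper's closed-form expressions.

```python
def expected_time_to_signal(Q, d):
    """Solve (I - Q) t = d by Gaussian elimination: t[i] is the expected
    time until the chart signals, starting from transient state i."""
    n = len(Q)
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [d[i]]
         for i in range(n)]
    for col in range(n):                          # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for j in range(col, n + 1):
                A[r][j] -= f * A[col][j]
    t = [0.0] * n
    for i in range(n - 1, -1, -1):                # back substitution
        t[i] = (A[i][n] - sum(A[i][j] * t[j] for j in range(i + 1, n))) / A[i][i]
    return t
```

For a single transient state with self-transition probability q and interval d, this reduces to the geometric expectation d/(1 - q).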


A Reinforcement Learning Approach to Collaborative Filtering Considering Time-sequence of Ratings (평가의 시간 순서를 고려한 강화 학습 기반 협력적 여과)

  • Lee, Jung-Kyu;Oh, Byong-Hwa;Yang, Ji-Hoon
    • The KIPS Transactions:PartB / v.19B no.1 / pp.31-36 / 2012
  • In recent years, there has been increasing interest in recommender systems, which provide users with personalized suggestions for products or services. In particular, research on collaborative filtering, which analyzes the relations between users and items, has become more active since the Netflix Prize competition. This paper presents a reinforcement learning approach to collaborative filtering. By applying reinforcement learning techniques to movie ratings, we discover the connection between the time sequence of past ratings and current ratings. To do so, we first formulate the collaborative filtering problem as a Markov decision process, and then train a learning model that reflects this connection using Q-learning. The experimental results indicate that the time sequence of past ratings has a significant effect on current ratings.
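The Q-learning step can be sketched on a logged transition set. The state encoding (an index summarizing recent ratings) and the reward (the observed rating) are illustrative assumptions, not the paper's exact formulation.

```python
def q_learning_from_log(transitions, n_states, n_actions,
                        alpha=0.2, gamma=0.9, sweeps=1000):
    """Tabular Q-learning replayed over a logged sequence of
    (state, action, reward, next_state) transitions."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(sweeps):
        for s, a, r, s2 in transitions:
            # standard Q-learning update toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    return Q

# Toy log: from state 0 (a hypothetical rating-history index), action 0
# yields rating reward 1 and moves to state 1; other transitions yield 0.
log = [(0, 0, 1.0, 1), (0, 1, 0.0, 0), (1, 0, 0.0, 0)]
```

The greedy policy over the learned Q-table then prefers the action whose downstream rating sequence was most rewarding.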

Enhancing the radar-based mean areal precipitation forecasts to improve urban flood predictions and uncertainty quantification

  • Nguyen, Duc Hai;Kwon, Hyun-Han;Yoon, Seong-Sim;Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.123-123 / 2020
  • The present study aims to correct radar-based mean areal precipitation forecasts in order to improve urban flood predictions, and to analyze the uncertainty in water levels contributed at each stage of the process. To this end, a long short-term memory (LSTM) network is used to reproduce three-hour mean areal precipitation (MAP) forecasts from the quantitative precipitation forecasts (QPFs) of the McGill Algorithm for Precipitation nowcasting by Lagrangian Extrapolation (MAPLE). The Gangnam urban catchment in Seoul, South Korea, was selected as the case study. A database was established from 24 heavy rainfall events, 22 grid points of the MAPLE system, and observed MAP values estimated from five ground rain gauges of the KMA Automatic Weather System. The corrected MAP forecasts were input into the developed coupled 1D/2D model to predict water levels and the corresponding inundation areas. The results indicate the viability of the proposed framework for generating three-hour MAP forecasts and urban flooding predictions. To analyze the uncertainty contributions of the sources in the process, Bayesian Markov chain Monte Carlo (MCMC) with the delayed rejection and adaptive Metropolis algorithm is applied, and the uncertainty contributions of the stages, such as the QPE input, the QPF MAP source, the LSTM-corrected source, the MAP input, and the coupled model, are discussed.
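The study applies MCMC with the delayed rejection and adaptive Metropolis (DRAM) algorithm; the basic random-walk Metropolis scheme underlying it can be sketched as follows. This is a generic one-dimensional sampler for illustration, not the study's implementation.

```python
import math
import random

def metropolis(log_post, x0, n, step=1.0, seed=0):
    """Plain random-walk Metropolis: propose x + N(0, step), accept with
    probability min(1, posterior ratio). DRAM additionally retries
    rejected proposals and adapts the proposal scale."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    out = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand      # accept the proposal
        out.append(x)
    return out
```

Run against the log-posterior of each uncertain stage parameter, the retained samples approximate its posterior distribution, from which uncertainty bands on the predicted water levels follow.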


Traffic Characteristics and Adaptive model analysis in ATM Network (ATM망의 트래픽 특성과 적응모델 분석)

  • Kim, Young-Jin;Kim, Dong-Il
    • Journal of the Korea Institute of Information and Communication Engineering / v.2 no.4 / pp.583-592 / 1998
  • In this paper, the cell loss rate in an ATM network is analyzed in terms of input traffic streams of different speeds. The cell loss rate is calculated from the birth-death process of the Leaky Bucket mechanism, the representative algorithm for usage parameter control. A 2-state MMPP input process is assumed and modeled as a birth-death process by treating the token pool as a finite-capacity queue. The numerical analysis shows that the cell loss rate decreases sharply as the buffer size increases. A computer simulation in SIMSCRIPT II.5 was also performed and compared with the on/off input source case to verify the analytical results.
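The birth-death analysis can be illustrated with the simplest finite-capacity case: for an M/M/1/K-type chain the steady-state probabilities are proportional to ρ^k, and an arriving cell is lost when the buffer is full. This generic sketch (not the paper's 2-state MMPP model) reproduces the qualitative finding that loss drops sharply with buffer size.

```python
def mm1k_loss(lam, mu, K):
    """Cell loss probability for a finite birth-death chain (M/M/1/K):
    steady-state probabilities are proportional to rho**k, and an
    arriving cell is lost in the full state k = K."""
    rho = lam / mu
    weights = [rho ** k for k in range(K + 1)]
    return weights[K] / sum(weights)
```

At ρ = 1 this reduces to 1/(K+1), and for ρ < 1 the loss probability falls roughly geometrically in K.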


Study on predictive modeling of incidence of traffic accidents caused by weather conditions (날씨 변화에 따라 교통사고 예방을 위한 예측모델에 관한 연구)

  • Chung, Young-Suk;Park, Rack-Koo;Kim, Jin-Mook
    • Journal of the Korea Convergence Society / v.5 no.1 / pp.9-15 / 2014
  • Traffic accidents are caused by a variety of factors, among them the weather conditions at the time of the accident, and the fatality rate of traffic accidents differs according to those conditions. To reduce the number of deaths due to traffic accidents, it is necessary to predict the incidence of accidents occurring under given weather conditions. In this paper, we propose a model to predict the incidence of traffic accidents caused by weather conditions. The predictive model applies the theory of Markov processes. By applying actual data to the proposed model, we predict the incidence of traffic accidents and compare it with the number of occurrences observed in practice. This paper is intended to support the development of traffic accident policy under changing weather.
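A Markov-process prediction of this kind amounts to propagating a weather-state distribution through a transition matrix and weighting by per-state accident rates. The two-state matrix and the rates below are invented for illustration, not the paper's fitted values.

```python
def step_distribution(P, p0, n):
    """Propagate a state distribution n steps through transition matrix P."""
    p = p0[:]
    for _ in range(n):
        p = [sum(p[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return p

def expected_accidents(P, p0, rates, n):
    """Expected accidents after n steps: state distribution dotted with
    per-state accident rates (illustrative placeholder rates)."""
    return sum(pi * r for pi, r in zip(step_distribution(P, p0, n), rates))

# Illustrative two-state weather chain: state 0 = clear, state 1 = rain.
P = [[0.8, 0.2],
     [0.5, 0.5]]
```

Iterating the chain far enough also yields the stationary weather distribution, giving a long-run baseline accident rate.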

Quorum Consensus Method based on Ghost using Simplified Metadata (단순화된 메타데이타를 이용한 고스트 기반 정족수 동의 기법의 개선)

  • Cho, Song-Yean;Kim, Tai-Yun
    • Journal of KIISE:Computer Systems and Theory / v.27 no.1 / pp.34-43 / 2000
  • Replicated data used in fault-tolerant distributed systems requires a replica control protocol to maintain data consistency. One such protocol is the quorum consensus method, which accesses replicated data after obtaining majority approval. If a site failure or communication link failure occurs and no majority quorum can be obtained, the availability of data managed by the quorum consensus protocol degrades, so a ghost is needed to replace the failed site. Because a ghost is not a full replica but a process that holds state information as metadata, it is important to keep this metadata simple. To maintain availability with simple metadata, we propose using the cohort set as the ghost's metadata. The proposed method organizes the metadata in 2N+logN bits and, with only the cohort set and the dynamic linear voting protocol, achieves higher availability than quorum consensus alone. Using a Markov model, we calculate the availability of the proposed method and compare it with existing protocols.
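The availability calculation can be sketched in its simplest Markov steady-state form: if each site is up with probability a = μ/(λ+μ) (failure rate λ, repair rate μ), the probability that a majority quorum is reachable is a binomial tail. This independence-based sketch is generic, not the paper's full model with ghosts and dynamic voting.

```python
from math import comb

def quorum_availability(n, a):
    """Probability that a majority of n sites is up, each independently
    available with probability a (the per-site Markov steady-state
    availability, e.g. a = mu / (lam + mu))."""
    need = n // 2 + 1
    return sum(comb(n, k) * a ** k * (1.0 - a) ** (n - k)
               for k in range(need, n + 1))
```

With a > 0.5, adding replicas raises the majority-quorum availability, which is the baseline a ghost-based scheme tries to beat without paying for full replicas.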


Two Statistical Models for Automatic Word Spacing of Korean Sentences (한글 문장의 자동 띄어쓰기를 위한 두 가지 통계적 모델)

  • Lee, Do-Gil;Lee, Sang-Zoo;Lim, Heui-Seok;Rim, Hae-Chang
    • Journal of KIISE:Software and Applications / v.30 no.3_4 / pp.358-371 / 2003
  • Automatic word spacing is the process of deciding the correct boundaries between words in a sentence that contains spacing errors. It is very important for increasing readability and for communicating the accurate meaning of a text to the reader. Previous statistical approaches to automatic word spacing do not consider the previous spacing state and thus cannot avoid estimating inaccurate probabilities. In this paper, we propose two statistical word spacing models that solve this problem. The proposed models are based on the observation that automatic word spacing can be regarded as a classification problem, like POS tagging. By generalizing hidden Markov models, the models can consider broader context and estimate more accurate probabilities. We experimented with the proposed models under a wide range of conditions to compare them with the current state of the art, and provide a detailed error analysis. The experimental results show that the proposed models achieve a syllable-unit accuracy of 98.33% and an Eojeol-unit precision of 93.06% under an evaluation method that considers compound nouns.
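Treating word spacing as HMM-style tagging means decoding the best tag sequence with the Viterbi algorithm. The sketch below uses toy probabilities and symbols invented for illustration (in the spacing task the observations would be syllables and the states the tags 'SP' = space after, 'NS' = no space); it is not the paper's generalized model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard Viterbi decoding of the most probable state sequence."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # best previous state for arriving in s at time t
            prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            V[t][s] = prob
            back[t][s] = prev
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):   # follow back-pointers
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Toy tagging instance: 'SP' = space after this syllable, 'NS' = no space.
states = ['SP', 'NS']
start_p = {'SP': 0.5, 'NS': 0.5}
trans_p = {'SP': {'SP': 0.6, 'NS': 0.4}, 'NS': {'SP': 0.4, 'NS': 0.6}}
emit_p = {'SP': {'a': 0.9, 'b': 0.1}, 'NS': {'a': 0.1, 'b': 0.9}}
```

The paper's contribution is generalizing this HMM so each decision can condition on broader context than the single previous tag.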