• Title/Summary/Keyword: Markov chain Monte Carlo


Comparison of Estimation Methods in NONMEM 7.2: Application to a Real Clinical Trial Dataset (실제 임상 데이터를 이용한 NONMEM 7.2에 도입된 추정법 비교 연구)

  • Yun, Hwi-Yeol;Chae, Jung-Woo;Kwon, Kwang-Il
    • Korean Journal of Clinical Pharmacy / v.23 no.2 / pp.137-141 / 2013
  • Purpose: This study compared the performance of the new NONMEM 7.2 estimation methods using a population analysis dataset collected from a clinical study of 40 individuals and 567 observations after a single oral dose of glimepiride. Method: The NONMEM 7.2 estimation methods tested were first-order conditional estimation with interaction (FOCEI), importance sampling (IMP), importance sampling assisted by mode a posteriori (IMPMAP), iterative two-stage (ITS), stochastic approximation expectation-maximization (SAEM), and Markov chain Monte Carlo Bayesian (BAYES), using a two-compartment open model. Results: The parameters estimated by IMP, IMPMAP, ITS, SAEM, and BAYES were similar to those estimated using FOCEI, and the objective function value (OFV) used for model diagnostics was significantly lower for FOCEI, IMPMAP, SAEM, and BAYES than for IMP. Parameter standard errors were estimated precisely with FOCEI, IMP, IMPMAP, and BAYES. The model run time was shortest with BAYES. Conclusion: The new estimation methods in NONMEM 7.2 performed similarly in terms of parameter estimation, but in terms of parameter precision and run time, BAYES was the most suitable method for analyzing this dataset.

Estimation of the Mixture of Normals of Saving Rate Using Gibbs Algorithm (Gibbs알고리즘을 이용한 저축률의 정규분포혼합 추정)

  • Yoon, Jong-In
    • Journal of Digital Convergence / v.13 no.10 / pp.219-224 / 2015
  • This research estimates a mixture of normals for the household saving rate in Korea. Our sample is the MDSS micro-data from 2014, and a Gibbs algorithm is used to estimate the mixture of normals. The evidence yields several results. First, the Gibbs algorithm works very well in estimating the mixture of normals. Second, the saving rate data has at least two components, one with mean zero and the other with mean 29.4%; households appear to separate into a high-saving group and a low-saving group. Third, the mixture-of-normals analysis alone cannot explain this separation, and we find that income level and age cannot explain our results.
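The Gibbs step structure described in the abstract can be illustrated with a minimal two-component sampler. This is a toy sketch with synthetic data and fixed component variances, not the paper's MDSS analysis; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "saving rate" data: one group near zero and one near 29.4%.
data = np.concatenate([rng.normal(0.0, 2.0, 300), rng.normal(29.4, 5.0, 200)])
n = len(data)

mu = np.array([0.0, 20.0])      # component means (to be sampled)
sigma2 = np.array([4.0, 25.0])  # component variances (held fixed for brevity)
pi = np.array([0.5, 0.5])       # mixing weights (to be sampled)

for it in range(500):
    # 1) Sample component labels z given the current parameters.
    dens = np.exp(-(data[:, None] - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    w = pi * dens
    w /= w.sum(axis=1, keepdims=True)
    z = (rng.uniform(size=n) < w[:, 1]).astype(int)

    # 2) Sample component means given labels (flat prior => normal posterior).
    for k in (0, 1):
        xk = data[z == k]
        if len(xk) > 0:
            mu[k] = rng.normal(xk.mean(), np.sqrt(sigma2[k] / len(xk)))

    # 3) Sample mixing weights from the conjugate Beta posterior.
    n1 = z.sum()
    pi[1] = rng.beta(1 + n1, 1 + n - n1)
    pi[0] = 1 - pi[1]

mu_sorted = np.sort(mu)  # component means, roughly (0, 29.4) at convergence
```

The three conditional draws (labels, means, weights) are the generic Gibbs cycle for a normal mixture; a full treatment would also sample the component variances.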

Auxiliary domain method for solving multi-objective dynamic reliability problems for nonlinear structures

  • Katafygiotis, Lambros;Moan, Torgeir;Cheung, Sai Hung
    • Structural Engineering and Mechanics / v.25 no.3 / pp.347-363 / 2007
  • A novel methodology, referred to as the Auxiliary Domain Method (ADM), allowing for a very efficient solution of nonlinear reliability problems, is presented. The target nonlinear failure domain is first populated by samples generated with the help of a Markov chain. Based on these samples, an auxiliary failure domain (AFD), corresponding to an auxiliary reliability problem, is introduced. The criteria for selecting the AFD are discussed. The emphasis in this paper is on the selection of the auxiliary linear failure domain in the case where the original nonlinear reliability problem involves multiple objectives rather than a single objective. Each reliability objective is assumed to correspond to a particular response quantity not exceeding a corresponding threshold. Once the AFD has been specified, the method proceeds with a modified subset simulation procedure in which the first step involves the direct simulation of samples in the AFD, rather than standard Monte Carlo simulation as required in standard subset simulation. While the method is applicable to general nonlinear reliability problems, the focus herein is on calculating the probability of failure of nonlinear dynamical systems subjected to Gaussian random excitations. The method is demonstrated through a numerical example involving two reliability objectives and a very large number of random variables. It is found that ADM is very efficient and offers drastic improvements over standard subset simulation, especially when one deals with low-probability failure events.
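The first ingredient above, populating a failure domain with Markov-chain samples, can be sketched in a few lines. This toy uses a 2D linear failure domain in standard-normal space (the paper's domains are nonlinear and high-dimensional): a Metropolis chain targets the standard normal restricted to the domain by rejecting any proposal that leaves it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical failure domain in 2D standard-normal space: F = {x : x0 + x1 > 3}.
def in_failure_domain(x):
    return x[0] + x[1] > 3.0

def log_std_normal(x):
    return -0.5 * np.dot(x, x)

# Start from a seed known to lie in F, then run a Metropolis chain whose
# target is the standard normal conditioned on F.
x = np.array([2.0, 2.0])
samples = []
for _ in range(5000):
    prop = x + rng.normal(0.0, 0.7, size=2)
    if in_failure_domain(prop) and np.log(rng.uniform()) < log_std_normal(prop) - log_std_normal(x):
        x = prop
    samples.append(x.copy())

samples = np.array(samples)
# Every retained state lies in the failure domain by construction.
frac_in_F = np.mean(samples[:, 0] + samples[:, 1] > 3.0)
```

Subset simulation uses exactly this kind of conditional chain at each level; the ADM's modification is to start from direct samples in the auxiliary domain instead of plain Monte Carlo.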

Adaptive MCMC-Based Particle Filter for Real-Time Multi-Face Tracking on Mobile Platforms

  • Na, In Seop;Le, Ha;Kim, Soo Hyung
    • International Journal of Contents / v.10 no.3 / pp.17-25 / 2014
  • In this paper, we describe an adaptive Markov chain Monte Carlo-based particle filter that effectively addresses real-time multi-face tracking on mobile platforms. Because traditional approaches based on a particle filter require an enormous number of particles, their processing time is high. This is a serious issue, especially on low-performance devices such as mobile phones. To resolve this problem, we developed a tracker that includes a more sophisticated likelihood model to reduce the number of particles and maintain the identity of the tracked faces. In our proposed tracker, the number of particles is adjusted during the sampling process using an adaptive sampling scheme. The adaptive sampling scheme is designed based on the average acceptance ratio of the sampled particles of each face. Moreover, a likelihood model based on color information is combined with corner features to improve the accuracy of the sample measurement. Applied to various videos, the proposed tracker showed a significant decrease in processing time compared to traditional approaches.
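The adaptive scheme described, adjusting the particle count from the average acceptance ratio of the MCMC moves, can be sketched on a toy 1D tracking problem. All noise levels and thresholds below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D "face position" tracking: the state follows a random walk and is
# observed with Gaussian noise.
OBS_SD = 0.5

def log_likelihood(particles, obs):
    return -0.5 * (particles - obs) ** 2 / OBS_SD ** 2

n_particles = 400
particles = rng.normal(0.0, 1.0, n_particles)
true_state = 0.0
history = []

for t in range(30):
    true_state += rng.normal(0.0, 0.3)        # motion model
    obs = true_state + rng.normal(0.0, OBS_SD)

    # Propagate particles, then refine each with one Metropolis move
    # targeting the likelihood of the current observation.
    particles = particles + rng.normal(0.0, 0.3, n_particles)
    prop = particles + rng.normal(0.0, 0.2, n_particles)
    accept = np.log(rng.uniform(size=n_particles)) < (
        log_likelihood(prop, obs) - log_likelihood(particles, obs)
    )
    particles = np.where(accept, prop, particles)

    # Adaptive scheme (simplified): shrink the particle set when moves are
    # mostly accepted (easy mixing), grow it when acceptance is poor.
    rate = accept.mean()
    if rate > 0.6:
        n_new = max(100, int(n_particles * 0.9))
    elif rate < 0.3:
        n_new = int(n_particles * 1.1)
    else:
        n_new = n_particles
    if n_new < n_particles:
        particles = particles[:n_new]
    elif n_new > n_particles:
        particles = np.concatenate([particles, rng.choice(particles, n_new - n_particles)])
    n_particles = n_new
    history.append((n_particles, float(particles.mean())))
```

The paper's tracker additionally fuses color and corner features in the likelihood and maintains a separate acceptance ratio per face; this sketch shows only the count-adaptation mechanism.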

Event date model: a robust Bayesian tool for chronology building

  • Lanos, Philippe;Philippe, Anne
    • Communications for Statistical Applications and Methods / v.25 no.2 / pp.131-157 / 2018
  • We propose a robust event date model to estimate the date of a target event from a combination of individual dates obtained from archaeological artifacts assumed to be contemporaneous. These dates are affected by errors of different types: laboratory and calibration curve errors, irreducible errors related to contamination, and taphonomic disturbances, hence the possible presence of outliers. Modeling based on a hierarchical Bayesian statistical approach provides a simple way to automatically penalize outlying data without having to remove them from the dataset. Prior information on individual irreducible errors is introduced using a uniform shrinkage density with minimal assumptions about the Bayesian parameters. We show that the event date model is more robust than the models implemented in BCal or OxCal, although it generally yields less precise credibility intervals. The model is extended to stratigraphic sequences that involve several events with temporal order constraints (relative dating), or with duration and hiatus constraints. Calculations are based on Markov chain Monte Carlo (MCMC) numerical techniques and can be performed using the ChronoModel software, which is freeware, open source, and cross-platform. Features of the software are presented in Vibet et al. (ChronoModel v1.5 user's manual, 2016). Finally, we compare our prior on event dates implemented in ChronoModel with the priors in BCal and OxCal, which involve supplementary parameters defined as boundaries of phases or sequences.
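The automatic penalization of outlying dates can be illustrated with a minimal hierarchical Gibbs sampler. This sketch uses synthetic dates, and for conjugacy it places an inverse-gamma prior on the individual variances instead of the paper's uniform shrinkage density:

```python
import numpy as np

rng = np.random.default_rng(5)

# Five "artifact dates" around a true event near 1000, plus one gross outlier.
dates = np.array([995.0, 1002.0, 998.0, 1005.0, 1001.0, 1200.0])

# Each date gets its own variance v_i, so outliers are automatically
# downweighted rather than removed from the dataset.
a, b = 2.0, 50.0                 # inverse-gamma hyperparameters (illustrative)
v = np.full(len(dates), 25.0)
e_draws = []
for _ in range(3000):
    # 1) event date | variances: precision-weighted normal draw.
    prec = 1.0 / v
    e = rng.normal((dates * prec).sum() / prec.sum(), np.sqrt(1.0 / prec.sum()))
    # 2) individual variances | event date: conjugate inverse-gamma updates.
    #    (v ~ InvGamma(shape, scale) sampled as 1 / Gamma(shape, 1/scale).)
    v = 1.0 / rng.gamma(a + 0.5, 1.0 / (b + 0.5 * (dates - e) ** 2))
    e_draws.append(e)

e_hat = np.mean(e_draws[1000:])  # posterior mean after burn-in
```

The outlier's large residual inflates its sampled variance, shrinking its precision weight, so the posterior mean stays near the cluster of consistent dates instead of the arithmetic mean.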

Bayesian estimation of kinematic parameters of disk galaxies in large HI galaxy surveys

  • Oh, Se-Heon;Staveley-Smith, Lister
    • The Bulletin of The Korean Astronomical Society / v.41 no.2 / pp.62.2-62.2 / 2016
  • We present a newly developed algorithm based on a Bayesian method for 2D tilted-ring analysis of disk galaxies, which operates on velocity fields. Compared to conventional algorithms based on a chi-squared minimisation procedure, this new Bayesian-based algorithm suffers less from local minima of the model parameters, even with high multi-modality of their posterior distributions. Moreover, the Bayesian analysis, implemented via Markov chain Monte Carlo (MCMC) sampling, requires only broad ranges for the model parameters, which makes the fitting procedure fully automated. This feature is essential for performing kinematic analysis of an unprecedented number of resolved galaxies from the upcoming Square Kilometre Array (SKA) pathfinders' galaxy surveys. A standalone code, the '2D Bayesian Automated Tilted-ring fitter' (2DBAT), which implements Bayesian fits of 2D tilted-ring models, is developed for deriving rotation curves of galaxies that are at least marginally resolved (> 3 beams across the semi-major axis) and moderately inclined (20 < i < 70 degrees). The main layout of 2DBAT and its performance tests are discussed using sample galaxies from Australia Telescope Compact Array (ATCA) observations as well as artificial data cubes built from representative rotation curves of intermediate-mass and massive spiral galaxies.

Bayesian smoothing under structural measurement error model with multiple covariates

  • Hwang, Jinseub;Kim, Dal Ho
    • Journal of the Korean Data and Information Science Society / v.28 no.3 / pp.709-720 / 2017
  • In healthcare and medical research, many important variables, such as body mass index and laboratory data, are subject to measurement error. It is also not easy to collect large samples because of the high cost and long time required to recruit patients satisfying the inclusion and exclusion criteria. Besides, the demand for solving complex scientific problems has increased greatly, so a semiparametric regression approach could be of substantial value in addressing them. To address the issues of measurement error, small domains, and scientific complexity, we conduct multivariable Bayesian smoothing under a structural measurement error covariate in this article. Specifically, we enhance our previous model by incorporating other useful auxiliary covariates free of measurement error. For the regression spline, we use radial basis functions with fixed knots for the measurement error covariate. We organize a fully Bayesian approach to fit the model and estimate parameters using Markov chain Monte Carlo. Simulation results show that the method performs well. We illustrate the results using national survey data as an application.

Phrase-based Topic and Sentiment Detection and Tracking Model using Incremental HDP

  • Chen, YongHeng;Lin, YaoJin;Zuo, WanLi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.12 / pp.5905-5926 / 2017
  • Sentiments can profoundly affect individual behavior as well as decision-making. Confronted with the ever-increasing amount of review information available online, it is desirable to provide an effective sentiment model that both detects and organizes the available information to improve understanding, and presents the information in a more constructive way for consumers. This study developed a unified phrase-based topic and sentiment detection model, combined with a tracking model using an incremental hierarchical Dirichlet process (PTSM_IHDP). The model was proposed to discover the evolutionary trend of topic-based sentiments from online reviews. The PTSM_IHDP model first assumes that each review document is composed of a series of independent phrases, each of which can be represented by both topic information and sentiment information. It then relies on an improved time-dependent non-parametric Bayesian model, integrating the incremental hierarchical Dirichlet process, to estimate the optimal number of topics by incrementally building an up-to-date model. To evaluate the effectiveness of our model, we tested it on a collected dataset and compared the results with the predictions of traditional models. The results demonstrate the effectiveness and advantages of our model compared to several state-of-the-art methods.

A Bayesian Prediction of the Generalized Pareto Model (일반화 파레토 모형에서의 베이지안 예측)

  • Huh, Pan;Sohn, Joong Kweon
    • The Korean Journal of Applied Statistics / v.27 no.6 / pp.1069-1076 / 2014
  • Rainfall patterns have changed due to global warming, and sudden heavy rainfalls have become more frequent, increasing the economic loss they cause. We study the generalized Pareto distribution for modelling rainfall in Seoul based on data from 1973 to 2008. We use several priors, including Jeffreys' noninformative prior, and the Gibbs sampling method to derive Bayesian posterior predictive distributions. Based on the estimated posterior predictive distribution, the probability of heavy rainfall has increased over the last ten years.
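A minimal version of Bayesian inference for the generalized Pareto model can be sketched with random-walk Metropolis in place of the paper's Gibbs sampler. The data are synthetic exceedances, the prior is flat on (log sigma, xi), and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic rainfall exceedances over a threshold, drawn from a generalized
# Pareto distribution with scale 20 and shape 0.1 via inverse-CDF sampling.
sigma_true, xi_true = 20.0, 0.1
u = rng.uniform(size=500)
y = sigma_true / xi_true * ((1 - u) ** (-xi_true) - 1)

def log_post(log_sigma, xi):
    """GPD log-likelihood under a flat prior on (log sigma, xi)."""
    sigma = np.exp(log_sigma)
    t = 1 + xi * y / sigma
    if np.any(t <= 0):
        return -np.inf  # outside the support
    return -len(y) * log_sigma - (1 / xi + 1) * np.log(t).sum()

# Random-walk Metropolis on (log sigma, xi).
theta = np.array([np.log(15.0), 0.3])
lp = log_post(*theta)
draws = []
for i in range(4000):
    prop = theta + rng.normal(0.0, [0.05, 0.03])
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 2000:               # keep post-burn-in draws
        draws.append(theta.copy())

draws = np.array(draws)
sigma_hat = np.exp(draws[:, 0]).mean()
xi_hat = draws[:, 1].mean()

# Posterior predictive draws of a future exceedance: sample a parameter draw,
# then sample a new observation from the GPD with those parameters.
idx = rng.integers(0, len(draws), 1000)
s, xs = np.exp(draws[idx, 0]), draws[idx, 1]
y_pred = s / xs * ((1 - rng.uniform(size=1000)) ** (-xs) - 1)
```

Quantiles of `y_pred` then give posterior predictive return levels, which is how statements about the probability of heavy rainfall are read off.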

Efficient Bayesian Inference on Asymmetric Jump-Diffusion Models (비대칭적 점프확산 모형의 효율적인 베이지안 추론)

  • Park, Taeyoung;Lee, Youngeun
    • The Korean Journal of Applied Statistics / v.27 no.6 / pp.959-973 / 2014
  • Asset pricing models that account for asymmetric volatility in asset prices have recently been proposed. This article presents an efficient Bayesian method to analyze such asset-pricing models. The method is developed by devising a partially collapsed Gibbs sampler that capitalizes on the functional incompatibility of conditional distributions without complicating the updates of model components. The proposed method is illustrated using simulated data and applied to daily S&P 500 data observed from September 1980 to August 2014.
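The idea behind collapsing a Gibbs step, replacing a full conditional with a partially marginalized one to improve mixing without changing the target, can be illustrated on a toy normal model. This is a fully collapsed special case with flat priors, not the paper's jump-diffusion sampler:

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.normal(10.0, 2.0, 50)
n, ybar = len(y), y.mean()
s2 = ((y - ybar) ** 2).sum()

# Ordinary Gibbs would alternate mu | sigma2, y and sigma2 | mu, y.
# The collapsed update below instead draws sigma2 | y with mu integrated out,
# then mu | sigma2, y, so successive draws are independent of each other.
mu_draws, sig2_draws = [], []
for _ in range(2000):
    # sigma2 | y (mu marginalized): scaled inverse-chi-square with n-1 df.
    sigma2 = s2 / rng.chisquare(n - 1)
    # mu | sigma2, y: normal around the sample mean.
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    mu_draws.append(mu)
    sig2_draws.append(sigma2)

mu_hat, sig2_hat = np.mean(mu_draws), np.mean(sig2_draws)
```

In a partially collapsed sampler only some components are marginalized out of some conditionals, and the order of the draws matters for preserving the target distribution; this toy shows the mixing benefit in the simplest possible setting.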