• Title/Abstract/Keyword: Markov chain Monte Carlo


혼합분포모형의 매개변수 추정방법 비교 (Comparison of Three Parameter Estimation Methods for Mixture Distributions)

  • 신주영;김수영;김태림;허준행
    • Korea Water Resources Association: Conference Proceedings / Korea Water Resources Association 2017 Conference / pp.45-45 / 2017
  • Data generated by different natural phenomena sometimes have statistically different characteristics, and such data can be assumed to arise from two or more distinct populations. Conventional distribution models, developed under the assumption that data come from a single population, cannot adequately simulate data of this kind. Mixture distribution models were developed to model data arising from different populations. Since extreme events such as floods and droughts are produced by a variety of natural phenomena, applying a mixture distribution model allows more accurate simulation. A mixture distribution model is constructed as a weighted sum of two or more non-mixture distribution models. Because of this form, it is not easy to estimate its parameters with the methods widely used for conventional distribution models, such as the maximum likelihood method, the method of moments, and the probability weighted moment method. Instead, the Expectation-Maximization (EM) algorithm, the Meta-Heuristic Maximum Likelihood (MHML) method, and the Markov Chain Monte Carlo (MCMC) method are applied. To date, however, the characteristics of these parameter estimation methods have not been studied for mixture distribution models applied to extreme data in the water resources field. In this study, the characteristics of the three parameter estimation methods (the EM algorithm, the MHML method, and the MCMC method) were compared using annual maximum rainfall data from South Korea. A Gumbel-Gumbel mixture distribution model was used. The results of this study are expected to serve as useful reference material for future research on mixture distribution models.

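The entry above compares the EM algorithm, MHML, and MCMC for a Gumbel-Gumbel mixture. As a point of reference, here is a minimal EM sketch for a two-component Gumbel mixture, not the authors' implementation: `x` is assumed to be a 1-D NumPy array of annual maximum rainfall, and the initialization and iteration count are arbitrary choices.

```python
import numpy as np
from scipy.stats import gumbel_r
from scipy.optimize import minimize

def em_gumbel_mixture(x, n_iter=100):
    w = 0.5                                          # mixing weight of component 1
    med, s = np.median(x), x.std()
    theta = [[med - s / 2, s], [med + s / 2, s]]     # [location, scale] per component
    for _ in range(n_iter):
        # E-step: posterior probability that each observation belongs to component 1
        p1 = w * gumbel_r.pdf(x, theta[0][0], theta[0][1])
        p2 = (1 - w) * gumbel_r.pdf(x, theta[1][0], theta[1][1])
        r = p1 / (p1 + p2)
        # M-step: update the weight, then each component by weighted likelihood
        w = r.mean()
        for k, rk in enumerate((r, 1 - r)):
            def nll(p, rk=rk):                       # weighted negative log-likelihood
                return -np.sum(rk * gumbel_r.logpdf(x, p[0], np.exp(p[1])))
            start = [theta[k][0], np.log(theta[k][1])]
            fit = minimize(nll, start, method="Nelder-Mead")
            theta[k] = [fit.x[0], float(np.exp(fit.x[1]))]
    return w, theta                                  # weight and [location, scale] pairs
```

A comparable MCMC fit would replace the M-step with posterior sampling of the weight and the two sets of Gumbel parameters.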

Bayesian Spatial Modeling of Precipitation Data

  • Heo, Tae-Young;Park, Man-Sik
    • The Korean Journal of Applied Statistics / Vol. 22, No. 2 / pp.425-433 / 2009
  • Spatial models suitable for describing the evolving random fields in climate and environmental systems have been developed by many researchers. In general, rainfall in South Korea is highly variable in intensity and amount across space. This study characterizes the monthly and regional variation of rainfall fields using spatial modeling. The main objective of this research is spatial prediction with Bayesian hierarchical modeling (kriging) in order to further our understanding of water resources over space. We use the Bayesian approach to estimate the parameters and produce more reliable predictions. Bayesian kriging also provides a promising solution for analyzing and predicting rainfall data.
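As a rough illustration of Bayesian kriging of the kind described above (not the paper's hierarchical model), the sketch below samples the parameters of an exponential covariance by random-walk Metropolis and averages the kriging predictor over the draws; `coords`, `y`, and `s0`, as well as the flat priors and proposal scale, are assumptions for the example.

```python
import numpy as np

def exp_cov(d, sill, range_):
    # exponential covariance function
    return sill * np.exp(-d / range_)

def pairwise_dist(a, b):
    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))

def bayes_krige(coords, y, s0, n_iter=5000, seed=0):
    rs = np.random.default_rng(seed)
    D = pairwise_dist(coords, coords)
    d0 = pairwise_dist(coords, s0[None, :])[:, 0]
    mu = y.mean()                                           # mean fixed at the sample mean
    theta = np.log([y.var(), D.max() / 3, y.var() / 10])    # log sill, range, nugget

    def loglik(th):
        sill, range_, nug = np.exp(th)
        K = exp_cov(D, sill, range_) + nug * np.eye(len(y))
        _, logdet = np.linalg.slogdet(K)
        r = y - mu
        return -0.5 * (logdet + r @ np.linalg.solve(K, r))

    cur = loglik(theta)
    preds = []
    for it in range(n_iter):
        prop = theta + 0.1 * rs.normal(size=3)       # random-walk proposal on log scale
        new = loglik(prop)
        if np.log(rs.uniform()) < new - cur:         # flat prior on the log parameters
            theta, cur = prop, new
        if it > n_iter // 2 and it % 50 == 0:        # keep thinned post-burn-in draws
            sill, range_, nug = np.exp(theta)
            K = exp_cov(D, sill, range_) + nug * np.eye(len(y))
            k0 = exp_cov(d0, sill, range_)
            preds.append(mu + k0 @ np.linalg.solve(K, y - mu))   # kriging mean at s0
    return np.array(preds)       # posterior draws of the prediction at the new site
```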

Sensitivity analysis in Bayesian nonignorable selection model for binary responses

  • Choi, Seong Mi;Kim, Dal Ho
    • Journal of the Korean Data and Information Science Society / Vol. 25, No. 1 / pp.187-194 / 2014
  • We consider a Bayesian nonignorable selection model to accommodate selection bias. Markov chain Monte Carlo methods are known to be very useful for fitting the nonignorable selection model. However, sensitivity to prior assumptions on the parameters of the selection mechanism is a potential problem. To quantify this sensitivity, the deviance information criterion and the conditional predictive ordinate are used to compare the goodness of fit under two different prior specifications. It turns out that the 'MLE' prior gives a better fit than the 'uniform' prior in terms of both goodness-of-fit measures.
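The two comparison criteria named in this abstract can be computed directly from MCMC output. The sketch below assumes a `loglik` array of shape (n_draws, n_obs) holding pointwise log-likelihoods at each posterior draw, and uses the variance approximation for the effective number of parameters rather than a plug-in deviance.

```python
import numpy as np

def dic(loglik):
    deviance = -2.0 * loglik.sum(axis=1)       # deviance at each posterior draw
    p_d = 0.5 * deviance.var(ddof=1)           # effective number of parameters (Gelman)
    return deviance.mean() + p_d

def cpo(loglik):
    # CPO_i is the harmonic mean of the likelihood of observation i over the draws
    return 1.0 / np.mean(np.exp(-loglik), axis=0)

# Smaller DIC and a larger summed log(CPO) both point to the better-fitting prior.
```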

BAYESIAN ROBUST ANALYSIS FOR NON-NORMAL DATA BASED ON A PERTURBED-t MODEL

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society / Vol. 35, No. 4 / pp.419-439 / 2006
  • The article develops a new class of distributions by introducing a nonnegative perturbing function to the $t_\nu$ distribution with location and scale parameters. The class is obtained by using transformations and conditioning, and it strictly includes the $t_\nu$ and skew-$t_\nu$ distributions. It provides yet other models useful for selection modeling and robustness analysis. Analytic forms of the densities are obtained and distributional properties are studied. These developments are followed by an easy method for estimating the distribution using Markov chain Monte Carlo. It is shown that the method is straightforward to specify distributionally and to implement computationally, with output readily adapted for constructing the required criteria. The method is illustrated by a simulation study.
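A hedged illustration of one member of such a perturbed-$t$ class: the sketch below takes $\Phi(\lambda z)$ as the nonnegative perturbing function (giving a skew-$t$ type density, which the abstract says the class includes) and draws from it with random-walk Metropolis; the parameter values are placeholders, not taken from the paper.

```python
import numpy as np
from scipy.stats import t, norm

def sample_perturbed_t(n, nu=5.0, mu=0.0, sigma=1.0, lam=2.0, step=1.0, seed=0):
    rs = np.random.default_rng(seed)
    def log_target(x):                         # unnormalised perturbed-t log density
        z = (x - mu) / sigma
        return t.logpdf(z, nu) + norm.logcdf(lam * z)   # t_nu times Phi(lam * z)
    x, cur = mu, log_target(mu)
    draws = np.empty(n)
    for i in range(n):
        prop = x + step * rs.normal()          # random-walk proposal
        new = log_target(prop)
        if np.log(rs.uniform()) < new - cur:
            x, cur = prop, new
        draws[i] = x
    return draws
```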

POSTERIOR COMPUTATION OF SURVIVAL MODEL WITH DISCRETE APPROXIMATION

  • Lee, Jae-Yong;Kwon, Yong-Chan
    • Journal of the Korean Statistical Society / Vol. 36, No. 2 / pp.321-333 / 2007
  • In the proportional hazard model with the beta process prior, posterior computation with a discrete approximation is considered. The time period of interest is partitioned into small intervals. On each partitioning interval, the likelihood is approximated by that of a binomial experiment and the beta process prior by a beta distribution. Consequently, the posterior is approximated by that of many independent binomial models with beta priors. The analysis of the leukemia remission data is given as an example. It is illustrated that the length of the partitioning interval affects the posterior, so one needs to be careful in choosing it.
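The discrete approximation described above reduces to independent beta-binomial updates, which a short sketch can make concrete; the prior values `a0` and `b0` below are illustrative stand-ins for the discretized beta process parameters.

```python
import numpy as np

def interval_posteriors(times, events, grid, a0=0.5, b0=0.5):
    """times: event or censoring times; events: 1 for an event, 0 for censoring;
    grid: increasing interval endpoints starting at 0."""
    times, events = np.asarray(times), np.asarray(events)
    post = []
    for lo, hi in zip(grid[:-1], grid[1:]):
        at_risk = np.sum(times >= lo)                    # subjects still at risk at lo
        died = np.sum((times >= lo) & (times < hi) & (events == 1))
        post.append((a0 + died, b0 + at_risk - died))    # Beta posterior for this interval
    return post   # independent Beta(a, b) posteriors for the discrete hazard increments
```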

Multivariable Bayesian curve-fitting under functional measurement error model

  • Hwang, Jinseub;Kim, Dal Ho
    • Journal of the Korean Data and Information Science Society / Vol. 27, No. 6 / pp.1645-1651 / 2016
  • A lot of data, particularly in the medical field, contain variables measured with error, such as blood pressure and body mass index. At the same time, smoothing methods are increasingly used to address complex scientific problems. In this paper, we study Bayesian curve-fitting under a functional measurement error model. In particular, we extend our previous model by incorporating covariates that are free of measurement error, and we use penalized splines for the non-linear pattern. We employ a hierarchical Bayesian framework based on Markov chain Monte Carlo methodology for fitting the model and estimating the parameters. For the application we use data from the fifth wave (2012) of the Korea National Health and Nutrition Examination Survey, a national population-based survey. To examine the convergence of the MCMC sampling, potential scale reduction factors are used, and a model selection criterion is also checked to assess performance.
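Of the diagnostics mentioned above, the potential scale reduction factor is easy to show in a few lines; the sketch below assumes `chains` is an array of shape (m_chains, n_draws) for a single parameter.

```python
import numpy as np

def potential_scale_reduction(chains):
    chains = np.asarray(chains, dtype=float)   # shape (m_chains, n_draws)
    _, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled estimate of the posterior variance
    return np.sqrt(var_hat / W)                # values close to 1 suggest convergence
```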

Bayesian Methods for Wavelet Series in Single-Index Models

  • Park, Chun-Gun;Vannucci, Marina;Hart, Jeffrey D.
    • Korean Data and Information Science Society: Conference Proceedings / Korean Data and Information Science Society 2005 Spring Conference / pp.83-126 / 2005
  • Single-index models have found applications in econometrics and biometrics, where multidimensional regression models are often encountered. Here we propose a nonparametric estimation approach that combines wavelet methods for non-equispaced designs with Bayesian models. We consider a wavelet series expansion of the unknown regression function and set prior distributions for the wavelet coefficients and the other model parameters. To ensure model identifiability, the direction parameter is represented via its polar coordinates. We employ ad hoc hierarchical mixture priors that perform shrinkage on wavelet coefficients and use Markov chain Monte Carlo methods for a posteriori inference. We investigate an independence-type Metropolis-Hastings algorithm to produce samples for the direction parameter. Our method leads to simultaneous estimates of the link function and of the index parameters. We present results on both simulated and real data, where we look at comparisons with other methods.

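Two ingredients of the method above, the polar-coordinate representation of the direction parameter and an independence-type Metropolis-Hastings update, can be sketched generically; the uniform proposal and the toy log-posterior below are assumptions for illustration, not the paper's priors.

```python
import numpy as np

def angles_to_direction(phi):
    """Polar angles (phi_1, ..., phi_{d-1}) to a unit vector in R^d; restricting the angles
    to [0, pi] keeps the last coordinate nonnegative, one common identifiability convention."""
    d = len(phi) + 1
    theta = np.ones(d)
    for j, a in enumerate(phi):
        theta[j] *= np.cos(a)
        theta[j + 1:] *= np.sin(a)
    return theta

def independence_mh(log_post, d, n_iter=2000, seed=0):
    rs = np.random.default_rng(seed)
    phi = rs.uniform(0, np.pi, size=d - 1)          # current angles
    cur = log_post(angles_to_direction(phi))
    draws = []
    for _ in range(n_iter):
        prop = rs.uniform(0, np.pi, size=d - 1)     # independence proposal
        new = log_post(angles_to_direction(prop))
        if np.log(rs.uniform()) < new - cur:        # uniform proposal density cancels
            phi, cur = prop, new
        draws.append(angles_to_direction(phi))
    return np.array(draws)

# toy usage with a log-posterior concentrated around a known direction
true_dir = np.array([0.6, 0.8, 0.0])
direction_draws = independence_mh(lambda th: 20.0 * th @ true_dir, d=3)
```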

A BAYESIAN APPROACH FOR A DECOMPOSITION MODEL OF SOFTWARE RELIABILITY GROWTH USING A RECORD VALUE STATISTICS

  • Choi, Ki-Heon;Kim, Hee-Cheul
    • Journal of Applied Mathematics & Informatics / Vol. 8, No. 1 / pp.243-252 / 2001
  • The points of failure of a decomposition process are defined to be the union of the points of failure from two component point processes for software reliability systems. Because sampling directly from the likelihood function of the decomposition model is difficult, the Gibbs sampler can be applied in a straightforward manner. A Markov chain Monte Carlo method with data augmentation is developed to compute the features of the posterior distribution. For model determination, we explore the prequential conditional predictive ordinate criterion, which selects the model with the largest posterior likelihood among all possible subsets of the component intensity functions. A numerical example with a simulated data set is given.
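A stripped-down version of Gibbs sampling with data augmentation for a decomposition (superposition) of two failure processes is sketched below; it simplifies the component intensities to constants with Gamma priors, an assumption made only to keep the example short, and the latent component label of each failure plays the role of the augmented data.

```python
import numpy as np

def gibbs_decomposition(fail_times, T, n_iter=3000, a=1.0, b=1.0, seed=0):
    """fail_times: observed failure times in [0, T]; a, b: Gamma prior hyperparameters."""
    rs = np.random.default_rng(seed)
    t = np.asarray(fail_times, dtype=float)
    lam = np.full(2, len(t) / (2.0 * T))          # initial component intensities
    draws = np.empty((n_iter, 2))
    for it in range(n_iter):
        # data augmentation: label each failure with the component that produced it
        z = rs.uniform(size=len(t)) < lam[0] / lam.sum()
        n1 = int(z.sum())
        # conjugate Gamma updates given the labels (constant-intensity simplification)
        lam[0] = rs.gamma(a + n1, 1.0 / (b + T))
        lam[1] = rs.gamma(a + len(t) - n1, 1.0 / (b + T))
        draws[it] = lam
    return draws      # posterior draws of the two component intensities
```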

GPU 를 활용한 스캔라인 블록 Gibbs 샘플링 기법의 가속 (Accelerating Scanline Block Gibbs Sampling Method using GPU)

  • ;김원식;;박인규
    • Korean Institute of Broadcast and Media Engineers: Conference Proceedings / Korean Society of Broadcast Engineers 2014 Summer Conference / pp.77-78 / 2014
  • A new MCMC method for optimization, called the scanline block Gibbs sampler, is presented in this paper. Traditional Markov chain Monte Carlo (MCMC) is not widely used because of its slow convergence speed. In contrast to conventional MCMC methods, the scanline block Gibbs sampler is convenient to parallelize. Since its main computation is calculating the messages passed along each edge, this message passing is parallelized on the GPU to accelerate the sampler. Experiments on the OpenGM2 benchmark show that the GPU implementation is faster than the CPU implementation.

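A CPU-side sketch (NumPy, not the authors' GPU code) of one scanline block update: a whole row of a binary Ising field is resampled jointly by passing forward messages along the row and then sampling backwards, which is the kind of per-edge message computation the paper parallelizes; the coupling strength `J` is an illustrative value.

```python
import numpy as np

def sample_row(x, r, J=0.5, rng=None):
    """Resample row r of a +/-1 Ising field x jointly, given its neighbouring rows,
    by forward message passing along the row and backward sampling."""
    rng = rng or np.random.default_rng()
    H, W = x.shape
    states = np.array([-1.0, 1.0])
    up = x[r - 1] if r > 0 else np.zeros(W)
    down = x[r + 1] if r < H - 1 else np.zeros(W)
    unary = J * states[None, :] * (up + down)[:, None]   # (W, 2) log unary potentials
    pair = J * states[:, None] * states[None, :]         # (2, 2) log pairwise potentials
    alpha = np.empty((W, 2))                             # forward messages (log space)
    alpha[0] = unary[0]
    for i in range(1, W):
        m = alpha[i - 1][:, None] + pair                 # m[s_prev, s]
        alpha[i] = unary[i] + np.logaddexp(m[0], m[1])
    new = np.empty(W)                                    # backward sampling
    p = np.exp(alpha[-1] - alpha[-1].max())
    p /= p.sum()
    idx = rng.choice(2, p=p)
    new[-1] = states[idx]
    for i in range(W - 2, -1, -1):
        logp = alpha[i] + pair[:, idx]
        p = np.exp(logp - logp.max())
        p /= p.sum()
        idx = rng.choice(2, p=p)
        new[i] = states[idx]
    x[r] = new
    return x
```

One sweep applies `sample_row` to every row in turn; rows that share no neighbor (for example, all even rows) are conditionally independent and can be updated in parallel, which is where the GPU speed-up comes from.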

Bayesian Variable Selection in the Proportional Hazard Model with Application to DNA Microarray Data

  • Lee, Kyeon-Eun;Mallick, Bani K.
    • Korean Society for Bioinformatics: Conference Proceedings / Korean Society for Bioinformatics and Systems Biology, BIOINFO 2005 / pp.357-360 / 2005
  • In this paper we consider the well-known semiparametric proportional hazards (PH) models for survival analysis. These models are usually used with few covariates and many observations (subjects). However, for a typical setting of gene expression data from DNA microarrays, we need to consider the case where the number of covariates p exceeds the number of samples n. For a given vector of response values, which are times to event (death or censored times), and p gene expressions (covariates), we address the issue of how to reduce the dimension by selecting the significant genes. This approach enables us to estimate the survival curve when n << p. In our approach, rather than fixing the number of selected genes, we assign a prior distribution to this number. This creates additional flexibility by allowing the imposition of constraints, such as bounding the dimension via a prior, which in effect works as a penalty. To implement our methodology, we use a Markov Chain Monte Carlo (MCMC) method. We demonstrate the methodology on diffuse large B-cell lymphoma (DLBCL) complementary DNA (cDNA) data.

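The subset-moving MCMC described above can be sketched with an add/delete move over a binary inclusion vector and a prior that bounds the number of selected genes. To keep the sketch self-contained, the Gaussian g-prior marginal likelihood below stands in for the paper's proportional-hazards likelihood; `X`, `y`, and all hyperparameters are illustrative.

```python
import numpy as np

def log_marginal(X, y, gamma, g=100.0):
    """Zellner g-prior log marginal likelihood (up to a constant) for the columns
    flagged in the boolean vector gamma; a stand-in for the PH likelihood."""
    n = len(y)
    yc = y - y.mean()
    tss = yc @ yc
    k = int(gamma.sum())
    if k == 0:
        r2 = 0.0
    else:
        Xg = X[:, gamma] - X[:, gamma].mean(axis=0)
        beta, *_ = np.linalg.lstsq(Xg, yc, rcond=None)
        resid = yc - Xg @ beta
        r2 = 1.0 - (resid @ resid) / tss
    return 0.5 * (n - 1 - k) * np.log(1 + g) - 0.5 * (n - 1) * np.log(1 + g * (1 - r2))

def subset_mcmc(X, y, n_iter=5000, max_genes=20, prior_p=0.01, seed=0):
    rs = np.random.default_rng(seed)
    _, p = X.shape

    def log_post(gamma):
        k = gamma.sum()                               # prior on the number of selected genes
        return log_marginal(X, y, gamma) + k * np.log(prior_p) + (p - k) * np.log(1 - prior_p)

    gamma = np.zeros(p, dtype=bool)
    cur = log_post(gamma)
    freq = np.zeros(p)
    for _ in range(n_iter):
        j = rs.integers(p)                            # add/delete move on a single gene
        prop = gamma.copy()
        prop[j] = not prop[j]
        if prop.sum() <= max_genes:                   # dimension bound acting as a penalty
            new = log_post(prop)
            if np.log(rs.uniform()) < new - cur:
                gamma, cur = prop, new
        freq += gamma
    return freq / n_iter                              # posterior inclusion frequencies
```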