• Title/Summary/Keyword: Gibbs Sampling Algorithm


The Bayesian Analysis for Software Reliability Models Based on NHPP (비동질적 포아송과정을 사용한 소프트웨어 신뢰 성장모형에 대한 베이지안 신뢰성 분석에 관한 연구)

  • Lee, Sang-Sik;Kim, Hee-Cheul;Kim, Yong-Jae
    • The KIPS Transactions:PartD
    • /
    • v.10D no.5
    • /
    • pp.805-812
    • /
    • 2003
  • This paper presents a stochastic model for the software failure phenomenon based on a nonhomogeneous Poisson process (NHPP) and performs Bayesian inference using prior information. The failure process is analyzed to develop a suitable mean value function for the NHPP, and expressions are given for several performance measures. Parametric inference for the model is discussed using the logarithmic Poisson, Crow, and Rayleigh models. Bayesian computation and model selection are carried out using the sum of squared errors. The models are applied to real software failure data. Parameter inference uses Gibbs sampling and the Metropolis algorithm. A numerical example is illustrated with Musa's T1 data.
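
The alternating-conditional structure of the Gibbs sampler these papers rely on can be shown with a minimal stand-in model. The sketch below is hypothetical (a normal likelihood with conjugate full conditionals, not the paper's NHPP mean value functions); it only illustrates how each parameter is redrawn from its full conditional given the current value of the other:

```python
import random
import statistics

def gibbs_normal(y, n_iter=2000, burn_in=500, seed=1):
    """Gibbs sampler for y_i ~ N(mu, sigma^2) with a flat prior on mu
    and an Inverse-Gamma(1, 1) prior on sigma^2."""
    rng = random.Random(seed)
    n, ybar = len(y), statistics.fmean(y)
    mu, sigma2 = ybar, statistics.pvariance(y)
    draws = []
    for t in range(n_iter):
        # mu | sigma2, y  ~  N(ybar, sigma2 / n)
        mu = rng.gauss(ybar, (sigma2 / n) ** 0.5)
        # sigma2 | mu, y  ~  Inv-Gamma(1 + n/2, 1 + SSE/2)
        sse = sum((yi - mu) ** 2 for yi in y)
        # Inv-Gamma(a, b) draw: 1 / Gamma(shape=a, scale=1/b)
        sigma2 = 1.0 / rng.gammavariate(1 + n / 2, 1.0 / (1 + sse / 2))
        if t >= burn_in:
            draws.append((mu, sigma2))
    return draws
```

A Metropolis step would replace one of these closed-form draws with a propose-accept-reject move whenever the full conditional is not a standard distribution.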

The Comparison of Parameter Estimation for Nonhomogeneous Poisson Process Software Reliability Model (NHPP 소프트웨어 신뢰도 모형에 대한 모수 추정 비교)

  • Kim, Hee-Cheul;Lee, Sang-Sik;Song, Young-Jae
    • The KIPS Transactions:PartD
    • /
    • v.11D no.6
    • /
    • pp.1269-1276
    • /
    • 2004
  • Parameter estimation for existing software reliability models, the Goel-Okumoto and Yamada-Ohba-Osaki models, is reviewed, and a Rayleigh model based on the Rayleigh distribution is studied. This paper compares parameter estimation by the maximum likelihood estimator with Bayesian estimation based on Gibbs sampling, in order to analyze the patterns of the estimators. For efficient model selection, the sum of squared errors and the Braun statistic are employed. A numerical example is illustrated using real data. Superposition and mixture models are also noted as areas for future development.

Introduction to the Indian Buffet Process: Theory and Applications (인도부페 프로세스의 소개: 이론과 응용)

  • Lee, Youngseon;Lee, Kyoungjae;Lee, Kwangmin;Lee, Jaeyong;Seo, Jinwook
    • The Korean Journal of Applied Statistics
    • /
    • v.28 no.2
    • /
    • pp.251-267
    • /
    • 2015
  • The Indian Buffet Process is a stochastic process on equivalence classes of binary matrices having finite rows and infinite columns. The Indian Buffet Process can be imposed as the prior distribution on the binary matrix in an infinite feature model. We describe the derivation of the Indian Buffet Process from a finite feature model and briefly explain the relation between the Indian Buffet Process and the beta process. Using a Gaussian linear model, we describe three algorithms: the Gibbs sampling algorithm, the stick-breaking algorithm, and the variational method, with an application to finding features in image data. We also illustrate the use of the Indian Buffet Process in various types of analysis, such as dyadic data analysis, network data analysis, and independent component analysis.
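
As a toy illustration of the generative scheme behind the prior (not the Gaussian linear model inference or the stick-breaking and variational algorithms the paper covers), here is a minimal simulation of the Indian Buffet Process itself; the function names and defaults are made up for this sketch:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's inversion method for a Poisson(lam) draw (fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def ibp(n_customers, alpha, seed=0):
    """Simulate the Indian Buffet Process: customer i takes existing dish k
    with probability m_k / i, then samples Poisson(alpha / i) new dishes.
    Returns each customer's set of dishes and the total number of dishes."""
    rng = random.Random(seed)
    counts, rows = [], []          # counts[k] = customers who took dish k
    for i in range(1, n_customers + 1):
        row = set()
        for k in range(len(counts)):
            if rng.random() < counts[k] / i:
                counts[k] += 1
                row.add(k)
        for _ in range(poisson(alpha / i, rng)):
            counts.append(1)       # a brand-new dish, taken by this customer
            row.add(len(counts) - 1)
        rows.append(row)
    return rows, len(counts)
```

The simulated rows form one draw of the binary feature matrix on which the paper's inference algorithms would then operate.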

Identifying differentially expressed genes using the Polya urn scheme

  • Saraiva, Erlandson Ferreira;Suzuki, Adriano Kamimura;Milan, Luis Aparecido
    • Communications for Statistical Applications and Methods
    • /
    • v.24 no.6
    • /
    • pp.627-640
    • /
    • 2017
  • A common interest in gene expression data analysis is to identify genes that present significant changes in expression levels among biological experimental conditions. In this paper, we develop a Bayesian approach to make a gene-by-gene comparison in the case with a control and more than one treatment experimental condition. The proposed approach is within a Bayesian framework with a Dirichlet process prior. The comparison procedure is based on a model selection procedure developed using the discreteness of the Dirichlet process and its representation via the Polya urn scheme. The posterior probabilities for the models considered are calculated using a Gibbs sampling algorithm. A numerical simulation study is conducted to understand and compare the performance of the proposed method relative to usual methods based on analysis of variance (ANOVA) followed by a Tukey test. The comparison among methods is made in terms of true positive rate and false discovery rate. We find that the proposed method outperforms the methods based on ANOVA followed by a Tukey test. We also apply the methodologies to a publicly available data set on Plasmodium falciparum protein.
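
The Polya urn representation mentioned above can be sketched generatively. This is a minimal, hypothetical sketch of the urn scheme for cluster assignments under a Dirichlet process (not the paper's gene-comparison procedure): each new item joins an existing cluster with probability proportional to its size, or opens a new cluster with probability proportional to the concentration parameter:

```python
import random

def polya_urn(n, alpha, seed=0):
    """Cluster labels for n items under a Dirichlet-process Polya urn:
    item i joins existing cluster c with probability n_c / (i + alpha),
    or starts a new cluster with probability alpha / (i + alpha)."""
    rng = random.Random(seed)
    labels, counts = [0], [1]          # first item founds cluster 0
    for i in range(1, n):
        # candidate clusters: each existing one, plus one new (weight alpha)
        c = rng.choices(range(len(counts) + 1), weights=counts + [alpha])[0]
        if c == len(counts):
            counts.append(1)           # new cluster appears
        else:
            counts[c] += 1
        labels.append(c)
    return labels
```

Because ties of this kind are what make two genes share an expression-level component, the discreteness of the urn is what drives the paper's model selection.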

Shadow Economy, Corruption and Economic Growth: An Analysis of BRICS Countries

  • NGUYEN, Diep Van;DUONG, My Tien Ha
    • The Journal of Asian Finance, Economics and Business
    • /
    • v.8 no.4
    • /
    • pp.665-672
    • /
    • 2021
  • The paper examines the impact of shadow economy and corruption, along with public expenditure, trade openness, foreign direct investment (FDI), inflation, and tax revenue on the economic growth of the BRICS countries. Data were collected from the World Bank, Transparency International, and Heritage Foundation over the 1991-2017 period. The Bayesian linear regression method is used to examine whether shadow economy, corruption and other indicators affect the economic growth of the countries studied. This paper applies the normal prior suggested by Lemoine (2019), while the posterior distribution is simulated using the Markov chain Monte Carlo (MCMC) technique through the Gibbs sampling algorithm. The results indicate that public expenditure and trade openness can enhance the BRICS countries' economic growth, with positive impact probabilities of 75.69% and 67.11%, respectively. Also, FDI, inflation, and tax revenue positively affect this growth, though the probability of a positive effect is ambiguous, ranging from 51.13% to 56.36%. Further, the research's major finding is that shadow economy and control of corruption have a positive effect on the economic growth of the BRICS countries. Nevertheless, the posterior probabilities of these two factors are 62.23% and 65.25%, respectively. This result suggests that their positive effect probability is not high.
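
The Gibbs machinery behind such a Bayesian regression can be sketched for a single regressor. This is a hedged stand-in (flat coefficient priors rather than the Lemoine (2019) normal prior, and simulated data rather than the BRICS panel), showing how each parameter is redrawn from its full conditional and how a "positive impact probability" falls out of the posterior draws:

```python
import random
import statistics

def gibbs_linreg(x, y, n_iter=3000, burn_in=1000, seed=7):
    """Gibbs sampler for y_i = b0 + b1*x_i + e_i, e_i ~ N(0, sigma2),
    with flat priors on (b0, b1) and Inv-Gamma(1, 1) on sigma2."""
    rng = random.Random(seed)
    n, sxx = len(x), sum(xi * xi for xi in x)
    b0, b1, sigma2 = 0.0, 0.0, 1.0
    draws = []
    for t in range(n_iter):
        # b0 | b1, sigma2 ~ N(mean residual, sigma2 / n)
        r0 = sum(y[i] - b1 * x[i] for i in range(n)) / n
        b0 = rng.gauss(r0, (sigma2 / n) ** 0.5)
        # b1 | b0, sigma2 ~ N(sum(x*(y - b0)) / sum(x^2), sigma2 / sum(x^2))
        r1 = sum(x[i] * (y[i] - b0) for i in range(n)) / sxx
        b1 = rng.gauss(r1, (sigma2 / sxx) ** 0.5)
        # sigma2 | b0, b1 ~ Inv-Gamma(1 + n/2, 1 + SSE/2)
        sse = sum((y[i] - b0 - b1 * x[i]) ** 2 for i in range(n))
        sigma2 = 1.0 / rng.gammavariate(1 + n / 2, 1.0 / (1 + sse / 2))
        if t >= burn_in:
            draws.append((b0, b1, sigma2))
    return draws
```

A positive-impact probability in the paper's sense is then just the fraction of retained draws with the coefficient above zero.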

Bayesian Procedure for the Multiple Change Point Analysis of Fraction Nonconforming (부적합률의 다중변화점분석을 위한 베이지안절차)

  • Kim, Kyung-Sook;Kim, Hee-Jeong;Park, Jeong-Soo;Son, Young-Sook
    • Proceedings of the Korean Society for Quality Management Conference
    • /
    • 2006.04a
    • /
    • pp.319-324
    • /
    • 2006
  • In this paper, we propose a Bayesian procedure for the multiple change point analysis of a sequence of fractions nonconforming. We first compute the Bayes factor for detecting the existence of no change, a single change, or multiple changes. Once the number of change points is identified, the Gibbs sampler with a Metropolis-Hastings subchain is run to estimate the parameters of the change point model. Finally, we apply the results developed in this paper to both real and simulated data.
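
As a simplified sketch of this machinery (a single change point with conjugate Beta updates, rather than the paper's multiple change points with a Metropolis-Hastings subchain), a Gibbs sampler for a shift in a fraction nonconforming might look like:

```python
import math
import random

def gibbs_changepoint(x, n_iter=2000, burn_in=500, seed=3):
    """Gibbs sampler for a single change point in 0/1 data:
    x_i ~ Bernoulli(p1) for i < k and Bernoulli(p2) for i >= k,
    with Beta(1, 1) priors on p1, p2 and a uniform prior on k."""
    rng = random.Random(seed)
    n = len(x)
    pre = [0]                          # prefix sums of nonconforming counts
    for xi in x:
        pre.append(pre[-1] + xi)
    k, draws = n // 2, []
    for t in range(n_iter):
        # p1, p2 | k: conjugate Beta updates on the two segments
        s1, s2 = pre[k], pre[n] - pre[k]
        p1 = rng.betavariate(1 + s1, 1 + k - s1)
        p2 = rng.betavariate(1 + s2, 1 + (n - k) - s2)
        # k | p1, p2: discrete full conditional over split points 1..n-1
        logw = [pre[j] * math.log(p1) + (j - pre[j]) * math.log(1 - p1)
                + (pre[n] - pre[j]) * math.log(p2)
                + (n - j - pre[n] + pre[j]) * math.log(1 - p2)
                for j in range(1, n)]
        mx = max(logw)                 # stabilize before exponentiating
        k = rng.choices(range(1, n),
                        weights=[math.exp(l - mx) for l in logw])[0]
        if t >= burn_in:
            draws.append((k, p1, p2))
    return draws
```

The multiple change point case replaces the single discrete draw of k with moves over a vector of change points, which is where the Metropolis-Hastings subchain comes in.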

Computing Methods for Generating Spatial Random Variable and Analyzing Bayesian Model (확률난수를 이용한 공간자료가 생성과 베이지안 분석)

  • 이윤동
    • The Korean Journal of Applied Statistics
    • /
    • v.14 no.2
    • /
    • pp.379-391
    • /
    • 2001
  • This study reviews technical issues in generating spatial random variates based on the Markov chain Monte Carlo (MCMC) method and in Bayesian analysis via Gibbs sampling. We first examine basic methods for random variate generation, then illustrate, with an example, the generation of spatial random variates using a conditional specification. Finally, we show how to obtain the Bayesian posterior distribution for the generated spatial data using Gibbs sampling.
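
A minimal sketch of generating a spatial random field from its conditional specification by Gibbs sampling, assuming a simple autologistic (Ising-type) model on a grid rather than any model from the paper: each site is repeatedly redrawn from its conditional given its four nearest neighbors:

```python
import math
import random

def gibbs_autologistic(size=16, beta=0.6, sweeps=50, seed=11):
    """Generate a binary spatial field from its conditional specification:
    P(x[i][j] = 1 | neighbors) is logistic in the neighboring values
    (an autologistic model), sampled by full Gibbs sweeps over the grid."""
    rng = random.Random(seed)
    x = [[rng.randint(0, 1) for _ in range(size)] for _ in range(size)]
    for _ in range(sweeps):
        for i in range(size):
            for j in range(size):
                nb = [x[a][b]
                      for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= a < size and 0 <= b < size]
                # +beta per agreeing 1-neighbor, -beta per 0-neighbor
                eta = beta * sum(2 * v - 1 for v in nb)
                p1 = 1.0 / (1.0 + math.exp(-eta))
                x[i][j] = 1 if rng.random() < p1 else 0
    return x
```

Positive beta encourages neighboring sites to agree, which is what makes the generated field spatially structured rather than independent noise.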

Variational Expectation-Maximization Algorithm in Posterior Distribution of a Latent Dirichlet Allocation Model for Research Topic Analysis

  • Kim, Jong Nam
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.7
    • /
    • pp.883-890
    • /
    • 2020
  • In this paper, we propose a variational expectation-maximization algorithm that computes posterior probabilities for a Latent Dirichlet Allocation (LDA) model. The algorithm approximates the intractable posterior distribution of a document-term matrix generated from a corpus of 50 papers. It approximates the posterior by searching for local optima of a lower bound on the true posterior distribution. Moreover, it maximizes the lower bound of the log-likelihood by minimizing the relative entropy (KL-divergence) between the variational distribution and the true posterior. The experimental results indicate that documents clustered to image classification and segmentation are correlated at 0.79, while those clustered to object detection and image segmentation are highly correlated at 0.96. The proposed variational inference algorithm runs efficiently, faster than Gibbs sampling, with a computational time of 0.029 s.

Bayesian logit models with auxiliary mixture sampling for analyzing diabetes diagnosis data (보조 혼합 샘플링을 이용한 베이지안 로지스틱 회귀모형 : 당뇨병 자료에 적용 및 분류에서의 성능 비교)

  • Rhee, Eun Hee;Hwang, Beom Seuk
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.1
    • /
    • pp.131-146
    • /
    • 2022
  • Logit models are commonly used for predicting and classifying categorical response variables. Most Bayesian approaches to logit models are implemented with the Metropolis-Hastings algorithm. However, that algorithm suffers from slow convergence and the difficulty of finding an adequate proposal distribution. Therefore, we use the auxiliary mixture sampler proposed by Frühwirth-Schnatter and Frühwirth (2007) to estimate logit models. The method introduces two sequences of auxiliary latent variables so that the logit model satisfies normality and linearity; as a result, the logit model can be implemented easily by Gibbs sampling. We applied the method to diabetes data from the Community Health Survey (2020) of the Korea Disease Control and Prevention Agency and compared its performance with the Metropolis-Hastings algorithm. In addition, we showed that the logit model with auxiliary mixture sampling achieves classification performance comparable to that of machine learning models.

Bayesian Test of Quasi-Independence in a Sparse Two-Way Contingency Table

  • Kwak, Sang-Gyu;Kim, Dal-Ho
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.3
    • /
    • pp.495-500
    • /
    • 2012
  • We consider a Bayesian test of independence in a two-way contingency table that has some zero cells. To do this, we take a three-stage hierarchical Bayesian model under each hypothesis. For the prior, we use Dirichlet densities to model the marginal and individual cell probabilities. Our method does not require complicated computation, such as a Metropolis-Hastings algorithm, to draw samples from each posterior density of the parameters. We draw samples using a Gibbs sampler with a grid method. For complicated posterior formulas, we apply Monte Carlo integration and the sampling importance resampling algorithm. We compare the values of the Bayes factor with the results of a chi-square test and the likelihood ratio test.
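
The "Gibbs sampler with a grid method" step can be sketched generically: when a full conditional is nonstandard, evaluate its unnormalized log density on a grid and sample one grid point with the normalized weights. The helper below is a hypothetical illustration, not the paper's sampler:

```python
import math
import random

def grid_draw(logpdf, lo, hi, m=200, rng=None):
    """'Griddy Gibbs' step: evaluate an unnormalized log density at m
    midpoints over [lo, hi] and draw one grid point with normalized
    weights, approximating a draw from the continuous full conditional."""
    rng = rng or random.Random(0)
    xs = [lo + (hi - lo) * (i + 0.5) / m for i in range(m)]
    lws = [logpdf(v) for v in xs]
    mx = max(lws)                      # stabilize before exponentiating
    return rng.choices(xs, weights=[math.exp(l - mx) for l in lws])[0]
```

For example, passing the unnormalized Beta(3, 2) log density `lambda p: 2 * math.log(p) + math.log(1 - p)` over (0, 1) yields draws whose mean approaches 3/5, the Beta(3, 2) mean.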