• Title/Summary/Keyword: Gibbs Algorithm

A Parametric Image Enhancement Technique for Contrast-Enhanced Ultrasonography (조영증강 의료 초음파 진단에서 파라미터 영상의 개선 기법)

  • Kim, Ho Joon;Gwak, Seong Hoon
    • KIPS Transactions on Software and Data Engineering / v.3 no.6 / pp.231-236 / 2014
  • The transit time of contrast agents and the parameters of time-intensity curves in ultrasonography are important factors in diagnosing various diseases of the digestive organs. We have implemented an automatic parametric imaging method to overcome the difficulty of diagnosis by the naked eye. However, micro-bubble noise and respiratory motion may degrade the reliability of the parameter images. In this paper, we introduce an optimization technique based on an MRF (Markov random field) model to enhance the quality of the parameter images, and present an image tracking algorithm to compensate for the image distortion caused by respiratory motion. A method to extract the respiration periods from the ultrasound image sequence has been developed. We have implemented an ROI (region of interest) tracking algorithm using dynamic weights and a momentum factor based on these periods. An energy function is defined for the Gibbs sampler of the image enhancement method. Through experiments using data for diagnosing liver lesions, we have shown that the proposed method improves the quality of the parametric images.
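
The abstract does not spell out the energy function, so the sketch below only illustrates, under assumed definitions, how a Gibbs sampler can relax a parameter image under a generic MRF energy with a quadratic data term and a 4-neighbourhood smoothness term (the label set, the weight `beta`, and the function name `gibbs_mrf_denoise` are illustrative, not the authors' choices):

```python
import numpy as np

def gibbs_mrf_denoise(obs, n_labels=8, beta=2.0, n_sweeps=20, seed=0):
    """One possible Gibbs sampler for a discrete MRF: quadratic data term
    plus a quadratic smoothness term over the 4-neighbourhood."""
    rng = np.random.default_rng(seed)
    h, w = obs.shape
    labels = np.linspace(obs.min(), obs.max(), n_labels)
    x = labels[np.abs(obs[..., None] - labels).argmin(-1)]  # init at nearest label
    for _ in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                nbrs = [x[i2, j2]
                        for i2, j2 in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= i2 < h and 0 <= j2 < w]
                # E(l) = (obs_ij - l)^2 + beta * sum over neighbours (l - x_nbr)^2
                e = (obs[i, j] - labels) ** 2
                e = e + beta * sum((labels - v) ** 2 for v in nbrs)
                p = np.exp(-(e - e.min()))  # Gibbs distribution over labels
                x[i, j] = labels[rng.choice(n_labels, p=p / p.sum())]
    return x
```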

Introduction to the Indian Buffet Process: Theory and Applications (인도부페 프로세스의 소개: 이론과 응용)

  • Lee, Youngseon;Lee, Kyoungjae;Lee, Kwangmin;Lee, Jaeyong;Seo, Jinwook
    • The Korean Journal of Applied Statistics / v.28 no.2 / pp.251-267 / 2015
  • The Indian buffet process is a stochastic process on equivalence classes of binary matrices with finitely many rows and infinitely many columns. It can be imposed as the prior distribution on the binary matrix in an infinite feature model. We describe the derivation of the Indian buffet process from a finite feature model, and briefly explain the relation between the Indian buffet process and the beta process. Using a Gaussian linear model, we describe three inference algorithms: the Gibbs sampling algorithm, the stick-breaking algorithm, and the variational method, with an application to finding features in image data. We also illustrate the use of the Indian buffet process in various types of analysis, such as dyadic data analysis, network data analysis, and independent component analysis.
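
For readers new to the model, here is a minimal sketch of the Indian buffet process as a generative scheme for the binary feature matrix Z (the standard "customers and dishes" construction; the function name and defaults are ours):

```python
import numpy as np

def sample_ibp(n_customers, alpha, seed=0):
    """Draw a binary feature matrix Z ~ IBP(alpha).

    Customer i takes each existing dish k with probability m_k / i
    (m_k = number of previous customers who took dish k), then tries
    Poisson(alpha / i) brand-new dishes."""
    rng = np.random.default_rng(seed)
    dishes = []                      # dishes[k] = customers who took dish k
    for i in range(1, n_customers + 1):
        for takers in dishes:
            if rng.random() < len(takers) / i:
                takers.append(i)
        for _ in range(rng.poisson(alpha / i)):
            dishes.append([i])
    Z = np.zeros((n_customers, len(dishes)), dtype=int)
    for k, takers in enumerate(dishes):
        Z[np.array(takers) - 1, k] = 1
    return Z
```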

Bayesian Test of Quasi-Independence in a Sparse Two-Way Contingency Table

  • Kwak, Sang-Gyu;Kim, Dal-Ho
    • Communications for Statistical Applications and Methods / v.19 no.3 / pp.495-500 / 2012
  • We consider a Bayesian test of independence in a two-way contingency table that has some zero cells. To do this, we take a three-stage hierarchical Bayesian model under each hypothesis. For priors, we use Dirichlet densities to model the marginal cell and individual cell probabilities. Our method does not require complicated computation, such as a Metropolis-Hastings algorithm, to draw samples from the posterior density of each parameter. We draw samples using a Gibbs sampler with a grid method. For complicated posterior formulas, we apply Monte Carlo integration and the sampling importance resampling algorithm. We compare the values of the Bayes factor with the results of a chi-square test and the likelihood ratio test.
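
The "Gibbs sampler with a grid method" draws each parameter from its full conditional by evaluating the (unnormalized) conditional density on a grid and sampling from the normalized weights. A minimal sketch of one such draw, with a placeholder conditional rather than the paper's:

```python
import numpy as np

def grid_draw(log_cond, grid, rng):
    """Draw one value from a univariate full conditional evaluated on a grid.

    log_cond : callable giving the log conditional density up to a constant
    grid     : 1-D array of candidate points covering the support
    """
    logp = np.array([log_cond(g) for g in grid])
    p = np.exp(logp - logp.max())        # subtract max for numerical stability
    return rng.choice(grid, p=p / p.sum())

# example: a Beta(3, 5)-shaped conditional sampled on a grid over (0, 1)
rng = np.random.default_rng(0)
grid = np.linspace(0.001, 0.999, 500)
draw = grid_draw(lambda q: 2 * np.log(q) + 4 * np.log(1 - q), grid, rng)
```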

Bayesian estimation for finite population proportions in multinomial data

  • Kwak, Sang-Gyu;Kim, Dal-Ho
    • Journal of the Korean Data and Information Science Society / v.23 no.3 / pp.587-593 / 2012
  • We study Bayesian estimates of finite population proportions in multinomial problems. To do this, we consider a three-stage hierarchical Bayesian model. For priors, we use Dirichlet densities to model each cell probability in each cluster. Our method does not require complicated computation, such as a Metropolis-Hastings algorithm, to draw samples from the density of each parameter. We draw samples using a Gibbs sampler with a grid method. We apply this algorithm to simulated data under three scenarios and estimate the finite population proportions using two kinds of approaches. We compare the results with the point estimates of the finite population proportions and their standard deviations. Finally, we check the consistency of the computation using different samples drawn from distinct iterates.
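
As a much-reduced illustration of finite-population estimation in the conjugate, single-cluster case (not the authors' three-stage hierarchy): with a Dirichlet(a) prior and multinomial counts n, the posterior of the cell probabilities is Dirichlet(a + n), and the finite-population proportions follow by simulating the unsampled units. The prior, counts, and population size below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.ones(4)                 # flat Dirichlet prior (an assumption)
n = np.array([18, 7, 30, 5])   # observed multinomial cell counts (made up)
N, n_obs = 500, n.sum()        # finite population size and sample size

draws = rng.dirichlet(a + n, size=4000)            # posterior of cell probs
unseen = np.stack([rng.multinomial(N - n_obs, p) for p in draws])
prop = (n + unseen) / N                            # finite-population proportions
print(prop.mean(axis=0), prop.std(axis=0))         # point estimates and sds
```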

A MAP Estimate of Optimal Data Association in Multi-Target Tracking (다중표적추적의 최적 데이터결합을 위한 MAP 추정기 개발)

  • 이양원
    • Journal of Institute of Control, Robotics and Systems / v.9 no.3 / pp.210-217 / 2003
  • We introduce a scheme for finding an optimal data association matrix that represents the relationships between the measurements and tracks in multi-target tracking (MTT). We model the relationships between targets and measurements as a Markov random field and assume a prior on the associations in the form of a Gibbs distribution. Under these assumptions, the MAP estimate of the association matrix reduces to an energy minimization problem. We then define an energy function over the measurement space that incorporates most of the important natural constraints. To find the minimizer of the energy function, we derive a new equation in closed form. By introducing a Lagrange multiplier, we derive a compact equation for updating the parameters. In this manner, a pair of equations, one for tracking and one for parameter updating, can track the targets adaptively in highly variable environments. The algorithm requires only multiplications over the measurements and targets at each radar scan. Through experiments, we analyze this algorithm and compare it with another representative algorithm. The results show that the proposed method is stable, robust, and fast enough for real-time computation, as well as more accurate than the other method.
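
The reduction the abstract describes follows directly from Bayes' rule once the Gibbs prior is assumed; a sketch (E and the constraint set are the paper's, written here only schematically):

```latex
% MAP association under a Gibbs prior p(A) = exp(-E(A)) / Z_0,
% given the measurement set Z:
\hat{A} = \arg\max_{A} p(A \mid Z)
        = \arg\max_{A} \, p(Z \mid A)\, e^{-E(A)}
        = \arg\min_{A} \bigl[\, E(A) - \log p(Z \mid A) \,\bigr].
```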

The NHPP Bayesian Software Reliability Model Using Latent Variables (잠재변수를 이용한 NHPP 베이지안 소프트웨어 신뢰성 모형에 관한 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • Convergence Security Journal / v.6 no.3 / pp.117-126 / 2006
  • Bayesian inference and model selection methods for software reliability growth models are studied. Software reliability growth models are used in the testing stages of software development to model the error content and the time intervals between software failures. In this paper, we avoid multiple integration by using Gibbs sampling, a kind of Markov chain Monte Carlo method, to compute the posterior distribution. Bayesian inference for general order statistics models in software reliability with diffuse prior information, together with a model selection method, is studied. For model determination and selection, we explore goodness-of-fit measures (the error sum of squares) and trend tests. The methodology developed in this paper is exemplified with a random software reliability data set generated from a Weibull distribution (shape 2, scale 5) using the Minitab (version 14) statistical package.
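
The abstract does not give the latent-variable construction, so as a hedged sketch consider Gibbs sampling for one common NHPP, the Goel-Okumoto model with mean function m(t) = a(1 - e^(-bt)) and flat priors: the conditional of a given b is Gamma, while b can be drawn on a grid (the grid range and all defaults below are assumptions, not the authors' setup):

```python
import numpy as np

def gibbs_go_nhpp(times, T, n_iter=2000, b_grid=None, seed=0):
    """Gibbs sampler for the Goel-Okumoto NHPP, m(t) = a(1 - exp(-b t)),
    with flat priors: a | b is Gamma; b | a is drawn on a grid."""
    rng = np.random.default_rng(seed)
    t = np.asarray(times); n = len(t)
    b_grid = np.linspace(1e-3, 2.0, 400) if b_grid is None else b_grid
    a, b = float(n), 0.1
    out = np.empty((n_iter, 2))
    for it in range(n_iter):
        # a | b ~ Gamma(n + 1, rate = 1 - exp(-b T))  (numpy gamma uses scale)
        a = rng.gamma(n + 1, 1.0 / (1.0 - np.exp(-b * T)))
        # b | a: log conditional on the grid, then normalise and draw
        logp = n * np.log(b_grid) - b_grid * t.sum() - a * (1 - np.exp(-b_grid * T))
        p = np.exp(logp - logp.max())
        b = rng.choice(b_grid, p=p / p.sum())
        out[it] = a, b
    return out
```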

Quantitative Analysis of Bayesian SPECT Reconstruction: Effects of Using Higher-Order Gibbs Priors

  • S. J. Lee
    • Journal of Biomedical Engineering Research / v.19 no.2 / pp.133-142 / 1998
  • In Bayesian SPECT reconstruction, the incorporation of elaborate forms of priors can lead to improved quantitative performance in various statistical terms, such as bias and variance. In particular, the use of higher-order smoothing priors, such as the thin-plate prior, is known to exhibit improved bias behavior compared to conventional smoothing priors such as the membrane prior. However, the bias advantage of the higher-order priors is effective only when the hyperparameters involved in the reconstruction algorithm are properly chosen. In this work, we further investigate the quantitative performance of the two representative smoothing priors, the thin plate and the membrane, by observing the behavior of the associated hyperparameters of the prior distributions. In our experiments we use Monte Carlo noise trials to calculate the bias and variance of reconstruction estimates, and compare the performance of ML-EM estimates to that of regularized EM using both membrane and thin-plate priors, and also to that of filtered backprojection, where the membrane and thin-plate models become simple apodizing filters of specified form. We finally show that the use of higher-order models yields excellent robustness in quantitative performance by demonstrating that the thin plate leads to very low bias error over a large range of hyperparameters while keeping a reasonable variance.
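
For reference, the usual discrete energies behind the two priors, with the Gibbs prior taking the form p(f) proportional to exp(-beta U(f)); these are the standard textbook forms and may differ from the paper's exact weighting:

```latex
% Membrane prior: penalises first differences (gradient magnitude).
U_{\text{membrane}}(f) = \sum_{i,j} \left[ (f_{i+1,j} - f_{i,j})^2
                                         + (f_{i,j+1} - f_{i,j})^2 \right]

% Thin-plate prior: penalises second differences (curvature).
U_{\text{thin-plate}}(f) = \sum_{i,j} \left[ (f_{i+1,j} - 2f_{i,j} + f_{i-1,j})^2
        + 2\,(f_{i+1,j+1} - f_{i+1,j} - f_{i,j+1} + f_{i,j})^2
        + (f_{i,j+1} - 2f_{i,j} + f_{i,j-1})^2 \right]
```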

A Preliminary Study of Enhanced Predictability of Non-Parametric Geostatistical Simulation through History Matching Technique (히스토리매칭 기법을 이용한 비모수 지구통계 모사 예측성능 향상 예비연구)

  • Jeong, Jina;Paudyal, Pradeep;Park, Eungyu
    • Journal of Soil and Groundwater Environment / v.17 no.5 / pp.56-67 / 2012
  • In the present study, an enhanced subsurface prediction algorithm based on a non-parametric geostatistical model and a history matching technique using a Gibbs sampler is developed, and an iterative prediction improvement procedure is proposed. The developed model is applied to a simple two-dimensional synthetic case whose 500 m × 40 m domain is composed of three different hydrogeologic media. In the application, it is assumed that four independent pumping tests are performed at different vertical intervals, and the history curves are acquired through numerical modeling. With information from two hypothetical boreholes and the pumping test data, the proposed prediction model is applied iteratively, and continuous improvement of the predictions, with reduced uncertainty in the media distribution, is observed. From the results and the qualitative/quantitative analyses, it is concluded that the proposed model is well suited to improving subsurface predictions where history data are available as supporting information. Once the proposed model matures into an established technique, it is believed that it can be applied to many groundwater, geothermal, gas, and oil problems with conventional fluid flow simulators. However, the overall development is still at a preliminary stage, and further considerations that have not been resolved in the present study, including multi-dimensional verification and global optimization, need to be incorporated for it to become a viable and practical prediction technique.
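
The abstract leaves the Gibbs-sampler conditioning unspecified; in its place, the sketch below shows only a generic perturb-and-select history-matching loop over geostatistical realisations. The forward model `simulate`, the misfit tolerance, and the perturbation scale are all placeholders, and this is not the authors' algorithm:

```python
import numpy as np

def history_match(fields, simulate, observed, tol, n_rounds=10, noise=0.1, seed=0):
    """Generic perturb-and-select history matching: keep the realisations
    whose simulated response fits the observed history, then repopulate
    the ensemble by perturbing the survivors."""
    rng = np.random.default_rng(seed)
    for _ in range(n_rounds):
        misfit = [np.mean((simulate(f) - observed) ** 2) for f in fields]
        keep = [f for f, m in zip(fields, misfit) if m < tol]
        if not keep:              # nothing fits: relax the tolerance and retry
            tol *= 1.5
            continue
        # next generation: cycle through survivors with small perturbations
        fields = [keep[k % len(keep)] + rng.normal(0.0, noise, keep[0].shape)
                  for k in range(len(fields))]
    return fields
```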

Topic Modeling on Research Trends of Industry 4.0 Using Text Mining (텍스트 마이닝을 이용한 4차 산업 연구 동향 토픽 모델링)

  • Cho, Kyoung Won;Woo, Young Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.7 / pp.764-770 / 2019
  • In this research, text mining techniques were used to analyze papers related to the "4th Industry". In order to analyze the papers, a total of 685 papers were collected by searching with the keyword "4th industry" in the Korea Journal Index (KCI) from 2016 to 2019. We used a Python-based web scraping program to collect the papers, and topic modeling techniques based on the LDA algorithm, implemented in the R language, for the data analysis. As a result of a perplexity analysis of the collected papers, nine topics were determined to be optimal, and nine representative topics were extracted using the Gibbs sampling method. The results confirm that artificial intelligence, big data, the Internet of Things (IoT), digital technology, and networks have emerged as the major technologies, and that research has been conducted on the changes driven by these major technologies in various fields related to the 4th industry, such as industry, government, education, and jobs.
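
A minimal sketch of the collapsed Gibbs sampler that underlies this kind of LDA topic extraction (the study used R; Python is used here for illustration, and the hyperparameters are assumptions):

```python
import numpy as np

def lda_gibbs(docs, n_topics, vocab_size, alpha=0.1, beta=0.01, n_iter=200, seed=0):
    """Collapsed Gibbs sampling for LDA. `docs` is a list of token-id lists."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))    # document-topic counts
    nkw = np.zeros((n_topics, vocab_size))   # topic-word counts
    nk = np.zeros(n_topics)                  # topic totals
    z = [rng.integers(n_topics, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):           # initialise counts
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                  # remove this token's assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # p(z = k | rest) ~ (ndk + alpha)(nkw + beta)/(nk + V beta)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw
```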

Comparing MCMC algorithms for the horseshoe prior (Horseshoe 사전분포에 대한 MCMC 알고리듬 비교 연구)

  • Miru Ma;Mingi Kang;Kyoungjae Lee
    • The Korean Journal of Applied Statistics / v.37 no.1 / pp.103-118 / 2024
  • The horseshoe prior is notably one of the most popular priors for sparse regression models, in which only a small fraction of the coefficients are nonzero. The parameter space of the horseshoe prior is much smaller than that of the spike and slab prior, so it enables us to explore the parameter space efficiently even in high dimensions. On the other hand, the horseshoe prior has a high computational cost for each iteration of the Gibbs sampler. To overcome this issue, various MCMC algorithms for the horseshoe prior have been proposed to reduce the computational burden. In particular, Johndrow et al. (2020) recently proposed an approximate algorithm that can significantly improve the mixing and speed of the MCMC algorithm. In this paper, we compare (1) the traditional MCMC algorithm, (2) the approximate MCMC algorithm proposed by Johndrow et al. (2020), and (3) its variant, in terms of computing time, estimation, and variable selection performance. For variable selection, we adopt the sequential clustering-based method suggested by Li and Pati (2017). The practical performance of the MCMC methods is demonstrated via numerical studies.
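
For concreteness, here is a sketch of one standard "traditional" Gibbs sampler for the horseshoe, using the inverse-gamma auxiliary-variable scheme of Makalic and Schmidt (2016), in which every full conditional is Gaussian or inverse-gamma; this is a common implementation and not necessarily the exact samplers compared in the paper:

```python
import numpy as np

def horseshoe_gibbs(X, y, n_iter=3000, seed=0):
    """Gibbs sampler for horseshoe regression y = X beta + eps via the
    inverse-gamma auxiliary variables of Makalic and Schmidt (2016)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    lam2, tau2, sig2, nu, xi = np.ones(p), 1.0, 1.0, np.ones(p), 1.0
    ig = lambda a, b: 1.0 / rng.gamma(a, 1.0 / b)   # draw InvGamma(shape a, scale b)
    betas = np.empty((n_iter, p))
    for it in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sig2 A^{-1}), A = X'X + diag(1/(tau2 lam2))
        A_inv = np.linalg.inv(XtX + np.diag(1.0 / (tau2 * lam2)))
        beta = rng.multivariate_normal(A_inv @ Xty, sig2 * A_inv)
        lam2 = ig(1.0, 1.0 / nu + beta ** 2 / (2 * tau2 * sig2))   # local scales
        nu = ig(1.0, 1.0 + 1.0 / lam2)
        tau2 = ig((p + 1) / 2, 1.0 / xi + (beta ** 2 / lam2).sum() / (2 * sig2))
        xi = ig(1.0, 1.0 + 1.0 / tau2)
        resid = y - X @ beta
        sig2 = ig((n + p) / 2, resid @ resid / 2 + (beta ** 2 / lam2).sum() / (2 * tau2))
        betas[it] = beta
    return betas
```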