• Title/Summary/Keyword: Gibbs Sampling method

Bayesian Inference for Predicting the Default Rate Using the Power Prior

  • Kim, Seong-W.; Son, Young-Sook; Choi, Sang-A
    • Communications for Statistical Applications and Methods, v.13 no.3, pp.685-699, 2006
  • Commercial banks and related institutions have developed internal models to better quantify their financial risks. Since an appropriate credit risk model plays a very important role in risk management at financial institutions, a more accurate model for forecasting credit losses is needed, along with statistical inference on that model. In this paper, we propose a new method for estimating a default rate: a Bayesian approach using the power prior, which allows historical data to be incorporated into the estimation. Inference on current data can be more reliable when similar data from previous studies exist; Ibrahim and Chen (2000) use such data to characterize the power prior, which incorporates historical data into the estimation of the model parameters. We demonstrate our methodology on a real SOHO data set and also perform a simulation study.
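
The power prior idea in this abstract admits a compact illustration: for a binomial default count with a conjugate beta initial prior, raising the historical likelihood to a power a0 in [0, 1] keeps the posterior in closed form. A minimal sketch assuming this conjugate setup; the paper's SOHO application and prior choices are not reproduced here, and all numbers are hypothetical:

```python
# A minimal sketch of the power prior for a binomial default rate.
# Assumed setup (not from the paper): defaults are Binomial(n, theta),
# the initial prior is Beta(alpha0, beta0), and a0 in [0, 1] discounts
# the historical data D0 = (y0, n0). With this conjugate structure the
# power-prior posterior is available in closed form.
from scipy import stats

def power_prior_posterior(y, n, y0, n0, a0, alpha0=1.0, beta0=1.0):
    """Posterior Beta parameters: current data plus a0-discounted history."""
    alpha = alpha0 + a0 * y0 + y
    beta = beta0 + a0 * (n0 - y0) + (n - y)
    return stats.beta(alpha, beta)

# Hypothetical numbers for illustration only.
post = power_prior_posterior(y=12, n=400, y0=30, n0=1000, a0=0.5)
print(post.mean())              # posterior mean default rate
print(post.interval(0.95))      # 95% credible interval
```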

Bayesian inference in finite population sampling under measurement error model

  • Goo, You Mee; Kim, Dal Ho
    • Journal of the Korean Data and Information Science Society, v.23 no.6, pp.1241-1247, 2012
  • The paper considers empirical Bayes (EB) and hierarchical Bayes (HB) predictors of the finite population mean under a linear regression model with measurement errors. We discuss how to calculate the mean squared prediction errors of the EB predictors using jackknife methods, and the posterior standard deviations of the HB predictors based on Markov chain Monte Carlo methods. A simulation study is provided to illustrate the results and to compare the performance of the proposed procedures.
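
The jackknife machinery mentioned above can be shown generically. Below is a minimal delete-one jackknife sketch for the variability of a plug-in (EB-style) predictor; the paper's jackknife MSPE estimator involves an additional bias correction that this sketch does not attempt, and the data are simulated:

```python
# A generic delete-one jackknife sketch for the variability of a
# plug-in predictor. Illustrates the mechanics only; it is not the
# paper's bias-corrected MSPE estimator.
import numpy as np

def jackknife_variance(data, predictor):
    """Delete-one jackknife variance estimate of predictor(data)."""
    n = len(data)
    leave_one_out = np.array(
        [predictor(np.delete(data, i)) for i in range(n)]
    )
    mean_loo = leave_one_out.mean()
    return (n - 1) / n * np.sum((leave_one_out - mean_loo) ** 2)

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=50)   # toy sample
print(jackknife_variance(y, np.mean))         # close to var(y)/n
```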

Methods and Techniques for Variance Component Estimation in Animal Breeding - Review -

  • Lee, C.
    • Asian-Australasian Journal of Animal Sciences, v.13 no.3, pp.413-422, 2000
  • In the class of models that include random effects, variance component estimates are important for obtaining accurate predictors and estimators. Variance component estimation is straightforward for balanced data but not for unbalanced data; since orthogonality among factors is absent in unbalanced data, a variety of estimation methods is available. REML estimation is the most widely used method in animal breeding because of its attractive statistical properties. Recently, the Bayesian approach has become feasible through Markov chain Monte Carlo methods and increasingly powerful computers. Furthermore, advances in variance component estimation for complicated models, such as generalized linear mixed models, have enabled animal breeders to analyze non-normal data.
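
To make the MCMC route concrete, here is a minimal Gibbs sampler for the variance components of a balanced one-way random effects model, a toy stand-in for the animal-breeding models the review covers; the model, priors, and data are assumptions for illustration:

```python
# Gibbs sampler sketch for y_ij = mu + u_i + e_ij with
# u_i ~ N(0, s2u) and e_ij ~ N(0, s2e). Priors: flat on mu,
# vague inverse-gamma(a, b) on both variance components.
import numpy as np

rng = np.random.default_rng(1)
q, n = 20, 10                                  # groups, records per group
u_true = rng.normal(0, 2.0, q)                 # true s2u = 4, s2e = 1
y = 10.0 + u_true[:, None] + rng.normal(0, 1.0, (q, n))

a, b = 0.01, 0.01
mu, s2u, s2e = y.mean(), 1.0, 1.0
keep = []
for it in range(3000):
    # u_i | rest: shrunken group effects, Normal full conditional
    prec = n / s2e + 1.0 / s2u
    mean = (n / s2e) * (y.mean(axis=1) - mu) / prec
    u = rng.normal(mean, np.sqrt(1.0 / prec))
    # mu | rest: Normal (flat prior)
    mu = rng.normal((y - u[:, None]).mean(), np.sqrt(s2e / (q * n)))
    # variance components | rest: inverse-gamma
    s2u = 1.0 / rng.gamma(a + q / 2, 1.0 / (b + 0.5 * np.sum(u**2)))
    resid = y - mu - u[:, None]
    s2e = 1.0 / rng.gamma(a + q * n / 2, 1.0 / (b + 0.5 * np.sum(resid**2)))
    if it >= 1000:
        keep.append((s2u, s2e))
print(np.mean(keep, axis=0))                   # approximately (4.0, 1.0)
```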

Bayesian Inference for Censored Panel Regression Model

  • Lee, Seung-Chun; Choi, Byongsu
    • Communications for Statistical Applications and Methods, v.21 no.2, pp.193-200, 2014
  • It has been recognized by some researchers that the disturbance variance in a censored regression model is frequently underestimated by the maximum likelihood method. This underestimation has implications for the estimation of marginal effects and asymptotic standard errors: for instance, the actual coverage probability of a confidence interval based on the maximum likelihood estimate can be significantly smaller than the nominal confidence level. Consequently, Bayesian estimation is considered to overcome this difficulty. The behavior of the maximum likelihood and Bayesian estimators of the disturbance variance is examined in a fixed effects panel regression model with a limited dependent variable, which is known to suffer from the incidental parameter problem. Behavior under the random effects assumption is also investigated.

A Bayesian Test for First Order Autocorrelation in Regression Errors : An Application to SPC Approach (회귀모형 오차항의 1차 자기상관에 대한 베이즈 검정법 : SPC 분야에의 응용)

  • Kim, Hea-Jung; Han, Sung-Sil
    • Journal of Korean Society for Quality Management, v.24 no.4, pp.190-206, 1996
  • When measurements are made on units of production in time order, it is reasonable to expect that the measurement errors will sometimes be first-order autocorrelated, and a technique to test for such autocorrelation is required to maintain good control of the production process. Tool-wear processes provide an example for which regression models can be useful in modeling and controlling the process. For the control of such processes, we present a simple method for testing first-order autocorrelation in regression errors. The method is a Bayesian test based on the Bayes factor, derived by observing that, in general, a Bayes factor can be written as the product of a quantity called the Savage-Dickey density ratio and a correction factor; both terms are easily estimated with the Gibbs sampling technique. The performance of the method is examined by means of Monte Carlo simulation. The test not only achieves satisfactory power but also eliminates the inconvenience encountered in using the well-known Durbin-Watson test.
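
The Savage-Dickey step described above reduces to evaluating the posterior and prior densities of the autocorrelation parameter at the null value. A minimal sketch, using synthetic draws in place of real Gibbs output and a Uniform(-1, 1) prior as an assumption; the paper's correction factor is omitted:

```python
# Savage-Dickey density ratio for a point null H0: rho = 0 nested in
# H1. BF01 = p(rho = 0 | data) / p(rho = 0); the numerator is estimated
# by kernel density estimation applied to Gibbs draws of rho. The draws
# below are synthetic stand-ins for a sampler's output.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rho_draws = rng.normal(0.05, 0.10, 5000)      # pretend Gibbs output

posterior_at_0 = stats.gaussian_kde(rho_draws)(0.0)[0]
prior_at_0 = 0.5                              # Uniform(-1, 1) density at 0
bf_01 = posterior_at_0 / prior_at_0
print(bf_01)                                  # > 1 favours rho = 0
```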

Topic Extraction and Classification Method Based on Comment Sets

  • Tan, Xiaodong
    • Journal of Information Processing Systems, v.16 no.2, pp.329-342, 2020
  • In recent years, emotional text classification has been one of the essential research topics in natural language processing, widely applied in the sentiment analysis of review corpora for commodities such as hotels. This paper proposes an improved W-LDA (weighted latent Dirichlet allocation) topic model to address shortcomings of the traditional LDA topic model. In the Gibbs sampling process of the W-LDA topic model, during the topic sampling of each word and the calculation of the expected word distribution, an average weighted value is adopted so that topic-related words are not submerged by high-frequency words, improving the distinctiveness of the topics. The model is further integrated with a support vector machine classification algorithm based on the extracted high-quality document-topic distributions and topic-word vectors. Finally, an efficient integrated pipeline is constructed for extracting emotional words, computing topic distributions, and classifying sentiment. Tests on real teaching-evaluation data and a public comment test set show that the proposed method has distinct advantages over two typical baseline algorithms in topic differentiation, classification precision, and F1-measure.
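
For reference, here is a compact collapsed Gibbs sampler for standard (unweighted) LDA, showing the token-level sampling step that W-LDA modifies; the corpus, hyperparameters, and topic count are toy assumptions, and the paper's averaged weighting is not reproduced:

```python
# Collapsed Gibbs sampling for standard LDA on a tiny toy corpus.
import numpy as np

rng = np.random.default_rng(3)
docs = [[0, 1, 2, 1], [2, 3, 3, 4], [0, 4, 1, 0]]   # word ids per document
V, K, alpha, beta = 5, 2, 0.1, 0.01                 # vocab, topics, priors

ndk = np.zeros((len(docs), K))      # document-topic counts
nkw = np.zeros((K, V))              # topic-word counts
nk = np.zeros(K)                    # topic totals
z = [[int(rng.integers(K)) for _ in d] for d in docs]
for d, doc in enumerate(docs):      # initialise counts
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for _ in range(200):                # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]             # remove current assignment
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            # full conditional over topics for this token
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = int(rng.choice(K, p=p / p.sum()))
            z[d][i] = k             # add back under the new topic
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

phi = (nkw + beta) / (nk[:, None] + V * beta)       # topic-word estimates
print(np.round(phi, 2))
```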

Topic Modeling on Research Trends of Industry 4.0 Using Text Mining (텍스트 마이닝을 이용한 4차 산업 연구 동향 토픽 모델링)

  • Cho, Kyoung Won; Woo, Young Woon
    • Journal of the Korea Institute of Information and Communication Engineering, v.23 no.7, pp.764-770, 2019
  • In this research, text mining techniques were used to analyze papers related to the "4th Industry". A total of 685 papers were collected by searching with the keyword "4th industry" in the Korea Citation Index (KCI) for the years 2016 to 2019. We used a Python-based web scraping program to collect the papers and applied topic modeling based on the LDA algorithm, implemented in the R language, for the analysis. Perplexity analysis of the collected papers determined nine topics as optimal, and the nine representative topics were extracted using the Gibbs sampling method. The results confirm that artificial intelligence, big data, the Internet of Things (IoT), digital technology, and networks have emerged as the major technologies, and that research has been conducted on the changes these technologies bring to fields related to the 4th industry such as industry, government, education, and employment.
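
The perplexity-based choice of topic count can be sketched as an analogous workflow. The paper used an R implementation of LDA with Gibbs sampling; the stand-in below substitutes scikit-learn's LatentDirichletAllocation (variational Bayes, not Gibbs) on placeholder documents:

```python
# Choosing the number of topics by perplexity, using scikit-learn as a
# stand-in for the paper's R-based Gibbs LDA. Documents are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "artificial intelligence and big data in industry",
    "internet of things network for smart education",
    "government policy for the fourth industrial revolution",
    "big data platform and digital network jobs",
]
X = CountVectorizer().fit_transform(docs)

for k in range(2, 6):               # candidate topic counts
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    print(k, lda.perplexity(X))     # pick the k minimising perplexity
```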

A Bayesian Prediction of the Generalized Pareto Model (일반화 파레토 모형에서의 베이지안 예측)

  • Huh, Pan; Sohn, Joong Kweon
    • The Korean Journal of Applied Statistics, v.27 no.6, pp.1069-1076, 2014
  • Rainfall patterns have changed due to global warming, and sudden heavy rainfalls have become more frequent, increasing economic losses. We study the generalized Pareto distribution for modeling rainfall in Seoul based on data from 1973 to 2008. We use several priors, including Jeffreys' noninformative prior, and the Gibbs sampling method to derive Bayesian posterior predictive distributions. Based on the estimated posterior predictive distribution, the probability of heavy rainfall has increased over the last ten years.
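
A minimal posterior-predictive sketch for the generalized Pareto model follows. The paper derives Gibbs samplers under several priors; this stand-in instead uses a random-walk Metropolis step with a flat prior on (log sigma, xi), and the exceedance data are simulated:

```python
# Posterior predictive draws for a generalized Pareto model via
# random-walk Metropolis (a simpler stand-in for the paper's Gibbs
# samplers). Flat prior on (log sigma, xi); data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = stats.genpareto.rvs(c=0.2, scale=10.0, size=200, random_state=5)

def log_post(log_sigma, xi):
    lp = stats.genpareto.logpdf(x, c=xi, scale=np.exp(log_sigma)).sum()
    return lp if np.isfinite(lp) else -np.inf

theta = np.array([np.log(x.mean()), 0.1])     # (log sigma, xi)
cur = log_post(*theta)
pred = []
for it in range(5000):
    prop = theta + rng.normal(0, 0.1, 2)      # random-walk proposal
    new = log_post(*prop)
    if np.log(rng.uniform()) < new - cur:     # Metropolis accept/reject
        theta, cur = prop, new
    if it >= 1000:                            # posterior predictive draw
        pred.append(stats.genpareto.rvs(c=theta[1],
                                        scale=np.exp(theta[0]),
                                        random_state=rng))
print(np.quantile(pred, 0.99))                # extreme-rainfall quantile
```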

Bayesian Variable Selection in Linear Regression Models with Inequality Constraints on the Coefficients (제한조건이 있는 선형회귀 모형에서의 베이지안 변수선택)

  • Oh, Man-Suk
    • The Korean Journal of Applied Statistics, v.15 no.1, pp.73-84, 2002
  • Linear regression models with inequality constraints on the coefficients are frequently used in economics because of sign or order restrictions on the coefficients. In this paper, we propose a Bayesian approach to selecting significant explanatory variables in linear regression models with inequality constraints on the coefficients. Bayesian variable selection requires the computation of the posterior probability of each candidate model; we propose a method that computes all the necessary posterior model probabilities simultaneously. Specifically, we obtain posterior samples from the most general model via the Gibbs sampling algorithm (Gelfand and Smith, 1990) and compute the posterior probabilities using those samples. A real example is given to illustrate the method.
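
The "most general model" sampler can be sketched for sign constraints beta_j >= 0, where each coefficient's full conditional is a truncated normal. The data and priors below (flat on the constrained region, vague inverse-gamma on sigma^2) are assumptions for illustration:

```python
# Gibbs sampling for a linear regression with sign constraints
# (beta_j >= 0): each beta_j is drawn from its Normal full conditional
# truncated at zero, and sigma^2 from an inverse-gamma.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, p = 100, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.5, 0.0, 0.7]) + rng.normal(0, 1.0, n)

beta, s2 = np.zeros(p), 1.0
draws = []
for it in range(3000):
    for j in range(p):                        # beta_j | rest, truncated at 0
        xj = X[:, j]
        resid = y - X @ beta + xj * beta[j]   # remove other coefficients
        v = s2 / (xj @ xj)
        m = (xj @ resid) / (xj @ xj)
        a = (0.0 - m) / np.sqrt(v)            # standardised lower bound
        beta[j] = stats.truncnorm.rvs(a, np.inf, loc=m, scale=np.sqrt(v),
                                      random_state=rng)
    e = y - X @ beta                          # sigma^2 | rest ~ inverse-gamma
    s2 = 1.0 / rng.gamma(0.01 + n / 2, 1.0 / (0.01 + 0.5 * e @ e))
    if it >= 1000:
        draws.append(beta.copy())
print(np.mean(draws, axis=0))                 # posterior means of beta
```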

Bayesian Interval Estimation of Tobit Regression Model (토빗회귀모형에서 베이지안 구간추정)

  • Lee, Seung-Chun; Choi, Byung Su
    • The Korean Journal of Applied Statistics, v.26 no.5, pp.737-746, 2013
  • The Bayesian method can be applied successfully to the estimation of the censored regression model introduced by Tobin (1958). The Bayes estimates show improvements over the maximum likelihood estimates; however, the performance of Bayesian interval estimation is questionable. In the Bayesian paradigm, the prior distribution usually reflects personal beliefs about the parameters, and such subjective priors typically yield interval estimators with poor frequentist properties; an objective noninformative prior, in contrast, often yields a Bayesian procedure with good frequentist properties. We examine the frequentist performance of noninformative priors for the Tobit regression model.
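
Interval estimation for the Tobit model is commonly carried out with a data-augmentation Gibbs sampler, and posterior quantiles of the draws give the credible intervals whose frequentist coverage is at issue. A minimal sketch in that standard style, with simulated data and vague priors as assumptions:

```python
# Data-augmentation Gibbs for the Tobit model (censoring at zero):
# censored observations receive latent draws from an upper-truncated
# normal, after which beta and sigma^2 have conjugate updates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, p = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
ystar = X @ np.array([0.5, 1.0]) + rng.normal(0, 1.0, n)
y = np.maximum(ystar, 0.0)                    # observed, censored at 0
cens = y <= 0.0

beta, s2 = np.zeros(p), 1.0
z = y.copy()                                  # latent responses
XtX_inv = np.linalg.inv(X.T @ X)
keep = []
for it in range(3000):
    # latent z_i | rest for censored cases: N(x_i' beta, s2), truncated above 0
    mu_c = X[cens] @ beta
    b = (0.0 - mu_c) / np.sqrt(s2)
    z[cens] = stats.truncnorm.rvs(-np.inf, b, loc=mu_c, scale=np.sqrt(s2),
                                  random_state=rng)
    # beta | rest ~ Normal (flat prior)
    bhat = XtX_inv @ X.T @ z
    beta = rng.multivariate_normal(bhat, s2 * XtX_inv)
    # sigma^2 | rest ~ inverse-gamma
    e = z - X @ beta
    s2 = 1.0 / rng.gamma(0.01 + n / 2, 1.0 / (0.01 + 0.5 * e @ e))
    if it >= 1000:
        keep.append(np.concatenate([beta, [s2]]))
print(np.quantile(keep, [0.025, 0.975], axis=0))   # 95% credible intervals
```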