• Title/Summary/Keyword: Bayesian Probability Statistics


Efficient random number generation from extreme tail areas of a t-distribution (t 분포의 극단 꼬리부분으로부터의 효율적인 난수생성)

  • 오만숙;김나영
    • The Korean Journal of Applied Statistics
    • /
    • v.9 no.1
    • /
    • pp.165-177
    • /
    • 1996
  • Random numbers from truncated t-distributions are often needed to carry out Bayesian inference, especially in Monte Carlo integration for estimating posterior densities of constrained parameters. However, when the restricted region is an extreme tail area with small probability, most existing random generation methods are not efficient. In this paper, we propose an efficient acceptance-rejection method for generating random numbers from extreme tail areas of a t-distribution. Using simulation results, we compare the proposed algorithm with other popular methods.
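
As a point of reference for why tail-specific samplers matter, the sketch below draws from the upper tail {X > a} of a t-distribution via the inverse survival function. This is not the acceptance-rejection algorithm proposed in the paper, only a simple, numerically stable baseline; the cutoff, degrees of freedom, and seed are illustrative.

```python
# Baseline sampler for the upper tail {X > a} of a t-distribution, using
# inverse-survival-function sampling (a reference approach, not the paper's method).
import numpy as np
from scipy.stats import t

def sample_t_upper_tail(a, df, size, seed=None):
    rng = np.random.default_rng(seed)
    p = t.sf(a, df)                 # tail probability P(X > a); tiny for extreme a
    u = rng.uniform(0.0, p, size)   # uniform draws over the tail mass
    return t.isf(u, df)             # inverse survival function maps back into {x > a}

# example: 5 draws beyond the 0.9999 upper quantile of a t(3) distribution
cutoff = t.isf(1e-4, 3)
print(sample_t_upper_tail(cutoff, df=3, size=5, seed=42))
```

A naive scheme that samples the full t-distribution and keeps only draws beyond a would accept roughly a fraction p of them, which is exactly what makes extreme-tail generation inefficient.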


Estimating the Population Variability Distribution Using Dependent Estimates From Generic Sources (종속적 문헌 추정치를 이용한 모집단 변이 분포의 추정)

  • 임태진
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.20 no.3
    • /
    • pp.43-59
    • /
    • 1995
  • This paper presents a method for estimating the population variability distribution of the failure parameter (failure rate or failure probability) for each failure mode considered in PSA (Probabilistic Safety Assessment). We focus on utilizing generic estimates from various industry compendia for the estimation. These estimates are complicated statistics of failure data from plants. When the failure data referred to by two or more sources overlap, dependency arises among the estimates provided by those sources. This type of problem is addressed here for the first time. We propose methods based on ML-II estimation in a Bayesian framework and discuss the characteristics of the proposed estimators. The proposed methods are easy to apply in the field. Numerical examples are also provided.
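
The abstract refers to ML-II (type II maximum likelihood) estimation of a population variability distribution. The sketch below shows only the standard independent-data version under an assumed gamma-Poisson structure (failure counts over exposure times); the paper's handling of dependence between overlapping generic estimates is not reproduced, and the data are made up.

```python
# ML-II (empirical Bayes) fit of a gamma population-variability distribution for
# failure rates, assuming independent Poisson counts x_i over exposure times t_i.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_marginal_loglik(params, x, t):
    alpha, beta = np.exp(params)     # log scale keeps both hyper-parameters positive
    # marginal of Poisson(lambda * t) with lambda ~ Gamma(alpha, rate=beta)
    ll = (gammaln(x + alpha) - gammaln(alpha) - gammaln(x + 1)
          + alpha * np.log(beta) + x * np.log(t) - (x + alpha) * np.log(beta + t))
    return -np.sum(ll)

x = np.array([0, 2, 1, 5, 0, 3])                 # toy failure counts per plant
t = np.array([1.2, 3.0, 0.8, 6.5, 2.1, 4.0])     # toy exposure times
res = minimize(neg_marginal_loglik, x0=np.zeros(2), args=(x, t), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(alpha_hat, beta_hat)   # fitted population-variability distribution Gamma(alpha, beta)
```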


Bayesian analysis of Korean income data using zero-inflated Tobit model (영과잉 토빗모형을 이용한 한국 소득분포 자료의 베이지안 분석)

  • Hwang, Jisu;Kim, Sei-Wan;Oh, Man-Suk
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.6
    • /
    • pp.917-929
    • /
    • 2017
  • Korean income data obtained from the Korea Labor Panel Survey show excessive zeros, which may not be properly explained by the Tobit model. In this paper, we analyze the data using a zero-inflated Tobit model to incorporate the excessive zeros. A zero-inflated Tobit model consists of two stages. In the first stage, individuals with 0 income are divided into two groups: a genuine zero group and a random zero group. Individuals in the genuine zero group did not participate in the labor market because they have no intention of doing so. Individuals in the random zero group participated in the labor market, but their incomes are very low and truncated at 0. In the second stage, the Tobit model is applied to the subset of data combining the random zeros and the positive observations. Regression models are employed in both stages to obtain the effect of explanatory variables on labor market participation and on the amount of income. Markov chain Monte Carlo methods are applied for the Bayesian analysis of the data. The proposed zero-inflated Tobit model outperforms the Tobit model in model fit and in predicting the frequency of zeros. The analysis results show strong evidence that the probability of participating in the labor market increases with age, decreases with education, and that women tend to have stronger intentions of participating in the labor market than men. There is also moderate evidence that the probability of participating in the labor market decreases with socio-economic status and reserved wage. However, the amount of monthly wage increases with age and education, and it is larger for married than for unmarried individuals and for men than for women.
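
For concreteness, a minimal sketch of the two-stage likelihood described above, assuming a logistic model for the genuine-zero (non-participation) stage and a Gaussian Tobit stage censored at 0; the paper's full Bayesian MCMC analysis is not reproduced here, and the data and coefficients are toy values.

```python
# Log-likelihood of a zero-inflated Tobit model: logistic "genuine zero" stage
# plus a Tobit stage censored at 0 (illustrative specification only).
import numpy as np
from scipy.stats import norm

def zi_tobit_loglik(y, X, gamma, beta, sigma):
    """y: income (0 or positive); X: covariate matrix used in both stages."""
    p0 = 1.0 / (1.0 + np.exp(-X @ gamma))        # P(genuine zero | x)
    mu = X @ beta                                # latent Tobit mean
    # y == 0: genuine zero, or a participant whose latent income fell below 0
    ll_zero = np.log(p0 + (1 - p0) * norm.cdf(-mu / sigma))
    # y > 0: participant with observed income y
    ll_pos = np.log(1 - p0) + norm.logpdf(y, loc=mu, scale=sigma)
    return np.sum(np.where(y > 0, ll_pos, ll_zero))

# toy check with a few observations
X = np.array([[1.0, 0.5], [1.0, -1.2], [1.0, 0.3]])
y = np.array([0.0, 0.0, 2.4])
print(zi_tobit_loglik(y, X, gamma=np.array([0.2, -0.1]), beta=np.array([1.0, 0.5]), sigma=1.0))
```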

Nonparametric Bayesian Statistical Models in Biomedical Research (생물/보건/의학 연구를 위한 비모수 베이지안 통계모형)

  • Noh, Heesang;Park, Jinsu;Sim, Gyuseok;Yu, Jae-Eun;Chung, Yeonseung
    • The Korean Journal of Applied Statistics
    • /
    • v.27 no.6
    • /
    • pp.867-889
    • /
    • 2014
  • Nonparametric Bayesian (np Bayes) statistical models are widely used in a variety of research areas because of their flexibility and computational convenience. This paper reviews np Bayes models with a focus on biomedical research applications. We review the key probability models for np Bayes inference while illustrating how each model is used to answer different types of research questions, using biomedical examples. The examples are chosen to highlight problems that are challenging for standard parametric inference but can be solved using nonparametric inference. We discuss np Bayes inference under four topics: (1) density estimation, (2) clustering, (3) random effects distributions, and (4) regression.
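
One np Bayes building block such a review typically covers is the Dirichlet process mixture for density estimation and clustering. The sketch below uses scikit-learn's truncated variational approximation on synthetic data purely as an illustration; it does not correspond to any specific model in the paper.

```python
# Minimal Dirichlet-process Gaussian mixture for density estimation/clustering,
# via scikit-learn's truncated variational approximation.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(3, 1.0, 300)]).reshape(-1, 1)

dpgmm = BayesianGaussianMixture(
    n_components=10,                                    # truncation level
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0,                     # DP concentration parameter
    random_state=0,
).fit(x)

print(np.round(dpgmm.weights_, 3))    # most components receive negligible weight
print(dpgmm.predict(x[:5]))           # cluster labels for the first few points
```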

Document Clustering Methods using Hierarchy of Document Contents (문서 내용의 계층화를 이용한 문서 비교 방법)

  • Hwang, Myung-Gwon;Bae, Yong-Geun;Kim, Pan-Koo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.12
    • /
    • pp.2335-2342
    • /
    • 2006
  • The current web accumulates abundant information, and text-based documents are the type used most easily and frequently by humans. Numerous studies have therefore been carried out to retrieve text documents using many methods, such as probability, statistics, vector similarity, Bayesian approaches, and so on. These studies, however, could not consider both the subject and the semantics of documents. To overcome these problems, we propose a document similarity method for the semantic retrieval of the documents users want; this is the core method of document clustering. The method first expresses the document content as a semantic hierarchy and then assigns weights to the important hierarchy domains of the document. With this, we can measure the similarity between documents using both the domain weights and the coincidence of concepts in the domain hierarchies.
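
The abstract does not spell out the exact weighting scheme, so the sketch below is only a toy stand-in: documents represented as concept sets per hierarchy domain, compared by a weighted per-domain Jaccard overlap. The domain names and weights are hypothetical.

```python
# Toy similarity between two documents represented as {domain: set of concepts}
# with per-domain importance weights (illustrative assumptions, not the paper's scheme).
def weighted_domain_similarity(doc_a, doc_b, weights):
    score, total = 0.0, 0.0
    for domain, w in weights.items():
        a, b = doc_a.get(domain, set()), doc_b.get(domain, set())
        if a or b:
            score += w * len(a & b) / len(a | b)   # weighted Jaccard per domain
            total += w
    return score / total if total else 0.0

doc1 = {"economy": {"inflation", "tax"}, "politics": {"election"}}
doc2 = {"economy": {"inflation", "trade"}, "sports": {"soccer"}}
print(weighted_domain_similarity(doc1, doc2, {"economy": 0.7, "politics": 0.2, "sports": 0.1}))
```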

Application of the Weibull-Poisson long-term survival model

  • Vigas, Valdemiro Piedade;Mazucheli, Josmar;Louzada, Francisco
    • Communications for Statistical Applications and Methods
    • /
    • v.24 no.4
    • /
    • pp.325-337
    • /
    • 2017
  • In this paper, we propose a new long-term lifetime distribution with four parameters, set in a competing-risk scenario with decreasing, increasing, and unimodal hazard rate functions, namely the Weibull-Poisson long-term distribution. This new distribution arises from a scenario of competing latent risks, in which the lifetime associated with a particular risk is not observable and only the minimum lifetime value among all risks is observed, in a long-term context. However, it can also be used in any other situation as long as it fits the data well. The exponential-Poisson long-term distribution and the Weibull long-term distribution are obtained as particular cases of the new Weibull-Poisson long-term distribution. The properties of the proposed distribution are discussed, including its probability density, survival, and hazard functions and explicit algebraic formulas for its order statistics. Assuming censored data, we consider the maximum likelihood approach for parameter estimation. For different parameter settings, sample sizes, and censoring percentages, various simulation studies were performed to study the mean square error of the maximum likelihood estimates and to compare the performance of the proposed model with its particular cases. The Akaike information criterion, the Bayesian information criterion, and the likelihood ratio test were used for model selection. The relevance of the approach is illustrated on two real datasets, where the new model is compared with its particular cases, demonstrating its potential and competitiveness.
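
As a simplified illustration of fitting a long-term (cure-fraction) survival model to censored data, the sketch below uses the common mixture-cure form S_pop(t) = p + (1 - p) exp(-(t/lam)^k) with a Weibull baseline and maximizes the censored-data likelihood. The paper's four-parameter Weibull-Poisson long-term distribution has additional structure (a Poisson number of latent competing risks) that is not reproduced here; the data are synthetic.

```python
# Censored-data maximum likelihood for a simple long-term (cure-fraction) Weibull model.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, delta):        # delta = 1 if event observed, 0 if censored
    p = 1 / (1 + np.exp(-params[0]))     # cured (long-term survivor) fraction
    k, lam = np.exp(params[1]), np.exp(params[2])
    z = (t / lam) ** k
    surv = p + (1 - p) * np.exp(-z)                                  # population survival
    dens = (1 - p) * (k / lam) * (t / lam) ** (k - 1) * np.exp(-z)   # population density
    return -np.sum(delta * np.log(dens) + (1 - delta) * np.log(surv))

rng = np.random.default_rng(1)
t_obs = rng.weibull(1.5, 300) * 2.0      # toy event/censoring times
delta = rng.integers(0, 2, 300)          # toy censoring indicators
fit = minimize(neg_loglik, x0=np.zeros(3), args=(t_obs, delta), method="Nelder-Mead")
print(1 / (1 + np.exp(-fit.x[0])))       # estimated cured fraction
```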

A Korean Homonym Disambiguation Model Based on Statistics Using Weights (가중치를 이용한 통계 기반 한국어 동형이의어 분별 모델)

  • 김준수;최호섭;옥철영
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.11
    • /
    • pp.1112-1123
    • /
    • 2003
  • WSD (word sense disambiguation) is one of the most difficult problems in Korean information processing. A Bayesian model that used semantic information extracted from a definition corpus (1 million POS-tagged eojeol, Korean dictionary definitions) achieved an accuracy of 72.08% (nouns 78.12%, verbs 62.45%). This paper proposes a statistical WSD model using NPH (New Prior Probability of Homonym sense) and distance weights. We select 46 homonyms (30 nouns, 16 verbs) that occur with high frequency in the definition corpus, and then evaluate the model on 47,977 contexts from the 21C Sejong Corpus (3.5 million POS-tagged eojeol). The WSD model using NPH improves accuracy by an average of 1.70%, and the model using both NPH and distance weights improves it by 2.01%.
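
To show the general shape of distance-weighted Bayesian sense scoring, the toy below combines a sense prior with smoothed co-occurrence probabilities weighted by the reciprocal of each context word's distance. The priors, counts, and 1/distance weighting are illustrative stand-ins, not the paper's NPH statistics.

```python
# Toy distance-weighted Bayesian scoring for homonym sense disambiguation.
import math

prior = {"bank/finance": 0.6, "bank/river": 0.4}                 # hypothetical sense priors
cooc = {"bank/finance": {"money": 50, "loan": 30, "water": 1},
        "bank/river":   {"money": 2,  "loan": 1,  "water": 40}}  # hypothetical co-occurrence counts

def score(sense, context):
    """context: list of (word, distance-from-target-homonym) pairs."""
    total = sum(cooc[sense].values())
    s = math.log(prior[sense])
    for word, dist in context:
        p = (cooc[sense].get(word, 0) + 1) / (total + len(cooc[sense]))  # add-one smoothing
        s += (1.0 / dist) * math.log(p)                                  # closer words weigh more
    return s

context = [("money", 1), ("water", 3)]
print(max(prior, key=lambda sense: score(sense, context)))
```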

A Study on the Prediction of Power Consumption in the Air-Conditioning System by Using the Gaussian Process (정규 확률과정을 사용한 공조 시스템의 전력 소모량 예측에 관한 연구)

  • Lee, Chang-Yong;Song, Gensoo;Kim, Jinho
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.39 no.1
    • /
    • pp.64-72
    • /
    • 2016
  • In this paper, we utilize a Gaussian process to predict the power consumption of an air-conditioning system. Since the power consumption of the air-conditioning system takes the form of a time series, and its prediction is very important from the perspective of efficient energy management, it is worthwhile to investigate a time-series model for predicting the power consumption. To this end, we apply a Gaussian process, which assigns a prior probability to every possible function, with higher probabilities given to functions that are more consistent with the empirical data. We also discuss how to estimate the hyper-parameters, which are the parameters of the covariance function of the Gaussian process model. We estimated the hyper-parameters with two different methods (marginal likelihood and leave-one-out cross validation) and obtained a model that describes the data well; the results are largely independent of the hyper-parameter estimation method. We validated the prediction results through error analysis of the mean relative error and the mean absolute error. The mean relative error analysis showed that the error was about 3.4% of the predicted value, and the mean absolute error analysis confirmed that the error is within the standard deviation of the predicted value. We also adopted the nonparametric Wilcoxon signed-rank test to assess the fit of the proposed model and found that the null hypothesis of uniformity was not rejected at the 5% significance level. These results can be applied to more elaborate control of the power consumption of the air-conditioning system.
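
A minimal Gaussian process regression sketch on a synthetic consumption-like series, with kernel hyper-parameters chosen by maximizing the log marginal likelihood (the default behaviour of scikit-learn's GaussianProcessRegressor); the kernel choice and data are assumptions, not the paper's setting.

```python
# GP regression with marginal-likelihood hyper-parameter fitting on a toy series.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.arange(100).reshape(-1, 1)      # hourly time index (toy)
y = 50 + 10 * np.sin(2 * np.pi * t.ravel() / 24) + np.random.default_rng(0).normal(0, 1, 100)

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

y_pred, y_std = gp.predict(np.arange(100, 124).reshape(-1, 1), return_std=True)
print(gp.kernel_)                      # kernel with fitted hyper-parameters
print(y_pred[:3], y_std[:3])           # day-ahead predictions with uncertainty
```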

Semantic Topic Selection Method of Document for Classification (문서분류를 위한 의미적 주제선정방법)

  • Ko, kwang-Sup;Kim, Pan-Koo;Lee, Chang-Hoon;Hwang, Myung-Gwon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.1
    • /
    • pp.163-172
    • /
    • 2007
  • The web, as a global network, includes text documents, video, sound, and more, and connects distributed information using links. Through the development of the web, abundant information has accumulated, most of it in text-based documents. Most users use the web to retrieve the information they want, so numerous studies have been carried out to retrieve text documents using many methods, such as probability, statistics, vector similarity, Bayesian approaches, and so on. These studies, however, could not consider both the subject and the semantics of documents, so users have to search again by hand. It is especially hard to find Korean documents because research on Korean document classification is insufficient. To overcome these problems, we propose a Korean document classification method for semantic retrieval. The method first extracts the TF and RV values of the concepts included in a document and maps them into U-WIN, a Korean vocabulary dictionary, to select the topic of the document. This method can classify documents semantically, and its efficiency is shown through experiments.
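
Since the abstract does not define the RV value or the U-WIN structure in detail, the sketch below is a toy: term frequencies weighted by an assumed per-concept relevance value, aggregated over a small stand-in concept-to-topic map, to select a document topic.

```python
# Toy topic selection: TF weighted by an assumed relevance value, mapped through
# a stand-in concept-to-topic dictionary (the real method uses U-WIN).
from collections import Counter

concept_topic = {"inflation": "economy", "tax": "economy",
                 "election": "politics", "vote": "politics"}       # stand-in for U-WIN
rv = {"inflation": 0.9, "tax": 0.7, "election": 0.8, "vote": 0.6}  # assumed relevance values

def select_topic(tokens):
    tf = Counter(tokens)
    scores = Counter()
    for term, freq in tf.items():
        if term in concept_topic:
            scores[concept_topic[term]] += freq * rv[term]   # TF weighted by relevance
    return scores.most_common(1)[0][0] if scores else None

print(select_topic(["inflation", "tax", "tax", "election"]))  # -> "economy"
```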