• Title/Summary/Keyword: Bayesian 모형 (Bayesian model)

Search results: 398

Bayesian Analysis of Korean Alcohol Consumption Data Using a Zero-Inflated Ordered Probit Model (영 과잉 순서적 프로빗 모형을 이용한 한국인의 음주자료에 대한 베이지안 분석)

  • Oh, Man-Suk; Oh, Hyun-Tak; Park, Se-Mi
    • The Korean Journal of Applied Statistics / v.25 no.2 / pp.363-376 / 2012
  • Excessive zeros are often observed in ordinal categorical response variables. An ordinary ordered probit model is not appropriate for zero-inflated data, especially when there are many different sources generating the zero observations. In this paper, we apply a two-stage zero-inflated ordered probit (ZIOP) model that incorporates the zero-inflated nature of the data, propose a Bayesian analysis of the ZIOP model, and apply the method to alcohol consumption data collected by the National Bureau of Statistics, Korea. In the first stage of the ZIOP model, a probit model divides the non-drinkers into genuine non-drinkers, who do not participate in drinking because of personal beliefs or permanent health problems, and potential drinkers, who did not drink at the time of the survey but have the potential to become drinkers. In the second stage, an ordered probit model is applied to the drinkers, a group that consists of zero-consumption potential drinkers and positive-consumption drinkers. The analysis shows that about 30% of non-drinkers are genuine non-drinkers, so the Korean alcohol consumption data indeed have a zero-inflated structure. A study of the marginal effect of each explanatory variable shows that certain explanatory variables affect the genuine non-drinkers and potential drinkers in opposite directions, which may not be detected by an ordinary ordered probit model.
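
For readers unfamiliar with the two-stage structure described above, the following is a minimal sketch of how ZIOP category probabilities combine the participation probit with the ordered probit for consumption. The covariates, coefficient names (`gamma`, `beta`, `cutpoints`), and values are illustrative assumptions, not quantities from the paper.

```python
import numpy as np
from scipy.stats import norm

def ziop_probs(x, z, gamma, beta, cutpoints):
    """Category probabilities of a zero-inflated ordered probit (ZIOP).

    Stage 1: a probit on covariates z decides participation (potential drinker
    vs. genuine non-drinker).  Stage 2: an ordered probit on covariates x gives
    the consumption level of participants; level 0 is still possible for them.
    """
    p_participate = norm.cdf(z @ gamma)           # P(potential drinker)
    xb = x @ beta
    cuts = np.concatenate(([-np.inf], cutpoints, [np.inf]))
    # ordered-probit category probabilities for participants
    probs = np.array([norm.cdf(c_hi - xb) - norm.cdf(c_lo - xb)
                      for c_lo, c_hi in zip(cuts[:-1], cuts[1:])])
    probs = probs.T * p_participate[:, None]      # participants' contribution
    probs[:, 0] += 1.0 - p_participate            # inflate the zero category
    return probs                                  # each row sums to one

# toy check with hypothetical coefficients
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2)); z = rng.normal(size=(5, 2))
p = ziop_probs(x, z, gamma=np.array([0.5, -0.3]),
               beta=np.array([0.8, 0.2]), cutpoints=np.array([0.0, 1.0, 2.0]))
print(p.sum(axis=1))   # all ones
```

The zero category receives probability mass both from non-participants and from participating zero-consumers, which is exactly the distinction an ordinary ordered probit cannot make.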

The Comparison of Parameter Estimation for Nonhomogeneous Poisson Process Software Reliability Model (NHPP 소프트웨어 신뢰도 모형에 대한 모수 추정 비교)

  • Kim, Hee-Cheul; Lee, Sang-Sik; Song, Young-Jae
    • The KIPS Transactions: Part D / v.11D no.6 / pp.1269-1276 / 2004
  • Parameter estimation for existing software reliability models, the Goel-Okumoto and Yamada-Ohba-Osaki models, is reviewed, and a Rayleigh model based on the Rayleigh distribution is studied. In this paper, we compare parameter estimation by the maximum likelihood estimator with Bayesian estimation based on Gibbs sampling in order to analyze the pattern of the estimators. For efficient model selection, the sum of squared errors and the Braun statistic are employed. A numerical example is illustrated using real data. Superposition and mixture models are also discussed as directions for future development.
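
As a concrete illustration of the maximum-likelihood side of the comparison above, here is a minimal sketch of fitting the Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) to failure times observed on (0, T]. The failure-time data are hypothetical, and the Gibbs-sampling (Bayesian) counterpart and the Rayleigh model are not shown.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, times, T):
    """Negative log-likelihood of the Goel-Okumoto NHPP.

    Mean value function m(t) = a * (1 - exp(-b t)), intensity
    lambda(t) = a * b * exp(-b t); failure times observed on (0, T].
    """
    a, b = np.exp(params)                # optimize on the log scale for positivity
    loglik = np.sum(np.log(a * b) - b * times) - a * (1.0 - np.exp(-b * T))
    return -loglik

# hypothetical failure times (hours); not the paper's data
times = np.array([9., 21., 50., 79., 112., 158., 222., 296., 380., 470.])
T = 500.0
res = minimize(neg_loglik, x0=np.log([15.0, 0.005]),
               args=(times, T), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"a = {a_hat:.2f} (expected total faults), b = {b_hat:.4f} (detection rate)")
```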

An Item Analysis of the Mathematics Section (Type Ga) of the 2005 College Scholastic Ability Test (2005학년도 대학수학능력시험 '수리영역 가형'에 대한 문항분석)

  • Lee, Gang-Seop; Kim, Jong-Gyu
    • Communications of Mathematical Education / v.19 no.1 s.21 / pp.321-323 / 2005
  • This study analyzes the items of the Mathematics Section (Type Ga) of the 2005 College Scholastic Ability Test administered in November 2004. Item difficulty and discrimination were measured with Bayesian 1.0, a program based on the two-parameter item response model, and item reliability and distractor attractiveness were obtained with Testan 1.0, a classical test theory program. Because the results reveal which units students find difficult and which content they fail to understand, they can be used as basic data for teaching and learning.

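The abstract above reports item difficulty and discrimination estimated from a two-parameter item response model. The sketch below shows the standard two-parameter logistic (2PL) form of that model for concreteness; the Bayesian 1.0 program may use a different (for example, normal-ogive) parameterization, and the item parameters here are invented, not the CSAT estimates.

```python
import numpy as np

def two_pl(theta, a, b):
    """Two-parameter logistic IRT model: probability of a correct response.

    a: item discrimination, b: item difficulty, theta: examinee ability.
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# hypothetical items: (discrimination, difficulty)
items = [(1.2, -0.5), (0.8, 0.3), (1.8, 1.1)]
theta = np.linspace(-3, 3, 7)
for a, b in items:
    print(np.round(two_pl(theta, a, b), 2))   # response curve over ability levels
```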

Bayesian Estimation of the Reliability and Failure Rate Functions for the Burr Type-? Failure Model (Burr 고장모형에서 신뢰도와 고장률의 베이지안 추정)

  • 이우동; 강상길
    • Journal of Korean Society for Quality Management / v.25 no.4 / pp.71-78 / 1997
  • In this paper, we consider hierarchical Bayes estimation of the parameter, the reliability function, and the failure rate function based on type-II censored samples from a Burr type-? failure time model. The Gibbs sampler approach brings considerable conceptual and computational simplicity to the calculation of the posterior marginals and the reliability. A numerical study is provided.

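The Burr type in the title is garbled in this listing ("Type-?"), so the sketch below assumes the Burr Type-XII form purely for illustration; if the paper uses a different Burr type, the formulas differ. It shows the reliability and failure-rate functions that the hierarchical Bayes procedure estimates, with arbitrary parameter values.

```python
import numpy as np

def burr12_reliability(t, c, k):
    """Reliability R(t) = (1 + t^c)^(-k) of the Burr Type-XII failure model."""
    return (1.0 + t**c) ** (-k)

def burr12_hazard(t, c, k):
    """Failure rate h(t) = f(t)/R(t) = c*k*t^(c-1) / (1 + t^c)."""
    return c * k * t ** (c - 1) / (1.0 + t**c)

t = np.linspace(0.1, 5.0, 5)
print(burr12_reliability(t, c=2.0, k=1.5))   # illustrative shape parameters
print(burr12_hazard(t, c=2.0, k=1.5))
```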

Bayesian Inference for Modified Jelinski-Moranda Model by using Gibbs Sampling (깁스 샘플링을 이용한 변형된 Jelinski-Moranda 모형에 대한 베이지안 추론)

  • 최기헌; 주정애
    • Journal of Applied Reliability / v.1 no.2 / pp.183-192 / 2001
  • The Jelinski-Moranda model and a modified Jelinski-Moranda model for software reliability are studied, and we consider the maximum likelihood estimator and Bayes estimates of the number of faults and the fault-detection rate per fault. A Gibbs sampling approach is employed to compute the Bayes estimates, and the future survival function is examined. Model selection is based on the prequential likelihood of the conditional predictive ordinates. A numerical example with a simulated data set is given.

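To make the quantities mentioned above concrete, the sketch below writes down the Jelinski-Moranda log-likelihood, in which the failure rate before the i-th failure is phi * (N - i + 1), and finds maximum likelihood estimates of N and phi by a crude grid search. The inter-failure times are hypothetical; the paper's Bayes estimates, Gibbs sampler, and modified model are not reproduced here.

```python
import numpy as np

def jm_loglik(N, phi, gaps):
    """Log-likelihood of the Jelinski-Moranda model.

    N   : initial number of faults (N >= number of observed failures)
    phi : per-fault detection rate
    gaps: inter-failure times x_1, ..., x_n (exponential with rate phi*(N-i+1))
    """
    i = np.arange(1, len(gaps) + 1)
    lam = phi * (N - i + 1)                  # failure rate before the i-th failure
    return np.sum(np.log(lam) - lam * gaps)

# hypothetical inter-failure times showing reliability growth
gaps = np.array([7., 11., 8., 10., 15., 22., 20., 33., 41., 58.])
n = len(gaps)
best = max(((N, phi, jm_loglik(N, phi, gaps))
            for N in range(n, 4 * n)
            for phi in np.linspace(1e-4, 0.05, 200)),
           key=lambda triple: triple[2])
print("N_hat =", best[0], " phi_hat =", round(best[1], 4))
```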

Bayesian analysis of finite mixture model with cluster-specific random effects (군집 특정 변량효과를 포함한 유한 혼합 모형의 베이지안 분석)

  • Lee, Hyejin; Kyung, Minjung
    • The Korean Journal of Applied Statistics / v.30 no.1 / pp.57-68 / 2017
  • Clustering algorithms attempt to find a partition of a finite set of objects into a potentially predetermined number of nonempty subsets. Gibbs sampling of a normal mixture of linear mixed regressions with a Dirichlet prior distribution calculates posterior probabilities when the number of clusters is known. Our approach provides simultaneous partitioning and parameter estimation together with the computation of classification probabilities. A Monte Carlo study of curve estimation shows that the model is useful for function estimation. Examples illustrate how these models perform on real data.
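
The paper's model is a normal mixture of linear mixed regressions with cluster-specific random effects; the sketch below strips that down to a plain K-component normal mixture with a Dirichlet prior on the weights, just to show the three Gibbs steps involved (allocation, weight update, conjugate mean update). The priors, hyperparameters, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_normal_mixture(y, K, n_iter=500, sigma=1.0, tau=10.0, alpha=1.0):
    """Gibbs sampler for a simplified K-component normal mixture with a
    Dirichlet prior on the weights: y_i | z_i=k ~ N(mu_k, sigma^2),
    mu_k ~ N(0, tau^2), w ~ Dirichlet(alpha, ..., alpha)."""
    mu = rng.normal(size=K)
    w = np.full(K, 1.0 / K)
    keep = []
    for _ in range(n_iter):
        # 1) allocate each observation to a component
        logp = np.log(w) - 0.5 * ((y[:, None] - mu) / sigma) ** 2
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(K, p=pi) for pi in p])
        # 2) update the mixture weights from the Dirichlet full conditional
        counts = np.bincount(z, minlength=K)
        w = rng.dirichlet(alpha + counts)
        # 3) update each component mean from its conjugate normal full conditional
        for k in range(K):
            post_var = 1.0 / (counts[k] / sigma**2 + 1.0 / tau**2)
            post_mean = post_var * y[z == k].sum() / sigma**2
            mu[k] = rng.normal(post_mean, np.sqrt(post_var))
        keep.append(np.sort(mu.copy()))   # sort to sidestep label switching
    return np.array(keep)

# two well-separated hypothetical clusters
y = np.concatenate([rng.normal(-2, 1, 100), rng.normal(3, 1, 100)])
draws = gibbs_normal_mixture(y, K=2)
print(draws[250:].mean(axis=0))   # posterior means near (-2, 3)
```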

Online abnormal events detection with online support vector machine (온라인 서포트벡터기계를 이용한 온라인 비정상 사건 탐지)

  • Park, Hye-Jung
    • Journal of the Korean Data and Information Science Society / v.22 no.2 / pp.197-206 / 2011
  • The ability to detect abnormal events in signals online is essential in many real-world signal processing applications. To detect abnormal events, previously known algorithms require an explicit statistical model of the signal and interpret abnormal events as abrupt changes in that model. Maximum likelihood and Bayesian estimation theory have generally been used both for estimation and for detection. With these methods, however, it is not easy to estimate a model that is both robust and tractable, so an approach with more freedom in how the model is estimated is needed. In this paper, we investigate a machine learning, descriptor-based approach that does not require an explicit statistical model of the descriptors; it is based on support vector machines, which are known to be robust statistical models, and introduces a sequential optimal algorithm, the online support vector machine.
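
The abstract introduces an online support vector machine with a sequential optimal algorithm but gives no details here, so the following is a generic Pegasos-style online linear SVM sketch, not the paper's algorithm: each incoming signal descriptor triggers one stochastic-gradient update of the regularized hinge loss, with abnormal events as the negative class. The descriptors and event rate are simulated.

```python
import numpy as np

class OnlineLinearSVM:
    """Online linear SVM: stochastic gradient descent on the regularized
    hinge loss (Pegasos-style updates). Labels: +1 normal, -1 abnormal."""

    def __init__(self, dim, lam=0.01):
        self.w = np.zeros(dim)
        self.lam = lam
        self.t = 0

    def partial_fit(self, x, y):
        """Update the weights with a single (descriptor, label) pair."""
        self.t += 1
        eta = 1.0 / (self.lam * self.t)            # decaying learning rate
        violated = y * self.w.dot(x) < 1.0         # hinge-loss margin check
        self.w *= (1.0 - eta * self.lam)           # shrink (regularization)
        if violated:
            self.w += eta * y * x
        return self

    def predict(self, x):
        return 1 if self.w.dot(x) >= 0.0 else -1

# hypothetical 3-dimensional signal descriptors plus a bias feature
rng = np.random.default_rng(2)
svm = OnlineLinearSVM(dim=4)
mistakes = 0
for _ in range(2000):
    abnormal = rng.random() < 0.1
    x = np.append(rng.normal(2.5 if abnormal else 0.0, 1.0, size=3), 1.0)
    y = -1 if abnormal else 1
    mistakes += svm.predict(x) != y                # evaluate, then learn online
    svm.partial_fit(x, y)
print("online mistakes:", mistakes)
```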

Analysis on Korean Economy with an Estimated DSGE Model after 2000 (DSGE 모형 추정을 이용한 2000년 이후 한국의 거시경제 분석)

  • Kim, Tae Bong
    • KDI Journal of Economic Policy / v.36 no.2 / pp.1-64 / 2014
  • This paper attempts to identify the driving forces of the Korean economy after 2000 by analyzing an estimated DSGE model and by examining the degree to which the non-systematic parts of both monetary and fiscal policy were used during the global financial crisis. Two types of trends, various cyclical factors, and frictions are introduced into the model for an empirical analysis in which historical decompositions of key macro variables after 2000 are quantitatively assessed. While monetary policy during the global financial crisis reacted relatively systematically, in accordance with the estimated Taylor rule, the aggressively expansionary fiscal policy is not fully explained by the estimated fiscal rule but rather by the large magnitude of its non-systematic component.

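The abstract contrasts the systematic (rule-based) and non-systematic parts of monetary policy. As a minimal illustration, the sketch below evaluates a Taylor-type rule with interest-rate smoothing and treats the gap between the observed rate and the rule-implied rate as the non-systematic component; the functional form and coefficient values are illustrative and are not the paper's posterior estimates.

```python
def taylor_rule_rate(i_prev, pi, gap, rho=0.8, r_star=1.0, pi_star=2.0,
                     phi_pi=1.5, phi_y=0.5):
    """Systematic part of an interest-rate rule with smoothing:
    i_t = rho*i_{t-1} + (1-rho)*(r* + pi* + phi_pi*(pi_t - pi*) + phi_y*gap_t).
    All parameter values here are illustrative assumptions."""
    target = r_star + pi_star + phi_pi * (pi - pi_star) + phi_y * gap
    return rho * i_prev + (1.0 - rho) * target

# the gap between the observed rate and this systematic part is the
# "non-systematic" (shock) component discussed in the abstract
observed_rate, prev_rate, inflation, output_gap = 2.0, 3.25, 2.8, -1.5
systematic = taylor_rule_rate(prev_rate, inflation, output_gap)
print("monetary policy shock:", round(observed_rate - systematic, 2))
```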

Development of Pedestrian Fatality Model using Bayesian-Based Neural Network (베이지안 신경망을 이용한 보행자 사망확률모형 개발)

  • O, Cheol; Gang, Yeon-Su; Kim, Beom-Il
    • Journal of Korean Society of Transportation / v.24 no.2 s.88 / pp.139-145 / 2006
  • This paper develops pedestrian fatality models capable of producing the probability of pedestrian fatality in collisions between vehicles and pedestrians. A probabilistic neural network (PNN) and binary logistic regression (BLR) are employed in modeling pedestrian fatality. Pedestrian age, vehicle type, and collision speed, obtained by reconstructing the collected accidents, are used as independent variables in the fatality models. One notable feature of this study is that an iterative sampling technique is used to construct various training and test datasets for a better performance comparison, and a statistical comparison that accounts for the variation in model performance is conducted. The results show that the PNN-based fatality model outperforms the BLR-based model. The models developed in this study, which allow us to predict pedestrian fatality, would be useful tools for supporting the derivation of various safety policies and technologies to enhance pedestrian safety.
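
Since the abstract does not spell out how a probabilistic neural network produces a fatality probability, here is a minimal sketch of the usual PNN construction: one Gaussian Parzen kernel per training pattern, summed within each class, with the normalized class score used as the probability. The features (pedestrian age, vehicle type, collision speed) follow the abstract, but the data and kernel width are made up.

```python
import numpy as np

def pnn_fatality_prob(x, X_train, y_train, sigma=0.5):
    """Probabilistic neural network: one Gaussian Parzen kernel per training
    pattern, summed within each class; the normalized class score gives a
    probability.  Class 1 = fatal, class 0 = non-fatal."""
    scores = {}
    for c in (0, 1):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores[c] = np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))  # class prior built in via counts
    return scores[1] / (scores[0] + scores[1])

# hypothetical standardized features: pedestrian age, vehicle type, collision speed
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
y = (0.5 * X[:, 0] + X[:, 2] + rng.normal(0, 0.5, 200) > 0.8).astype(int)
print("P(fatal) =", round(pnn_fatality_prob(np.array([1.0, 0.0, 1.5]), X, y), 3))
```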

A spatial analysis of Neyman-Scott rectangular pulses model using an approximate likelihood function (근사적 우도함수를 이용한 Neyman-Scott 구형펄스모형의 공간구조 분석)

  • Lee, Jeongjin; Kim, Yongku
    • Journal of the Korean Data and Information Science Society / v.27 no.5 / pp.1119-1131 / 2016
  • The Neyman-Scott Rectangular Pulses Model (NSRPM) is mainly used to construct hourly rainfall series. This model uses a modest number of parameters to represent the rainfall process and the underlying physical phenomena, such as the arrival of storms or rain cells. For the NSRPM, the method of moments has often been used because the distribution of rainfall intensity is difficult to know. Recently, an approximate likelihood function for the NSRPM has been introduced. In this paper, we propose a hierarchical model that imposes a spatial structure on the NSRPM parameters using the approximate likelihood function. The proposed method is applied to summer hourly precipitation data observed at 59 weather stations of the Korea Meteorological Administration from 1973 to 2011.
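
As background for the model above, the sketch below forward-simulates a single-site Neyman-Scott rectangular pulses process: Poisson storm arrivals, a Poisson number of rain cells per storm displaced by exponential lags, and rectangular pulses with exponential durations and intensities, aggregated into hourly totals. The parameter values are illustrative; the paper's approximate likelihood and spatial hierarchical extension are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_nsrpm(T_hours, lam=0.02, beta=0.1, eta=0.5, mu_c=4, mu_x=2.0):
    """Simulate hourly rainfall from a Neyman-Scott rectangular pulses model.

    Storm origins: Poisson process with rate lam (per hour).  Each storm spawns
    a Poisson(mu_c) number of rain cells displaced from the storm origin by
    Exp(beta) lags; each cell is a rectangular pulse with Exp(eta) duration and
    exponential intensity with mean mu_x (mm/h).  Parameters are illustrative.
    """
    n_storms = rng.poisson(lam * T_hours)
    storm_origins = rng.uniform(0, T_hours, n_storms)
    hourly = np.zeros(T_hours)
    for s in storm_origins:
        for _ in range(rng.poisson(mu_c)):
            start = s + rng.exponential(1.0 / beta)       # cell origin
            dur = rng.exponential(1.0 / eta)              # cell duration
            depth = rng.exponential(mu_x)                 # cell intensity (mm/h)
            # add the pulse's contribution to each hour it overlaps
            for h in range(int(start), min(int(start + dur) + 1, T_hours)):
                overlap = min(start + dur, h + 1) - max(start, h)
                if overlap > 0:
                    hourly[h] += depth * overlap
    return hourly

rain = simulate_nsrpm(24 * 30)          # one month of hourly totals
print("wet-hour fraction:", round((rain > 0.1).mean(), 2))
```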