• Title/Summary/Keyword: Bayes Inference


Bayes Inference for the Spatial Bilinear Time Series Model with Application to Epidemic Data

  • Lee, Sung-Duck;Kim, Duk-Ki
    • The Korean Journal of Applied Statistics / v.25 no.4 / pp.641-650 / 2012
  • Spatial time series data can be viewed as a set of time series collected simultaneously at a number of spatial locations. This paper studies Bayesian inference in a spatial bilinear time series model, using a Gibbs sampling algorithm to overcome the numerical difficulties that arise with classical estimation techniques for such models. For illustration, monthly counts of mumps cases reported to the Korea Centers for Disease Control and Prevention over the years 2001-2009 are analyzed.
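The Gibbs-sampling idea the abstract relies on can be sketched on a much simpler model. The following is a minimal illustration for a plain Gaussian AR(1), not the paper's spatial bilinear model; the conjugate priors and all numeric settings are assumptions chosen for the sketch.

```python
import numpy as np

def gibbs_ar1(y, n_iter=2000, burn=500, seed=0):
    """Gibbs sampler for y_t = phi * y_{t-1} + e_t, e_t ~ N(0, sigma2).

    Illustrative conjugate priors: phi ~ N(0, 10), sigma2 ~ Inv-Gamma(2, 1).
    Returns posterior draws of (phi, sigma2) after burn-in.
    """
    rng = np.random.default_rng(seed)
    x, z = y[:-1], y[1:]
    phi, sigma2 = 0.0, 1.0
    draws = []
    for it in range(n_iter):
        # Draw phi from its normal full conditional given sigma2.
        prec = x @ x / sigma2 + 1.0 / 10.0
        mean = (x @ z / sigma2) / prec
        phi = rng.normal(mean, 1.0 / np.sqrt(prec))
        # Draw sigma2 from its inverse-gamma full conditional given phi.
        resid = z - phi * x
        shape = 2.0 + len(resid) / 2.0
        scale = 1.0 + resid @ resid / 2.0
        sigma2 = scale / rng.gamma(shape)
        if it >= burn:
            draws.append((phi, sigma2))
    return np.array(draws)
```

Alternating draws from the two full conditionals sidesteps the intractable joint posterior, which is the same reason the paper resorts to Gibbs sampling for the spatial bilinear model.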

Inference for exponentiated Weibull distribution under constant stress partially accelerated life tests with multiple censored

  • Nassr, Said G.;Elharoun, Neema M.
    • Communications for Statistical Applications and Methods / v.26 no.2 / pp.131-148 / 2019
  • Constant stress partially accelerated life tests are studied under the exponentiated Weibull distribution. Based on multiple censoring, maximum likelihood estimators of the unknown distribution parameters and the acceleration factor are derived. Approximate confidence intervals for the unknown parameters and the acceleration factor are constructed for large samples. Because the Bayes estimates cannot be obtained in closed form, a Markov chain Monte Carlo method is applied, which also permits the construction of credible intervals for the associated parameters. Finally, for the constant stress partially accelerated life test scheme with the exponentiated Weibull distribution under multiple censoring, an illustrative example and simulation results are used to investigate the maximum likelihood and Bayesian estimates of the unknown parameters.
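As a sketch of how a multiply-censored likelihood is assembled, the snippet below writes the exponentiated Weibull log-likelihood with failures contributing density terms and censored units contributing survival terms. The parameterization (scale `lam`, shapes `k` and `theta`) and the function names are illustrative assumptions; the paper's acceleration-factor structure is omitted.

```python
import numpy as np

def expw_logpdf(t, lam, k, theta):
    """Log density of the exponentiated Weibull:
    f(t) = (theta*k/lam) * (t/lam)^(k-1) * exp(-(t/lam)^k) * (1 - exp(-(t/lam)^k))^(theta-1)."""
    z = (t / lam) ** k
    return (np.log(theta * k / lam) + (k - 1.0) * np.log(t / lam)
            - z + (theta - 1.0) * np.log(-np.expm1(-z)))

def expw_logsf(t, lam, k, theta):
    """Log survival: S(t) = 1 - (1 - exp(-(t/lam)^k))^theta."""
    return np.log1p(-(-np.expm1(-(t / lam) ** k)) ** theta)

def censored_loglik(t, delta, lam, k, theta):
    """Multiply-censored log-likelihood: delta = 1 for observed failures,
    delta = 0 for censored lifetimes."""
    return float(np.sum(np.where(delta == 1,
                                 expw_logpdf(t, lam, k, theta),
                                 expw_logsf(t, lam, k, theta))))
```

Maximizing `censored_loglik` numerically over (lam, k, theta) gives the MLEs discussed in the abstract; the same function is the data term of the posterior targeted by MCMC.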

Online condition assessment of high-speed trains based on Bayesian forecasting approach and time series analysis

  • Zhang, Lin-Hao;Wang, You-Wu;Ni, Yi-Qing;Lai, Siu-Kai
    • Smart Structures and Systems / v.21 no.5 / pp.705-713 / 2018
  • High-speed rail (HSR) has been in operation and development in many countries worldwide. The explosive growth of HSR has posed great challenges for operation safety and ride comfort. Among various technological demands on high-speed trains, vibration is an inevitable problem caused by rail/wheel imperfections, vehicle dynamics, and aerodynamic instability. Ride comfort is a key factor in evaluating the operational performance of high-speed trains. In this study, online monitoring data were acquired from an in-service high-speed train for condition assessment. The measured dynamic response signals at the floor level of a train cabin are processed by the Sperling operator, and the resulting ride comfort index sequence is used to identify the train's operating condition. In addition, a novel technique that incorporates salient features of Bayesian inference and time series analysis is proposed for outlier detection and change detection. The Bayesian forecasting approach enables the prediction of conditional probabilities. By integrating the Bayesian forecasting approach with time series analysis, one-step forecasting probability density functions (PDFs) can be obtained before proceeding to the next observation. Change detection is conducted by comparing the current model with an alternative model (whose mean value is shifted by a prescribed offset) to determine which one better fits the actual observation. When the comparison indicates that the alternative model performs better, a potential change is detected. If the current observation is a potential outlier or change, the Bayes factor and cumulative Bayes factor are derived for further identification. A significant change, if identified, implies a great alteration in train operation performance due to defects. Two illustrative cases are provided to demonstrate the performance of the proposed method for condition assessment of high-speed trains.
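The change-detection logic described above can be reduced to a toy Gaussian example: a cumulative Bayes factor compares an alternative model whose mean is shifted by a prescribed offset against the current model, one observation at a time. The models and parameters below are illustrative stand-ins, not the paper's train-monitoring models.

```python
import numpy as np

def normal_pdf(x, mu, s):
    """Density of N(mu, s^2)."""
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def cumulative_bayes_factor(obs, mu0, offset, s):
    """Running Bayes factor of the shifted alternative N(mu0 + offset, s^2)
    against the current model N(mu0, s^2); values above 1 flag a potential change."""
    bf = normal_pdf(obs, mu0 + offset, s) / normal_pdf(obs, mu0, s)
    return np.cumprod(bf)
```

Before a change, each per-observation factor tends to be below 1 and the cumulative product shrinks; once the data drift toward the shifted mean, the product grows and eventually crosses the detection threshold.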

Automated Prioritization of Construction Project Requirements using Machine Learning and Fuzzy Logic System

  • Hassan, Fahad ul;Le, Tuyen;Le, Chau;Shrestha, K. Joseph
    • International conference on construction engineering and project management / 2022.06a / pp.304-311 / 2022
  • Construction inspection is a crucial stage that ensures that all contractual requirements of a construction project are verified. The construction inspection capabilities of state highway agencies have been greatly affected by budget reductions. As a result, efficient inspection practices such as risk-based inspection are required to optimize the use of limited resources without compromising inspection quality. Automated prioritization of textual requirements according to their criticality would be extremely helpful, since contractual requirements are typically presented in unstructured natural language in voluminous text documents. The current study introduces a novel model for predicting the risk level of requirements using machine learning (ML) algorithms. The ML algorithms tested in this study included naïve Bayes, support vector machines, logistic regression, and random forest. The training data include sequences of requirement texts labeled with risk levels (very low, low, medium, high, very high) using a fuzzy logic system. The fuzzy model treats the three risk factors (severity, probability, detectability) as fuzzy input variables and implements fuzzy inference rules to determine the labels of requirements. The performance of the model was examined on a labeled dataset created by the fuzzy inference rules and three different membership functions. The developed requirement risk prediction model yielded a precision, recall, and F-score of 78.18%, 77.75%, and 75.82%, respectively. The proposed model is expected to provide construction inspectors with a means for the automated prioritization of voluminous requirements by their importance, thus helping to maximize the effectiveness of inspection activities under resource constraints.
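The fuzzy labeling step can be caricatured in a few lines: triangular membership functions over a crisp risk score pick one of the five labels. The aggregation rule (a simple average, with detectability inverted) and the breakpoints below are assumptions made for this sketch, not the paper's inference rules.

```python
def tri(x, a, b, c):
    """Triangular membership function supported on (a, c) with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_label(severity, probability, detectability):
    """Toy fuzzy labeler: all inputs in [0, 1]; higher detectability lowers risk.
    Returns the label whose membership at the crisp score is largest."""
    score = (severity + probability + (1.0 - detectability)) / 3.0
    levels = {
        "very low":  (-0.25, 0.00, 0.25),
        "low":       (0.00, 0.25, 0.50),
        "medium":    (0.25, 0.50, 0.75),
        "high":      (0.50, 0.75, 1.00),
        "very high": (0.75, 1.00, 1.25),
    }
    return max(levels, key=lambda name: tri(score, *levels[name]))
```

Labels produced this way become the training targets for the supervised classifiers (naïve Bayes, SVM, etc.) mentioned in the abstract.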


Learning Similarity with Probabilistic Latent Semantic Analysis for Image Retrieval

  • Li, Xiong;Lv, Qi;Huang, Wenting
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.4 / pp.1424-1440 / 2015
  • It is a challenging problem to find the intended images among a large number of candidates. Content based image retrieval (CBIR) is the most promising way to tackle this problem, where the most important topic is measuring the similarity of images so as to cover variance in shape, color, pose, illumination, etc. While previous works have made significant progress, their ability to adapt to a given dataset is not fully explored. In this paper, we propose a similarity learning method on the basis of a probabilistic generative model, i.e., probabilistic latent semantic analysis (PLSA). It first derives a Fisher kernel, a function over the parameters and variables, based on PLSA. Then, the parameters are determined by simultaneously maximizing the log likelihood function of PLSA and the retrieval performance over the training dataset. The main advantages of this work are twofold: (1) deriving a similarity measure based on PLSA that fully exploits the data distribution and Bayes inference; (2) learning model parameters by simultaneously maximizing the fit of the model to the data and the retrieval performance. The proposed method (PLSA-FK) is empirically evaluated on three datasets, and the results exhibit promising performance.
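A Fisher kernel in its simplest form is the inner product of Fisher score vectors, i.e., gradients of the log-likelihood with respect to the model parameters. The sketch below uses a univariate Gaussian instead of PLSA purely to keep the idea self-contained; the PLSA-FK construction replaces this Gaussian score with the PLSA score.

```python
import numpy as np

def fisher_score(x, mu, sigma2):
    """Gradient of log N(x; mu, sigma2) with respect to (mu, sigma2)."""
    return np.array([(x - mu) / sigma2,
                     ((x - mu) ** 2 - sigma2) / (2.0 * sigma2 ** 2)])

def fisher_kernel(x, y, mu, sigma2):
    """Unnormalized Fisher kernel: inner product of the two score vectors."""
    return float(fisher_score(x, mu, sigma2) @ fisher_score(y, mu, sigma2))
```

Two points are similar under this kernel when they pull the model parameters in the same direction, which is how a generative model's data distribution is turned into a similarity measure.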

Bayes Inference for the Spatial Time Series Model

  • Lee, Sung-Duck;Kim, In-Kyu;Kim, Duk-Ki;Chung, Ae-Ran
    • Communications for Statistical Applications and Methods / v.16 no.1 / pp.31-40 / 2009
  • Spatial time series data can be viewed as a set of time series collected simultaneously at a number of spatial locations. In this paper, we estimate the parameters of a space-time autoregressive moving average (STARMA) process by the method of Gibbs sampling. Finally, we apply this method to a set of U.S. mumps data over a 12-state region.

Bayesian parameter estimation and prediction in NHPP software reliability growth model

  • Chang, Inhong;Jung, Deokhwan;Lee, Seungwoo;Song, Kwangyoon
    • Journal of the Korean Data and Information Science Society / v.24 no.4 / pp.755-762 / 2013
  • In this paper, we consider the NHPP software reliability model and deal with maximum likelihood estimation and Bayesian estimation with a conjugate prior for parameter inference in the mean value function of the Goel-Okumoto model (1979). Parameter estimates for the proposed model are presented via the MLE and the Bayes estimator on a data set. We compare the predicted number of faults with the actual data set using the proposed mean value function.
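For reference, the Goel-Okumoto mean value function that both the MLE and the Bayes estimator target is m(t) = a(1 - e^(-bt)), where a is the expected total number of faults and b the detection rate. A minimal encoding (names are illustrative):

```python
import numpy as np

def goel_okumoto_mean(t, a, b):
    """Goel-Okumoto NHPP mean value function m(t) = a * (1 - exp(-b * t)):
    expected cumulative number of faults detected by time t."""
    return a * (1.0 - np.exp(-b * t))
```

Since m(0) = 0 and m(t) approaches a as t grows, a is the eventual fault count; the predicted number of faults in an interval is m(t2) - m(t1).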

A tutorial on generalizing the default Bayesian t-test via posterior sampling and encompassing priors

  • Faulkenberry, Thomas J.
    • Communications for Statistical Applications and Methods / v.26 no.2 / pp.217-238 / 2019
  • With the advent of so-called "default" Bayesian hypothesis tests, scientists in applied fields have gained access to a powerful and principled method for testing hypotheses. However, such default tests usually come with a compromise, requiring the analyst to accept a one-size-fits-all approach to hypothesis testing. Further, such tests may not have the flexibility to test problems the scientist really cares about. In this tutorial, I demonstrate a flexible approach to generalizing one specific default test (the JZS t-test) (Rouder et al., Psychonomic Bulletin & Review, 16, 225-237, 2009) that is becoming increasingly popular in the social and behavioral sciences. The approach uses two results, the Savage-Dickey density ratio (Dickey and Lientz, 1980) and the technique of encompassing priors (Klugkist et al., Statistica Neerlandica, 59, 57-69, 2005) in combination with MCMC sampling via an easy-to-use probabilistic modeling package for R called Greta. Through a comprehensive mathematical description of the techniques as well as illustrative examples, the reader is presented with a general, flexible workflow that can be extended to solve problems relevant to his or her own work.
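The Savage-Dickey device mentioned above estimates the Bayes factor for a point null as the ratio of posterior to prior density at the null value. Below is a sketch that works from MCMC draws using a hand-rolled Gaussian kernel density estimate; the bandwidth and names are illustrative (as an example of the prior term, a standard Cauchy prior has density 1/pi at zero).

```python
import numpy as np

def savage_dickey_bf01(posterior_draws, prior_pdf_at_0, bandwidth=0.1):
    """Savage-Dickey estimate of BF01 = p(delta = 0 | data) / p(delta = 0).

    The posterior density at zero is estimated with a Gaussian kernel
    over MCMC draws of the effect-size parameter delta."""
    z = posterior_draws / bandwidth
    post_at_0 = np.mean(np.exp(-0.5 * z ** 2)) / (bandwidth * np.sqrt(2.0 * np.pi))
    return post_at_0 / prior_pdf_at_0
```

When the posterior concentrates away from zero, the density at zero collapses and BF01 becomes small (evidence against the null); a posterior piled up near zero yields BF01 above one.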

Bayesian forecasting approach for structure response prediction and load effect separation of a revolving auditorium

  • Ma, Zhi;Yun, Chung-Bang;Shen, Yan-Bin;Yu, Feng;Wan, Hua-Ping;Luo, Yao-Zhi
    • Smart Structures and Systems / v.24 no.4 / pp.507-524 / 2019
  • A Bayesian dynamic linear model (BDLM) is presented for a data-driven analysis for response prediction and load effect separation of a revolving auditorium structure, where the main loads are self-weight and dead loads, temperature load, and audience load. Analyses are carried out based on long-term monitoring data for static strains on several key members of the structure. Three improvements are introduced to the ordinary regression BDLM: a classificatory regression term to address the temporary audience load effect, improved inference that allows the variance of the observation noise to be updated continuously, and component discount factors for effective load effect separation. The effects of these improvements are evaluated in terms of the root mean square errors, standard deviations, and 95% confidence intervals of the predictions. Bayes factors are used for evaluating the probability distributions of the predictions, which are essential to structural condition assessments such as outlier identification and reliability analysis. The performance of the present BDLM has been successfully verified based on simulated data and real data obtained from the structural health monitoring system installed on the revolving structure.

A study of Bayesian inference on auto insurance credibility application

  • Kim, Myung Joon;Kim, Yeong-Hwa
    • Journal of the Korean Data and Information Science Society / v.24 no.4 / pp.689-699 / 2013
  • This paper studies a partial credibility application method assuming empirical or noninformative prior information in the auto insurance business, where intensive rating segmentation has expanded because of premium competition. Expanding rating-factor segmentation increases the number of pricing cells, and as a result the number of cells requiring partial credibility application increases correspondingly. This study suggests a more accurate estimation method based on the Bayesian framework. By using empirically well-known or noninformative information, inducing the proper posterior distribution, and applying to the credibility method the Bayes estimate that minimizes the error loss, we show the advantage of Bayesian inference in comparison with current approaches. The comparison is implemented against the square-root rule, a widely accepted method in the insurance business. The level of convergence toward the true risk is compared among the various approaches. This study introduces an alternative way of reducing the error for auto insurance business fields that need such methods because of increasing segmentation.
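The square-root rule used as the benchmark above assigns a cell partial credibility Z = sqrt(n / n_full), capped at one, where n_full is the claim count required for full credibility; the credibility-weighted estimate then blends the cell's own experience with the overall (or prior) mean. A minimal sketch (names are illustrative):

```python
import math

def square_root_credibility(n, n_full):
    """Square-root rule: partial credibility Z = sqrt(n / n_full), capped at 1."""
    return min(1.0, math.sqrt(n / n_full))

def blended_estimate(cell_mean, overall_mean, n, n_full):
    """Credibility-weighted estimate: Z * cell experience + (1 - Z) * overall mean."""
    z = square_root_credibility(n, n_full)
    return z * cell_mean + (1.0 - z) * overall_mean
```

The Bayesian alternative studied in the paper replaces this ad hoc weight with a posterior mean under an empirical or noninformative prior, chosen to minimize the expected error loss.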