• Title/Summary/Keyword: Bayesian maximum likelihood algorithm

The Bayesian Approach of Software Optimal Release Time Based on Log Poisson Execution Time Model (포아송 실행시간 모형에 의존한 소프트웨어 최적방출시기에 대한 베이지안 접근 방법에 대한 연구)

  • Kim, Hee-Cheul; Shin, Hyun-Cheul
    • Journal of the Korea Society of Computer and Information, v.14 no.7, pp.1-8, 2009
  • This paper studies the decision problem of determining an optimal release policy for a software system that is tested in the development phase and then transferred to the user. The generally accepted optimal release policy minimizes the total average cost of software development and maintenance subject to a software reliability requirement. Bayesian parametric inference for the log Poisson execution time model is carried out with Markov chain Monte Carlo tools (Gibbs sampling and the Metropolis algorithm). A numerical example based on the T1 data illustrates the estimation of the optimal software release time by both maximum likelihood estimation and Bayesian parametric estimation.
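
The MCMC machinery the abstract mentions can be illustrated with a minimal random-walk Metropolis sampler. This is a generic sketch, not the paper's log Poisson execution time model: it targets the posterior of a Poisson rate under a Gamma(a, b) prior, and the data and tuning settings are invented.

```python
import math
import random

def metropolis_poisson_rate(data, a=1.0, b=1.0, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis for the rate of a Poisson likelihood
    with a Gamma(a, b) prior (log-posterior up to a constant)."""
    rng = random.Random(seed)
    s, n = sum(data), len(data)

    def log_post(lam):
        if lam <= 0:
            return float("-inf")
        # Gamma prior + Poisson likelihood, dropping additive constants
        return (a - 1 + s) * math.log(lam) - (b + n) * lam

    lam, samples = 1.0, []
    for _ in range(n_iter):
        prop = lam + rng.gauss(0.0, step)            # symmetric proposal
        delta = log_post(prop) - log_post(lam)
        if rng.random() < math.exp(min(0.0, delta)):  # accept/reject
            lam = prop
        samples.append(lam)
    return samples

counts = [3, 5, 4, 6, 2, 4, 5, 3]                # hypothetical failure counts
draws = metropolis_poisson_rate(counts)[1000:]   # drop burn-in
print(sum(draws) / len(draws))                   # near (1 + 32) / (1 + 8) ≈ 3.67
```

Because the Gamma prior is conjugate here, the posterior is Gamma(a + Σx, b + n) with a known mean, so the sampler can be sanity-checked against the closed form before moving to models, like the paper's, with no closed-form posterior.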

RELIABILITY ANALYSIS FOR THE TWO-PARAMETER PARETO DISTRIBUTION UNDER RECORD VALUES

  • Wang, Liang; Shi, Yimin; Chang, Ping
    • Journal of Applied Mathematics & Informatics, v.29 no.5_6, pp.1435-1451, 2011
  • In this paper, estimation of the parameters as well as the survival and hazard functions of the two-parameter Pareto distribution is presented using Bayesian and non-Bayesian approaches under upper record values. Maximum likelihood estimates (MLEs) and interval estimates are derived for the parameters. Bayes estimators of the reliability performances are obtained under symmetric (squared error) and asymmetric (Linex and general entropy (GE)) losses, when the two parameters have discrete and continuous priors, respectively. Finally, two numerical examples, with a real data set and with simulated data, are presented to illustrate the proposed method. An algorithm is introduced to generate record data; a simulation study is then performed and the different estimates are compared.
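
For the ordinary i.i.d. case (not the upper-record-value setting the paper treats, which requires a different likelihood), the MLEs of the two-parameter Pareto distribution have a well-known closed form; the sketch below uses invented data:

```python
import math

def pareto_mle(x):
    """Closed-form MLEs for a two-parameter Pareto(shape, scale)
    distribution from an i.i.d. sample: the scale MLE is the sample
    minimum, the shape MLE is n over the summed log-ratios."""
    scale = min(x)
    shape = len(x) / sum(math.log(v / scale) for v in x)
    return shape, scale

sample = [1.2, 1.5, 1.1, 2.8, 1.9, 1.3, 4.1, 1.6]   # illustrative data
alpha, theta = pareto_mle(sample)
print(alpha, theta)   # theta = 1.1, the sample minimum
```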

Bayesian Hierarchical Model using Gibbs Sampler Method: Field Mice Example (깁스 표본 기법을 이용한 베이지안 계층적 모형: 야생쥐의 예)

  • Song, Jae-Kee; Lee, Gun-Hee; Ha, Il-Do
    • Journal of the Korean Data and Information Science Society, v.7 no.2, pp.247-256, 1996
  • In this paper, we apply a Bayesian hierarchical model to analyze the field mice example introduced by Dempster et al. (1981). For this example, we use the Gibbs sampler to obtain the posterior mean and compare it with the LSE (least squares estimator) and the MLR (maximum likelihood estimator with random effects) computed via the EM algorithm.
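
A toy version of Gibbs sampling: for a normal model with noninformative priors, the full conditionals of the mean and variance are available in closed form, and alternating draws from them approximates the joint posterior. This is a minimal stand-in for the paper's hierarchical sampler, and the data are hypothetical.

```python
import math
import random

def gibbs_normal(data, n_iter=4000, seed=1):
    """Gibbs sampler for (mu, sigma^2) of a normal model under the
    standard noninformative prior p(mu, sigma^2) ∝ 1/sigma^2."""
    rng = random.Random(seed)
    n, xbar = len(data), sum(data) / len(data)
    mu, sig2 = xbar, 1.0
    mus = []
    for _ in range(n_iter):
        # mu | sigma^2, data ~ N(xbar, sigma^2 / n)
        mu = rng.gauss(xbar, math.sqrt(sig2 / n))
        # 1/sigma^2 | mu, data ~ Gamma(n/2, rate = sum((x - mu)^2) / 2)
        rate = sum((x - mu) ** 2 for x in data) / 2
        sig2 = 1.0 / rng.gammavariate(n / 2, 1.0 / rate)
        mus.append(mu)
    return mus

weights = [54.0, 60.2, 57.1, 52.8, 61.5, 58.3]   # hypothetical measurements
post_mu = gibbs_normal(weights)
print(sum(post_mu) / len(post_mu))   # posterior mean near the sample mean
```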

Genotype-Calling System for Somatic Mutation Discovery in Cancer Genome Sequence (암 유전자 배열에서 체세포 돌연변이 발견을 위한 유전자형 조사 시스템)

  • Park, Su-Young; Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering, v.17 no.12, pp.3009-3015, 2013
  • Next-generation sequencing (NGS) has enabled whole-genome and transcriptome single nucleotide variant (SNV) discovery in cancer, and the most fundamental step is determining an individual's genotype from multiple aligned short reads at a position. A Bayesian algorithm estimates the parameters using posterior genotype probabilities, while the EM algorithm estimates them by maximum likelihood from the observed data. Here, we propose a novel genotype-calling system and analyze the effect of sample size (S = 50, 100, and 500) on the posterior estimates of the sequencing error rate, somatic mutation status, and genotype probability. The results show that, even for a small sample size of 50, the estimates from the Bayesian algorithm approach the true parameter values more closely than those from the EM algorithm.
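
The Bayesian side of the comparison amounts to applying Bayes' rule at each position: combine genotype priors with the likelihood of the observed base calls under a per-base error rate. The sketch below uses a simple symmetric error model with made-up priors and error rate, not the paper's calibration:

```python
def genotype_posterior(reads, alt='B', err=0.01, priors=(0.45, 0.10, 0.45)):
    """Posterior probabilities of genotypes (AA, AB, BB) from aligned
    base calls at one position, assuming a symmetric per-base error
    model (all numerical settings are illustrative)."""
    def lik(base, p_alt):
        # probability of observing `base` given the alt-allele fraction
        p = p_alt * (1 - err) + (1 - p_alt) * err
        return p if base == alt else 1 - p

    post = []
    for prior, p_alt in zip(priors, (0.0, 0.5, 1.0)):
        l = prior
        for b in reads:
            l *= lik(b, p_alt)
        post.append(l)
    z = sum(post)                        # normalize over the 3 genotypes
    return [p / z for p in post]

# eight reads, half reference and half alternate: a clear heterozygote
print(genotype_posterior("ABABBABA"))   # middle entry (AB) dominates
```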

Numerical Bayesian updating of prior distributions for concrete strength properties considering conformity control

  • Caspeele, Robby; Taerwe, Luc
    • Advances in Concrete Construction, v.1 no.1, pp.85-102, 2013
  • Prior concrete strength distributions can be updated using direct information from test results as well as indirect information from conformity control. Due to the filtering effect of conformity control, the distribution of the material property in the accepted inspected lots has a lower fraction defective than the distribution of the entire production (before or without inspection). A methodology is presented to quantify this influence in a Bayesian framework, based on prior knowledge of the hyperparameters of concrete strength distributions. An algorithm is presented to update prior distributions through numerical integration, taking into account the operating characteristic of the applied conformity criteria, calculated by Monte Carlo simulation. Different examples show how to derive suitable hyperparameters for incoming strength distributions of concrete offered for conformity assessment, using updated available prior information, maximum-likelihood estimators, or a bootstrap procedure. Furthermore, the updating procedure based on both direct and indirect information obtained by conformity assessment is illustrated and used to quantify the filtering effect of a specific set of conformity criteria on concrete strength distributions.
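
The core idea of updating a strength prior with test results can be shown in the simplest conjugate case: a normal prior on the mean strength with a known measurement standard deviation, which has a closed-form posterior (the paper handles the general case, including the conformity-control filtering, by numerical integration). All numbers below are illustrative:

```python
def update_normal_mean(m0, s0, sigma, data):
    """Conjugate update of a normal prior N(m0, s0^2) on the mean
    strength, with known measurement s.d. sigma: precisions add, and
    the posterior mean is a precision-weighted average."""
    n = len(data)
    xbar = sum(data) / n
    prec = 1 / s0**2 + n / sigma**2                    # posterior precision
    m_post = (m0 / s0**2 + n * xbar / sigma**2) / prec
    s_post = prec ** -0.5
    return m_post, s_post

# prior: mean strength 40 MPa (s.d. 5 MPa); five test results, sigma = 3 MPa
m, s = update_normal_mean(40.0, 5.0, 3.0, [44.1, 42.5, 45.0, 43.2, 44.6])
print(m, s)   # posterior mean pulled from 40 toward the test data
```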

A Review on the Analysis of Life Data Based on Bayesian Method: 2000~2016 (베이지안 기법에 기반한 수명자료 분석에 관한 문헌 연구: 2000~2016)

  • Won, Dong-Yeon; Lim, Jun Hyoung; Sim, Hyun Su; Sung, Si-il; Lim, Heonsang; Kim, Yong Soo
    • Journal of Applied Reliability, v.17 no.3, pp.213-223, 2017
  • Purpose: The purpose of this study is to organize the literature on life data analysis based on the Bayesian method quantitatively and present it in tables. Methods: The Bayesian method produces more accurate estimates than traditional methods for small sample sizes, but it requires a specific algorithm and prior information. The criteria for classifying the literature were based on these three characteristics of the Bayesian method. Results: Many studies compare the Bayesian method with maximum likelihood estimation (MLE), with sample sizes greater than 10 and not more than 25. A variety of probability distributions were found in addition to the Weibull distribution commonly used in life data analysis, and MCMC and Lindley's approximation were used about equally. Finally, Gamma, uniform, Jeffreys, and extended Jeffreys priors were used about equally as prior information. Conclusion: To verify the advantage of the Bayesian method over other methods at small sample sizes, studies with fewer than 10 samples should be carried out. Comparative studies across various distributions are also required to provide the necessary guidelines.

Statistical Model for Emotional Video Shot Characterization (비디오 셧의 감정 관련 특징에 대한 통계적 모델링)

  • 박현재; 강행봉
    • The Journal of Korean Institute of Communications and Information Sciences, v.28 no.12C, pp.1200-1208, 2003
  • Affective computing plays an important role in intelligent human-computer interaction (HCI). To detect emotional events, it is desirable to construct a computational model that extracts emotion-related features from video. In this paper, we propose a statistical model based on the probability distribution of low-level features in video shots. The proposed method extracts low-level features from video shots and then fits a GMM (Gaussian Mixture Model) to them to detect emotional shots. As low-level features, we use color, camera motion, and the sequence of shot lengths. The features are modeled as a GMM using the EM (Expectation-Maximization) algorithm, and the relations between time and emotion are estimated by MLE (Maximum Likelihood Estimation). Finally, the two statistical models are combined in a Bayesian framework to detect emotional events in video.
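
The GMM-fitting step can be sketched with a bare-bones EM loop for a two-component one-dimensional mixture; the paper fits mixtures over multi-dimensional shot features, so this is only a minimal illustration with invented data:

```python
import math

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture: E-step computes
    responsibilities, M-step re-estimates weights, means, variances."""
    mu = [min(x), max(x)]          # crude initialization from the data range
    var = [1.0, 1.0]
    w = [0.5, 0.5]

    def pdf(v, m, s2):
        return math.exp(-(v - m) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)

    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        r = []
        for v in x:
            p = [w[k] * pdf(v, mu[k], var[k]) for k in (0, 1)]
            z = sum(p)
            r.append([pk / z for pk in p])
        # M-step: weighted re-estimation of the parameters
        for k in (0, 1):
            nk = sum(ri[k] for ri in r)
            w[k] = nk / len(x)
            mu[k] = sum(ri[k] * v for ri, v in zip(r, x)) / nk
            var[k] = sum(ri[k] * (v - mu[k]) ** 2 for ri, v in zip(r, x)) / nk
            var[k] = max(var[k], 1e-6)     # guard against variance collapse
    return w, mu, var

data = [1.0, 1.2, 0.8, 1.1, 5.0, 5.3, 4.8, 5.1]   # two obvious clusters
w, mu, var = em_gmm_1d(data)
print(sorted(mu))   # component means near 1.0 and 5.0
```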

Nonignorable Nonresponse Imputation and Rotation Group Bias Estimation on the Rotation Sample Survey (무시할 수 없는 무응답을 가지고 있는 교체표본조사에서의 무응답 대체와 교체그룹 편향 추정)

  • Choi, Bo-Seung; Kim, Dae-Young; Kim, Kee-Whan; Park, You-Sung
    • The Korean Journal of Applied Statistics, v.21 no.3, pp.361-375, 2008
  • We propose methods to impute item nonresponse in a 4-8-4 rotation sample survey. We consider a nonignorable nonresponse mechanism, which can arise when a survey asks sensitive questions (e.g., income, labor force status). We use a model-based imputation method with a Bayesian approach to avoid the boundary-solution problem. We also estimate an interview-time bias using the imputed data and calculate cell expectations and marginal probabilities at a fixed time after removing the estimated bias. We compare the mean squared errors and biases of the maximum likelihood and Bayesian methods in simulation studies.

Model selection algorithm in Gaussian process regression for computer experiments

  • Lee, Youngsaeng; Park, Jeong-Soo
    • Communications for Statistical Applications and Methods, v.24 no.4, pp.383-396, 2017
  • The model in our approach assumes that computer responses are a realization of a Gaussian process superimposed on a regression model, called a Gaussian process regression model (GPRM). Selecting a subset of variables, or building a good reduced model, is an important step in classical regression for identifying the variables that influence the response and for further analysis such as prediction or classification. One reason to select variables for prediction is to prevent over-fitting or under-fitting of the data. The same reasoning and approach apply to the GPRM; however, only a few works on variable selection in GPRMs exist. In this paper, we propose a new algorithm to build a good prediction model among candidate GPRMs. It is a follow-up to the algorithm that includes the Welch method suggested by previous researchers. The proposed algorithm selects non-zero regression coefficients (β's) using forward and backward methods along with a Lasso-guided approach. During this process, the covariance parameters (θ's) pre-selected by the Welch algorithm are held fixed. We illustrate the superiority of our proposed models over the Welch method and non-selection models using four test functions and one real data example. Future extensions are also discussed.

Analysis of Missing Data Using an Empirical Bayesian Method (경험적 베이지안 방법을 이용한 결측자료 연구)

  • Yoon, Yong Hwa; Choi, Boseung
    • The Korean Journal of Applied Statistics, v.27 no.6, pp.1003-1016, 2014
  • Proper imputation of missing data is an important step toward reliable analysis of survey data. This paper deals with both a model-based imputation method and a model estimation method. We use a Bayesian method to solve the boundary-solution problem that arises when the maximum likelihood estimation method is applied. We also address the missing-mechanism model selection problem using forecasting results and a comparison of model accuracies. We use the MWPE (modified within-precinct error; Bautista et al., 2007) to measure prediction accuracy. We apply the proposed ML and Bayesian methods to exit poll data from the 2012 Korean presidential election. The analysis shows that the missing-at-random mechanism gave better prediction results than the missing-not-at-random mechanism.
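
The boundary-solution problem mentioned here is easy to see in the simplest setting: the MLE of a proportion is 0 when no successes are observed, while a Bayesian posterior mean under a Beta prior stays in the interior of the parameter space. This toy example is not the paper's imputation model:

```python
def proportion_estimates(successes, n, a=1.0, b=1.0):
    """ML vs. Bayesian estimates of a proportion. With a Beta(a, b)
    prior, the posterior mean (successes + a) / (n + a + b) avoids the
    boundary value 0 that the MLE takes when successes = 0."""
    mle = successes / n
    posterior_mean = (successes + a) / (n + a + b)
    return mle, posterior_mean

# 0 successes in 20 trials: the MLE sits on the boundary, the Bayes
# estimate (uniform prior) does not
print(proportion_estimates(0, 20))   # (0.0, 0.045454...)
```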