• Title/Summary/Keyword: log-likelihood function


Parameter Estimation of the Two-Parameter Exponential Distribution under Three Step-Stress Accelerated Life Test

  • Moon, Gyoung-Ae;Kim, In-Ho
    • Journal of the Korean Data and Information Science Society / v.17 no.4 / pp.1375-1386 / 2006
  • In life testing, the lifetimes of test units under usual operating conditions are often so long that testing at those conditions is impractical, so test units are subjected to high stress levels to obtain failure information more quickly. In this paper, inference for the parameters of a three step-stress accelerated life test is studied. The two-parameter exponential distribution with a failure rate that is a log-quadratic function of stress, together with the tampered failure rate model, is considered. We obtain the maximum likelihood estimators of the model parameters and their confidence regions. A numerical example is given to illustrate the proposed inferential procedures.
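
As background for the maximum likelihood machinery used above, the sketch below (illustrative only, not the paper's three step-stress model) computes the closed-form MLEs of the two-parameter exponential distribution from a single complete sample: the location is estimated by the sample minimum and the scale by the mean excess over it.

```python
import numpy as np

def two_param_exponential_mle(x):
    """Closed-form MLEs for the two-parameter exponential distribution.

    Model: f(x; mu, theta) = (1/theta) * exp(-(x - mu)/theta), x >= mu.
    For a complete (uncensored) sample, mu_hat = min(x) and
    theta_hat = mean(x) - min(x) maximize the likelihood.
    """
    x = np.asarray(x, dtype=float)
    mu_hat = x.min()
    theta_hat = x.mean() - mu_hat
    return mu_hat, theta_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated lifetimes with location 2.0 and scale 5.0 (illustrative values).
    sample = 2.0 + rng.exponential(scale=5.0, size=200)
    print(two_param_exponential_mle(sample))
```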


A Study for NHPP software Reliability Growth Model based on polynomial hazard function (다항 위험함수에 근거한 NHPP 소프트웨어 신뢰성장모형에 관한 연구)

  • Kim, Hee Cheul
    • Journal of Korea Society of Digital Industry and Information Management / v.7 no.4 / pp.7-14 / 2011
  • Infinite-failure NHPP models presented in the literature exhibit a constant, monotonically increasing, or monotonically decreasing failure occurrence rate per fault (hazard function). The infinite non-homogeneous Poisson process reflects the possibility of introducing new faults when correcting or modifying the software. In this paper, a polynomial hazard function is proposed, which can be applied efficiently to software reliability. The parameters are estimated by the maximum likelihood method together with the bisection method, and model selection is based on the mean square error and the coefficient of determination. In a numerical example, the log-power time model, an existing model in this area, and the polynomial hazard function model are compared using failure interval times. Because the polynomial hazard function model is more efficient in terms of reliability, it can be used in this area as an alternative to the existing model.
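
The abstract mentions estimating NHPP parameters with the maximum likelihood estimator and the bisection method. The sketch below is not the paper's polynomial hazard model; it illustrates the same recipe on a simpler power-law intensity λ(t) = αβt^(β-1), where the profile score equation for β can be solved by bisection.

```python
import numpy as np

def fit_power_law_nhpp(times, T):
    """MLE for an NHPP with power-law intensity lambda(t) = alpha*beta*t^(beta-1),
    observed on (0, T], using bisection on the profile score for beta.

    Profile score: g(beta) = n/beta - sum(log(T/t_i)), which is strictly
    decreasing in beta, so bisection is guaranteed to find the root.
    """
    t = np.sort(np.asarray(times, dtype=float))
    n = len(t)
    s = np.sum(np.log(T / t))

    def score(beta):
        return n / beta - s

    lo, hi = 1e-6, 1e6             # bracket containing the root
    for _ in range(200):           # bisection iterations
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    beta_hat = 0.5 * (lo + hi)
    alpha_hat = n / T ** beta_hat  # from the profile likelihood
    return alpha_hat, beta_hat

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Illustrative failure times on (0, 100] (synthetic, not software failure data).
    times = np.sort(rng.uniform(0.0, 100.0, size=30))
    print(fit_power_law_nhpp(times, T=100.0))
```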

Phase Doppler Measurements and Probability Density Functions in Liquid Fuel Spray (연료분무의 위상도플러 측정과 확률밀도함수의 도출)

  • 구자예
    • Transactions of the Korean Society of Mechanical Engineers / v.18 no.4 / pp.1039-1049 / 1994
  • The intermittent and transient fuel spray has been investigated through simultaneous measurement of droplet sizes and velocities using a Phase/Doppler Particle Analyzer (PDPA). Measurements were made on the spray axis and at the edge of the spray near the nozzle at various gas-to-liquid density ratios (ρg/ρl), ranging from those found in free atmospheric jets to conditions typical of diesel engines. Probability density distributions of droplet size and velocity were obtained from the raw data, and mathematical probability density functions that fit the experimental distributions were extracted using the principle of maximum likelihood. In the near-nozzle region on the spray axis, droplet sizes ranged from the lower limit of the measurement system to the order of the nozzle diameter for all ρg/ρl, and droplet sizes tended to be small at the spray edge. At the edge of the spray, the average droplet velocity peaked during needle opening and needle closing. The rms intensity increased greatly as the radial distance from the nozzle increased. The probability density function that best fits the physical breakage process, such as the breakup of fuel drops, is the exponentially decreasing log-hyperbolic function with four parameters.
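
The droplet-size fitting above relies on the principle of maximum likelihood. The four-parameter log-hyperbolic family is not available in common Python libraries, so the sketch below substitutes a lognormal droplet-size distribution purely to illustrate the maximum-likelihood fitting step; it is not the paper's model, and the diameters are synthetic.

```python
import numpy as np
from scipy import stats

# Illustrative droplet diameters in micrometres (synthetic data, not from the paper).
rng = np.random.default_rng(2)
diameters = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=500)

# Maximum likelihood fit of a lognormal size distribution (location fixed at 0).
shape, loc, scale = stats.lognorm.fit(diameters, floc=0.0)

# Report the fitted parameters and the maximized log-likelihood.
loglik = np.sum(stats.lognorm.logpdf(diameters, shape, loc=loc, scale=scale))
print(f"sigma = {shape:.3f}, median = {scale:.2f} um, log-likelihood = {loglik:.1f}")
```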

The Comparative Study of NHPP Software Reliability Model Based on Log and Exponential Power Intensity Function (로그 및 지수파우어 강도함수를 이용한 NHPP 소프트웨어 무한고장 신뢰도 모형에 관한 비교연구)

  • Yang, Tae-Jin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.8 no.6 / pp.445-452 / 2015
  • Software reliability is an important issue in the software development process, and software process improvement helps produce a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit a constant, monotonically increasing, or monotonically decreasing failure occurrence rate per fault. This paper proposes reliability models with log and power intensity functions (log-linear, log-power, and exponential-power), which can be applied efficiently to software reliability. The parameters are estimated by the maximum likelihood method together with the bisection method, and model selection is based on the mean square error (MSE) and the coefficient of determination ($R^2$). The proposed log and power intensity functions are analyzed and compared using a real failure data set, and the Laplace trend test is employed to assure the reliability of the data. The log-type model is shown to be efficient in terms of reliability (the coefficient of determination is 70% or more), so it can be used as an alternative to the conventional models in this field. Software developers should consider the growth model, using prior knowledge of the software, to help identify failure modes.
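
Since the study applies the Laplace trend test to the failure data before fitting the models, the sketch below computes the standard failure-truncated form of that statistic from inter-failure times; negative values suggest reliability growth, positive values deterioration. The data are illustrative, not the paper's.

```python
import numpy as np
from scipy import stats

def laplace_trend_test(interfailure_times):
    """Laplace trend test from inter-failure times (failure-truncated form:
    observation ends at the n-th failure).

    U = (mean of t_1..t_{n-1} - t_n/2) / (t_n * sqrt(1/(12*(n-1)))),
    where t_i are cumulative failure times. U < 0 suggests reliability growth,
    U > 0 suggests deterioration; U is approximately standard normal under no trend.
    """
    t = np.cumsum(np.asarray(interfailure_times, dtype=float))
    n = len(t)
    u = (np.mean(t[:-1]) - t[-1] / 2.0) / (t[-1] * np.sqrt(1.0 / (12.0 * (n - 1))))
    p_value = 2.0 * stats.norm.sf(abs(u))  # two-sided normal approximation
    return u, p_value

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Illustrative inter-failure times that lengthen over time (reliability growth).
    gaps = rng.exponential(scale=np.linspace(1.0, 5.0, 40))
    print(laplace_trend_test(gaps))
```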

Selection of Appropriate Probability Distribution Types for Ten Days Evaporation Data (순별증발량 자료의 적정 확률분포형 선정)

  • 김선주;박재흥;강상진
    • Proceedings of the Korean Society of Agricultural Engineers Conference / 1998.10a / pp.338-343 / 1998
  • This study selects appropriate probability distributions for ten-day evaporation data, in order to represent the statistical characteristics of real evaporation data in Korea. Nine probability distribution functions were assumed as candidate distributions for ten-day evaporation data from 20 stations over a duration of 20 years. The parameters of each probability distribution function were estimated by the maximum likelihood approach, and appropriate probability distributions were selected by goodness-of-fit tests. The Log-Pearson type III model was selected as an appropriate probability distribution for ten-day evaporation data in Korea.
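
The study estimates distribution parameters by maximum likelihood and settles on the Log-Pearson type III model. A minimal sketch of that final step, fitting a Pearson type III distribution to log-transformed values by maximum likelihood with scipy, is given below; the evaporation values are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Synthetic ten-day evaporation totals in mm (placeholder data, not the study's).
evaporation = rng.gamma(shape=5.0, scale=8.0, size=400)

# Log-Pearson type III: fit a Pearson III distribution to the log-transformed data
# by maximum likelihood (scipy's .fit method maximizes the likelihood numerically).
log_evap = np.log(evaporation)
skew, loc, scale = stats.pearson3.fit(log_evap)
print(f"skew = {skew:.3f}, loc = {loc:.3f}, scale = {scale:.3f}")

# Goodness of fit: Kolmogorov-Smirnov test of the fitted model against the data.
print(stats.kstest(log_evap, "pearson3", args=(skew, loc, scale)))
```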


A Test Procedure for Right Censored Data under the Additive Model

  • Park, Hyo-Il;Hong, Seung-Man
    • Communications for Statistical Applications and Methods / v.16 no.2 / pp.325-334 / 2009
  • In this research, we propose a nonparametric test procedure for right censored and grouped data under the additive hazards model. The test statistics are derived using the likelihood principle. We then illustrate the proposed test with an example and compare its performance with that of another procedure by obtaining empirical powers. Finally, we discuss some interesting features of the proposed test.
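
The test above is built from the likelihood principle for censored data. As a much simpler illustration of that principle (not the paper's nonparametric additive-hazards procedure), the sketch below forms a likelihood ratio test for equal exponential hazard rates in two right-censored samples.

```python
import numpy as np
from scipy import stats

def exp_loglik(times, events, rate):
    """Log-likelihood of right-censored data under an exponential hazard `rate`:
    d * log(rate) - rate * total_time, with d = number of observed events."""
    return events.sum() * np.log(rate) - rate * times.sum()

def lr_test_equal_rates(t1, e1, t2, e2):
    """Likelihood ratio test of H0: both samples share one exponential hazard rate."""
    t1, e1, t2, e2 = (np.asarray(a, dtype=float) for a in (t1, e1, t2, e2))
    rate1 = e1.sum() / t1.sum()                             # group-wise MLEs
    rate2 = e2.sum() / t2.sum()
    rate0 = (e1.sum() + e2.sum()) / (t1.sum() + t2.sum())   # pooled MLE under H0
    lr = 2.0 * (exp_loglik(t1, e1, rate1) + exp_loglik(t2, e2, rate2)
                - exp_loglik(t1, e1, rate0) - exp_loglik(t2, e2, rate0))
    return lr, stats.chi2.sf(lr, df=1)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # Synthetic lifetimes with independent censoring (illustrative only).
    life1, life2 = rng.exponential(10.0, 80), rng.exponential(15.0, 80)
    censor = rng.exponential(25.0, 80)
    t1, e1 = np.minimum(life1, censor), (life1 <= censor).astype(float)
    t2, e2 = np.minimum(life2, censor), (life2 <= censor).astype(float)
    print(lr_test_equal_rates(t1, e1, t2, e2))
```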

Estimation and variable selection in censored regression model with smoothly clipped absolute deviation penalty

  • Shim, Jooyong;Bae, Jongsig;Seok, Kyungha
    • Journal of the Korean Data and Information Science Society / v.27 no.6 / pp.1653-1660 / 2016
  • The smoothly clipped absolute deviation (SCAD) penalty is known to satisfy the desirable properties of penalty functions such as unbiasedness, sparsity, and continuity. In this paper, we deal with regression function estimation and variable selection based on the SCAD-penalized censored regression model. We use the local linear approximation and the iteratively reweighted least squares algorithm to maximize the SCAD-penalized log-likelihood function. The proposed method provides an efficient way to perform variable selection and regression function estimation, and the generalized cross-validation function is presented for model selection. Applications of the proposed method are illustrated through simulated data and a real example.
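
For readers unfamiliar with the penalty named above, the sketch below writes out the SCAD penalty of Fan and Li (2001) and the derivative used by the local linear approximation step; it is a generic definition, not the paper's full penalized censored-regression estimator.

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """SCAD penalty p_lambda(|beta|) (Fan and Li, 2001), a > 2, default a = 3.7."""
    t = np.abs(np.asarray(beta, dtype=float))
    linear = lam * t                                          # |beta| <= lam
    quad = (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))  # lam < |beta| <= a*lam
    flat = (a + 1) * lam**2 / 2.0                             # |beta| > a*lam
    return np.where(t <= lam, linear, np.where(t <= a * lam, quad, flat))

def scad_derivative(beta, lam, a=3.7):
    """p'_lambda(|beta|), the weight used in the local linear approximation:
    the penalty is replaced by p'_lambda(|beta_0|) * |beta| around a current estimate."""
    t = np.abs(np.asarray(beta, dtype=float))
    return lam * (np.where(t <= lam, 1.0, 0.0)
                  + np.maximum(a * lam - t, 0.0) / ((a - 1) * lam) * (t > lam))

if __name__ == "__main__":
    grid = np.linspace(-4, 4, 9)
    print(scad_penalty(grid, lam=1.0))
    print(scad_derivative(grid, lam=1.0))
```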

Objective Bayesian inference based on upper record values from Rayleigh distribution

  • Seo, Jung In;Kim, Yongku
    • Communications for Statistical Applications and Methods / v.25 no.4 / pp.411-430 / 2018
  • The Bayesian approach is a suitable alternative for constructing appropriate models for observed record values because the number of such values is small. This paper provides an objective Bayesian analysis method for upper record values arising from the Rayleigh distribution. For the objective Bayesian analysis, the Fisher information matrix for the unknown parameters is derived in terms of the second derivatives of the log-likelihood function by using Leibniz's rule; subsequently, objective priors are provided, resulting in proper posterior distributions. We examine whether these priors are probability matching priors (PMPs). In a simulation study, inference results under the provided priors are compared through Monte Carlo simulations. Through real data analysis, we reveal a limitation of the confidence interval based on the maximum likelihood estimator for the scale parameter and evaluate the models under the provided priors.
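
The Fisher information computation mentioned above is carried out for upper record values, which requires care. As a simpler point of reference, the sketch below evaluates the observed information for a complete Rayleigh sample from the second derivative of the log-likelihood (numerically), and compares it with the expected information 4n/σ² for that complete-sample case; this is not the record-value information matrix of the paper.

```python
import numpy as np

def rayleigh_loglik(sigma, x):
    """Log-likelihood of a complete Rayleigh(sigma) sample:
    sum(log x) - 2n*log(sigma) - sum(x^2)/(2*sigma^2)."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.log(x)) - 2 * len(x) * np.log(sigma) - np.sum(x**2) / (2 * sigma**2)

def observed_information(sigma, x, h=1e-5):
    """Observed information: minus the (numerical) second derivative of the log-likelihood."""
    f = lambda s: rayleigh_loglik(s, x)
    return -(f(sigma + h) - 2 * f(sigma) + f(sigma - h)) / h**2

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    sigma_true, n = 2.0, 500
    x = rng.rayleigh(scale=sigma_true, size=n)
    sigma_mle = np.sqrt(np.sum(x**2) / (2 * n))   # closed-form MLE of the scale
    # At the MLE the observed and expected information coincide for this model.
    print("observed information:", observed_information(sigma_mle, x))
    print("expected information 4n/sigma^2:", 4 * n / sigma_mle**2)
```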

Optimal three step stress accelerated life tests under periodic inspection and type I censoring

  • Moon, Gyoung-Ae
    • Journal of the Korean Data and Information Science Society / v.23 no.4 / pp.843-850 / 2012
  • Inference from data obtained under periodic inspection and type I censoring for the three step-stress accelerated life test is studied in this paper. A failure rate that is a log-quadratic function of stress and the tampered failure rate model are considered under the exponential distribution. The optimal stress change times that minimize the asymptotic variance of the maximum likelihood estimators of the parameters are determined, and the maximum likelihood estimators of the model parameters are obtained. A numerical example is given to illustrate the proposed inferential procedures.
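
Finding the optimal stress change times amounts to minimizing a scalar criterion (the asymptotic variance of the MLEs) over ordered change times. The sketch below shows only that numerical-optimization pattern with scipy; the objective `asy_var` is a hypothetical placeholder, since the paper's actual variance expression depends on its step-stress model.

```python
import numpy as np
from scipy.optimize import minimize

T = 100.0  # total test duration (illustrative)

def asy_var(taus):
    """Hypothetical placeholder for the asymptotic variance of the MLEs as a
    function of the two stress change times (tau1, tau2); NOT the paper's formula."""
    tau1, tau2 = taus
    return 1.0 / tau1 + 1.0 / (tau2 - tau1) + 1.0 / (T - tau2)

# Reparameterize so the ordering 0 < tau1 < tau2 < T holds automatically:
# tau1 = T*p1, tau2 = T*(p1 + p2) with p1, p2 > 0 and p1 + p2 < 1.
def objective(z):
    p = np.exp(z) / (1.0 + np.sum(np.exp(z)))   # softmax-style map into the simplex
    tau1 = T * p[0]
    tau2 = T * (p[0] + p[1])
    return asy_var([tau1, tau2])

res = minimize(objective, x0=np.array([1.0, -1.0]), method="Nelder-Mead")
p = np.exp(res.x) / (1.0 + np.sum(np.exp(res.x)))
print("optimal change times:", T * p[0], T * (p[0] + p[1]))
```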

The skew-t censored regression model: parameter estimation via an EM-type algorithm

  • Lachos, Victor H.;Bazan, Jorge L.;Castro, Luis M.;Park, Jiwon
    • Communications for Statistical Applications and Methods / v.29 no.3 / pp.333-351 / 2022
  • The skew-t distribution is an attractive family of asymmetric heavy-tailed densities that includes the normal, skew-normal, and Student's-t distributions as special cases. In this work, we propose an EM-type algorithm for computing the maximum likelihood estimates for skew-t linear regression models with a censored response. In contrast with previous proposals, this algorithm uses analytical expressions at the E-step, as opposed to Monte Carlo simulations. These expressions rely on formulas for the mean and variance of a truncated skew-t distribution and can be computed using the R library MomTrunc. The standard errors, predictions of unobserved values of the response, and the log-likelihood function are obtained as a by-product. The proposed methodology is illustrated through analyses of simulated data and a real data application on the Letter-Name Fluency test in Peruvian students.
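
The EM algorithm above targets the skew-t censored regression model, with E-step moments from the R package MomTrunc. As a greatly simplified point of reference (the normal special case only, maximized directly rather than by EM), the sketch below writes the left-censored normal regression (Tobit-type) log-likelihood and fits it numerically.

```python
import numpy as np
from scipy import stats, optimize

def tobit_negloglik(params, X, y, c):
    """Negative log-likelihood of a left-censored (at c) normal regression model.
    Uncensored y_i > c contribute the normal density; censored ones contribute
    P(y_i* <= c) = Phi((c - x_i'beta)/sigma). This is the normal special case of
    the censored regression family, not the paper's skew-t model."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                     # keep sigma positive
    mu = X @ beta
    cens = y <= c
    ll_obs = stats.norm.logpdf(y[~cens], loc=mu[~cens], scale=sigma)
    ll_cens = stats.norm.logcdf((c - mu[cens]) / sigma)
    return -(ll_obs.sum() + ll_cens.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n, c = 300, 0.0
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y_star = X @ np.array([0.5, 1.0]) + rng.normal(scale=1.0, size=n)
    y = np.maximum(y_star, c)                     # left-censoring at c
    res = optimize.minimize(tobit_negloglik, x0=np.zeros(3), args=(X, y, c),
                            method="BFGS")
    beta_hat, sigma_hat = res.x[:2], np.exp(res.x[2])
    print(beta_hat, sigma_hat)
```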