• Title/Summary/Keyword: Log-likelihood function


Parameter Estimation of the Two-Parameter Exponential Distribution under Three Step-Stress Accelerated Life Test

  • Moon, Gyoung-Ae;Kim, In-Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 17, No. 4
    • /
    • pp.1375-1386
    • /
    • 2006
  • In life testing, the lifetimes of test units under usual operating conditions are so long that testing at those conditions is impractical. Test units are therefore subjected to high-stress conditions to yield information quickly. In this paper, inference for the parameters of a three step-stress accelerated life test is studied. The two-parameter exponential distribution with a failure rate that is a log-quadratic function of stress, together with the tampered failure rate model, is considered. We obtain the maximum likelihood estimators of the model parameters and their confidence regions. A numerical example is given to illustrate the proposed inferential procedures.

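As a minimal sketch of the likelihood machinery behind this entry (not the paper's step-stress procedure itself), the two-parameter exponential distribution f(x) = (1/θ)·exp(−(x − μ)/θ), x ≥ μ, admits closed-form maximum likelihood estimators: μ̂ = min(xᵢ) and θ̂ = x̄ − min(xᵢ). The sample below is hypothetical:

```python
# MLE for the two-parameter (location-scale) exponential distribution.
# The location MLE is the sample minimum; the scale MLE is the
# sample mean minus that minimum.

def two_param_exp_mle(x):
    """Return (mu_hat, theta_hat) for a sample x."""
    mu_hat = min(x)
    theta_hat = sum(x) / len(x) - mu_hat
    return mu_hat, theta_hat

sample = [2.3, 4.1, 2.9, 5.6, 3.4, 2.5]   # hypothetical lifetimes
mu_hat, theta_hat = two_param_exp_mle(sample)
```

The step-stress setting in the paper complicates the likelihood (stress-dependent failure rates, tampered model), but the same maximization principle applies.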

A Study of an NHPP Software Reliability Growth Model Based on a Polynomial Hazard Function

  • 김희철
    • Journal of the Korea Society of Digital Industry and Information Management
    • /
    • Vol. 7, No. 4
    • /
    • pp.7-14
    • /
    • 2011
  • Infinite-failure NHPP models presented in the literature exhibit a constant, monotonically increasing, or monotonically decreasing failure occurrence rate per fault (hazard function). The infinite non-homogeneous Poisson process is a model that reflects the possibility of introducing new faults when correcting or modifying the software. In this paper, a polynomial hazard function is proposed that can be applied efficiently to software reliability. The parameters are estimated by maximum likelihood using the bisection method, and model selection is based on the mean squared error and the coefficient of determination. In a numerical example, the existing log-power time model and the proposed polynomial hazard model were compared using failure interval times. Because the polynomial hazard model is more efficient in terms of reliability, it can serve as an alternative to existing models in this area.
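The estimation route described above (maximum likelihood solved by bisection) can be illustrated on a simpler, standard NHPP, the Goel-Okumoto model m(t) = a(1 − e^(−bt)), rather than the paper's polynomial hazard; the failure times and observation horizon below are hypothetical:

```python
import math

def go_score_b(b, times, T):
    # Profile score equation for b in the Goel-Okumoto NHPP, after
    # substituting the closed-form a_hat = n / (1 - exp(-b*T)).
    # The MLE of b is the root of this function.
    n = len(times)
    return n / b - sum(times) - n * T * math.exp(-b * T) / (1 - math.exp(-b * T))

def bisect(f, lo, hi, tol=1e-10):
    # Plain bisection; assumes f(lo) and f(hi) bracket a root.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

times = [3.0, 7.5, 12.1, 20.4, 33.0]   # hypothetical failure times
T = 40.0                               # observation horizon
b_hat = bisect(lambda b: go_score_b(b, times, T), 1e-4, 1.0)
a_hat = len(times) / (1 - math.exp(-b_hat * T))
```

The paper's polynomial hazard would change the form of the score equation, but the bisection step is the same.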

Phase Doppler Measurements and Probability Density Functions in Liquid Fuel Spray

  • 구자예
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • Vol. 18, No. 4
    • /
    • pp.1039-1049
    • /
    • 1994
  • Intermittent and transient fuel sprays have been investigated through simultaneous measurement of droplet sizes and velocities using a Phase/Doppler Particle Analyzer (PDPA). Measurements were made on the spray axis and at the edge of the spray near the nozzle at various gas-to-liquid density ratios (ρ_g/ρ_l), ranging from those found in free atmospheric jets to conditions typical of diesel engines. Probability density distributions of droplet size and velocity were obtained from the raw data, and mathematical probability density functions fitting the experimental distributions were extracted using the principle of maximum likelihood. In the near-nozzle region on the spray axis, droplet sizes ranged from the lower limit of the measurement system to the order of the nozzle diameter for all ρ_g/ρ_l, and droplet sizes tended to be small at the spray edge. At the edge of the spray, average droplet velocity peaked during needle opening and needle closing. The rms intensity increases greatly with radial distance from the nozzle. The probability density function that best fits the physical breakage process, such as the breakup of fuel drops, is an exponentially decreasing log-hyperbolic function with four parameters.
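The "principle of maximum likelihood" step, fitting a parametric density to measured droplet sizes, can be sketched with a lognormal size distribution (a common, much simpler stand-in for the paper's four-parameter log-hyperbolic form): its MLEs are just the mean and (population) standard deviation of the log sizes.

```python
import math

def lognormal_mle(sizes):
    # MLE for a lognormal size distribution: mu and sigma are the
    # mean and population standard deviation of log(d).
    logs = [math.log(d) for d in sizes]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / n)
    return mu, sigma
```

Fitting the log-hyperbolic family used in the paper requires numerical optimization of its four parameters, but the objective, the summed log-density over the measured sizes, is constructed the same way.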

A Comparative Study of NHPP Software Reliability Models Based on Log and Exponential Power Intensity Functions

  • 양태진
    • Journal of the Korea Institute of Information, Electronics, and Communication Technology
    • /
    • Vol. 8, No. 6
    • /
    • pp.445-452
    • /
    • 2015
  • Software reliability is a very important issue in the software development process. In the infinite-failure non-homogeneous Poisson process used for software failure analysis, the failure occurrence rate per fault may be constant, monotonically increasing, or monotonically decreasing. This paper proposes reliability models based on log and exponential power intensity functions (log-linear, log-power, and exponential-power) that apply efficiently to software reliability. For efficient modeling, an algorithm was applied that evaluates the parameters using model selection based on the mean squared error (MSE) and the coefficient of determination (R²), the maximum likelihood method, and the bisection method. Failure analysis using real data was carried out for the proposed log and exponential power intensity functions, and the failure-data analyses were compared across these functions. The Laplace trend test was used to verify the reliability of the data. Because the proposed log-linear, log-power, and exponential-power reliability models are also efficient in terms of reliability (coefficient of determination above 70%), they can be used as alternatives to the existing models in this field. Through this study, software developers may gain prior insight into software failure patterns by considering a variety of intensity functions.
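The Laplace trend test mentioned in the abstract has a simple closed form for event times t₁,…,tₙ observed on (0, T]: U = (t̄ − T/2) / (T·√(1/(12n))). A sketch (the event times below are hypothetical):

```python
import math

def laplace_factor(times, T):
    # Laplace trend statistic for event times observed on (0, T].
    # Values near 0 are consistent with a homogeneous Poisson process;
    # markedly negative values suggest reliability growth (inter-failure
    # times lengthening), positive values suggest deterioration.
    n = len(times)
    return (sum(times) / n - T / 2) / (T * math.sqrt(1 / (12 * n)))
```

For example, failures clustered early in the observation window (reliability growth) give a strongly negative statistic, which is the pattern a software reliability growth model hopes to see.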

Selection of Appropriate Probability Distribution Types for Ten-Day Evaporation Data

  • 김선주;박재흥;강상진
    • Korean Society of Agricultural Engineers: Conference Proceedings
    • /
    • Proceedings of the 1998 Annual Conference of the Korean Society of Agricultural Engineers
    • /
    • pp.338-343
    • /
    • 1998
  • This study selects appropriate probability distributions for ten-day evaporation data in order to represent the statistical characteristics of real evaporation data in Korea. Nine probability distribution functions were assumed as candidate distributions for the ten-day evaporation data of 20 stations over a 20-year period. The parameters of each probability distribution function were estimated by the maximum likelihood approach, and appropriate distributions were selected by goodness-of-fit tests. The log-Pearson type III model was selected as an appropriate probability distribution for ten-day evaporation data in Korea.

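The workflow in this entry, fitting candidate distributions by maximum likelihood and then picking the best-fitting one, can be sketched with two simple candidates (normal vs. exponential) in place of the nine families and the log-Pearson III distribution used in the paper; a penalized criterion (AIC) or a formal goodness-of-fit test would refine the comparison:

```python
import math

def normal_loglik(x):
    # Maximized Gaussian log-likelihood with MLE plug-ins
    # (assumes the data have positive sample variance).
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def exponential_loglik(x):
    # Maximized exponential log-likelihood with rate 1/mean
    # (assumes positive data).
    n = len(x)
    mean = sum(x) / n
    return -n * (math.log(mean) + 1)

def select_distribution(x):
    # Pick the candidate with the larger maximized log-likelihood.
    scores = {"normal": normal_loglik(x), "exponential": exponential_loglik(x)}
    return max(scores, key=scores.get)
```

Tightly clustered data favor the normal candidate; strongly right-skewed positive data favor the exponential, mirroring how the paper discriminates among its nine candidates.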

A Test Procedure for Right Censored Data under the Additive Model

  • Park, Hyo-Il;Hong, Seung-Man
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 16, No. 2
    • /
    • pp.325-334
    • /
    • 2009
  • In this research, we propose a nonparametric test procedure for right-censored and grouped data under the additive hazards model. The test statistics are derived using the likelihood principle. We then illustrate the proposed test with an example and compare its performance with another procedure by obtaining empirical powers. Finally, we discuss some interesting features of the proposed test.

Estimation and variable selection in censored regression model with smoothly clipped absolute deviation penalty

  • Shim, Jooyong;Bae, Jongsig;Seok, Kyungha
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 27, No. 6
    • /
    • pp.1653-1660
    • /
    • 2016
  • The smoothly clipped absolute deviation (SCAD) penalty is known to satisfy the desirable properties for penalty functions such as unbiasedness, sparsity, and continuity. In this paper, we deal with regression function estimation and variable selection based on the SCAD-penalized censored regression model. We use the local linear approximation and the iteratively reweighted least squares algorithm to maximize the SCAD-penalized log-likelihood function. The proposed method provides an efficient means of variable selection and regression function estimation, and the generalized cross-validation function is presented for model selection. Applications of the proposed method are illustrated through simulated and real examples.
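The SCAD penalty itself, and the derivative that drives the local linear approximation / IRLS step, have a simple piecewise form (Fan and Li's a = 3.7 is the customary default):

```python
def scad_penalty(t, lam, a=3.7):
    # SCAD penalty evaluated at t = |beta|: linear up to lam,
    # quadratic blend on (lam, a*lam], then constant, which is what
    # makes large coefficients unbiased while small ones are sparse.
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return -(t * t - 2 * a * lam * t + lam * lam) / (2 * (a - 1))
    return (a + 1) * lam * lam / 2

def scad_derivative(t, lam, a=3.7):
    # First derivative p'(|beta|), the weight used in the local
    # linear approximation / IRLS update; it vanishes beyond a*lam.
    t = abs(t)
    if t <= lam:
        return lam
    return max(a * lam - t, 0.0) / (a - 1)
```

In an IRLS pass, each coefficient's current magnitude is fed to `scad_derivative` to form the weight on its absolute value in the next weighted least squares problem.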

Objective Bayesian inference based on upper record values from Rayleigh distribution

  • Seo, Jung In;Kim, Yongku
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 25, No. 4
    • /
    • pp.411-430
    • /
    • 2018
  • The Bayesian approach is a suitable alternative for constructing appropriate models for observed record values because the number of such values is small. This paper provides an objective Bayesian analysis method for upper record values arising from the Rayleigh distribution. For the objective Bayesian analysis, the Fisher information matrix for the unknown parameters is derived in terms of the second derivative of the log-likelihood function by using Leibniz's rule; subsequently, objective priors are provided, resulting in proper posterior distributions. We examine whether these priors are probability matching priors (PMPs). In a simulation study, inference results under the provided priors are compared through Monte Carlo simulations. Through real data analysis, we reveal a limitation of the confidence interval based on the maximum likelihood estimator for the scale parameter and evaluate the models under the provided priors.
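The Fisher-information step described here is concrete for the complete-sample Rayleigh case, f(x; σ) = (x/σ²)·exp(−x²/(2σ²)): the second derivative of the log-likelihood is available in closed form (its negative expectation gives the per-observation information 4/σ²), and a finite-difference check confirms it. The record-value likelihood in the paper differs, but the mechanics are the same; the sample below is hypothetical:

```python
import math

def rayleigh_loglik(sigma, x):
    # Log-likelihood of a Rayleigh(sigma) sample:
    # sum of log(v) - 2*log(sigma) - v^2 / (2*sigma^2).
    return sum(math.log(v) - 2 * math.log(sigma) - v * v / (2 * sigma ** 2)
               for v in x)

def d2_loglik(sigma, x):
    # Analytic second derivative with respect to sigma; its negative
    # expectation is the Fisher information n * 4 / sigma**2 used to
    # build objective (reference) priors.
    return sum(2 / sigma ** 2 - 3 * v * v / sigma ** 4 for v in x)

def d2_numeric(sigma, x, h=1e-5):
    # Central finite difference as a cross-check on the algebra.
    return (rayleigh_loglik(sigma + h, x) - 2 * rayleigh_loglik(sigma, x)
            + rayleigh_loglik(sigma - h, x)) / (h * h)
```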

Optimal three step stress accelerated life tests under periodic inspection and type I censoring

  • Moon, Gyoung-Ae
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 23, No. 4
    • /
    • pp.843-850
    • /
    • 2012
  • The inferences from data obtained under periodic inspection and type I censoring for the three step-stress accelerated life test are studied in this paper. A failure rate that is a log-quadratic function of stress and the tampered failure rate model are considered under the exponential distribution. The optimal stress change times that minimize the asymptotic variance of the maximum likelihood estimators are determined, and the maximum likelihood estimators of the model parameters are obtained. A numerical example is given to illustrate the proposed inferential procedures.

The skew-t censored regression model: parameter estimation via an EM-type algorithm

  • Lachos, Victor H.;Bazan, Jorge L.;Castro, Luis M.;Park, Jiwon
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 29, No. 3
    • /
    • pp.333-351
    • /
    • 2022
  • The skew-t distribution is an attractive family of asymmetric heavy-tailed densities that includes the normal, skew-normal, and Student's t distributions as special cases. In this work, we propose an EM-type algorithm for computing the maximum likelihood estimates for skew-t linear regression models with censored responses. In contrast with previous proposals, this algorithm uses analytical expressions at the E-step, as opposed to Monte Carlo simulations. These expressions rely on formulas for the mean and variance of a truncated skew-t distribution and can be computed using the R library MomTrunc. The standard errors, the prediction of unobserved values of the response, and the log-likelihood function are obtained as by-products. The proposed methodology is illustrated through analyses of simulated data and a real data application on the Letter-Name Fluency test in Peruvian students.
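The E-step idea, replacing censored responses by conditional (truncated-distribution) means, can be sketched in a much simpler setting than the paper's skew-t regression: EM for the mean of a right-censored normal sample with known σ. The data below are hypothetical:

```python
import math

def norm_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def em_censored_mean(obs, censored, sigma=1.0, iters=200):
    # EM for the mean of a normal sample with right censoring and
    # known sigma: the E-step replaces each censored value c by the
    # truncated-normal mean E[Y | Y > c] = mu + sigma * lambda(a),
    # where a = (c - mu)/sigma and lambda is the inverse Mills ratio;
    # the M-step averages the completed sample.
    mu = sum(obs) / len(obs)          # start from the naive mean
    for _ in range(iters):
        filled = []
        for y, cens in zip(obs, censored):
            if cens:
                a = (y - mu) / sigma
                # inverse Mills ratio; guard the tail numerically
                lam = norm_pdf(a) / max(1 - norm_cdf(a), 1e-12)
                filled.append(mu + sigma * lam)
            else:
                filled.append(y)
        mu = sum(filled) / len(filled)
    return mu

# Last two observations are censored at 2.0 (true values are larger).
mu_hat = em_censored_mean([0.5, 1.2, -0.3, 2.0, 2.0],
                          [False, False, False, True, True])
```

The paper's algorithm plays the same game with truncated skew-t moments (hence MomTrunc) in place of the inverse Mills ratio, and updates regression coefficients rather than a single mean.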