• Title/Summary/Keyword: NHPP (Nonhomogeneous Poisson Process)


The study for NHPP Software Reliability Model based on Kappa(2) distribution (Kappa(2) NHPP에 의한 소프트웨어 신뢰성 모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Computer Industry Society
    • /
    • v.6 no.5
    • /
    • pp.689-696
    • /
    • 2005
  • Finite-failure NHPP models in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper reviews the Goel-Okumoto and Yamada-Ohba-Osaki models and proposes a Kappa(2) reliability model, which can capture a monotonically decreasing failure occurrence rate per fault. Parameters are estimated by maximum likelihood using the bisection method, and model selection is based on the sum of squared errors and the Kolmogorov distance. The failure analysis uses the real data set SYS2 (Allen P. Nikora and Michael R. Lyu) to fit the two-parameter Kappa distribution, and the Kappa model is compared with existing models using arithmetic and Laplace trend tests and bias tests. (A sketch of the underlying finite-failure NHPP estimation scheme follows this entry.)

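The finite-failure NHPP framework behind this paper specifies a mean value function m(t) = θ·F(t) for some lifetime CDF F and estimates the parameters by maximum likelihood, solving the score equation numerically by bisection. The sketch below illustrates that recipe with the Goel-Okumoto exponential CDF rather than the paper's Kappa(2) distribution; the failure times and function names are assumptions made for the example.

```python
import math

def go_mean(t, a, b):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - math.exp(-b * t))

def b_score(b, times, T):
    """Profile score for b after substituting a = n / (1 - exp(-bT));
    the MLE of b is the root of this function."""
    n = len(times)
    return n / b - sum(times) - n * T * math.exp(-b * T) / (1.0 - math.exp(-b * T))

def bisect(f, lo, hi, tol=1e-10, max_iter=200):
    """Plain bisection root finder (assumes f(lo) and f(hi) bracket a root)."""
    flo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(fmid) < tol:
            return mid
        if flo * fmid < 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# hypothetical failure times (hours) observed up to time T -- illustrative only
times = [5.2, 11.0, 18.4, 27.9, 40.1, 55.7, 74.3, 97.0, 125.6, 160.2]
T = 180.0

b_hat = bisect(lambda b: b_score(b, times, T), 1e-4, 1.0)
a_hat = len(times) / (1.0 - math.exp(-b_hat * T))
print(f"b_hat = {b_hat:.5f}, a_hat = {a_hat:.2f}, m(T) = {go_mean(T, a_hat, b_hat):.2f}")
```

Substituting a different CDF, such as the Kappa(2) distribution, changes only the profile score whose root the bisection locates.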

The NHPP Bayesian Software Reliability Model Using Latent Variables (잠재변수를 이용한 NHPP 베이지안 소프트웨어 신뢰성 모형에 관한 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • Convergence Security Journal
    • /
    • v.6 no.3
    • /
    • pp.117-126
    • /
    • 2006
  • Bayesian inference and model selection methods for software reliability growth models are studied. Software reliability growth models are used during the testing stages of software development to model the error content and the time intervals between software failures. This paper avoids multiple integration by using Gibbs sampling, a Markov chain Monte Carlo method, to compute the posterior distribution. Bayesian inference for general order statistics models in software reliability with diffuse prior information is developed together with a model selection method. For model determination and selection, goodness of fit (the error sum of squares) and trend tests are explored. The methodology is exemplified with a random software reliability data set generated from a Weibull distribution (shape 2, scale 5) using the Minitab (version 14) statistical package. (A sketch of MCMC sampling for an NHPP model follows this entry.)

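As a rough illustration of how MCMC sampling avoids the multiple integration mentioned above, the sketch below runs a Metropolis-within-Gibbs sampler for the Goel-Okumoto NHPP with gamma priors. This is not the paper's latent-variable sampler for general order-statistics models; the priors, proposal scale, and data are assumptions made for the example.

```python
import math, random

random.seed(1)

# hypothetical failure times observed up to time T (illustrative only)
times = [5.2, 11.0, 18.4, 27.9, 40.1, 55.7, 74.3, 97.0, 125.6, 160.2]
T, n = 180.0, len(times)
sum_t = sum(times)

# diffuse Gamma(shape, rate) priors on a and b -- assumed, not from the paper
a_shape, a_rate = 0.01, 0.01
b_shape, b_rate = 0.01, 0.01

def log_post_b(b, a):
    """Log full conditional of b (up to a constant) for the Goel-Okumoto NHPP."""
    if b <= 0:
        return -math.inf
    loglik = n * math.log(b) - b * sum_t - a * (1.0 - math.exp(-b * T))
    logprior = (b_shape - 1) * math.log(b) - b_rate * b
    return loglik + logprior

a, b = float(n), 0.01          # crude starting values
draws = []
for it in range(5000):
    # Gibbs step: a | b, data is conjugate Gamma(a_shape + n, a_rate + 1 - e^{-bT})
    a = random.gammavariate(a_shape + n, 1.0 / (a_rate + 1.0 - math.exp(-b * T)))
    # Metropolis step for b | a, data with a Gaussian random-walk proposal
    b_prop = b + random.gauss(0.0, 0.002)
    if random.random() < math.exp(min(0.0, log_post_b(b_prop, a) - log_post_b(b, a))):
        b = b_prop
    if it >= 1000:             # discard burn-in
        draws.append((a, b))

a_mean = sum(d[0] for d in draws) / len(draws)
b_mean = sum(d[1] for d in draws) / len(draws)
print(f"posterior means: a ~ {a_mean:.1f}, b ~ {b_mean:.5f}")
```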

The Study for NHPP Software Reliability Growth Model based on Exponentiated Exponential Distribution (지수화 지수 분포에 의존한 NHPP 소프트웨어 신뢰성장 모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.5 s.43
    • /
    • pp.9-18
    • /
    • 2006
  • Finite-failure NHPP models in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper reviews the Goel-Okumoto and Yamada-Ohba-Osaki models and proposes a reliability model based on the exponentiated exponential distribution, an efficient substitute for the gamma and Weibull models (the two-parameter form illustrated by Gupta and Kundu (2001)). Parameters are estimated by maximum likelihood using the bisection method, and model selection is based on SSE, the AIC statistic, and the Kolmogorov distance. The failure analysis uses the NTDS data set to estimate the shape parameter of the exponentiated exponential distribution, and the proposed model is compared with existing models using arithmetic and Laplace trend tests and bias tests. (A sketch of the model-selection statistics follows this entry.)

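The model-selection statistics named above (SSE and the Kolmogorov distance between the fitted mean value function and the observed cumulative failures) are straightforward to compute once m(t) is specified. The sketch below uses the standard Gupta-Kundu exponentiated exponential CDF F(t) = (1 - e^{-λt})^α; the parameter values and failure times are illustrative, not estimates from the NTDS data.

```python
import math

def ee_cdf(t, alpha, lam):
    """Exponentiated exponential CDF (Gupta & Kundu): F(t) = (1 - e^{-lam*t})**alpha."""
    return (1.0 - math.exp(-lam * t)) ** alpha

def mean_value(t, theta, alpha, lam):
    """Finite-failure NHPP mean value function m(t) = theta * F(t)."""
    return theta * ee_cdf(t, alpha, lam)

def sse(times, theta, alpha, lam):
    """Sum of squared errors between m(t_i) and the observed cumulative count i."""
    return sum((mean_value(t, theta, alpha, lam) - (i + 1)) ** 2
               for i, t in enumerate(times))

def kolmogorov_distance(times, theta, alpha, lam):
    """Max |m(t_i)/m(T) - i/n|: distance between fitted and empirical curves."""
    T, n = times[-1], len(times)
    mT = mean_value(T, theta, alpha, lam)
    return max(abs(mean_value(t, theta, alpha, lam) / mT - (i + 1) / n)
               for i, t in enumerate(times))

# hypothetical failure times and parameter values -- illustrative, not fitted
times = [5.2, 11.0, 18.4, 27.9, 40.1, 55.7, 74.3, 97.0, 125.6, 160.2]
theta, alpha, lam = 12.0, 1.5, 0.012
print("SSE =", round(sse(times, theta, alpha, lam), 3))
print("K-D =", round(kolmogorov_distance(times, theta, alpha, lam), 3))
```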

The Study for NHPP Software Reliability Model based on Chi-Square Distribution (카이제곱 NHPP에 의한 소프트웨어 신뢰성 모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.1 s.39
    • /
    • pp.45-53
    • /
    • 2006
  • Finite-failure NHPP models in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. This paper reviews the Goel-Okumoto and Yamada-Ohba-Osaki models and proposes a $\chi^2$ reliability model, which can capture an increasing failure occurrence rate per fault. Parameters are estimated by maximum likelihood using the bisection method, and model selection is based on SSE, the AIC statistic, and the Kolmogorov distance. The failure analysis uses the real data set SYS2 (Allen P. Nikora and Michael R. Lyu), with the shape parameter of the $\chi^2$ distribution given by its degrees of freedom, and the $\chi^2$ model is compared with existing models using arithmetic and Laplace trend tests and the Kolmogorov test. (A sketch of the trend tests follows this entry.)

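The arithmetic and Laplace trend tests used above to screen the failure data are distribution-free and simple to implement. The sketch below computes the Laplace factor and the running mean of interfailure times for a hypothetical failure-time list; a clearly negative Laplace factor and an increasing running mean both point to reliability growth.

```python
import math

def laplace_factor(times, T):
    """Laplace trend test statistic for failure times observed on (0, T].
    Values well below 0 suggest reliability growth (decreasing intensity);
    values near 0 suggest a stable (homogeneous Poisson) process."""
    n = len(times)
    return (sum(times) / n - T / 2.0) / (T * math.sqrt(1.0 / (12.0 * n)))

def arithmetic_mean_test(times):
    """Arithmetic-mean trend test: the running mean of interfailure times
    should increase under reliability growth."""
    gaps = [b - a for a, b in zip([0.0] + times[:-1], times)]
    return [sum(gaps[:k]) / k for k in range(1, len(gaps) + 1)]

# hypothetical failure times -- illustrative only
times = [5.2, 11.0, 18.4, 27.9, 40.1, 55.7, 74.3, 97.0, 125.6, 160.2]
T = 180.0
print("Laplace factor:", round(laplace_factor(times, T), 3))
print("running mean interfailure times:",
      [round(m, 1) for m in arithmetic_mean_test(times)])
```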

Bayesian Value of Information Analysis with Linear, Exponential, Power Law Failure Models for Aging Chronic Diseases

  • Chang, Chi-Chang
    • Journal of Computing Science and Engineering
    • /
    • v.2 no.2
    • /
    • pp.200-219
    • /
    • 2008
  • The effective management of uncertainty is one of the most fundamental problems in medical decision making. According to the literature review, most medical decision models rely on point estimates for input parameters, yet decision makers are naturally interested in how changes in those values affect model output. The purpose of this study is therefore to identify the ranges of input-parameter values over which each option is most efficient. A nonhomogeneous Poisson process (NHPP) is used to describe the behavior of aging chronic diseases. Three kinds of failure models (linear, exponential, and power law) are considered, and each is studied under the assumptions of unknown scale factor with known aging rate, known scale factor with unknown aging rate, and unknown scale factor with unknown aging rate. The developed method is illustrated with an analysis of data from a trial of immunotherapy in the treatment of chronic granulomatous disease. Finally, the proposed Bayesian value of information analysis makes effective use of computing capability and provides a systematic way to integrate expert opinion and sampling information, giving decision makers valuable support for quality medical decision making. (A sketch of the three intensity models follows this entry.)
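
For reference, the three NHPP failure models named above are commonly written with the intensity functions sketched below; the parameterisations and values are assumptions for illustration, not those used in the paper's chronic granulomatous disease analysis.

```python
import math

# Three NHPP intensity models and their mean value functions m(t) = integral of λ(s) on [0, t].
# These are common textbook parameterisations, assumed for illustration.

def linear(t, a, b):
    lam = a + b * t                        # λ(t) = a + b t
    m = a * t + 0.5 * b * t * t            # m(t) = a t + b t^2 / 2
    return lam, m

def exponential(t, a, b):
    lam = a * math.exp(b * t)              # λ(t) = a e^{bt}
    m = (a / b) * (math.exp(b * t) - 1.0)  # m(t) = (a/b)(e^{bt} - 1)
    return lam, m

def power_law(t, a, b):
    lam = a * b * t ** (b - 1.0)           # λ(t) = a b t^{b-1}
    m = a * t ** b                         # m(t) = a t^b
    return lam, m

# illustrative parameter values only
for name, f, params in [("linear", linear, (0.1, 0.02)),
                        ("exponential", exponential, (0.1, 0.05)),
                        ("power law", power_law, (0.1, 1.5))]:
    lam, m = f(10.0, *params)
    print(f"{name:12s} lambda(10) = {lam:.3f}, m(10) = {m:.3f}")
```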

A Study on the Optimal Release Time Decision of a Developed Software by using Logistic Testing Effort Function (로지스틱 테스트 노력함수를 이용한 소프트웨어의 최적인도시기 결정에 관한 연구)

  • Che, Gyu-Shik;Kim, Yong-Kyung
    • Journal of Information Technology Applications and Management
    • /
    • v.12 no.2
    • /
    • pp.1-13
    • /
    • 2005
  • This paper proposes a software reliability growth model that incorporates the amount of testing effort expended during the software testing phase after development. The time-dependent behavior of testing-effort expenditure is described by a logistic curve. Assuming that the error detection rate with respect to the amount of testing effort spent during the testing phase is proportional to the current error content, the software reliability growth model is formulated as a nonhomogeneous Poisson process, and a data-analysis method for software reliability measurement is developed from it. After defining software reliability, the paper discusses the relations between testing time and reliability and between the duration following failure fixing and reliability. SRGMs in the literature have used the exponential, Rayleigh, or Weibull curve for the amount of testing effort during the testing phase, but such curves may not represent the effort-consumption pattern in every software development environment. This paper therefore shows that a logistic testing-effort function adequately expresses the software development/testing effort curve and gives good predictive capability on real failure data. (A sketch of a logistic-effort SRGM follows this entry.)

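One common way to fold a logistic testing-effort function into an NHPP SRGM is sketched below: cumulative effort W(t) = N/(1 + A e^{-αt}) drives a mean value function m(t) = a(1 - e^{-r(W(t)-W(0))}), and the conditional reliability R(x | T) = exp(-(m(T+x) - m(T))) follows from the NHPP assumption. The parameter values are illustrative, not estimates from the paper's data.

```python
import math

def W(t, N, A, alpha):
    """Cumulative logistic testing effort by time t: W(t) = N / (1 + A e^{-alpha t})."""
    return N / (1.0 + A * math.exp(-alpha * t))

def m(t, a, r, N, A, alpha):
    """Mean value function of a testing-effort-dependent SRGM (one common formulation):
    m(t) = a * (1 - exp(-r * (W(t) - W(0))))."""
    return a * (1.0 - math.exp(-r * (W(t, N, A, alpha) - W(0.0, N, A, alpha))))

def reliability(x, T, a, r, N, A, alpha):
    """Conditional reliability R(x | T) = exp(-(m(T+x) - m(T))):
    probability of no failure in (T, T+x] given testing up to T."""
    return math.exp(-(m(T + x, a, r, N, A, alpha) - m(T, a, r, N, A, alpha)))

# illustrative parameter values only (not estimates from the paper's data)
a, r = 100.0, 0.05             # expected total faults, detection rate per unit effort
N, A, alpha = 50.0, 20.0, 0.1  # logistic effort parameters
for T in (10.0, 50.0, 100.0):
    print(f"T = {T:5.1f}  m(T) = {m(T, a, r, N, A, alpha):6.2f}  "
          f"R(1 | T) = {reliability(1.0, T, a, r, N, A, alpha):.4f}")
```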

A Study on the Optimum Release Model of a Developed Software with Weibull Testing Efforts (웨이블 시험노력을 이용한 개발 소프트웨어의 최적발행 모델에 관한 연구)

  • Choe, Gyu-Sik;Jang, Yun-Seung
    • The KIPS Transactions:PartD
    • /
    • v.8D no.6
    • /
    • pp.835-842
    • /
    • 2001
  • We propose a software reliability growth model that incorporates the amount of testing effort expended during the software testing phase. The time-dependent behavior of testing-effort expenditure is described by a Weibull curve. Assuming that the error detection rate with respect to the amount of testing effort spent during the testing phase is proportional to the current error content, the software reliability growth model is formulated as a nonhomogeneous Poisson process, and a data-analysis method for software reliability measurement is developed from it. After defining software reliability, we discuss the relations between testing time and reliability and between the duration following failure fixing and reliability. The release time that minimizes the testing cost is determined by studying the cost under each condition, and the release time satisfying a specified reliability is also determined. The optimum release time is then obtained by considering the cost-based and reliability-based release times simultaneously. (A sketch of the cost-based release-time decision follows this entry.)

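The cost-based release-time decision described above can be sketched with a standard cost structure (cheaper to fix a fault before release than after, plus a cost per unit of testing effort); the cost model, Weibull effort form, and parameter values below are assumptions for illustration, not necessarily the paper's.

```python
import math

def W(t, alpha, beta, gamma):
    """Cumulative Weibull testing effort: W(t) = alpha * (1 - exp(-beta * t**gamma))."""
    return alpha * (1.0 - math.exp(-beta * t ** gamma))

def m(t, a, r, alpha, beta, gamma):
    """Mean value function m(t) = a * (1 - exp(-r * W(t)))."""
    return a * (1.0 - math.exp(-r * W(t, alpha, beta, gamma)))

def total_cost(T, T_lc, c1, c2, c3, a, r, alpha, beta, gamma):
    """A standard release-cost model (assumed form):
    c1: cost per fault fixed during testing,
    c2: cost per fault fixed after release (c2 > c1),
    c3: cost per unit of testing effort expended up to T."""
    mT = m(T, a, r, alpha, beta, gamma)
    m_lc = m(T_lc, a, r, alpha, beta, gamma)
    return c1 * mT + c2 * (m_lc - mT) + c3 * W(T, alpha, beta, gamma)

# illustrative parameters only
a, r = 100.0, 0.05
alpha, beta, gamma = 100.0, 0.01, 1.5
c1, c2, c3, T_lc = 1.0, 5.0, 0.5, 500.0

# crude grid search for the cost-minimising release time on (0, 500]
best_T = min((T / 10.0 for T in range(1, 5001)),
             key=lambda T: total_cost(T, T_lc, c1, c2, c3, a, r, alpha, beta, gamma))
print(f"cost-minimising release time ~ {best_T:.1f}, "
      f"cost = {total_cost(best_T, T_lc, c1, c2, c3, a, r, alpha, beta, gamma):.2f}")
```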

Reasonability of Logistic Curve on S/W (로지스틱 곡선을 이용한 타당성)

  • Kim, Sun-Il;Che, Gyu-Shik;Jo, In-June
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.1
    • /
    • pp.1-9
    • /
    • 2008
  • The logistic curve is studied as the most desirable description of software testing effort. Assuming that the error detection rate with respect to the amount of testing effort spent during the testing phase is proportional to the current error content, a software reliability growth model is formulated as a nonhomogeneous Poisson process, and a data-analysis method for software reliability measurement is developed from it. After defining software reliability, the paper discusses the relations between testing time and reliability and between the duration following failure fixing and reliability. SRGMs in the literature have used the exponential, Rayleigh, or Weibull curve for the amount of testing effort during the testing phase, but such curves may not represent the effort-consumption pattern in every software development environment. This paper therefore shows that a logistic testing-effort function adequately expresses the software development/testing effort curve and gives good predictive capability on real failure data. (A sketch of fitting a logistic effort curve follows this entry.)
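
A quick way to check how well a logistic curve tracks effort consumption is to fit W(t) = N/(1 + A e^{-αt}) to cumulative effort observations by least squares, as sketched below using scipy's curve_fit on synthetic data (the paper's conclusion rests on real failure data, which is not reproduced here).

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_effort(t, N, A, alpha):
    """Cumulative logistic testing-effort curve W(t) = N / (1 + A * exp(-alpha * t))."""
    return N / (1.0 + A * np.exp(-alpha * t))

# hypothetical weekly cumulative testing-effort observations (person-hours) --
# synthetic data for illustration only
weeks = np.arange(1, 13, dtype=float)
effort = np.array([4.1, 7.8, 14.2, 24.0, 36.5, 49.8,
                   61.0, 69.3, 74.6, 77.8, 79.5, 80.4])

# least-squares fit of the three logistic parameters
(N_hat, A_hat, alpha_hat), _ = curve_fit(logistic_effort, weeks, effort,
                                         p0=[80.0, 20.0, 0.5])
residuals = effort - logistic_effort(weeks, N_hat, A_hat, alpha_hat)
print(f"N = {N_hat:.1f}, A = {A_hat:.1f}, alpha = {alpha_hat:.3f}, "
      f"SSE = {float(np.sum(residuals ** 2)):.2f}")
```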

A Comparison Study between Uniform Testing Effort and Weibull Testing Effort during Software Development (소프트웨어 개발시 일정테스트노력과 웨이불 테스트 노력의 비교 연구)

  • 최규식;장원석;김종기
    • Journal of Information Technology Application
    • /
    • v.3 no.3
    • /
    • pp.91-106
    • /
    • 2001
  • We propose a software reliability growth model that incorporates uniform and Weibull testing efforts during the software testing phase. The time-dependent behavior of testing effort is described by uniform and Weibull curves. Assuming that the error detection rate with respect to the amount of testing effort spent during the testing phase is proportional to the current error content, the model is formulated as a nonhomogeneous Poisson process, and a data-analysis method for software reliability measurement is developed from it. The optimum release time is determined by considering the initial reliability $R(x \mid 0)$ relative to the objective $R_0$. For uniform testing effort the cases are $R(x \mid 0) > R_0$, $R_0 > R(x \mid 0) > R_0^d$, and $R(x \mid 0) < R_0^d$, the ideal case being $R_0 > R(x \mid 0) > R_0^d$. Likewise, for Weibull testing effort the cases are $R(x \mid 0) \geq R_0$, $R_0 > R(x \mid 0) >$ (equation omitted), and $R(x \mid 0) <$ (equation omitted), the ideal case being $R_0 > R(x \mid 0) >$ (equation omitted). (A sketch of the reliability-based release decision follows this entry.)

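The reliability-based release decision above compares the attainable conditional reliability with the objective $R_0$. The sketch below computes R(x0 | T) = exp(-(m(T+x0) - m(T))) under constant-rate (uniform) and Weibull effort curves and searches for the earliest T meeting the objective; the mean value functions and parameter values are assumed forms for illustration, not the paper's.

```python
import math

def m_uniform(t, a, r, k):
    """Mean value function under uniform (constant-rate) testing effort W(t) = k t."""
    return a * (1.0 - math.exp(-r * k * t))

def m_weibull(t, a, r, alpha, beta, gamma):
    """Mean value function under Weibull testing effort W(t) = alpha(1 - e^{-beta t^gamma})."""
    return a * (1.0 - math.exp(-r * alpha * (1.0 - math.exp(-beta * t ** gamma))))

def release_time(m_func, x0, R0, t_max=1000.0, step=0.1):
    """Smallest T with conditional reliability R(x0 | T) = exp(-(m(T+x0) - m(T))) >= R0."""
    T = 0.0
    while T <= t_max:
        if math.exp(-(m_func(T + x0) - m_func(T))) >= R0:
            return T
        T += step
    return None  # requirement not attainable within t_max

# illustrative parameters and reliability objective only
a, r, k = 100.0, 0.05, 1.0
alpha, beta, gamma = 100.0, 0.01, 1.5
x0, R0 = 1.0, 0.95

T_uni = release_time(lambda t: m_uniform(t, a, r, k), x0, R0)
T_wei = release_time(lambda t: m_weibull(t, a, r, alpha, beta, gamma), x0, R0)
print(f"release time (uniform effort): {T_uni}")
print(f"release time (Weibull effort): {T_wei}")
```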