• Title/Summary/Keyword: 2-Poisson distribution model (2-포아송 분포모형)


The Study for ENHPP Software Reliability Growth Model Based on Kappa(2) Coverage Function (Kappa(2) 커버리지 함수를 이용한 ENHPP 소프트웨어 신뢰성장모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.12
    • /
    • pp.2311-2318
    • /
    • 2007
  • Finite-failure NHPP models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. Accurate prediction of software release times and estimation of the reliability and availability of a software product require a critical element of the software testing process: test coverage. The model that incorporates coverage is called the enhanced non-homogeneous Poisson process (ENHPP). In this paper, the exponential and S-shaped coverage models are reviewed and the Kappa coverage model is proposed as an efficient application for software reliability. Parameters are estimated with the maximum likelihood estimator and the bisection method, and model selection is based on the SSE statistic and the Kolmogorov distance. Numerical examples using a real data set illustrate the proposed Kappa coverage model, and the failure-data analysis compares the Kappa coverage model with existing models using arithmetic and Laplace trend tests and bias tests. A brief sketch of the ENHPP mean value function follows.
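The following is a minimal sketch of the ENHPP construction, in which the mean value function is m(t) = a·c(t) for a test-coverage function c(t) and expected total fault count a. The exponential coverage form is used purely as an illustration; the paper's Kappa(2) coverage function is not reproduced in the abstract, so its exact form is not shown here.

```python
import numpy as np

# ENHPP sketch: m(t) = a * c(t), with c(t) a test-coverage function and
# 'a' the expected total number of faults. Exponential coverage is used
# here only as an illustrative stand-in for the paper's Kappa(2) coverage.

def coverage_exponential(t, b):
    """Exponential coverage function c(t) = 1 - exp(-b*t)."""
    return 1.0 - np.exp(-b * t)

def enhpp_mean(t, a, b, coverage=coverage_exponential):
    """Expected cumulative number of failures observed by time t."""
    return a * coverage(t, b)

def enhpp_intensity(t, a, b):
    """Failure intensity lambda(t) = a * c'(t) for exponential coverage."""
    return a * b * np.exp(-b * t)

if __name__ == "__main__":
    t = np.linspace(0.0, 100.0, 6)
    print(enhpp_mean(t, a=120.0, b=0.03))       # expected failures so far
    print(enhpp_intensity(t, a=120.0, b=0.03))  # instantaneous failure rate
```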

Heat-Wave Data Analysis based on the Zero-Inflated Regression Models (영-과잉 회귀모형을 활용한 폭염자료분석)

  • Kim, Seong Tae;Park, Man Sik
    • Journal of the Korean Data Analysis Society
    • /
    • v.20 no.6
    • /
    • pp.2829-2840
    • /
    • 2018
  • A random variable whose boundary value is observed more frequently than expected under its assumed distribution is called semi-continuous when the distribution is continuous and zero-inflated when the distribution is discrete; in other words, the boundary value occurs in practice more often than it should theoretically under the assumed probability distribution. In this study, we introduce the two-part model, which consists of one part modelling the binary response and another part modelling the variable above the boundary value. In particular, zero-inflated regression models are explained using the Poisson distribution and the negative binomial distribution. In the real-data analysis, we employ zero-inflated regression models to estimate the number of days under extreme heat-wave circumstances in South Korea during the last 10 years. Based on the estimation results, we create prediction maps of the estimated number of days under heat-wave advisory and heat-wave warning using universal kriging, one of the spatial prediction methods. An illustrative zero-inflated Poisson regression sketch follows.
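Below is a small sketch of a zero-inflated Poisson regression of the kind described in the abstract, using statsmodels. The covariate and response are simulated, not the paper's heat-wave station data, and the intercept-only inflation model is an assumption made for brevity.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Simulated zero-inflated counts: a structural-zero component plus a
# Poisson count component whose mean depends on one covariate.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
lam = np.exp(0.5 + 0.8 * x)                  # Poisson mean for the count part
p_zero = 0.3                                 # probability of a structural zero
y = np.where(rng.random(n) < p_zero, 0, rng.poisson(lam))

X = sm.add_constant(x)
model = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)), inflation='logit')
result = model.fit(maxiter=200, disp=False)
print(result.summary())
```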

The Comparative Study of NHPP Software Reliability Model Based on Exponential and Inverse Exponential Distribution (지수 및 역지수 분포를 이용한 NHPP 소프트웨어 무한고장 신뢰도 모형에 관한 비교연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.9 no.2
    • /
    • pp.133-140
    • /
    • 2016
  • Software reliability is an important issue in the software development process, and software process improvement helps deliver a reliable software product. Infinite-failure NHPP software reliability models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, we propose reliability models based on the exponential and inverse exponential distributions as efficient applications for software reliability. Parameters are estimated with the maximum likelihood estimator and the bisection method, and model selection is based on the mean square error (MSE) and the coefficient of determination ($R^2$). Failure analysis using a real data set is carried out to compare the properties of the exponential and inverse exponential distributions, and the Laplace trend test is employed to confirm the reliability of the data. The inverse exponential distribution model is also efficient in terms of reliability (its coefficient of determination is 80% or more) and can be used as an alternative to the conventional model. Software developers should therefore consider the lifetime distribution, informed by prior knowledge of the software, to identify failure modes. A sketch of the MSE and $R^2$ selection criteria follows.
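The following sketch shows the two model-selection criteria named in the abstract, computed between observed cumulative failure counts and a model's fitted mean value function. The fitted values are placeholders, not the paper's estimates, and the MSE here divides by (n - k) with k model parameters, a common convention in this literature.

```python
import numpy as np

def mse(observed, fitted, n_params):
    """Mean square error between observed counts and fitted mean values."""
    observed, fitted = np.asarray(observed), np.asarray(fitted)
    return np.sum((observed - fitted) ** 2) / (len(observed) - n_params)

def r_squared(observed, fitted):
    """Coefficient of determination R^2."""
    observed, fitted = np.asarray(observed), np.asarray(fitted)
    ss_res = np.sum((observed - fitted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

if __name__ == "__main__":
    observed = np.array([3, 7, 12, 18, 22, 27, 30])              # cumulative failures
    fitted = np.array([2.8, 7.5, 12.4, 17.1, 22.3, 26.6, 30.9])  # placeholder m(t_i)
    print(mse(observed, fitted, n_params=2), r_squared(observed, fitted))
```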

A Comparative Study on the Infinite NHPP Software Reliability Model Following Chi-Square Distribution with Lifetime Distribution Dependent on Degrees of Freedom (수명분포가 자유도에 의존한 카이제곱분포를 따르는 무한고장 NHPP 소프트웨어 신뢰성 모형에 관한 비교연구)

  • Kim, Hee-Cheul;Kim, Jae-Wook
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.10 no.5
    • /
    • pp.372-379
    • /
    • 2017
  • Software reliability is a fundamental factor in the software development process. In the infinite-failure NHPP setting for identifying software failures, the occurrence rate per fault (hazard function) can be constant, increasing, or decreasing. In this paper, we propose a reliability model using the chi-square distribution, whose shape depends on the degrees of freedom, as an efficient application for software reliability. Parameters are estimated with the maximum likelihood estimator and the bisection method, and model selection is based on the mean square error (MSE) and the coefficient of determination ($R^2$). Failure analysis of the proposed chi-square model is carried out using actual failure-interval data, and the fault data are compared through the intensity functions for different degrees of freedom. To confirm the reliability of the data, the Laplace trend test is employed; a sketch of this test follows. The chi-square model depending on the degrees of freedom is efficient in terms of reliability (its coefficient of determination is 90% or more) and can be used as an applied model alongside the basic model. Software developers should therefore apply a lifetime distribution, informed by basic knowledge of the software, to identify the relevant failure modes.
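A minimal sketch of the Laplace trend test mentioned above, for failure times observed on an interval (0, T]. The failure times used in the example are hypothetical, not the paper's data set.

```python
import numpy as np

def laplace_trend(failure_times, T):
    """Laplace trend statistic for event times on (0, T].
    Values near 0 suggest a stable (HPP-like) process; strongly negative
    values suggest reliability growth, strongly positive values deterioration."""
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    return (t.mean() - T / 2.0) / (T * np.sqrt(1.0 / (12.0 * n)))

if __name__ == "__main__":
    times = [12.0, 31.0, 55.0, 94.0, 148.0, 221.0, 310.0]  # hypothetical failure times
    print(laplace_trend(times, T=400.0))
```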

Spatial Analyses and Modeling of Landscape Dynamics (지표면 변화 탐색 및 예측 시스템을 위한 공간 모형)

  • 정명희;윤의중
    • Spatial Information Research
    • /
    • v.11 no.3
    • /
    • pp.227-240
    • /
    • 2003
  • The primary focus of this study is to provide a general methodology for understanding and analyzing environmental issues such as long-term ecosystem dynamics and land use/cover change through the development of 2D dynamic landscape models and model-based simulation. Change processes in land cover and ecosystem function can be understood in terms of the spatial and temporal distribution of land-cover resources. In developing a system to understand the major processes of change and obtain predictive information, spatial heterogeneity must first be taken into account, because landscape spatial pattern affects land-cover change and the interaction between different land-cover types; the relationship between pattern and process therefore has to be included in the research. Landscape modeling requires different approaches depending on the definitions, assumptions, and rules employed for the mechanisms behind processes such as spatial event processes, land degradation, deforestation, desertification, and change in an urban environment. Rule-based models for land-cover change caused by natural fires are described in the paper; a minimal rule-based sketch follows. Finally, a case study is presented as an example of using spatial modeling and simulation to study and synthesize patterns and processes at scales ranging from fine to global.
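To illustrate the idea of a rule-based model of fire-driven land-cover change, here is a minimal cellular-automaton sketch: a burning cell ignites each vegetated neighbor with a fixed probability and then burns out. This is an illustrative toy rule, not the model developed in the paper.

```python
import numpy as np

EMPTY, VEG, FIRE = 0, 1, 2  # land-cover states on the grid

def step(grid, p_spread, rng):
    """One simulation step: fire spreads to 4-neighbors, burnt cells empty out."""
    new = grid.copy()
    for r, c in np.argwhere(grid == FIRE):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]
                    and grid[rr, cc] == VEG and rng.random() < p_spread):
                new[rr, cc] = FIRE
        new[r, c] = EMPTY
    return new

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    g = np.full((20, 20), VEG)
    g[10, 10] = FIRE                 # single ignition point
    for _ in range(15):
        g = step(g, p_spread=0.4, rng=rng)
    print((g == EMPTY).sum(), "cells burnt")
```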


A Comparative Study for NHPP Software Reliability Model based on the Shape Parameter of Flexible Weibull Extension Distribution (유연한 와이블 확장분포의 형상모수를 이용한 NHPP 소프트웨어 신뢰성 모형에 관한 비교연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.9 no.2
    • /
    • pp.141-147
    • /
    • 2016
  • NHPP software reliability models for failure analysis presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, infinite-failure NHPP models, which reflect the situation in which software failures are repaired at the point in time they occur, are presented and their properties compared. An infinite-failure software reliability model based on the flexible Weibull extension distribution, which is commonly used in the field of software reliability, is examined for this comparison; the result is that a relatively small shape parameter was effective. Parameter estimation was conducted using maximum likelihood estimation, and model selection was performed using the mean square error and the coefficient of determination. This research should help software developers identify, to some extent, the failure properties associated with the shape parameter. A sketch of the flexible Weibull extension hazard follows.
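The sketch below shows the flexible Weibull extension distribution in its commonly used form (Bebbington et al.), F(t) = 1 - exp(-exp(a·t - b/t)), and its hazard function, so the effect of the shape parameters can be inspected. The parameter values are illustrative, not the paper's estimates, and the paper's exact parameterization may differ.

```python
import numpy as np

# Flexible Weibull extension: F(t) = 1 - exp(-exp(a*t - b/t)),
# hazard h(t) = (a + b/t**2) * exp(a*t - b/t).

def fwe_cdf(t, a, b):
    return 1.0 - np.exp(-np.exp(a * t - b / t))

def fwe_hazard(t, a, b):
    return (a + b / t**2) * np.exp(a * t - b / t)

if __name__ == "__main__":
    t = np.linspace(0.5, 20.0, 5)
    for a, b in [(0.05, 2.0), (0.2, 2.0)]:   # smaller vs. larger shape parameter
        print(a, b, fwe_hazard(t, a, b).round(4))
```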

Extreme Quantile Estimation of Losses in KRW/USD Exchange Rate (원/달러 환율 투자 손실률에 대한 극단분위수 추정)

  • Yun, Seok-Hoon
    • Communications for Statistical Applications and Methods
    • /
    • v.16 no.5
    • /
    • pp.803-812
    • /
    • 2009
  • The application of extreme value theory to financial data is a fairly recent innovation. The classical annual maximum method fits the generalized extreme value distribution to the annual maxima of a data series. An alternative modern method, the so-called threshold method, fits the generalized Pareto distribution to the excesses over a high threshold from the data series. A more substantial variant takes the point-process viewpoint of high-level exceedances: the exceedance times and excess values over a high threshold are viewed as a two-dimensional point process whose limiting form is a non-homogeneous Poisson process. In this paper, we apply the two-dimensional non-homogeneous Poisson process model to daily losses, i.e., daily negative log-returns, in the KRW/USD exchange rate series collected from January 4th, 1982 until December 31st, 2008. The main question is how to estimate extreme quantiles of losses such as the 10-year or 50-year return level; a threshold-method sketch follows.
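Below is a sketch of the threshold method mentioned in the abstract: a generalized Pareto distribution is fitted to the excesses of daily losses over a high threshold and a return level is computed. The losses are simulated stand-ins, not the KRW/USD series, and the formula assumes a non-zero shape parameter and roughly 250 trading days per year.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=7000) * 0.005   # simulated daily losses

u = np.quantile(losses, 0.95)                       # high threshold
excesses = losses[losses > u] - u
xi, _, sigma = genpareto.fit(excesses, floc=0)      # GPD shape xi, scale sigma
zeta_u = np.mean(losses > u)                        # threshold exceedance probability

def return_level(years, obs_per_year=250):
    """m-observation return level, m = years * obs_per_year (xi != 0 case)."""
    m = years * obs_per_year
    return u + (sigma / xi) * ((m * zeta_u) ** xi - 1.0)

print("10-year:", return_level(10), "50-year:", return_level(50))
```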

Overdispersion in count data - a review (가산자료(count data)의 과산포 검색: 일반화 과정)

  • 김병수;오경주;박철용
    • The Korean Journal of Applied Statistics
    • /
    • v.8 no.2
    • /
    • pp.147-161
    • /
    • 1995
  • The primary objective of this paper is to review parametric models and test statistics related to overdispersion of count data. The Poisson or binomial assumption often fails to explain overdispersion. We review real examples of overdispersed count data from toxicological and teratological experiments, as well as several models that have been suggested for handling extra-binomial variation or hyper-Poisson variability, and we note how these models have been generalized and further developed. The approaches suggested for overdispersion fall into two broad categories: one is to develop a parametric model for it, and the other is to assume a particular relationship between the variance and the mean of the response variable and to derive a score test statistic for detecting the overdispersion. Recently, Dean (1992) derived a general score test statistic for detecting overdispersion in the exponential family; a sketch of such a score test follows.
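The sketch below illustrates a score test for overdispersion against a Poisson null in the spirit of Dean's work: T = Σ[(y - μ̂)² - y] / sqrt(2 Σ μ̂²), which is approximately standard normal under the Poisson model. The data are simulated (deliberately overdispersed), and this specific statistic is one common variant, not necessarily the exact form in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
mu_true = np.exp(0.3 + 0.6 * x)
# Negative binomial counts with mean mu_true (dispersion 2) -> overdispersed
y = rng.negative_binomial(2, 2.0 / (2.0 + mu_true))

X = sm.add_constant(x)
mu_hat = sm.GLM(y, X, family=sm.families.Poisson()).fit().fittedvalues

# Score statistic: large positive values indicate overdispersion
T = np.sum((y - mu_hat) ** 2 - y) / np.sqrt(2.0 * np.sum(mu_hat ** 2))
print("score statistic:", T)
```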


Identifying the Effects of Drivers' Behavior on Habitual Drunk Driving with Truncated Count Data Model (절단된 가산자료모형을 이용한 상습 음주운전자들의 습관적 음주운전 행태분석)

  • Yang, Si-Hun;Kim, Do-Gyeong
    • Journal of Korean Society of Transportation
    • /
    • v.29 no.5
    • /
    • pp.7-17
    • /
    • 2011
  • Traffic problems caused by drunk drivers have long been a persistent concern. Even though previous research has focused on developing countermeasures to prevent drunk driving, the number of drivers violating the DUI (driving under the influence) regulation is still increasing. Many studies seek countermeasures by comparing general and drunk drivers, but few have focused solely on the characteristics of drunk drivers. It is well known that the characteristics of general drivers differ from those of drunk drivers, and that habitual drunk drivers differ from non-habitual ones. Motivated by this fact, only drivers who have violated the DUI regulation are considered in the analysis. This study primarily aims to provide alternative solutions for reducing the number of habitual drunk drivers who are highly inclined to drive drunk repeatedly. For the analysis, various variables potentially affecting drunk-driving behavior were investigated, and truncated count data models were developed to analyze the effects of the selected variables on drunk driving (a truncation sketch follows). The results showed that 1) a truncated negative binomial model fits the data better; and 2) five variables, including experiential learning, lack of self-control, self-reflection, fear of crackdown, and the level of dependence on vehicles, were found to be statistically significant.
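To show the truncation idea, here is a minimal zero-truncated Poisson regression fitted by maximum likelihood on simulated data: the count model is conditioned on y ≥ 1, mimicking the fact that only drivers with at least one recorded violation enter the sample. The paper's preferred specification is a truncated negative binomial; the Poisson case is used here only because its truncated likelihood is simpler to write out.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
n = 600
x = rng.normal(size=n)
y_full = rng.poisson(np.exp(0.4 + 0.7 * x))
keep = y_full > 0                               # only "observed" offenders remain
x, y = x[keep], y_full[keep]
X = np.column_stack([np.ones(len(x)), x])

def negloglik(beta):
    mu = np.exp(X @ beta)
    # log P(Y=y | Y>0) = y*log(mu) - mu - log(y!) - log(1 - exp(-mu))
    return -np.sum(y * np.log(mu) - mu - gammaln(y + 1) - np.log1p(-np.exp(-mu)))

fit = minimize(negloglik, x0=np.zeros(2), method="BFGS")
print("coefficients:", fit.x)
```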

Evaluation of extreme rainfall estimation obtained from NSRP model based on the objective function with statistical third moment (통계적 3차 모멘트 기반의 목적함수를 이용한 NSRP 모형의 극치강우 재현능력 평가)

  • Cho, Hemie;Kim, Yong-Tak;Yu, Jae-Ung;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association
    • /
    • v.55 no.7
    • /
    • pp.545-556
    • /
    • 2022
  • It is recommended to use long-term hydrometeorological data spanning more than the service life of hydraulic structures for water resource planning. To extend rainfall records, stochastic simulation models such as the Modified Bartlett-Lewis Rectangular Pulse (BLRP) and Neyman-Scott Rectangular Pulse (NSRP) models have been widely used. The optimal parameters of such a model can be estimated by repeatedly comparing statistical moments, defined through a combination of the parameters of the probability distribution, in an optimization context. However, parameter estimation from relatively few observed rainfall statistics is an ill-posed problem, which increases uncertainty in the parameter estimation process. In addition, as shown in previous studies, extreme values are underestimated because objective functions are typically defined by the first and second statistical moments (i.e., mean and variance). In this regard, this study estimates the parameters of the NSRP model using an objective function that includes the third moment and compares it with the existing approach based on the first and second moments in terms of extreme rainfall estimation; a sketch of such an objective function follows. The first and second moments did not differ significantly depending on whether skewness was included in the objective function, but the proposed model showed significantly improved performance in estimating design rainfalls.
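The following sketch shows the general shape of the objective function discussed in the abstract: a weighted sum of squared relative errors between observed and model rainfall statistics, extended from the mean and variance to include the third moment (skewness). The `model_stats` function is a placeholder for the NSRP analytical moment equations, which are not reproduced here, and the example numbers are purely illustrative.

```python
import numpy as np

def objective(params, observed_stats, model_stats, weights=None):
    """Weighted sum of squared relative errors between observed statistics
    (e.g. mean, variance, skewness) and the model's statistics at `params`."""
    obs = np.asarray(observed_stats, dtype=float)
    mod = np.asarray(model_stats(params), dtype=float)
    w = np.ones_like(obs) if weights is None else np.asarray(weights, dtype=float)
    return np.sum(w * ((mod - obs) / obs) ** 2)

if __name__ == "__main__":
    observed = [2.1, 15.3, 3.8]                  # illustrative mean, variance, skewness
    dummy_model = lambda p: [p[0], p[1], p[2]]   # stand-in for NSRP moment equations
    print(objective([2.0, 14.0, 3.0], observed, dummy_model))
```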