Title/Summary/Keyword: censored sample


Model for Process Quality Assurance When the Fraction Nonconforming is Very Small (극소불량 공정보증을 위한 모형연구)

  • Jong-Gurl Kim
    • Proceedings of the Safety Management and Science Conference / 1999.11a / pp.247-257 / 1999
  • There are several models for process quality assurance, such as quality systems (ISO 9000), process capability analysis, and acceptance control charts. When a high level of process capability has been achieved, it takes a long time to detect a process shift, so a quicker monitoring system is sometimes needed. To provide a quicker quality assurance model for a high-reliability process, this paper presents a model for process quality assurance when the fraction nonconforming is very small. We design an acceptance control chart based on a variable quality characteristic and time-censored accelerated testing. The characteristic is assumed to follow a normal or lognormal distribution whose location parameter is a linear function of the stress. The design parameters are the sample size, the control limits, and the sample proportions allocated to the low stress. These parameters are obtained by minimizing the relative variance of the MLE of the location parameter subject to APL and RPL constraints.
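
As a rough illustration of the kind of estimation involved, the sketch below computes the MLE of a location parameter that is linear in the stress from Type-I (time-)censored normal data; the data, stress levels, and censoring time are purely illustrative, not the paper's design.

```python
# Minimal sketch: MLE of a stress-dependent location parameter from
# Type-I (time-)censored normal data. All names and values are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
stress = np.repeat([1.0, 2.0], 50)           # low/high stress levels
y_true = 10.0 - 2.0 * stress + rng.normal(0, 1.0, stress.size)
cutoff = 8.0                                  # censoring time
observed = np.minimum(y_true, cutoff)
event = y_true <= cutoff                      # True = exact observation

def neg_loglik(theta):
    a, b, log_sigma = theta
    mu, sigma = a + b * stress, np.exp(log_sigma)
    ll_exact = norm.logpdf(observed[event], mu[event], sigma)
    ll_cens = norm.logsf(cutoff, mu[~event], sigma)   # P(Y > cutoff)
    return -(ll_exact.sum() + ll_cens.sum())

fit = minimize(neg_loglik, x0=[8.0, -1.0, 0.0], method="Nelder-Mead")
print(fit.x)   # estimates of a, b, log(sigma)
```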


The wage determinants applying sample selection bias (표본선택 편의를 반영한 임금결정요인 분석)

  • Park, Sungik; Cho, Jangsik
    • Journal of the Korean Data and Information Science Society / v.27 no.5 / pp.1317-1325 / 2016
  • The purpose of this paper is to examine the factors affecting the wages of vocational high school graduates. We particularly examine the effectiveness of controlling for sample selection bias by employing the Tobit model and the Heckman sample selection model. The major results are as follows. First, both the Tobit model and the Heckman sample selection model, which control for sample selection bias, are statistically significant, and all the independent variables are consistent with the theoretical model. Second, gender is statistically significant for both the probability of employment and the wage. Third, the employment probability and wages of Meister high school graduates are higher than those of all other graduates. Fourth, the higher the parents' income, the higher both the employment probability and the wage. Finally, parents' education level, high school grades, satisfaction, and the number of licenses are statistically significant for both the probability of employment and wages.
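
For readers unfamiliar with the first of the two models, the sketch below fits a Tobit (left-censored at zero) regression by maximum likelihood; the simulated data and variable names are illustrative only and do not reproduce the paper's specification.

```python
# Minimal Tobit (left-censored at 0) sketch via maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
beta_true, sigma_true = np.array([0.5, 1.0]), 1.0
y_star = X @ beta_true + rng.normal(0, sigma_true, n)    # latent wage
y = np.maximum(y_star, 0.0)                              # observed wage, censored at 0

def neg_loglik(theta):
    beta, sigma = theta[:-1], np.exp(theta[-1])
    xb = X @ beta
    uncens = y > 0
    ll = norm.logpdf(y[uncens], xb[uncens], sigma).sum()
    ll += norm.logcdf(-xb[~uncens] / sigma).sum()         # P(y* <= 0) for censored cases
    return -ll

fit = minimize(neg_loglik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
beta_hat, sigma_hat = fit.x[:-1], np.exp(fit.x[-1])
print(beta_hat, sigma_hat)
```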

The wage determinants of college graduates using Heckman's sample selection model (Heckman의 표본선택모형을 이용한 대졸자의 임금결정요인 분석)

  • Cho, Jangsik
    • Journal of the Korean Data and Information Science Society / v.28 no.5 / pp.1099-1107 / 2017
  • In this study, we analyze the determinants of the wages of college graduates using data from the "2014 Graduates Occupational Mobility Survey" conducted by the Korea Employment Information Service. Wage data combine two pieces of information: whether an individual is employed and, if employed, the size of the wage. In many previous studies of wage determinants, however, sample selection bias arises because a linear regression is fitted using only the information on wage size. To overcome this problem, we use the Heckman sample selection model. The main results are summarized as follows. First, the Heckman sample selection model is statistically valid. Males have a significantly higher employment probability and wage than females. As age and parents' income increase, both the probability of employment and the size of the wage increase. Finally, as university satisfaction and the number of certifications acquired increase, both the probability of employment and the wage tend to increase.
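
The sketch below illustrates the classic Heckman two-step procedure (a probit employment equation, the inverse Mills ratio, then OLS on the employed subsample); the simulated covariates are stand-ins for the survey variables and the code is not the paper's estimation.

```python
# Heckman two-step sketch: probit selection equation, then OLS on the selected
# sample augmented with the inverse Mills ratio. Data and names are illustrative.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 2000
z = rng.normal(size=n)                        # selection covariate (e.g. parental income)
x = rng.normal(size=n)                        # wage covariate (e.g. certifications)
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
employed = (0.5 + 1.0 * z + u[:, 0] > 0)      # selection equation
wage = 1.0 + 0.8 * x + u[:, 1]                # outcome equation
wage[~employed] = np.nan                      # wage observed only if employed

# Step 1: probit for employment, then the inverse Mills ratio.
Z = sm.add_constant(z)
probit = sm.Probit(employed.astype(float), Z).fit(disp=0)
xb = Z @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS of wage on covariates plus the IMR, employed subsample only.
Xw = sm.add_constant(np.column_stack([x[employed], imr[employed]]))
ols = sm.OLS(wage[employed], Xw).fit()
print(ols.params)   # const, x, IMR coefficient (the selection effect)
```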

Two-sample chi-square test for randomly censored data (임의로 관측중단된 두 표본 자료에 대한 카이제곱 검정방법)

  • 김주한; 김정란
    • The Korean Journal of Applied Statistics / v.8 no.2 / pp.109-119 / 1995
  • A two-sample chi-square test is introduced for testing the equality of the distributions of two populations when observations are subject to random censorship. The statistic is appropriate for testing problems where a two-sided alternative is of interest. Under the null hypothesis, the asymptotic distribution of the statistic is chi-square. We obtain two types of chi-square statistics: one as a nonnegative definite quadratic form in the differences of observed cell probabilities based on the product-limit estimators, and the other as a summation form. Data from a cancer chemotherapy experiment are examined with these statistics.
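
The main ingredient of both statistics is the vector of cell probabilities estimated from the product-limit (Kaplan-Meier) estimator in each sample; the sketch below computes that difference vector for two simulated censored samples using lifelines. The cut points and data are illustrative, and the covariance estimate needed for the actual quadratic-form statistic is omitted.

```python
# Product-limit (Kaplan-Meier) cell probabilities for two randomly censored samples.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)

def censored_sample(scale, n=120):
    t = rng.exponential(scale, n)            # true survival times
    c = rng.exponential(2.0 * scale, n)      # random censoring times
    return np.minimum(t, c), (t <= c)        # observed time, event indicator

cuts = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # cell boundaries (illustrative)

def cell_probs(times, events):
    km = KaplanMeierFitter().fit(times, events)
    surv = km.survival_function_at_times(cuts).to_numpy()
    return surv[:-1] - surv[1:]              # P(cut_{j-1} < T <= cut_j)

p1 = cell_probs(*censored_sample(1.0))
p2 = cell_probs(*censored_sample(1.3))
print(p1 - p2)   # difference vector on which the quadratic-form statistic is built
```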


Optimum failure-censored step-stress partially accelerated life test for the truncated logistic life distribution

  • Srivastava, P.W.; Mittal, N.
    • International Journal of Reliability and Applications / v.13 no.1 / pp.19-35 / 2012
  • This paper presents an optimum design of a step-stress partially accelerated life test (PALT) plan that allows the test condition to be changed from the use condition to the accelerated condition upon the occurrence of a fixed number of failures. Various life distribution models, such as the exponential, Weibull, log-logistic, and Burr type-XII, have been used in the literature to analyze PALT data. Different life distribution models are needed because, with the limited data that typically arise for modern high-reliability devices, using the correct life distribution helps prevent unnecessary and expensive planned replacements. Truncated distributions arise when sample selection is not possible in some sub-region of the sample space. In this paper it is assumed that the lifetimes of the items follow a logistic distribution truncated at zero, since the time to failure of an item cannot be negative. The optimum step-stress PALT plan, which finds the optimal proportion of units failed at the normal use condition, is determined using the D-optimality criterion. The method is illustrated with a numerical example, and a sensitivity analysis and a comparative study are also carried out.
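
As a small illustration of the lifetime model, the sketch below evaluates the density, survival function, and hazard of a logistic distribution truncated at zero; the location and scale values are arbitrary and the code does not implement the D-optimal design itself.

```python
# Logistic distribution truncated at zero: density, survival function, hazard.
import numpy as np
from scipy.stats import logistic

mu, s = 5.0, 2.0                   # location and scale of the untruncated logistic
t = np.linspace(0.01, 15, 200)

norm_const = logistic.sf(0.0, mu, s)               # P(T > 0) under the untruncated model
pdf_trunc = logistic.pdf(t, mu, s) / norm_const    # density truncated at zero
sf_trunc = logistic.sf(t, mu, s) / norm_const      # survival function truncated at zero
hazard = pdf_trunc / sf_trunc                       # hazard rate h(t)

print(hazard[:5])
```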


Parametric survival model based on the Lévy distribution

  • Valencia-Orozco, Andrea; Tovar-Cuevas, Jose R.
    • Communications for Statistical Applications and Methods / v.26 no.5 / pp.445-461 / 2019
  • Existing distributions do not always fit data with sufficient precision, so this article presents a methodology that enables the use of families of asymmetric distributions as alternative probabilistic models for survival analysis with right censoring, different from those usually studied (the exponential, gamma, Weibull, and lognormal distributions). We use a more flexible parametric model in terms of density behavior, assuming that the data can be fitted by a member of the stable distribution family, which is unconventional in the analysis of survival data but appropriate when extreme values occur with small probabilities that should not be ignored. The methodology includes the derivation of the analytical expression for the hazard (risk) function h(t) of the Lévy distribution, which is not usually reported in the literature. A simulation was conducted to evaluate the performance of the candidate distribution when modeling survival times, including parameter estimation via maximum likelihood, the survival function Ŝ(t), and the Kaplan-Meier estimator. The estimates did not exhibit significant changes for different sample sizes and censoring fractions. To illustrate the usefulness of the proposed methodology, an application to real data on the survival times of patients with colon cancer is considered.
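
A minimal sketch of the two quantities emphasized above, assuming SciPy's levy distribution: the hazard h(t) = f(t)/S(t) and a maximum likelihood fit of the scale parameter to right-censored data. The simulated sample and follow-up time are illustrative.

```python
# Hazard of the Lévy distribution and an MLE fit to right-censored data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import levy

t = np.linspace(0.1, 10, 100)
hazard = levy.pdf(t, loc=0, scale=1) / levy.sf(t, loc=0, scale=1)   # h(t) = f(t)/S(t)

# Right-censored sample: censor everything beyond a fixed follow-up time.
raw = levy.rvs(loc=0, scale=1.5, size=300, random_state=42)
follow_up = 8.0
times = np.minimum(raw, follow_up)
event = raw <= follow_up

def neg_loglik(log_scale):
    scale = np.exp(log_scale[0])
    ll = levy.logpdf(times[event], scale=scale).sum()
    ll += levy.logsf(times[~event], scale=scale).sum()   # censored contributions
    return -ll

fit = minimize(neg_loglik, x0=[0.0], method="Nelder-Mead")
print(np.exp(fit.x[0]))   # estimated scale parameter
```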

An Application of Dirichlet Mixture Model for Failure Time Density Estimation to Components of Naval Combat System (디리슈레 혼합모형을 이용한 함정 전투체계 부품의 고장시간 분포 추정)

  • Lee, Jinwhan; Kim, Jung Hun; Jung, BongJoo; Kim, Kyeongtaek
    • Journal of Korean Society of Industrial and Systems Engineering / v.42 no.4 / pp.194-202 / 2019
  • Reliability analysis of components frequently starts with the data that the manufacturer provides. If enough failure data are collected from field operations, the reliability should be recomputed and updated on the basis of the field failure data. However, when the failure time record for a component contains only a few observations, all statistical methodologies are limited. In this case, where failure records for multiple identical components are available, a valid alternative is to combine the data from each component into one data set of sufficient sample size and to utilize the information in the censored data. The ROK Navy has been operating multiple Patrol Killer Guided missiles (PKGs) for several years. The Korea Multi-Function Control Console (KMFCC) is one of the key components of the PKG combat system. The maintenance record for the KMFCC contains fewer than ten failure observations and a censored datum. This paper proposes a Bayesian approach with a Dirichlet mixture model to estimate the failure time density for the KMFCC. A trend test for each component record indicated that the null hypothesis that failure occurrences form a renewal process is not rejected. Since the KMFCCs have been functioning under different operating environments, the failure time distribution may be a composition of a number of unknown distributions, i.e., a mixture distribution, rather than a single distribution. The Dirichlet mixture model was coded as a probabilistic program in Python using PyMC3, and the Markov chain Monte Carlo (MCMC) sampler in PyMC3 was used to estimate the posterior distributions of the parameters. The simulation results revealed that the mixture model provides a superior fit to the combined data set compared with single-distribution models.
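
A minimal PyMC3 sketch of a Dirichlet-weighted mixture of lognormal failure-time densities, in the spirit of the model described above; the number of components, priors, and failure times are illustrative, and the treatment of the censored observation is omitted (it could be added, for example, via pm.Potential).

```python
# Dirichlet-weighted mixture of lognormal failure-time densities in PyMC3.
import numpy as np
import pymc3 as pm

failure_times = np.array([310., 460., 520., 700., 910., 1150., 1400., 2100.])  # illustrative
K = 2   # number of mixture components (illustrative)

with pm.Model() as model:
    w = pm.Dirichlet("w", a=np.ones(K))                    # mixture weights
    mu = pm.Normal("mu", mu=np.log(failure_times.mean()), sigma=2.0, shape=K)
    sigma = pm.HalfNormal("sigma", sigma=2.0, shape=K)
    components = pm.Lognormal.dist(mu=mu, sigma=sigma, shape=K)
    obs = pm.Mixture("obs", w=w, comp_dists=components, observed=failure_times)
    trace = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(pm.summary(trace, var_names=["w", "mu", "sigma"]))
```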

Goodness of Fit Tests of Cox's Proportional Hazards Model

  • Song, Hae-Hiang; Lee, Sun-Ho
    • Journal of the Korean Statistical Society / v.23 no.2 / pp.379-402 / 1994
  • Graphical and numerical methods for checking the proportional hazards assumption of the Cox model for censored survival data are discussed. The strengths and weaknesses of several goodness-of-fit tests for proportional hazards in the two-sample problem are evaluated with Monte Carlo simulations; the tests of Schoenfeld (1980), Andersen (1982), Wei (1984), and Gill and Schumacher (1987) are considered. The goodness-of-fit methods are illustrated with survival data from patients who had chronic liver disease and had been treated with endoscopic injection sclerotherapy. Two other data sets known to have nonproportional hazards are also used in the illustration.
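
For a modern counterpart of such checks, the sketch below uses lifelines to test the proportional hazards assumption of a fitted Cox model via scaled Schoenfeld residuals; it relies on lifelines' bundled Rossi recidivism data rather than the liver disease data analysed in the paper.

```python
# Numerical/graphical check of the proportional hazards assumption with lifelines.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

rossi = load_rossi()
cph = CoxPHFitter().fit(rossi, duration_col="week", event_col="arrest")

# Score-type test of proportionality based on scaled Schoenfeld residuals.
result = proportional_hazard_test(cph, rossi, time_transform="rank")
print(result.summary)

# Textual diagnostics (set show_plots=True to also plot residuals over time).
cph.check_assumptions(rossi, p_value_threshold=0.05, show_plots=False)
```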


A Goodness of Fit Approach for Testing NBUFR (NWUFR) and NBAFR (NWAFR) Properties

  • Mahmoud, M.A.W.; Alim, N.A. Abdul
    • International Journal of Reliability and Applications / v.9 no.2 / pp.125-140 / 2008
  • The new better than used failure rate (NBUFR; Abouammoh and Ahmed, 1988) and new better than average failure rate (NBAFR; Loh, 1984) classes of life distributions have been considered in the literature as natural weakenings of the NBU (NWU) property. This paper considers testing exponentiality against strictly NBUFR (NBAFR) alternatives, or their duals, based on a goodness-of-fit approach, which is feasible in life testing problems and results in simpler procedures that are asymptotically equivalent to, or better than, standard ones; they may also have superior finite sample behavior. Asymptotic normality is proved. Powers, Pitman asymptotic efficiencies, and critical points are computed. The censored data case is also studied. Practical applications of the tests in the medical sciences are presented.
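
Critical points of this kind are typically approximated by simulating under the exponential null; the sketch below shows that Monte Carlo recipe with a deliberately simple placeholder statistic (based on the sample coefficient of variation), not the NBUFR/NBAFR statistics derived in the paper.

```python
# Monte Carlo critical points for a test of exponentiality under the Exp(1) null.
import numpy as np

rng = np.random.default_rng(5)

def test_statistic(x):
    # Placeholder statistic: scaled deviation of the sample coefficient of
    # variation from 1 (the CV equals 1 exactly for an exponential distribution).
    return np.sqrt(len(x)) * (x.std(ddof=1) / x.mean() - 1.0)

def critical_point(n, alpha=0.05, reps=20000):
    stats = np.array([test_statistic(rng.exponential(1.0, n)) for _ in range(reps)])
    return np.quantile(stats, 1.0 - alpha)

print(critical_point(n=30))   # upper 5% critical value for samples of size 30
```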


The Willingness-to-pay for City-gas Safety Improvements (도시가스 안전성 제고(提高)에 대한 소비자 지불의사액 추정)

  • Cho, Yongsung
    • Environmental and Resource Economics Review / v.9 no.5 / pp.829-851 / 2000
  • This study used the contingent valuation method to determine how much consumers would be willing to pay to improve city-gas safety and what factors influence consumers' willingness to pay (WTP). To elicit this information, a mail questionnaire was sent to a randomly selected sample of 2,400 residents who use city gas. The survey results showed that individuals were willing to pay 4,750 ± 342.8 won per month for the city-gas safety improvement. The aggregate annual WTP was estimated at 121.5~318.0 billion won. To better understand how individuals' socio-demographic characteristics affect WTP, a censored Tobit analysis was used. The results show that higher income, greater gas use (for cooking and heating), and willingness to install a safety instrument significantly increase consumers' WTP.
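
For reference, the quantity usually reported after such a censored (at zero) Tobit fit is the expected WTP, E[WTP | x] = Φ(xβ/σ)(xβ + σφ(xβ/σ)/Φ(xβ/σ)); the sketch below evaluates it for one hypothetical respondent with made-up coefficients, not the paper's estimates.

```python
# Expected WTP implied by a censored-at-zero Tobit model; values are illustrative.
import numpy as np
from scipy.stats import norm

beta = np.array([2000.0, 0.03, 500.0])   # intercept, income (won), heating-use dummy
sigma = 3000.0
x = np.array([1.0, 45000.0, 1.0])        # one hypothetical respondent

xb = x @ beta
z = xb / sigma
expected_wtp = norm.cdf(z) * (xb + sigma * norm.pdf(z) / norm.cdf(z))
print(round(expected_wtp, 1))            # won per month for this respondent
```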
