• Title/Summary/Keyword: regression statistics

Search results: 5,318 articles (processing time: 0.034 s)

The Bivariate Kumaraswamy Weibull regression model: a complete classical and Bayesian analysis

  • Fachini-Gomes, Juliana B.; Ortega, Edwin M.M.; Cordeiro, Gauss M.; Suzuki, Adriano K.
    • Communications for Statistical Applications and Methods, Vol. 25, No. 5, pp. 523-544, 2018
  • Bivariate distributions play a fundamental role in survival and reliability studies. We consider a regression model for right-censored bivariate survival times based on the bivariate Kumaraswamy Weibull distribution (Cordeiro et al., Journal of the Franklin Institute, 347, 1399-1429, 2010) to model the dependence in bivariate survival data. We describe some structural properties of the marginal distributions. The method of maximum likelihood and a Bayesian procedure are adopted to estimate the model parameters. We use diagnostic measures based on local influence and Bayesian case influence diagnostics to detect influential observations in the new model. We also show that the estimates in the bivariate Kumaraswamy Weibull regression model are robust in the presence of outliers in the data. In addition, we use several goodness-of-fit measures to evaluate the model. The methodology is illustrated by means of a real lifetime data set for kidney patients.
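For intuition, the univariate building block can be sketched as follows: a minimal maximum-likelihood fit of the (univariate) Kumaraswamy-Weibull density, whose form follows Cordeiro et al. (2010). The bivariate model, censoring, and Bayesian machinery of the paper are omitted, and all function names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def kw_weibull_pdf(x, a, b, c, lam):
    """Density of the Kumaraswamy-Weibull distribution (Cordeiro et al., 2010)."""
    G = 1.0 - np.exp(-(lam * x) ** c)                          # Weibull baseline CDF
    g = c * lam ** c * x ** (c - 1) * np.exp(-(lam * x) ** c)  # baseline pdf
    return a * b * g * G ** (a - 1) * (1.0 - G ** a) ** (b - 1)

def fit_mle(x):
    """Maximum-likelihood fit on the log scale to keep all parameters positive."""
    def nll(theta):
        a, b, c, lam = np.exp(theta)
        f = kw_weibull_pdf(x, a, b, c, lam)
        return -np.sum(np.log(f + 1e-300))
    res = minimize(nll, np.zeros(4), method="Nelder-Mead")
    return np.exp(res.x)

# Inverse-CDF sampling for true parameters (a, b, c, lam) = (2, 1.5, 1.2, 0.8)
rng = np.random.default_rng(0)
u = rng.uniform(size=500)
G = (1.0 - (1.0 - u) ** (1 / 1.5)) ** (1 / 2.0)
x = (-np.log(1.0 - G)) ** (1 / 1.2) / 0.8
print(fit_mle(x))
```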

Tree-Structured Nonlinear Regression

  • Chang, Young-Jae; Kim, Hyeon-Soo
    • The Korean Journal of Applied Statistics, Vol. 24, No. 5, pp. 759-768, 2011
  • Tree algorithms have been widely developed for regression problems. One of the strengths of a regression tree is its flexibility: it can capture the nonlinearity of data well. In particular, data with sudden structural breaks, such as oil prices and exchange rates, can be fitted well with a simple mixture of a few piecewise linear regression models. Since split points are determined by chi-squared statistics computed from the residuals of the piecewise linear fits, and the split variable is chosen by an objective criterion, we obtain a quite reasonable fit that agrees with the visual interpretation of the data. The piecewise linear regression produced by a regression tree is thus a good fitting method and can be applied to data sets with much fluctuation.
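The split-point search can be sketched in a simplified form: here a single split is chosen by minimizing the total residual sum of squares of the two piecewise linear fits, a stand-in for the chi-squared criterion the paper uses; the names and simulated data are illustrative.

```python
import numpy as np

def sse_linear(x, y):
    """Residual sum of squares from a simple least-squares line fit."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return float(np.sum((y - X @ beta) ** 2))

def best_split(x, y, min_leaf=10):
    """Scan candidate split points; keep the one minimizing the total SSE
    of the two piecewise linear fits."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best = (None, np.inf)
    for i in range(min_leaf, len(x) - min_leaf):
        total = sse_linear(x[:i], y[:i]) + sse_linear(x[i:], y[i:])
        if total < best[1]:
            best = (x[i], total)
    return best

# Data with a structural break at x = 5 (slope +2 switches to -1.5)
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)
y = np.where(x < 5, 1.0 + 2.0 * x, 20.0 - 1.5 * x) + rng.normal(0, 0.3, 300)
split, sse = best_split(x, y)
print(split)   # should land near the structural break at x = 5
```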

Multiple Outlier Detection in Logistic Regression by Using Influence Matrix

  • Lee, Gwi-Hyun; Park, Sung-Hyun
    • Journal of the Korean Statistical Society, Vol. 36, No. 4, pp. 457-469, 2007
  • Many procedures are available to identify a single outlier or an isolated influential point in linear and logistic regression. The detection of influential points or multiple outliers is more difficult, however, owing to masking and swamping problems. Multiple outlier detection methods for logistic regression have not yet been studied from the standpoint of direct procedures. In this paper we consider direct methods for logistic regression by extending the Peña and Yohai (1995) influence matrix algorithm. We define the influence matrix in logistic regression by using Cook's distance for logistic regression, and test for multiple outliers by using the mean shift model. To show the accuracy of the proposed multiple outlier detection algorithm, we simulate artificial data that include multiple outliers subject to masking and swamping.
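A minimal sketch of the core quantity: Cook's distance for logistic regression computed from Pearson residuals and the leverages of the weighted hat matrix. This is the standard one-step approximation, not the paper's full influence-matrix algorithm; the planted mean-shift outliers are illustrative.

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Fit logistic regression by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        # Newton step: beta += (X'WX)^{-1} X'(y - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

def cooks_distance(X, y):
    """Cook's distance for logistic regression from Pearson residuals
    and the leverages of the weighted hat matrix."""
    beta = irls_logistic(X, y)
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
    h = W * np.einsum("ij,jk,ik->i", X, XtWX_inv, X)   # leverages
    r = (y - p) / np.sqrt(W)                            # Pearson residuals
    k = X.shape[1]
    return r ** 2 * h / (k * (1.0 - h) ** 2)

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
prob = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * X[:, 1])))
y = (rng.uniform(size=200) < prob).astype(float)
idx = np.argsort(X[:, 1])[:3]   # three very-low-probability cases ...
y[idx] = 1.0                    # ... forced to 1: planted mean-shift outliers
d = cooks_distance(X, y)
print(np.argsort(d)[-3:])       # indices with the largest influence
```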

Bootstrap Confidence Intervals for Regression Coefficients under Censored Data

  • 조길호; 정성화
    • Journal of the Korean Data and Information Science Society, Vol. 13, No. 2, pp. 355-363, 2002
  • Using the Buckley-James method, we construct bootstrap confidence intervals for the regression coefficients under censored data. We then compare these confidence intervals in terms of coverage probabilities and expected confidence interval lengths through Monte Carlo simulation.
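The bootstrap step can be sketched for the uncensored case: a percentile-bootstrap interval for least-squares coefficients obtained by resampling observation pairs with replacement. The Buckley-James adjustment for censoring is omitted, and the data are illustrative.

```python
import numpy as np

def bootstrap_ci(X, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence intervals for least-squares
    coefficients by resampling (x_i, y_i) pairs with replacement."""
    rng = np.random.default_rng(seed)
    n = len(y)
    betas = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        betas[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    lo = np.percentile(betas, 100 * alpha / 2, axis=0)
    hi = np.percentile(betas, 100 * (1 - alpha / 2), axis=0)
    return lo, hi

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(0, 0.5, 100)
lo, hi = bootstrap_ci(X, y)
print(lo, hi)   # 95% intervals for intercept and slope
```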


Inference after Stochastic Regression Imputation under Response Model

  • Kim, Jae-Kwang; Kim, Yong-Dai
    • Journal of the Korean Statistical Society, Vol. 32, No. 2, pp. 103-119, 2003
  • Properties of stochastic regression imputation are discussed under the uniform within-cell response model. A variance estimator is proposed and its asymptotic properties are discussed. A limited simulation study is also presented.
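Stochastic regression imputation itself can be sketched as: predict each missing value from the regression fit on respondents, then add a residual resampled from the observed residuals. This preserves the variability that deterministic imputation destroys; the paper's variance estimation is not reproduced here, and the names are illustrative.

```python
import numpy as np

def stochastic_regression_impute(x, y, missing, seed=0):
    """Replace missing y-values by the regression prediction plus a residual
    drawn (with replacement) from the observed residuals."""
    rng = np.random.default_rng(seed)
    obs = ~missing
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X[obs], y[obs], rcond=None)[0]
    resid = y[obs] - X[obs] @ beta
    y_imp = y.copy()
    y_imp[missing] = X[missing] @ beta + rng.choice(resid, missing.sum())
    return y_imp

rng = np.random.default_rng(4)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, 200)
missing = rng.uniform(size=200) < 0.3   # uniform response: 30% missing at random
y_imp = stochastic_regression_impute(x, y, missing)
print(y_imp.var(), y.var())             # variances should be comparable
```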

Diagnostic for Smoothing Parameter Estimate in Nonparametric Regression Model

  • In-Suk Lee; Won-Tae Jung
    • Communications for Statistical Applications and Methods, Vol. 2, No. 2, pp. 266-276, 1995
  • We study local influence for smoothing parameter estimates in the nonparametric regression model. In practice, generalized cross-validation (GCV) does not work well in the presence of data perturbation. We therefore propose local influence measures for GCV estimates and examine their diagnostic effectiveness.
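The GCV criterion being diagnosed can be sketched for a ridge-type linear smoother, an illustrative stand-in for the paper's nonparametric smoother:

```python
import numpy as np

def gcv(X, y, lam):
    """Generalized cross-validation score n*RSS/(n - tr(S))^2 for the
    linear smoother S = X (X'X + lam I)^{-1} X'."""
    n = len(y)
    S = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    resid = y - S @ y
    return n * np.sum(resid ** 2) / (n - np.trace(S)) ** 2

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(-1, 1, 120))
X = np.column_stack([x ** p for p in range(8)])   # polynomial basis
y = np.sin(3 * x) + rng.normal(0, 0.2, 120)
lams = 10.0 ** np.arange(-6, 3)
scores = [gcv(X, y, l) for l in lams]
print(lams[int(np.argmin(scores))])   # GCV-selected smoothing parameter
```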


Testing Outliers in Nonlinear Regression

  • Kahng, Myung-Wook
    • Journal of the Korean Statistical Society, Vol. 24, No. 2, pp. 419-437, 1995
  • Given the specific mean shift outlier model, several standard approaches to obtaining test statistics for outliers are discussed. Each of these is developed in detail for the nonlinear regression model, and each leads to an equivalent distribution. The geometric interpretations of the statistics and the accuracy of the linear approximation are also presented.
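A simplified version of such a statistic can be sketched: internally studentized residuals for a nonlinear model, computed from the linear approximation that uses the Jacobian of the mean function as the design matrix. The exponential model and the planted outlier are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(b * x)

def outlier_statistics(x, y):
    """Studentized residuals for the nonlinear model via its linear
    approximation (Jacobian of the mean function as design matrix)."""
    (a, b), _ = curve_fit(model, x, y, p0=[1.0, 0.5])
    resid = y - model(x, a, b)
    # Jacobian columns: d/da and d/db of a*exp(b*x)
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
    h = np.diag(J @ np.linalg.solve(J.T @ J, J.T))   # leverages
    n, k = len(x), 2
    s2 = np.sum(resid ** 2) / (n - k)
    return resid / np.sqrt(s2 * (1.0 - h))

rng = np.random.default_rng(6)
x = np.linspace(0, 2, 60)
y = 1.5 * np.exp(0.8 * x) + rng.normal(0, 0.1, 60)
y[30] += 1.5                        # plant a mean-shift outlier
t = outlier_statistics(x, y)
print(np.argmax(np.abs(t)))         # index of the most outlying case
```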


Equivalence of GLS and Difference Estimator in the Linear Regression Model under Seasonally Autocorrelated Disturbances

  • Seuck Heun Song; Jong Hyup Lee
    • Communications for Statistical Applications and Methods, Vol. 1, No. 1, pp. 112-118, 1994
  • The generalized least squares estimator in the linear regression model is equivalent to the difference estimator, irrespective of the particular form of the regressor matrix, when the disturbances are generated by a seasonally autoregressive process and the autocorrelation is close to unity.
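The claimed equivalence can be checked numerically in a simplified setting: seasonal quasi-differencing with the true rho (an approximate GLS transform that drops the first s observations) versus plain seasonal differencing (rho = 1). With rho near unity the slope estimates nearly coincide; note the intercept is not identified after full differencing, so only the slope is compared. All names and the simulation are illustrative.

```python
import numpy as np

def quasi_diff_ols(X, y, rho, s=4):
    """OLS after seasonal quasi-differencing z_t - rho * z_{t-s}; with
    rho = 1 this is the plain seasonal-difference estimator, and for rho
    near 1 it approximates GLS under seasonal AR(1) disturbances."""
    Xd = X[s:] - rho * X[:-s]
    yd = y[s:] - rho * y[:-s]
    return np.linalg.lstsq(Xd, yd, rcond=None)[0]

rng = np.random.default_rng(7)
n, s, rho = 400, 4, 0.99
e = rng.normal(0, 1, n)
u = np.zeros(n)
for t in range(s, n):                 # seasonal AR(1): u_t = rho*u_{t-s} + e_t
    u[t] = rho * u[t - s] + e[t]
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + u
b_gls = quasi_diff_ols(X, y, rho)     # (approximate) GLS with known rho
b_diff = quasi_diff_ols(X, y, 1.0)    # seasonal difference estimator
print(b_gls[1], b_diff[1])            # slope estimates nearly coincide
```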


Smoothing Parameter Selection Using Multifold Cross-Validation in Smoothing Spline Regressions

  • Hong, Changkon; Kim, Choongrak; Yoon, Misuk
    • Communications for Statistical Applications and Methods, Vol. 5, No. 2, pp. 277-285, 1998
  • The smoothing parameter $\lambda$ in smoothing spline regression is usually selected by minimizing cross-validation (CV) or generalized cross-validation (GCV). However, simple CV or GCV is a poor candidate for estimating the prediction error. We define MGCV (Multifold Generalized Cross-Validation) as a criterion for selecting the smoothing parameter in smoothing spline regression. This is a version of cross-validation using the leave-$k$-out method. Some numerical results comparing MGCV and GCV are presented.
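The leave-$k$-out idea can be sketched with an off-the-shelf spline: average the held-out squared error over k random folds and minimize over a grid of smoothing parameters. `scipy.interpolate.UnivariateSpline` is used as an illustrative smoother, not the paper's exact estimator or criterion.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def mgcv_select(x, y, s_grid, k=5, seed=0):
    """Pick the spline smoothing parameter by multifold (leave-k-out style)
    cross-validation: average held-out squared error over k random folds."""
    rng = np.random.default_rng(seed)
    folds = rng.permutation(len(x)) % k
    scores = []
    for s in s_grid:
        err = 0.0
        for f in range(k):
            tr, te = folds != f, folds == f
            order = np.argsort(x[tr])          # spline needs increasing x
            spl = UnivariateSpline(x[tr][order], y[tr][order], s=s)
            err += np.mean((y[te] - spl(x[te])) ** 2)
        scores.append(err / k)
    return s_grid[int(np.argmin(scores))], scores

rng = np.random.default_rng(8)
x = np.linspace(0.0, 1.0, 150)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 150)
s_grid = [0.1, 1.0, 5.0, 20.0, 100.0]
best_s, scores = mgcv_select(x, y, s_grid)
print(best_s)
```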
