
Nonlinear Regression Quantile Estimators

  • Park, Seung-Hoe; Kim, Hae-kyung; Park, Kyung-Ok
    • Journal of the Korean Statistical Society / v.30 no.4 / pp.551-561 / 2001
  • This paper deals with the asymptotic properties of statistical inference for the parameters in nonlinear regression models. As an optimality criterion for robust estimation of the regression parameters, the regression quantile method is proposed. The paper defines regression quantile estimators in nonlinear models and provides simple, practical sufficient conditions for the asymptotic normality of the proposed estimators when the parameter space is compact. The efficiency of the proposed estimator is compared with that of the least squares and least absolute deviation estimators under asymmetric error distributions.
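
As a rough illustration of the check-loss idea behind regression quantiles, the sketch below fits a one-parameter nonlinear model by grid search. The model f(x; b) = exp(bx), the grid, and the simulated data are illustrative assumptions, not the paper's setup, which covers general nonlinear models on a compact parameter space.

```python
import numpy as np

def check_loss(u, tau):
    # Koenker-Bassett check function: rho_tau(u) = u * (tau - 1{u < 0}).
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def nonlinear_rq(x, y, tau, grid):
    # Grid search over the scalar parameter b of an assumed model
    # f(x; b) = exp(b * x), minimizing the summed check loss.
    losses = [check_loss(y - np.exp(b * x), tau).sum() for b in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500)
y = np.exp(0.8 * x) + rng.normal(0.0, 0.05, 500)  # true b = 0.8
b_med = nonlinear_rq(x, y, tau=0.5, grid=np.linspace(0.0, 2.0, 201))
```

With tau = 0.5 this is median regression; other tau values trace out conditional quantiles, which is what makes the estimator robust under asymmetric errors.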

Multicollinearity in Logistic Regression

  • Lee, Jong-Han; Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods / v.2 no.2 / pp.303-309 / 1995
  • Many measures to detect multicollinearity in linear regression have been proposed in the statistics and numerical analysis literature. Among them, the condition number and the variance inflation factor (VIF) are the most popular. In this study, we give new interpretations of the condition number and the VIF in linear regression, using the geometry of the explanatory space. Along the same lines, we derive natural analogues of the condition number and the VIF for logistic regression. These computer-intensive measures can be easily extended to evaluate multicollinearity in generalized linear models.
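
A minimal sketch of the classical VIF computation that the abstract starts from (the linear-regression case only; the paper's logistic-regression analogues are not reproduced here):

```python
import numpy as np

def vif(X):
    # VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing the
    # j-th explanatory variable on all of the others (with intercept).
    n, p = X.shape
    out = []
    for j in range(p):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - resid.var() / X[:, j].var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=300)
x2 = x1 + rng.normal(scale=0.01, size=300)  # nearly collinear with x1
x3 = rng.normal(size=300)                   # independent of the others
v = vif(np.column_stack([x1, x2, x3]))
```

The two collinear columns get very large VIFs while the independent column stays near 1, which is the usual diagnostic reading.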

Nonparametric Estimation using Regression Quantiles in a Regression Model

  • Han, Sang-Moon; Jung, Byoung-Cheol
    • The Korean Journal of Applied Statistics / v.25 no.5 / pp.793-802 / 2012
  • A proposal is made to construct a nonparametric estimator of the slope parameters in a regression model under symmetric error distributions. The estimator is based on the idea of minimizing the approximate variance of an estimator built from regression quantiles. This nonparametric estimator and some other L-estimators are studied and compared with well-known M-estimators through a simulation study.
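
The L-estimator idea can be sketched as follows, under strong simplifying assumptions: a no-intercept toy model y = bx + e, grid-search regression quantiles, and equal weights over quantile levels (the paper instead chooses weights that minimize the approximate variance of the combined estimator).

```python
import numpy as np

def check_loss(u, tau):
    # Check function for quantile level tau.
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def rq_slope(x, y, tau, grid):
    # Regression-quantile slope for the toy model y = b * x + e,
    # found by grid search on the summed check loss.
    losses = [check_loss(y - b * x, tau).sum() for b in grid]
    return grid[int(np.argmin(losses))]

def l_estimator(x, y, taus, grid):
    # Equal-weight average of regression-quantile slopes: the simplest
    # L-estimator; under symmetric errors the quantile-level biases
    # cancel around the median.
    return float(np.mean([rq_slope(x, y, t, grid) for t in taus]))

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 1.5, 500)
y = 1.5 * x + rng.normal(0.0, 0.1, 500)  # true slope = 1.5
slope = l_estimator(x, y, taus=(0.25, 0.5, 0.75),
                    grid=np.linspace(0.0, 3.0, 301))
```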

Support vector quantile regression for autoregressive data

  • Hwang, Hyungtae
    • Journal of the Korean Data and Information Science Society / v.25 no.6 / pp.1539-1547 / 2014
  • In this paper we apply the autoregressive process to nonlinear quantile regression in order to infer nonlinear quantile regression models for autocorrelated data. We propose a kernel method for autoregressive data which estimates the nonlinear quantile regression function by kernel machines. Artificial and real examples are provided to indicate the usefulness of the proposed method for estimating the quantile regression function in the presence of autocorrelation in the data.
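
A crude stand-in for a kernel quantile estimator is the kernel-weighted sample quantile below. It is not the paper's kernel-machine formulation and it ignores the autoregressive error structure entirely; the Gaussian kernel, bandwidth, and simulated data are assumptions for illustration.

```python
import numpy as np

def kernel_quantile(x, y, x0, tau, h):
    # Kernel-weighted sample quantile of y near x0: weight each
    # observation by a Gaussian kernel in x, then read off the point
    # where the normalized cumulative weight first reaches tau.
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    order = np.argsort(y)
    cum = np.cumsum(w[order]) / w.sum()
    return y[order][np.searchsorted(cum, tau)]

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, 2000)
y = np.sin(x) + rng.normal(0.0, 0.1, 2000)
# Conditional median near x0 = pi/2 should be close to sin(pi/2) = 1.
q50 = kernel_quantile(x, y, x0=np.pi / 2, tau=0.5, h=0.3)
```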

Estimation of Jump Points in Nonparametric Regression

  • Park, Dong-Ryeon
    • Communications for Statistical Applications and Methods / v.15 no.6 / pp.899-908 / 2008
  • If the regression function has jump points, nonparametric estimation based on local smoothing is not statistically consistent. Therefore, when we estimate a regression function, it is quite important to know whether it is reasonable to assume that the function is continuous. If the regression function appears to have jump points, we should first estimate the location of those points. In this paper, we propose a procedure that simultaneously tests the hypothesis of discontinuity of the regression function and estimates the number and locations of the jump points. The performance of the proposed method is evaluated through a simulation study, and we also apply the procedure to real data sets as examples.
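
The basic jump-detection intuition can be sketched with a difference of one-sided local averages: where |right-window mean - left-window mean| peaks is the candidate jump. This is a simplified diagnostic for a single jump, not the paper's simultaneous test-and-estimate procedure.

```python
import numpy as np

def jump_location(x, y, h):
    # For each point x0, compare the mean of y over (x0 - h, x0) with
    # the mean over (x0, x0 + h); a jump shows up as a large gap.
    gaps = []
    for x0 in x:
        left = y[(x >= x0 - h) & (x < x0)]
        right = y[(x > x0) & (x <= x0 + h)]
        gaps.append(abs(right.mean() - left.mean())
                    if len(left) and len(right) else 0.0)
    return x[int(np.argmax(gaps))]

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 400)
y = np.where(x > 0.6, 2.0, 0.0) + rng.normal(0.0, 0.05, 400)  # jump at 0.6
loc = jump_location(x, y, h=0.05)
```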

Tree-Structured Nonlinear Regression

  • Chang, Young-Jae; Kim, Hyeon-Soo
    • The Korean Journal of Applied Statistics / v.24 no.5 / pp.759-768 / 2011
  • Tree algorithms have been widely developed for regression problems. One of the good features of a regression tree is its flexibility of fit, because it can capture the nonlinearity of data well. In particular, data with sudden structural breaks, such as the price of oil and exchange rates, can be fitted well by a simple mixture of a few piecewise linear regression models. Since split points are determined by chi-squared statistics on the residuals from fitting piecewise linear models, and the split variable is chosen by an objective criterion, we can obtain a quite reasonable fit that agrees with a visual interpretation of the data. Piecewise linear regression by a regression tree can thus serve as a good fitting method and can be applied to datasets with much fluctuation.
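
One level of a piecewise-linear regression tree can be sketched as a scan over candidate split points, fitting a separate line on each side. Note the paper selects splits using chi-squared statistics on residuals; plain SSE is used here as a simplifying assumption.

```python
import numpy as np

def sse_line(xs, ys):
    # Sum of squared errors from a straight-line (intercept + slope) fit.
    if len(xs) < 3:
        return np.inf
    A = np.column_stack([np.ones_like(xs), xs])
    beta, *_ = np.linalg.lstsq(A, ys, rcond=None)
    r = ys - A @ beta
    return float(r @ r)

def best_split(x, y):
    # Scan candidate split points and keep the one minimizing the total
    # SSE of the two side-specific linear fits (one tree level only).
    candidates = np.unique(x)[5:-5]
    scores = [sse_line(x[x <= c], y[x <= c]) + sse_line(x[x > c], y[x > c])
              for c in candidates]
    return float(candidates[int(np.argmin(scores))])

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = np.where(x < 5.0, x, 10.0 - x) + rng.normal(0.0, 0.05, 200)  # kink at 5
split = best_split(x, y)
```

Recursing on each side of the chosen split yields the full tree of piecewise linear models.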

Performance study of propensity score methods against regression with covariate adjustment

  • Park, Jincheol
    • Journal of the Korean Data and Information Science Society / v.26 no.1 / pp.217-227 / 2015
  • In observational studies, handling confounders is a primary issue in measuring a treatment effect of interest. Historically, regression with covariate adjustment (covariate-adjusted regression) has been the typical approach to estimating a treatment effect while incorporating potential confounders into the model. However, ever since the introduction of the propensity score, covariate-adjusted regression has been gradually replaced in the medical literature by various balancing methods based on the propensity score. On the other hand, there is a paucity of research assessing propensity score methods against covariate-adjusted regression. This paper examines the performance of propensity score methods in estimating the risk difference and compares them with covariate-adjusted regression through a Monte Carlo study. The study demonstrates that, in general, covariate-adjusted regression with a variable selection procedure outperformed the propensity-score-based methods in terms of both bias and MSE, suggesting that the classical regression method, rather than the propensity score methods, should be considered when performance is a primary concern.
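
The confounding problem both method families address can be shown in a few lines. The data-generating process below is an assumption for illustration, and it cheats by using the true propensity score in the inverse-probability-weighting (IPW) estimator; in practice the propensity score must itself be estimated, typically by logistic regression.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
z = rng.normal(size=n)                        # confounder
p = 1.0 / (1.0 + np.exp(-z))                  # true propensity score
t = (rng.uniform(size=n) < p).astype(float)   # treatment assignment
y = 1.0 * t + 2.0 * z + rng.normal(size=n)    # true treatment effect = 1

# Naive difference in means is badly biased, because z drives both
# treatment assignment and outcome.
naive = y[t == 1].mean() - y[t == 0].mean()

# IPW with the true propensity score removes the confounding.
ipw = np.mean(t * y / p) - np.mean((1.0 - t) * y / (1.0 - p))
```

A covariate-adjusted regression of y on t and z would likewise recover an effect near 1 here; the paper's Monte Carlo study compares the two routes when the model must be specified and selected from data.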

Regression analysis and recursive identification of the regression model with unknown operational parameter variables, and its application to sequential design

  • Huang, Zhaoqing; Yang, Shiqiong; Sagara, Setsuo
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1990.10b / pp.1204-1209 / 1990
  • This paper offers theory and methods for regression analysis of regression models with operational parameter variables, based on the fundamentals of mathematical statistics. Regression coefficients are usually treated as constants in regression analysis; this paper instead considers them to be functions of operational parameter variables, which leads to a two-step method of fitting the regression model. The second part of the paper takes the experimental step number as a recursive variable and derives the recursive identification with unknown operational parameter variables, which involves two recursive variables. Optimization and recursive identification are then combined to obtain a sequential optimum experiment design with operational parameter variables. The paper also offers a fast recursive algorithm for large numbers of sequential experiments.
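
Recursive identification in its generic form is recursive least squares (RLS): the coefficient estimate is updated one observation at a time instead of refitting the whole regression. The sketch below is that generic algorithm, not the paper's two-step scheme with operational parameter variables.

```python
import numpy as np

def rls(X, y, lam=1.0, delta=1000.0):
    # Recursive least squares with forgetting factor lam (lam = 1
    # converges to ordinary least squares). P is initialized large so
    # the first observations dominate the prior.
    n, p = X.shape
    beta = np.zeros(p)
    P = delta * np.eye(p)
    for i in range(n):
        x = X[i]
        k = P @ x / (lam + x @ P @ x)         # gain vector
        beta = beta + k * (y[i] - x @ beta)   # innovation update
        P = (P - np.outer(k, x @ P)) / lam    # covariance update
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
true_beta = np.array([1.0, -2.0, 0.5])
y = X @ true_beta + rng.normal(0.0, 0.01, 500)
est = rls(X, y)
```

Each update costs O(p^2) regardless of how many observations have been seen, which is what makes the approach attractive for long sequential experiments.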

A comparative study of the Gini coefficient estimators based on the regression approach

  • Mirzaei, Shahryar; Borzadaran, Gholam Reza Mohtashami; Amini, Mohammad; Jabbari, Hadi
    • Communications for Statistical Applications and Methods / v.24 no.4 / pp.339-351 / 2017
  • Resampling approaches were the first techniques employed to compute a variance for the Gini coefficient; however, many authors have shown that the Gini coefficient and its corresponding variance can be obtained from a regression model. Despite the simplicity of the regression approach to computing a standard error for the Gini coefficient, the use of the proposed regression model has been challenging in economics. In this paper, therefore, we focus on a comparative study between the regression approach and resampling techniques. The regression method is shown to overestimate the standard error of the Gini index. The simulations show that the Gini estimator based on the modified regression model is also consistent and asymptotically normal, with less divergence from the normal distribution than the other resampling techniques.
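
For reference, the rank-based closed form of the sample Gini coefficient, which the regression-on-ranks approach is built around (the standard-error machinery the paper compares is not reproduced here):

```python
import numpy as np

def gini(y):
    # G = 2 * sum(i * y_(i)) / (n * sum(y)) - (n + 1) / n,
    # with y_(i) the i-th smallest income and ranks i = 1..n.
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    i = np.arange(1, n + 1)
    return float(2.0 * (i * y).sum() / (n * y.sum()) - (n + 1.0) / n)
```

Equal incomes give G = 0, while one person holding everything among four gives G = 0.75, matching the usual inequality reading of the index.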

Multiple Outlier Detection in Logistic Regression by Using Influence Matrix

  • Lee, Gwi-Hyun; Park, Sung-Hyun
    • Journal of the Korean Statistical Society / v.36 no.4 / pp.457-469 / 2007
  • Many procedures are available to identify a single outlier or an isolated influential point in linear and logistic regression, but the detection of influential points or multiple outliers is more difficult, owing to masking and swamping problems. Multiple outlier detection methods for logistic regression have not yet been studied from the viewpoint of direct procedures. In this paper we consider direct methods for logistic regression by extending the influence matrix algorithm of Peña and Yohai (1995). We define the influence matrix in logistic regression using Cook's distance for logistic regression, and test for multiple outliers using the mean shift model. To show the accuracy of the proposed multiple outlier detection algorithm, we simulate artificial data that include multiple outliers subject to masking and swamping.
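
The influence matrix builds on Cook's distance; as a simpler illustration, here is Cook's distance for ordinary linear regression (the paper works with its logistic-regression analogue, which is not shown here). The single-outlier example is the easy case; the paper's contribution is the multiple-outlier case, where masking can hide exactly this kind of spike.

```python
import numpy as np

def cooks_distance(X, y):
    # D_i = r_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2), with h_ii the
    # leverages from the hat matrix and s^2 the residual variance.
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
    h = np.diag(H)
    resid = y - H @ y
    s2 = float(resid @ resid) / (n - p)
    return resid**2 * h / (p * s2 * (1.0 - h) ** 2)

x = np.linspace(0.0, 1.0, 50)
X = np.column_stack([np.ones(50), x])
y = 2.0 * x
y[25] += 5.0                                  # planted single outlier
D = cooks_distance(X, y)                      # D peaks at index 25
```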