• Title/Summary/Keyword: robust regression


Usage of auxiliary variable and neural network in doubly robust estimation

  • Park, Hyeonah; Park, Wonjun
    • Journal of the Korean Data and Information Science Society, v.24 no.3, pp.659-667, 2013
  • If either the regression model or the propensity model is correctly specified, the estimator based on doubly robust imputation is guaranteed to be unbiased. Using a neural network instead of a logistic regression model for the propensity model, the doubly robust imputation estimators remain approximately unbiased even when both assumed models fail. We also propose a doubly robust estimator in ratio form that uses population information on an auxiliary variable. We examine the properties of the proposed estimators through limited simulations.
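The doubly robust (AIPW) form discussed in this abstract can be sketched in a few lines. The simulated dataset, the linear outcome model, and the assumption of a known propensity are illustrative choices only, not the paper's setup (which fits the propensity with a neural network):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)      # outcome; E[y] = 2
p = 1.0 / (1.0 + np.exp(-(0.5 + x)))        # true response propensity
r = rng.binomial(1, p)                      # response indicator (1 = observed)

# Working outcome model, fitted on the respondents only.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X[r == 1], y[r == 1], rcond=None)[0]
m_hat = X @ beta                            # outcome predictions m^(x)
pi_hat = p                                  # propensity taken as known for the sketch

# Doubly robust (AIPW) estimator of E[y]: unbiased if either the
# outcome model or the propensity model is correctly specified.
mu_dr = np.mean(r * y / pi_hat - (r - pi_hat) / pi_hat * m_hat)
```

Here both working models happen to be correct; the abstract's point is that the estimator also tolerates one of them failing.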

Fast robust variable selection using VIF regression in large datasets (대형 데이터에서 VIF회귀를 이용한 신속 강건 변수선택법)

  • Seo, Han Son
    • The Korean Journal of Applied Statistics, v.31 no.4, pp.463-473, 2018
  • Variable selection algorithms for linear regression models on large data are considered. Many such algorithms have been proposed, focusing on speed and robustness. Among them, variance inflation factor (VIF) regression is fast and accurate because it uses a streamwise regression approach, but it is susceptible to outliers because it estimates the model by least squares. For robustness, a criterion based on a weighted estimator has been proposed, as has a robust VIF regression. In this article a fast and robust variable selection method is suggested: a VIF regression preceded by detecting and removing potential outliers. A simulation study and the analysis of a real dataset are conducted to compare the suggested method with other methods.
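As a rough illustration of the streamwise idea behind VIF regression, the sketch below makes a single pass over candidate columns and keeps a column when the t-statistic of its coefficient against the current residuals clears a threshold. It deliberately omits the VIF correction and alpha-investing of the actual procedure, and the data and threshold are invented for the example:

```python
import numpy as np

def streamwise_select(X, y, t_threshold=4.0):
    """One pass over candidate columns: accept a column when the
    t-statistic of its coefficient against the current residuals
    exceeds the threshold, then update the residuals."""
    n, p = X.shape
    selected = []
    resid = y - y.mean()
    for j in range(p):
        xj = X[:, j] - X[:, j].mean()
        denom = xj @ xj
        if denom == 0.0:
            continue
        b = (xj @ resid) / denom                      # marginal coefficient
        s2 = np.sum((resid - b * xj) ** 2) / (n - 2)  # residual variance
        t = b / np.sqrt(s2 / denom)
        if abs(t) > t_threshold:
            selected.append(j)
            resid = resid - b * xj                    # sweep out the accepted column
    return selected

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 10))
y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + rng.normal(size=n)
picked = streamwise_select(X, y)
```

Each candidate is visited once, which is what makes the streamwise approach fast on large data; the least-squares t-statistic is also exactly where the outlier sensitivity enters.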

Model-Robust G-Efficient Cuboidal Experimental Designs (입방형 영역에서의 G-효율이 높은 Model-Robust 실험설계)

  • Park, You-Jin; Yi, Yoon-Ju
    • IE interfaces, v.23 no.2, pp.118-125, 2010
  • The determination of a regression model is important in using statistical design of experiments. Generally, the exact regression model is not known, and experimenters suppose that a certain model form will fit. An experimental design suitable for that predetermined model form is then selected and the experiment is conducted. However, the initially chosen regression model may not be correct, which can result in undesirable statistical properties. We develop model-robust experimental designs that have stable prediction variance for a family of candidate regression models over a cuboidal region, using genetic algorithms and the desirability function method. We then compare the stability of prediction variance of the model-robust experimental designs with that of the 3-level face-centered cube. These model-robust experimental designs have moderately high G-efficiencies for all candidate models that the experimenter may potentially wish to fit, and outperform the cuboidal design for the second-order model. G-efficiencies are provided for both the model-robust experimental designs and the face-centered cube.
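G-efficiency can be computed directly once a definition is fixed. The sketch below assumes the common definition p divided by the maximum scaled prediction variance over the region, a two-factor second-order model, and a 3-squared factorial on the cuboidal region; all of these are illustrative assumptions, not the paper's actual designs:

```python
import numpy as np
from itertools import product

def model_matrix(pts):
    # second-order model in two factors: 1, x1, x2, x1*x2, x1^2, x2^2
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), x1, x2, x1 * x2, x1**2, x2**2])

def g_efficiency(design, grid):
    """100 * p / max SPV, where SPV(x) = N f(x)' (X'X)^{-1} f(x) is the
    scaled prediction variance, maximized over a grid on the region."""
    X = model_matrix(design)
    inv = np.linalg.inv(X.T @ X)
    F = model_matrix(grid)
    spv = len(design) * np.einsum('ij,jk,ik->i', F, inv, F)
    return 100.0 * X.shape[1] / spv.max()

# 3^2 full factorial on the cuboidal (here square) region [-1, 1]^2
levels = [-1.0, 0.0, 1.0]
design = np.array(list(product(levels, levels)))
grid = np.array(list(product(np.linspace(-1, 1, 21), repeat=2)))
geff = g_efficiency(design, grid)
```

For this design and model the maximum scaled prediction variance sits at the corners, giving a G-efficiency of roughly 83%.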

A Study on the Several Robust Regression Estimators

  • Kim, Jee-Yun; Roh, Kyung-Mi; Hwang, Jin-Soo
    • Journal of the Korean Data and Information Science Society, v.15 no.2, pp.307-316, 2004
  • Principal component regression (PCR) and partial least squares regression (PLSR) are the two most popular regression techniques in chemometrics. In chemometrics the number of regressor variables usually greatly exceeds the number of observations, so the number of regressors must be reduced to avoid the identifiability problem. In this paper we compare PCR and PLSR techniques combined with various robust regression methods, including regression depth estimation. We compare the efficiency, goodness-of-fit and robustness of each estimator under several contamination schemes.
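A minimal sketch of classical (non-robust) PCR, the baseline that the paper robustifies; the simulated data and the choice k = p, which reduces PCR to ordinary least squares, are only for illustration:

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: regress y on the first k
    principal components of centered X, then map the coefficients
    back to the original variables."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    T = Xc @ Vt[:k].T                                  # scores on first k components
    gamma = np.linalg.lstsq(T, yc, rcond=None)[0]
    beta = Vt[:k].T @ gamma                            # back to original coordinates
    intercept = y.mean() - X.mean(axis=0) @ beta
    return intercept, beta

rng = np.random.default_rng(2)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = 1.0 + X @ np.array([2.0, 0.0, 0.0, -1.0, 0.0]) + 0.1 * rng.normal(size=n)
b0, b = pcr_fit(X, y, k=p)      # k = p recovers ordinary least squares
```

In the chemometric setting described above one would choose k far smaller than p; the robust variants in the paper additionally replace the SVD and the regression step with outlier-resistant counterparts.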


A Criterion for the Selection of Principal Components in the Robust Principal Component Regression (로버스트주성분회귀에서 최적의 주성분선정을 위한 기준)

  • Kim, Bu-Yong
    • Communications for Statistical Applications and Methods, v.18 no.6, pp.761-770, 2011
  • Robust principal components regression is suggested to deal with both the multicollinearity and the outlier problem. A main aspect of robust principal components regression is the selection of an optimal set of principal components. Instead of the eigenvalues of the sample covariance matrix, a selection criterion is developed based on the condition index of the minimum volume ellipsoid estimator, which is highly robust against leverage points. In addition, least trimmed squares estimation is employed to cope with regression outliers. Monte Carlo simulation results indicate that the proposed criterion is superior to existing ones.
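The condition indices that drive such a selection criterion are square roots of eigenvalue ratios of a scatter estimate. The sketch below uses the classical sample covariance for brevity, whereas the paper substitutes the highly robust minimum volume ellipsoid estimator:

```python
import numpy as np

def condition_indices(X):
    """Condition indices sqrt(l_max / l_j) of the eigenvalues of a
    scatter estimate (here the sample covariance; the paper uses the
    robust MVE estimator instead).  Large indices flag near-collinear
    directions whose components are candidates for exclusion."""
    S = np.cov(X, rowvar=False)
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]   # eigenvalues, descending
    return np.sqrt(lam[0] / lam)

rng = np.random.default_rng(3)
z = rng.normal(size=(300, 2))
# third column nearly collinear with the first -> one large condition index
X = np.column_stack([z, z[:, 0] + 0.01 * rng.normal(size=300)])
kappa = condition_indices(X)
```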

A study on robust regression estimators in heteroscedastic error models

  • Son, Nayeong; Kim, Mijeong
    • Journal of the Korean Data and Information Science Society, v.28 no.5, pp.1191-1204, 2017
  • Weighted least squares (WLS) estimation is often used for data with heteroscedastic errors because it is intuitive and computationally inexpensive. However, the WLS estimator is not robust to even a few outliers and can be inefficient. To overcome the robustness problem, the Box-Cox transformation, Huber's M estimation, bisquare estimation, and Yohai's MM estimation have been proposed. Estimators more efficient than WLS have also been suggested for heteroscedastic error models, such as Bayesian methods (Cepeda and Achcar, 2009) and semiparametric methods (Kim and Ma, 2012). Recently, Çelik (2015) proposed weighting methods applicable to heteroscedasticity patterns including butterfly-distributed residuals and megaphone-shaped residuals. In this paper, we review heteroscedastic regression estimators related to robust or efficient estimation and describe their properties. We also analyze the 1955 cost data of U.S. electricity producers using the methods discussed in the paper.
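Huber's M estimation mentioned above is commonly computed by iteratively reweighted least squares. A minimal homoscedastic sketch follows; the tuning constant 1.345 and the simulated contamination are standard illustrative choices, not the paper's settings:

```python
import numpy as np

def huber_irls(X, y, c=1.345, n_iter=50):
    """Huber M-estimation by iteratively reweighted least squares.
    Residuals are scaled by the normalized MAD at each iteration, and
    observations with |scaled residual| > c are downweighted."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]            # least squares start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust scale (MAD)
        u = r / max(s, 1e-12)
        w = np.minimum(1.0, c / np.maximum(np.abs(u), 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(size=n)
y[:15] += 20.0                                             # 5% gross outliers

beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
beta_m = huber_irls(X, y)
```

On this contaminated sample the least squares intercept is pulled far from its true value of 1, while the Huber fit stays close.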

The Identification Of Multiple Outliers

  • Park, Jin-Pyo
    • Journal of the Korean Data and Information Science Society, v.11 no.2, pp.201-215, 2000
  • The classical method for regression analysis is least squares. However, if the data contain significant outliers, the least squares estimator can be broken down by them. To remedy this problem, robust methods are an important complement to least squares. Robust methods downweight or completely ignore the outliers, which is not always best because outliers can contain very important information about the population. If they can be detected, the outliers can be inspected further and appropriate action taken. In this paper, I propose a sequential outlier test to identify outliers. It is based on a nonrobust and a robust estimate of the scatter of robust regression residuals, and it is applied as a forward procedure, removing the most extreme observation at each step until the test fails to detect outliers. Unlike other forward procedures, the present one is unaffected by swamping or masking effects because the statistic is based on the robust regression residuals. I derive the asymptotic distribution of the test statistic and apply the test to several real and simulated datasets, where it is shown to perform fairly well.
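The forward procedure described above can be caricatured as: fit, standardize the residuals by a robust scale, drop the most extreme observation, and repeat until nothing exceeds a cutoff. The sketch below uses plain least squares refits and an ad hoc cutoff of 3.5 for brevity; the paper's test instead works from robust regression residuals and a formal test statistic:

```python
import numpy as np

def forward_outlier_removal(X, y, cutoff=3.5):
    """Forward procedure sketch: repeatedly flag the observation whose
    residual, standardized by the normalized MAD, is largest; refit
    without it; stop when no standardized residual exceeds the cutoff."""
    idx = np.arange(len(y))
    removed = []
    while True:
        beta = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        r = y[idx] - X[idx] @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust scale
        z = np.abs(r) / max(s, 1e-12)
        j = np.argmax(z)
        if z[j] <= cutoff:
            break
        removed.append(int(idx[j]))
        idx = np.delete(idx, j)
    return sorted(removed)

rng = np.random.default_rng(5)
n = 100
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + 0.5 * rng.normal(size=n)
y[[10, 40]] += 15.0                                        # two planted outliers
flagged = forward_outlier_removal(X, y)
```

Standardizing by the MAD rather than the classical residual standard deviation is what keeps the planted outliers from masking one another in this toy example.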


Statistical Matching Techniques Using the Robust Regression Model (로버스트 회귀모형을 이용한 자료결합방법)

  • Jhun, Myoung-Shic; Jung, Ji-Song; Park, Hye-Jin
    • The Korean Journal of Applied Statistics, v.21 no.6, pp.981-996, 2008
  • Statistical matching techniques aim to construct a complete data file from different sources. Since the statistical matching method proposed by Rubin (1986) assumes multivariate normality of the data, applying it to data that violate the assumption can cause problems. This research proposes a statistical matching method that uses robust regression as an alternative to linear regression. Furthermore, we carry out a simulation study to compare the performance of the robust regression model and the linear regression model for statistical matching.
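To convey the idea of matching through a robust fit, the sketch below imputes y for a recipient file from a Theil-Sen fit on the donor file. Theil-Sen is a stand-in chosen for compactness with a single common variable; the paper works with robust regression models more generally:

```python
import numpy as np

def theil_sen_match(xa, ya, xb):
    """Impute y for recipient file B (which observes only x) from a
    robust Theil-Sen fit on donor file A's (x, y): slope = median of
    pairwise slopes, intercept = median residual."""
    i, j = np.triu_indices(len(xa), k=1)       # all pairs i < j
    dx = xa[j] - xa[i]
    ok = dx != 0
    slope = np.median((ya[j] - ya[i])[ok] / dx[ok])
    intercept = np.median(ya - slope * xa)
    return intercept + slope * xb

rng = np.random.default_rng(7)
xa = rng.normal(size=200)
ya = 1.0 + 2.0 * xa + 0.3 * rng.normal(size=200)
ya[:10] += 25.0                                # contaminated donor records
xb = rng.normal(size=50)                       # recipient file: x only
yb_imputed = theil_sen_match(xa, ya, xb)
```

A least squares fit on the contaminated donor file would propagate the outliers' pull into every imputed record, which is the failure mode the robust alternative avoids.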

On Confidence Intervals of High Breakdown Regression Estimators

  • Lee Dong-Hee; Park YouSung; Kim Kang-yong
    • Proceedings of the Korean Statistical Society Conference, 2004.11a, pp.205-210, 2004
  • The weighted self-tuning robust regression estimator (WSTE) has a high breakdown point for estimating regression parameters, like other well-known high breakdown estimators. In this paper, we propose a method for obtaining standard inferential quantities such as confidence intervals for the WSTE, which is found to be superior to the other high breakdown regression estimators when the sample is contaminated.


Nonparametric M-Estimation for Functional Spatial Data

  • Attouch, Mohammed Kadi; Chouaf, Benamar; Laksaci, Ali
    • Communications for Statistical Applications and Methods, v.19 no.1, pp.193-211, 2012
  • This paper deals with robust nonparametric regression analysis when the regressors are functional random fields. More precisely, we consider $Z_i=(X_i,Y_i)$, $i{\in}\mathbb{N}^N$, a $\mathcal{F}{\times}\mathbb{R}$-valued measurable strictly stationary spatial process, where $\mathcal{F}$ is a semi-metric space, and we study the spatial interaction of $X_i$ and $Y_i$ via robust estimation of the regression function. We propose a family of robust nonparametric estimators of the regression function based on the kernel method. The main result of this work is the establishment of the asymptotic normality of these estimators under general mixing and small ball probability conditions.
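In the scalar-regressor special case, a kernel-weighted Huber M-estimator of the regression function can be computed by iteratively reweighted averaging. The sketch below is only that special case; the paper's functional/spatial setting, with regressors living in a semi-metric space, is not reproduced here:

```python
import numpy as np

def kernel_m_estimate(x0, x, y, h, c=1.345, n_iter=30):
    """Local constant Huber M-estimate of m(x0): iteratively
    reweighted kernel averaging, where each observation's kernel
    weight is multiplied by the Huber weight min(1, c/|u|) of its
    scaled residual u."""
    k = np.exp(-0.5 * ((x0 - x) / h) ** 2)      # Gaussian kernel weights
    theta = np.median(y)                        # robust starting value
    for _ in range(n_iter):
        r = y - theta
        s = np.median(np.abs(r - np.median(r))) / 0.6745   # robust scale
        u = r / max(s, 1e-12)
        w = k * np.minimum(1.0, c / np.maximum(np.abs(u), 1e-12))
        theta = np.sum(w * y) / np.sum(w)
    return theta

rng = np.random.default_rng(6)
n = 400
x = rng.uniform(-2, 2, size=n)
y = np.sin(x) + 0.2 * rng.normal(size=n)
y[:20] += 10.0                                  # 5% gross contamination

m_hat = kernel_m_estimate(0.5, x, y, h=0.3)
k0 = np.exp(-0.5 * ((0.5 - x) / 0.3) ** 2)
m_mean = np.sum(k0 * y) / np.sum(k0)            # non-robust kernel mean, for comparison
```

The contamination drags the ordinary Nadaraya-Watson mean well away from sin(0.5), while the M-type estimate stays near it, which is the robustness property the paper's asymptotics formalize.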