• Title/Summary/Keyword: regression outlier

Search Results: 116

Speed-up of the Matrix Computation on the Ridge Regression

  • Lee, Woochan;Kim, Moonseong;Park, Jaeyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.10 / pp.3482-3497 / 2021
  • Artificial intelligence has emerged as the core of the 4th industrial revolution, and large-scale data processing, such as big data technology and rapid data analysis, is inevitable. The most fundamental and universal data interpretation technique is analysis through regression, which is also the basis of machine learning. Ridge regression is a regression technique that decreases sensitivity to unique or outlier observations. The time-consuming portion of the matrix computation, however, is the inversion of a matrix, and as the matrix grows, solving the system becomes a major challenge. In this paper, a new algorithm is introduced to speed up the calculation of the ridge regression estimator through series expansion and computation recycling, without computing an inverse matrix or using other factorization methods. In addition, the performance of the proposed algorithm and the existing algorithm were compared across matrix sizes. Overall, the proposed algorithm demonstrated excellent speed-up with good accuracy.
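The core idea of solving the ridge normal equations without an explicit inverse can be sketched with a truncated Neumann series. This is a minimal illustration of the principle, not the paper's algorithm (which also recycles intermediate computations); all data and parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.normal(size=200)
lam = 10.0

A = X.T @ X + lam * np.eye(10)   # ridge normal-equation matrix
b = X.T @ y

beta_direct = np.linalg.solve(A, b)  # reference solution

# A^{-1} b = (1/c) * sum_k (I - A/c)^k b, which converges because the
# eigenvalues of A lie in (0, c] when c is the spectral norm of A.
c = np.linalg.norm(A, 2)
M = np.eye(10) - A / c
beta, term = np.zeros(10), b / c
for _ in range(5000):
    beta += term
    term = M @ term          # each step reuses the previous term
    if np.linalg.norm(term) < 1e-12:
        break

assert np.allclose(beta, beta_direct, atol=1e-8)
```

Each iteration costs only a matrix-vector product, which is why series methods can beat direct inversion when the matrix is large and well regularized.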

Robust tests for heteroscedasticity using outlier detection methods (이상치 탐지법을 이용한 강건 이분산 검정)

  • Seo, Han Son;Yoon, Min
    • The Korean Journal of Applied Statistics / v.29 no.3 / pp.399-408 / 2016
  • Heteroscedasticity invalidates the standard inference procedure, so it needs to be detected in regression analysis. The diagnostics for heteroscedasticity, however, may be distorted when outliers and heteroscedasticity coexist. Available heteroscedasticity detection methods for data containing outliers usually rely on robust estimators or on separating the outliers from the data, and several approaches have been suggested for identifying outliers in the heteroscedasticity problem. In this article, conventional tests for heteroscedasticity are modified by using a sequential outlier detection method to separate outliers from contaminated data. The performance of the proposed method is compared with the original tests through a Monte Carlo study and examples.
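The basic workflow, screening for outliers before testing heteroscedasticity, can be sketched as follows. The outlier screen here is a simple MAD-based residual rule, not the paper's sequential procedure, and the test is the studentized (Koenker) Breusch-Pagan statistic; all data are synthetic.

```python
import numpy as np

def breusch_pagan_lm(X, y):
    """Koenker's studentized Breusch-Pagan LM statistic: n * R^2 from
    regressing squared OLS residuals on the regressors."""
    n = len(y)
    Z = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    e2 = (y - Z @ beta) ** 2
    g, *_ = np.linalg.lstsq(Z, e2, rcond=None)
    fitted = Z @ g
    r2 = 1 - np.sum((e2 - fitted) ** 2) / np.sum((e2 - e2.mean()) ** 2)
    return n * r2

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(1, 3, n)
y = 2 + x + rng.normal(scale=0.5 * x)   # genuinely heteroscedastic errors
y[:5] += 25                              # five gross outliers

# Crude outlier screen (NOT the paper's sequential method): drop points whose
# OLS residual exceeds 3 robust (MAD-based) standard deviations.
Z = np.column_stack([np.ones(n), x])
res = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
scale = 1.4826 * np.median(np.abs(res - np.median(res)))
keep = np.abs(res - np.median(res)) < 3 * scale

lm_clean = breusch_pagan_lm(x[keep], y[keep])
# chi-square(1) critical value at the 5% level is about 3.84
print(lm_clean > 3.84)
```

With the gross outliers removed, the test should still detect the genuine heteroscedasticity in the remaining data.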

An Improved RSR Method to Obtain the Sparse Projection Matrix (희소 투영행렬 획득을 위한 RSR 개선 방법론)

  • Ahn, Jung-Ho
    • Journal of Digital Contents Society / v.16 no.4 / pp.605-613 / 2015
  • This paper addresses the problem of making the projection matrix sparse in pattern recognition methods. The size of a program is often restricted in embedded systems, and developed programs frequently include constant data; for example, many pattern recognition programs use a projection matrix for dimension reduction. To improve recognition performance, very high-dimensional feature vectors are often extracted, in which case the projection matrix can be very large. Recently, the RSR (rotated sparse regression) method [1] was proposed and has proved to be one of the best algorithms for obtaining a sparse matrix. We propose three methods to improve RSR: outlier removal, sampling, and elastic net RSR (E-RSR), in which the penalty term of the RSR optimization function is replaced by that of elastic net regression. The experimental results show that the proposed methods are very effective, improving the sparsity rate dramatically without sacrificing the recognition rate compared to the original RSR method.
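The elastic net penalty that E-RSR borrows can be illustrated with plain coordinate descent on a linear model. This is the generic elastic net objective, not the RSR/E-RSR optimization itself; data and hyperparameters are hypothetical.

```python
import numpy as np

def elastic_net(X, y, alpha=1.0, l1_ratio=0.5, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2
    + alpha*(l1_ratio*||b||_1 + 0.5*(1 - l1_ratio)*||b||_2^2)."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]        # partial residual
            rho = X[:, j] @ r_j / n
            z = col_sq[j] + alpha * (1 - l1_ratio)
            # soft-thresholding gives the sparsity
            b[j] = np.sign(rho) * max(abs(rho) - alpha * l1_ratio, 0) / z
    return b

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))
beta_true = np.zeros(20)
beta_true[:3] = [3.0, -2.0, 1.5]                    # sparse ground truth
y = X @ beta_true + rng.normal(scale=0.1, size=100)

b = elastic_net(X, y, alpha=0.2, l1_ratio=0.9)
print((np.abs(b) > 1e-8).sum())   # far fewer nonzeros than 20
```

The L1 part of the penalty zeroes out small coefficients, which is exactly the property exploited to sparsify a projection matrix.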

Robust Response Transformation Using Outlier Detection in Regression Model (회귀모형에서 이상치 검색을 이용한 로버스트 변수변환방법)

  • Seo, Han-Son;Lee, Ga-Yoen;Yoon, Min
    • The Korean Journal of Applied Statistics / v.25 no.1 / pp.205-213 / 2012
  • Transforming the response variable is a general tool for adapting data to a linear regression model. However, it is well known that response transformations in linear regression are very sensitive to one or a few outliers. Many methods have been suggested to develop transformations that are not influenced by potential outliers. Recently, Cheng (2005) suggested using a trimmed likelihood estimator based on the idea of the least trimmed squares (LTS) estimator; however, that method requires presetting the number of outliers and is computationally demanding. A new method is proposed that solves these problems and improves the robustness of the estimates. The method uses the stepwise procedure suggested by Hadi and Simonoff (1993) to detect the outliers that determine the response transformation.
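The sensitivity of a Box-Cox-style response transformation to outliers can be demonstrated numerically. The outlier screen below is a simple MAD-based residual rule standing in for Hadi and Simonoff's stepwise procedure; all data are synthetic.

```python
import numpy as np

def boxcox_loglik(lmbda, X, y):
    """Profile log-likelihood of the Box-Cox parameter for a linear
    model (up to additive constants)."""
    n = len(y)
    z = np.log(y) if abs(lmbda) < 1e-8 else (y ** lmbda - 1) / lmbda
    Z = np.column_stack([np.ones(n), X])
    res = z - Z @ np.linalg.lstsq(Z, z, rcond=None)[0]
    return -0.5 * n * np.log(res @ res / n) + (lmbda - 1) * np.log(y).sum()

rng = np.random.default_rng(3)
n = 100
x = rng.uniform(1, 5, n)
y = (1 + 0.5 * x + rng.normal(scale=0.05, size=n)) ** 2   # true lambda = 0.5
y[:3] *= 8                                                 # three gross outliers

grid = np.linspace(-1, 2, 121)

def best_lambda(X, y):
    return grid[np.argmax([boxcox_loglik(l, X, y) for l in grid])]

# A simple residual screen (not the paper's stepwise procedure) to
# separate the outliers before estimating the transformation.
Z = np.column_stack([np.ones(n), x])
res = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
mad = 1.4826 * np.median(np.abs(res - np.median(res)))
keep = np.abs(res - np.median(res)) < 3 * mad

lam_all = best_lambda(x, y)
lam_clean = best_lambda(x[keep], y[keep])
print(lam_all, lam_clean)   # lam_clean should land near the true 0.5
```

The contaminated estimate is dragged away from the true exponent, while the screened estimate recovers it, which is the failure mode the paper guards against.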

Principal Components Logistic Regression based on Robust Estimation (로버스트추정에 바탕을 둔 주성분로지스틱회귀)

  • Kim, Bu-Yong;Kahng, Myung-Wook;Jang, Hea-Won
    • The Korean Journal of Applied Statistics / v.22 no.3 / pp.531-539 / 2009
  • Logistic regression is widely used as a data mining technique for customer relationship management. The maximum likelihood estimator has highly inflated variance when multicollinearity exists among the regressors, and it is not robust against outliers. We therefore propose robust principal components logistic regression to deal with both the multicollinearity and the outlier problem. A procedure based on the condition index is suggested for the selection of principal components: when a condition index is larger than the cutoff value obtained from a model constructed on the basis of conjoint analysis, the corresponding principal component is removed from the logistic model. In addition, we employ an algorithm for robust estimation that dampens the effect of outliers by applying appropriate weights and factors to the leverage points and vertical outliers identified by the V-mask type criterion. The Monte Carlo simulation results indicate that the proposed procedure yields a higher rate of correct classification than the existing method.
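The condition-index screening step can be sketched as follows. The paper derives its cutoff from a conjoint-analysis model; this sketch substitutes the conventional rule-of-thumb cutoff of 30, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

# Principal components of the standardized regressors
Xs = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(Xs.T @ Xs / n)
eigval = eigval[::-1]                       # descending order
eigvec = eigvec[:, ::-1]

# Condition index of each component: sqrt(lambda_max / lambda_j).
cond_index = np.sqrt(eigval[0] / eigval)
keep = cond_index < 30                      # rule-of-thumb cutoff
scores = Xs @ eigvec[:, keep]               # retained PC scores
print(cond_index.round(1), keep)
```

The retained `scores` would then be fed into the (robust) logistic fit; the near-collinear pair produces one component with a huge condition index, which is dropped.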

A Study of the Robust Degradation Model by Analyzing the Filament Lamp Degradation Data (헤드램프용 필라멘트 램프 가속열화데이터 분석을 통한 로버스트 열화모형 연구)

  • Sung, Ki-Woo
    • Transactions of the Korean Society of Automotive Engineers / v.20 no.6 / pp.132-139 / 2012
  • Durability and lifetime testing is generally needed when developing parts based on new technology. In this paper, accelerated degradation analysis methods are developed for such testing. The study presents a robust model estimation method that is less affected by outliers in the regression model estimation. In addition, the lifetime can be predicted from the degradation-stress relationship at each stress level.
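A generic robust fit of a degradation trend, offered only to illustrate "less affected by outliers" and not the paper's estimator, is Huber M-estimation via iteratively reweighted least squares. The lamp data here are simulated.

```python
import numpy as np

def huber_irls(X, y, k=1.345, n_iter=50):
    """Huber M-estimate of a linear trend via iteratively
    reweighted least squares with a MAD residual scale."""
    Z = np.column_stack([np.ones(len(y)), X])
    w = np.ones(len(y))
    beta = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        W = Z * w[:, None]
        beta = np.linalg.solve(Z.T @ W, W.T @ y)   # weighted LS step
        r = y - Z @ beta
        s = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust scale
        u = np.abs(r / s)
        w = np.where(u <= k, 1.0, k / u)           # downweight outliers
    return beta

rng = np.random.default_rng(5)
t = np.linspace(0, 100, 60)                  # hours on test
lumen = 100 - 0.2 * t + rng.normal(scale=0.5, size=60)
lumen[10] -= 20                              # one corrupted measurement

b_ols = np.linalg.lstsq(np.column_stack([np.ones(60), t]), lumen, rcond=None)[0]
b_rob = huber_irls(t, lumen)
print(b_ols[1], b_rob[1])   # robust slope stays close to the true -0.2
```

The degradation rate (slope) estimated robustly is nearly unaffected by the corrupted point, which is what a lifetime extrapolation needs.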

A Prediction Method of Learning Outcomes based on Regression Model for Effective Peer Review Learning (효율적인 피어리뷰 학습을 위한 회귀 모델 기반 학습성과 예측 방법)

  • Shin, Hyo-Joung;Jung, Hye-Wuk;Cho, Kwang-Su;Lee, Jee-Hyoung
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.5 / pp.624-630 / 2012
  • Peer review learning is a method that improves students' learning outcomes through feedback between students and the observation and analysis of other students' work. An important problem in a peer review system is finding suitable evaluators for each learner, taking the characteristics of the students into account, so as to improve learning outcomes. Some peer review systems randomly assign evaluators to learners, or choose evaluators based on limited strategies; such systems do not consider the various characteristics of the learners and evaluators who participate in peer reviews. In this paper, we propose a novel approach for predicting learning outcomes in peer review systems that considers these characteristics. The proposed approach extracts representative attributes from student profiles and predicts learning outcomes using various regression models. To verify how much outliers affect the prediction of learning outcomes, we also apply several outlier removal methods to the regression models and compare their predictive performance. The experimental results show that the SVR model without outlier removal performs best, with an average error rate of 0.47%.
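The comparison methodology (fitting with and without an outlier removal step and scoring on held-out data) can be sketched as below. This uses a plain linear model and an IQR residual screen rather than SVR and the paper's removal methods; whether removal helps depends on the data, and in this synthetic setup with gross corruption it does.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
X = rng.normal(size=(n, 3))                 # stand-ins for profile attributes
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 10 + rng.normal(size=n)
y[:40] += 50                                # 10% corrupted scores (all in train)

X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

def ols_fit(X, y):
    Z = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0]

def mae(beta, X, y):
    Z = np.column_stack([np.ones(len(y)), X])
    return np.mean(np.abs(y - Z @ beta))

beta_raw = ols_fit(X_tr, y_tr)

# IQR screen on training residuals -- one of many possible removal rules
r = y_tr - np.column_stack([np.ones(300), X_tr]) @ beta_raw
q1, q3 = np.percentile(r, [25, 75])
keep = (r > q1 - 1.5 * (q3 - q1)) & (r < q3 + 1.5 * (q3 - q1))
beta_clean = ols_fit(X_tr[keep], y_tr[keep])

print(mae(beta_raw, X_te, y_te), mae(beta_clean, X_te, y_te))
```

Holding out a clean test set is what makes the with/without-removal comparison meaningful.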

Identification of Regression Outliers Based on Clustering of LMS-residual Plots

  • Kim, Bu-Yong;Oh, Mi-Hyun
    • Communications for Statistical Applications and Methods / v.11 no.3 / pp.485-494 / 2004
  • An algorithm is proposed to identify multiple outliers in linear regression. It is based on clustering the residuals from the least median of squares (LMS) estimation. A cut-height criterion for the hierarchical cluster tree is suggested that yields the optimal clustering of the regression outliers. The effectiveness of the procedures is compared on classic and artificial data sets, and the proposed algorithm is shown to be superior to the one based on least squares estimation. In particular, the algorithm deals very well with the masking and swamping effects, while the other does not.
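The pipeline, an LMS fit followed by clustering of its residuals, can be sketched in one dimension. The LMS fit is approximated by random elemental subsets, and cutting the sorted residuals at the largest gap is a crude stand-in for the paper's cut-height criterion on a hierarchical tree; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
x = rng.uniform(0, 10, n)
y = 1 + 2 * x + rng.normal(scale=0.3, size=n)
y[:6] += 15                                   # six regression outliers

# Approximate the least-median-of-squares fit by random 2-point subsets
best, best_med = None, np.inf
for _ in range(2000):
    i, j = rng.choice(n, 2, replace=False)
    if x[i] == x[j]:
        continue
    slope = (y[i] - y[j]) / (x[i] - x[j])
    inter = y[i] - slope * x[i]
    med = np.median((y - inter - slope * x) ** 2)
    if med < best_med:
        best_med, best = med, (inter, slope)

res = y - best[0] - best[1] * x

# Single-linkage clustering of 1-D residuals amounts to cutting at the
# largest gap; the group far from zero is flagged as the outlier cluster.
order = np.argsort(res)
gaps = np.diff(res[order])
cut = np.argmax(gaps)
if res[order][cut + 1:].mean() > 0:
    outlier_cluster = order[cut + 1:]
else:
    outlier_cluster = order[:cut + 1]
print(sorted(int(i) for i in outlier_cluster))
```

Because the LMS fit tracks the clean majority, all six shifted points end up in one well-separated residual cluster, avoiding the masking effect that defeats least-squares-based deletion.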

A Bayesian Approach to Detecting Outliers Using Variance-Inflation Model

  • Lee, Sangjeen;Chung, Younshik
    • Communications for Statistical Applications and Methods / v.8 no.3 / pp.805-814 / 2001
  • The problem of 'outliers', observations that look suspicious in some way, has long been one of the main concerns of experimenters and data analysts. We propose a model for the outlier problem and analyze it in the linear regression setting using a Bayesian approach with the variance-inflation model. We use Geweke's (1996) ideas, based on the data augmentation method, for detecting outliers in the linear regression model. The advantage of the proposed method is that it finds the subset of the data that is most suspicious under the given model by means of posterior probabilities, and the sampling-based approach handles the complicated Bayesian computation. Finally, the proposed methodology is applied to simulated and real data.
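A minimal Gibbs sampler for a variance-inflation outlier model can be sketched as follows. This is a simplified version with known error variance and inflation factor, a flat prior on the coefficients, and synthetic data, so it only illustrates the data-augmentation idea, not the paper's full specification.

```python
import numpy as np

rng = np.random.default_rng(8)
n, k_inf, pi, sigma = 40, 100.0, 0.05, 1.0   # inflation factor, prior outlier prob
x = rng.uniform(0, 10, n)
Z = np.column_stack([np.ones(n), x])
y = Z @ np.array([1.0, 2.0]) + rng.normal(scale=sigma, size=n)
y[0] += 12                                    # plant one outlier

delta = np.zeros(n, dtype=int)                # outlier indicators
probs = np.zeros(n)
n_iter, burn = 2000, 500
for it in range(n_iter):
    # beta | delta: weighted least squares draw (flat prior on beta)
    v = sigma ** 2 * np.where(delta == 1, k_inf, 1.0)
    W = Z / v[:, None]
    prec = Z.T @ W
    mean = np.linalg.solve(prec, W.T @ y)
    beta = rng.multivariate_normal(mean, np.linalg.inv(prec))
    # delta_i | beta: Bernoulli with posterior odds of the two normal densities
    r2 = (y - Z @ beta) ** 2 / sigma ** 2
    log_odds = (np.log(pi / (1 - pi)) - 0.5 * np.log(k_inf)
                + 0.5 * r2 * (1 - 1 / k_inf))
    p = 1 / (1 + np.exp(-log_odds))
    delta = (rng.random(n) < p).astype(int)
    if it >= burn:
        probs += delta
probs /= n_iter - burn
print(probs[0], probs[1:].max())   # posterior outlier probabilities
```

The planted observation receives posterior outlier probability near one, which is how the posterior identifies the most suspicious subset of the data.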


The Identification Of Multiple Outliers

  • Park, Jin-Pyo
    • Journal of the Korean Data and Information Science Society / v.11 no.2 / pp.201-215 / 2000
  • The classical method for regression analysis is least squares. However, if the data contain significant outliers, the least squares estimator can be broken down by them. To remedy this problem, robust methods are an important complement to least squares: they downweight or completely ignore the outliers. This is not always best, because the outliers can contain very important information about the population; if they can be detected, they can be inspected further and appropriate action taken based on the results. In this paper, I propose a sequential outlier test to identify outliers. It is based on a non-robust estimate and a robust estimate of the scatter of robust regression residuals, and it is applied in a forward procedure, removing the most extreme observation at each step until the test fails to detect outliers. Unlike other forward procedures, the present one is unaffected by swamping or masking effects because the statistic is based on robust regression residuals. I derive the asymptotic distribution of the test statistic and apply the test to several real and simulated data sets, where it is shown to perform fairly well.
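The shape of such a forward procedure can be sketched as below. This is only illustrative: the paper's statistic compares robust and non-robust scale estimates of robust-regression residuals, whereas this sketch standardizes OLS residuals by a MAD scale and uses a fixed cutoff; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 60
x = rng.uniform(0, 10, n)
y = 0.5 + x + rng.normal(scale=0.4, size=n)
y[:4] -= 10                                   # four outliers

# Forward procedure: while the most extreme MAD-standardized residual
# exceeds the cutoff, delete that point and refit.
idx = np.arange(n)
removed = []
while True:
    Z = np.column_stack([np.ones(len(idx)), x[idx]])
    res = y[idx] - Z @ np.linalg.lstsq(Z, y[idx], rcond=None)[0]
    s = 1.4826 * np.median(np.abs(res - np.median(res)))
    t = np.abs(res - np.median(res)) / s
    if t.max() < 3.5:
        break                                  # test fails to detect outliers
    worst = np.argmax(t)
    removed.append(int(idx[worst]))
    idx = np.delete(idx, worst)
print(sorted(removed))
```

Because the scale estimate is robust, the four planted points are deleted one at a time without the remaining ones being masked by each other.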
