• Title/Summary/Keyword: 주성분 회귀모형 (principal component regression model)


Simple principal component analysis using Lasso (라소를 이용한 간편한 주성분분석)

  • Park, Cheolyong
    • Journal of the Korean Data and Information Science Society / v.24 no.3 / pp.533-541 / 2013
  • In this study, a simple principal component analysis using the Lasso is proposed. The method consists of two steps. The first step computes the principal components by ordinary principal component analysis. The second step regresses each principal component on the original data matrix by the Lasso. Each new principal component is then computed as a linear combination of the original variables, using the scaled Lasso coefficient estimates as the weights of the combination. Because the least squares regression of a principal component on the original data matrix recovers the corresponding eigenvector, the Lasso version yields easily interpretable principal components with many zero coefficients. The method is applied to real and simulated data sets with the help of an R package for Lasso regression, and its usefulness is demonstrated.
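
The two-step procedure described above can be sketched in Python with scikit-learn (the paper uses an R Lasso package; the synthetic data and the `alpha` value here are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X = X - X.mean(axis=0)  # center the data matrix

# Step 1: ordinary principal component scores
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

# Step 2: Lasso-regress each score vector on X; the rescaled coefficients
# become sparse loadings (OLS here would recover the exact eigenvector)
sparse_loadings = []
for j in range(scores.shape[1]):
    lasso = Lasso(alpha=0.1, fit_intercept=False).fit(X, scores[:, j])
    w = lasso.coef_ / np.linalg.norm(lasso.coef_)  # scale to unit length
    sparse_loadings.append(w)
sparse_loadings = np.array(sparse_loadings)

# New, easier-to-interpret components: combinations with zero weights
new_pcs = X @ sparse_loadings.T
```

A larger `alpha` zeroes out more loadings at the cost of a looser approximation to the original components.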

Estimation of S&T Knowledge Production Function Using Principal Component Regression Model (주성분 회귀모형을 이용한 과학기술 지식생산함수 추정)

  • Park, Su-Dong;Sung, Oong-Hyun
    • Journal of Korea Technology Innovation Society / v.13 no.2 / pp.231-251 / 2010
  • The numbers of SCI papers and patents in science and technology are expected to be related to the number of researchers and to knowledge stocks (R&D stock, paper stock, patent stock). An ordinary regression model exhibited severe multicollinearity, which led to errors in the estimation and testing of the regression coefficients. To resolve the multicollinearity and estimate the effects of the independent variables properly, a principal component regression model was applied to three cases of S&T knowledge production. The estimated principal component regression function was transformed back to the original independent variables so that their effects could be interpreted properly. The analysis indicated that the principal component regression model is useful for estimating the effects of highly correlated production factors, and showed that the number of researchers and the R&D, paper, and patent stocks all have a positive effect on the production of papers and patents.
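
As a rough sketch of the principal component regression workflow the paper applies, including the back-transformation of the component coefficients to the original variables (hypothetical synthetic factors stand in for researcher counts and knowledge stocks):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
# Three strongly correlated "production factors" driven by one latent trend
z = rng.normal(size=n)
X = np.column_stack([z + 0.05 * rng.normal(size=n) for _ in range(3)])
y = X @ np.array([1.0, 0.5, 0.2]) + rng.normal(size=n)

# Standardize, project onto the leading principal component, then regress
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma
pca = PCA(n_components=1)            # keep the dominant component only
T = pca.fit_transform(Xs)
ols = LinearRegression().fit(T, y)

# Transform the PC coefficient back to the original (standardized) variables
# so each factor's effect can be read off directly
beta_std = pca.components_.T @ ols.coef_
print(beta_std)
```

With all factors positively driven by the same latent trend, the back-transformed coefficients are all positive, mirroring the paper's finding.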


Procedure for the Selection of Principal Components in Principal Components Regression (주성분회귀분석에서 주성분선정을 위한 새로운 방법)

  • Kim, Bu-Yong;Shin, Myung-Hee
    • The Korean Journal of Applied Statistics / v.23 no.5 / pp.967-975 / 2010
  • Since least squares estimation is not appropriate when multicollinearity exists among the regressors of a linear regression model, principal components regression is used to deal with the multicollinearity problem. This article suggests a new procedure for selecting suitable principal components, based on the condition index instead of the eigenvalue. Principal components whose condition indices exceed the upper limit of the cutoff value are removed from the model, while components whose condition indices fall below the lower limit are included. For components whose condition indices lie between the two limits, a forward inclusion method is employed to select the proper ones. The limits are obtained from a linear model constructed on the basis of conjoint analysis. The procedure is evaluated by Monte Carlo simulation in terms of the mean square error of the estimator, and the simulation results indicate that the proposed procedure is superior to the existing methods.
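
A minimal sketch of the condition indices on which the selection procedure operates (the paper derives its cutoff limits via conjoint analysis; the values 10 and 30 below are only conventional rule-of-thumb limits, assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
z = rng.normal(size=n)
# Two nearly identical regressors plus one independent regressor
X = np.column_stack([z + 0.01 * rng.normal(size=n),
                     z + 0.01 * rng.normal(size=n),
                     rng.normal(size=n)])

# Condition index of the k-th principal component:
# sqrt(largest eigenvalue / k-th eigenvalue) of the correlation matrix
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
condition_indices = np.sqrt(eigvals[0] / eigvals)
print(condition_indices)

# Illustrative decision rule (rule-of-thumb limits, not the paper's)
keep = condition_indices < 10    # clearly safe components
drop = condition_indices > 30    # clearly collinear components
```

Components falling between the two limits would then go through the forward inclusion step described in the abstract.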

Predicting Korea Pro-Baseball Rankings by Principal Component Regression Analysis (주성분회귀분석을 이용한 한국프로야구 순위)

  • Bae, Jae-Young;Lee, Jin-Mok;Lee, Jea-Young
    • Communications for Statistical Applications and Methods / v.19 no.3 / pp.367-379 / 2012
  • Predicting rankings has long been a subject of interest for baseball fans. Based on 2011 Korea Professional Baseball records, the arithmetic mean method, the weighted average method, principal component analysis, and principal component regression analysis are compared for ranking prediction. After examining the standardized arithmetic average, correlation-based weighted averages, and principal component analysis, principal component regression is selected as the final model. By running a regression on the variables reduced by principal component analysis, ranking prediction models are proposed for a pitcher part, a batter part, and a combined pitcher-batter part. The fitted models are used to estimate the 2011 rankings and to predict the rankings for 2012.

A Criterion for the Selection of Principal Components in the Robust Principal Component Regression (로버스트주성분회귀에서 최적의 주성분선정을 위한 기준)

  • Kim, Bu-Yong
    • Communications for Statistical Applications and Methods / v.18 no.6 / pp.761-770 / 2011
  • Robust principal components regression is suggested to deal with both the multicollinearity and the outlier problem. A main aspect of robust principal components regression is the selection of an optimal set of principal components. Instead of the eigenvalues of the sample covariance matrix, a selection criterion is developed based on the condition indices of the minimum volume ellipsoid estimator, which is highly robust against leverage points. In addition, least trimmed squares estimation is employed to cope with regression outliers. Monte Carlo simulation results indicate that the proposed criterion is superior to existing ones.
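
A loose sketch of the core idea of computing condition indices from a high-breakdown covariance estimate. Scikit-learn provides the minimum covariance determinant (MCD) estimator rather than the paper's minimum volume ellipsoid, and the least trimmed squares regression step is omitted, so this is only an analogy on assumed synthetic data:

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(3)
n = 80
z = rng.normal(size=n)
X = np.column_stack([z + 0.05 * rng.normal(size=n),
                     z + 0.05 * rng.normal(size=n),
                     rng.normal(size=n)])
X[:5] += 8.0  # a few leverage points that would distort the sample covariance

# Robust condition indices from a high-breakdown covariance estimate
# (MCD here as a stand-in for the minimum volume ellipsoid estimator)
mcd = MinCovDet(random_state=0).fit(X)
robust_eigvals = np.sort(np.linalg.eigvalsh(mcd.covariance_))[::-1]
robust_ci = np.sqrt(robust_eigvals[0] / robust_eigvals)
print(robust_ci)
```

Because the MCD fit downweights the shifted rows, the indices reflect the collinearity of the clean bulk of the data rather than the leverage points.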

Performance Comparison of Data Mining Approaches for Prediction Models of Near Infrared Spectroscopy Data (근적외선 분광 데이터 예측 모형을 위한 데이터 마이닝 기법의 성능비교)

  • Baek, Seung Hyun
    • Journal of the Korea Safety Management & Science / v.15 no.4 / pp.311-315 / 2013
  • This paper compares principal component regression and partial least squares regression. The purpose of the comparison is to find a suitable prediction method for near-infrared spectroscopy data with a linear structure. The two data mining methods, principal component regression and partial least squares regression, are compared, and partial least squares regression shows slightly better predictive ability than principal component regression: 50 principal components are used to build the principal component regression model, whereas only 12 latent factors are used in partial least squares regression. The mean squared error is used to measure predictive ability. According to the analysis of the near-infrared spectroscopy data in this paper, partial least squares regression proves to be the most suitable model for predicting data with a linear tendency.

Improving Polynomial Regression Using Principal Components Regression With the Example of the Numerical Inversion of Probability Generating Function (주성분회귀분석을 활용한 다항회귀분석 성능개선: PGF 수치역변환 사례를 중심으로)

  • Yang, Won Seok;Park, Hyun-Min
    • The Journal of the Korea Contents Association / v.15 no.1 / pp.475-481 / 2015
  • We use polynomial regression instead of linear regression when there is a nonlinear relation between the dependent variable and the independent variables in a regression analysis. The performance of polynomial regression, however, may deteriorate because of the correlation among the power terms of the independent variables. We present a polynomial regression model for the numerical inversion of a probability generating function (PGF) and show that the correlation degrades the estimation of the coefficients. We then apply principal components regression to the polynomial regression model and show that it dramatically improves the performance of the parameter estimation.
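
The effect can be reproduced in miniature: power terms of a single variable are strongly correlated, and regressing on a few leading principal components of those terms stabilizes the fit (a sketch on assumed synthetic data, not the paper's PGF example):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, size=200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=200)
X = x.reshape(-1, 1)

# The power terms x, x^2, ..., x^8 are strongly correlated on [0, 1],
# which destabilizes an ordinary polynomial least squares fit.
# PCR: expand, standardize, keep leading components, then regress.
poly_pcr = make_pipeline(PolynomialFeatures(8, include_bias=False),
                         StandardScaler(),
                         PCA(n_components=5),
                         LinearRegression()).fit(X, y)
pred = poly_pcr.predict(X)
```

Dropping the trailing, near-degenerate components removes the directions in which the coefficient estimates have inflated variance.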

Development of Regression Models Resolving High-Dimensional Data and Multicollinearity Problem for Heavy Rain Damage Data (호우피해자료에서의 고차원 자료 및 다중공선성 문제를 해소한 회귀모형 개발)

  • Kim, Jeonghwan;Park, Jihyun;Choi, Changhyun;Kim, Hung Soo
    • KSCE Journal of Civil and Environmental Engineering Research / v.38 no.6 / pp.801-808 / 2018
  • The learning of a linear regression model is stable on the assumption that the sample size is sufficiently larger than the number of explanatory variables and that there is no serious multicollinearity between the explanatory variables. In this study, we investigated the difficulty of model learning when this assumption is violated by analyzing real heavy-rain damage data, and we proposed using a principal component regression model or a ridge regression model after integrating the data to overcome the difficulty. We evaluated the predictive performance of the proposed models on test data independent of the training data, and confirmed that the proposed methods showed better predictive performance than the linear regression model.
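
A minimal sketch of the two proposed remedies, on assumed synthetic data with more correlated predictors than is comfortable for ordinary least squares:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n, p = 40, 20                                       # small n relative to p
base = rng.normal(size=(n, 2))                      # two latent drivers
X = base @ rng.normal(size=(2, p)) + 0.05 * rng.normal(size=(n, p))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Remedy 1: principal component regression on the dominant components
pcr = make_pipeline(StandardScaler(), PCA(n_components=2),
                    LinearRegression()).fit(X_tr, y_tr)
# Remedy 2: ridge regression, shrinking coefficients instead of truncating
ridge = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X_tr, y_tr)

mse_pcr = mean_squared_error(y_te, pcr.predict(X_te))
mse_ridge = mean_squared_error(y_te, ridge.predict(X_te))
print(mse_pcr, mse_ridge)
```

Both approaches regularize the ill-conditioned problem: PCR by discarding near-degenerate directions, ridge by penalizing them.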

Principal Components Regression in Logistic Model (로지스틱모형에서의 주성분회귀)

  • Kim, Bu-Yong;Kahng, Myung-Wook
    • The Korean Journal of Applied Statistics / v.21 no.4 / pp.571-580 / 2008
  • Logistic regression analysis is widely used in the areas of customer relationship management and credit risk management. It is well known that maximum likelihood estimation is not appropriate when multicollinearity exists among the regressors. Thus we propose logistic principal components regression to deal with the multicollinearity problem. In particular, a new method is suggested for selecting proper principal components, based on the condition index instead of the eigenvalue. When a condition index is larger than the upper limit of the cutoff value, the corresponding principal component is removed from the estimation, and a hypothesis test is sequentially employed to decide on principal components whose condition indices lie between the upper and lower limits. The limits are obtained from a linear model constructed on the basis of conjoint analysis. The proposed method is evaluated by means of the variance of the estimates and the correct classification rate. The results indicate that the proposed method is superior to the existing method in terms of efficiency and goodness of fit.
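
The underlying principal components logistic regression (without the paper's condition-index selection rule, which is its actual contribution) can be sketched on assumed synthetic data as:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 300
z = rng.normal(size=n)
# Two nearly collinear regressors plus one independent regressor
X = np.column_stack([z + 0.05 * rng.normal(size=n),
                     z + 0.05 * rng.normal(size=n),
                     rng.normal(size=n)])
p_true = 1 / (1 + np.exp(-(X[:, 0] + X[:, 2])))
y = (rng.uniform(size=n) < p_true).astype(int)

# Fit the logistic model on retained principal components instead of the
# collinear regressors (here we simply keep 2 components; the paper would
# choose them by condition index)
pclr = make_pipeline(StandardScaler(), PCA(n_components=2),
                     LogisticRegression()).fit(X, y)
acc = pclr.score(X, y)
print(acc)
```

Dropping the near-degenerate component removes the direction along which the maximum likelihood estimates would have inflated variance.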

Principal Components Logistic Regression based on Robust Estimation (로버스트추정에 바탕을 둔 주성분로지스틱회귀)

  • Kim, Bu-Yong;Kahng, Myung-Wook;Jang, Hea-Won
    • The Korean Journal of Applied Statistics / v.22 no.3 / pp.531-539 / 2009
  • Logistic regression is widely used as a data mining technique for customer relationship management. The maximum likelihood estimator has a highly inflated variance when multicollinearity exists among the regressors, and it is not robust against outliers. Thus we propose robust principal components logistic regression to deal with both the multicollinearity and the outlier problem. A procedure is suggested for the selection of principal components, based on the condition index: when a condition index is larger than the cutoff value obtained from a model constructed on the basis of conjoint analysis, the corresponding principal component is removed from the logistic model. In addition, we employ an algorithm for robust estimation that dampens the effect of outliers by applying appropriate weights and factors to the leverage points and vertical outliers identified by a V-mask type criterion. Monte Carlo simulation results indicate that the proposed procedure yields a higher rate of correct classification than the existing method.