• Title/Summary/Keyword: Data Principal


Classification via principal differential analysis

  • Jang, Eunseong;Lim, Yaeji
    • Communications for Statistical Applications and Methods / v.28 no.2 / pp.135-150 / 2021
  • We propose principal differential analysis (PDA) based classification methods. Computation of the squared multiple correlation function (RSQ) and of PDA scores is reviewed; in addition, we combine the principal differential analysis results with logistic regression for binary classification. In a numerical study, we compare the PDA based classification methods with functional principal component analysis based classification. Various scenarios are considered in a simulation study, and the PDA based classification methods classify the functional data well. Gene expression data are considered for the real data analysis, where we observe that the PDA score based method also performs well.
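The score-plus-logistic-regression pipeline this abstract describes can be sketched as follows. This is an illustrative approximation, not the authors' code: ordinary principal component scores of discretized curves stand in for the PDA/RSQ machinery, the logistic fit is a minimal gradient-descent version, and all names and data are made up.

```python
import numpy as np

def pc_scores(X, k):
    """Project rows of X (curves on a common grid) onto the top-k components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data matrix; rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fit_logistic(Z, y, lr=0.1, n_iter=2000):
    """Plain gradient-ascent logistic regression on the score matrix Z."""
    Z1 = np.hstack([np.ones((len(Z), 1)), Z])   # add an intercept column
    w = np.zeros(Z1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Z1 @ w))
        w += lr * Z1.T @ (y - p) / len(y)
    return w

def predict(Z, w):
    Z1 = np.hstack([np.ones((len(Z), 1)), Z])
    return (1.0 / (1.0 + np.exp(-Z1 @ w)) > 0.5).astype(int)

# Toy functional data: two classes of noisy curves shifted apart
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
X0 = np.sin(2 * np.pi * t) + rng.normal(0, 0.1, (40, 50))
X1 = np.sin(2 * np.pi * t) + 1.0 + rng.normal(0, 0.1, (40, 50))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], 40)

Z = pc_scores(X, k=2)          # low-dimensional scores per curve
w = fit_logistic(Z, y)         # binary classifier on the scores
acc = (predict(Z, w) == y).mean()
```

The same two-step shape (extract scores, then classify the scores) applies whether the scores come from FPCA or PDA; only the first step changes.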

Numerical Investigations in Choosing the Number of Principal Components in Principal Component Regression - CASE I

  • Shin, Jae-Kyoung;Moon, Sung-Ho
    • Journal of the Korean Data and Information Science Society / v.8 no.2 / pp.127-134 / 1997
  • A method is proposed for choosing the number of principal components in principal component regression based on the predicted error sum of squares (PRESS). To do this, we evaluate that statistic approximately, using a linear approximation based on a perturbation expansion. In this paper, we apply the proposed method to various data sets and discuss some properties of choosing the number of principal components in principal component regression.
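A hedged sketch of the underlying idea: for each candidate k, the leave-one-out PRESS of the regression on the first k principal component scores has a closed form via the hat matrix, so no refitting per left-out point is needed. The paper's perturbation-expansion approximation is not reproduced here; the data and names are illustrative.

```python
import numpy as np

def press_for_k(X, y, k):
    """Leave-one-out PRESS for principal component regression with k components."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                       # scores on the first k components
    # Hat matrix of the score regression; LOO residuals have a closed form
    H = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    resid = yc - H @ yc
    h = np.diag(H)
    return np.sum((resid / (1.0 - h)) ** 2)

# Toy data whose response depends on only a few directions
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8)) * np.array([5, 3, 1, 1, 1, 1, 1, 1.0])
beta = np.array([2.0, -1.0, 0, 0, 0, 0, 0, 0])
y = X @ beta + rng.normal(0, 0.5, 60)

press = {k: press_for_k(X, y, k) for k in range(1, 9)}
best_k = min(press, key=press.get)          # k minimizing predicted error
```

Scanning k against PRESS like this is the cross-validatory criterion the paper approximates analytically.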

A Study on Selecting Principal Component Variables Using Adaptive Correlation (적응적 상관도를 이용한 주성분 변수 선정에 관한 연구)

  • Ko, Myung-Sook
    • KIPS Transactions on Software and Data Engineering / v.10 no.3 / pp.79-84 / 2021
  • A feature extraction method that captures the features of high-dimensional data well while maintaining its properties is required. Principal component analysis, which converts high-dimensional data into low-dimensional data and expresses it with fewer variables than the original data, is a representative method for feature extraction. In this study, we propose a principal component analysis method based on adaptive correlation for selecting principal component variables when the data are high-dimensional. The proposed method analyzes the principal components by adaptively reflecting the correlation between the input variables and excludes highly correlated candidates from selection. It analyzes the principal component hierarchy through the eigenvector coefficient values, prevents the selection of principal components with a low hierarchy, and minimizes the data duplication that induces bias through correlation analysis. In this way, we propose a method of selecting principal component variables that represent the characteristics of the actual data well while reducing the influence of data bias.

Simple principal component analysis using Lasso (라소를 이용한 간편한 주성분분석)

  • Park, Cheolyong
    • Journal of the Korean Data and Information Science Society / v.24 no.3 / pp.533-541 / 2013
  • In this study, a simple principal component analysis using the lasso is proposed. The method consists of two steps. The first step computes the principal components by ordinary principal component analysis. The second step regresses each principal component on the original data matrix by the lasso. Each new principal component is then computed as a linear combination of the original data matrix, using the scaled lasso coefficient estimates as the combination weights. Because without the penalty the regression of each principal component on the original data matrix recovers the corresponding eigenvector, the lasso's sparsity yields easily interpretable principal components with more zero coefficients. The method is applied to real and simulated data sets with the help of an R package for lasso regression, and its usefulness is demonstrated.
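The two-step procedure can be sketched as below. This is a minimal illustration, not the paper's R implementation: `lasso_cd` is a hand-rolled coordinate-descent lasso, and the penalty value and data are made up.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]   # partial residual excluding j
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))
Xc = X - X.mean(axis=0)

# Step 1: ordinary PCA; columns of `scores` are the principal components
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s

# Step 2: regress each principal component on the original data with the
# lasso; the sparse coefficients replace the dense eigenvector loadings
sparse_loadings = np.column_stack(
    [lasso_cd(Xc, scores[:, j], lam=20.0) for j in range(2)]
)
```

Larger `lam` zeroes more loadings, trading explained variance for interpretability.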

Numerical Investigations in Choosing the Number of Principal Components in Principal Component Regression - CASE II

  • Shin, Jae-Kyoung;Moon, Sung-Ho
    • Journal of the Korean Data and Information Science Society / v.10 no.1 / pp.163-172 / 1999
  • We propose a cross-validatory method for choosing the number of principal components in principal component regression based on the magnitudes of the correlations with y. There are two ways of ordering principal components: by eigenvalue (Shin and Moon, 1997) and by correlation with y. We apply our method to various data sets and compare the results of the two orderings.
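The difference between the two orderings can be sketched in a few lines. This is an illustrative toy, assuming a response driven by a low-variance component, so that the eigenvalue ordering and the correlation-with-y ordering disagree; all names and data are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.2])
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                   # columns already ordered by eigenvalue

# Suppose y is driven by the *third* component, not the largest one
y = scores[:, 2] + rng.normal(0, 0.1, 50)
yc = y - y.mean()

# Ordering 2: rank components by |correlation with y| instead of eigenvalue
cors = np.abs([np.corrcoef(scores[:, j], yc)[0, 1] for j in range(5)])
order_by_cor = np.argsort(cors)[::-1]
```

Here the eigenvalue ordering would pick components 0 and 1 first even though they carry no signal about y, while the correlation ordering promotes component 2 immediately.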

AN EFFICIENT ALGORITHM FOR SLIDING WINDOW BASED INCREMENTAL PRINCIPAL COMPONENTS ANALYSIS

  • Lee, Geunseop
    • Journal of the Korean Mathematical Society / v.57 no.2 / pp.401-414 / 2020
  • It is computationally expensive to recompute principal components from scratch at every update or downdate when new data arrive and existing data are truncated from the data matrix frequently. To overcome this limitation, incremental principal component analysis is considered. Specifically, we present an efficient sliding window based incremental principal component computation from a covariance matrix, which comprises two procedures: a simultaneous update and downdate of the principal components, followed by a rank-one matrix update. Additionally, we track the accurate decomposition error and the adaptive numerical rank. Experiments show that the proposed algorithm achieves faster execution with no meaningful difference in decomposition error compared to typical incremental principal component analysis algorithms, thereby maintaining a good approximation of the principal components.
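The core update-plus-downdate idea can be shown with a simplified sketch (not the paper's algorithm): maintain a sliding-window covariance by a rank-one update for the arriving sample and a rank-one downdate for the departing one, then eigendecompose, instead of rebuilding the covariance from scratch. The mean is ignored here for brevity; all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
d, w = 4, 30
stream = rng.normal(size=(100, d))

# Covariance (about the origin, for simplicity) of the initial window
window = stream[:w]
C = window.T @ window / w

for t in range(w, 60):
    x_new, x_old = stream[t], stream[t - w]
    # Rank-one update for the new sample, rank-one downdate for the old one
    C += (np.outer(x_new, x_new) - np.outer(x_old, x_old)) / w

# Principal components of the current window come from the updated matrix
evals, evecs = np.linalg.eigh(C)

# Sanity check against a from-scratch computation over the same window
current = stream[60 - w:60]
C_direct = current.T @ current / w
err = np.abs(C - C_direct).max()
```

Each step costs O(d^2) for the covariance maintenance versus O(w d^2) for a full recomputation; the paper pushes the same idea further by updating the decomposition itself.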

Principal component regression for spatial data (공간자료 주성분분석)

  • Lim, Yaeji
    • The Korean Journal of Applied Statistics / v.30 no.3 / pp.311-321 / 2017
  • Principal component analysis is a popular statistical method for reducing the dimension of high dimensional climate data and extracting meaningful climate patterns. Based on the principal component analysis, we can further apply a regression approach for the linear prediction of future climate, termed principal component regression (PCR). In this paper, we develop a new PCR method based on the regularized principal component analysis for spatial data proposed by Wang and Huang (2016) to account for the spatial features of climate data. We apply the proposed method to temperature prediction in the East Asia region and compare the results with those of conventional PCR.

New EM algorithm for Principal Component Analysis (주성분 분석을 위한 새로운 EM 알고리듬)

  • 안종훈;오종훈
    • Proceedings of the Korean Information Science Society Conference / 2001.04b / pp.529-531 / 2001
  • We present an expectation-maximization (EM) algorithm for principal component analysis via orthogonalization. The algorithm finds the actual principal components, whereas previously proposed EM algorithms can only find the principal subspace. The new algorithm is simple and more efficient than probabilistic PCA, especially in noiseless cases. Conventional PCA needs to compute the inverse of the covariance matrix, which makes it prohibitively expensive when the dimension of the data space is large. The EM algorithm is very powerful for high dimensional data when only a few principal components are needed.
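A sketch of the classic EM-for-PCA iteration (in the style of Roweis's noiseless EM-PCA, not necessarily the authors' exact algorithm): alternate a least-squares E-step for the latent scores and an M-step for the loadings, then orthogonalize at the end. Only k-by-k systems are solved, never a d-by-d covariance inverse. All names and data are illustrative.

```python
import numpy as np

def em_pca(X, k, n_iter=200):
    """Noiseless EM for PCA: alternating least squares plus orthogonalization."""
    Xc = X - X.mean(axis=0)            # rows are observations in d dimensions
    rng = np.random.default_rng(0)
    W = rng.normal(size=(Xc.shape[1], k))
    for _ in range(n_iter):
        # E-step: latent scores given the current loadings (k x k solve only)
        Y = np.linalg.solve(W.T @ W, W.T @ Xc.T)
        # M-step: loadings given the scores (another k x k solve)
        W = Xc.T @ Y.T @ np.linalg.inv(Y @ Y.T)
    # Orthogonalize the learned subspace to recover actual components
    Q, _ = np.linalg.qr(W)
    return Q

rng = np.random.default_rng(5)
# Data with a dominant 2-D subspace embedded in 10 dimensions
Z = rng.normal(size=(300, 2)) @ rng.normal(size=(2, 10))
X = Z + rng.normal(0, 0.05, size=(300, 10))

Q = em_pca(X, k=2)
# Compare the spanned subspace with the top-2 SVD directions;
# the Frobenius norm equals sqrt(2) when the two subspaces coincide
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
overlap = np.linalg.norm(Vt[:2] @ Q)
```

The per-iteration cost scales with n·d·k rather than d^2 or d^3, which is the abstract's point about high dimensional data with few components.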

Application of the supplementary principal component analysis for the 1982-1992 Korean Pro Baseball data (89-92 한국 프로야구의 각 팀과 부문별 평균 성적에 대한 추가적 주성분분석의 응용)

  • 최용석;심희정
    • The Korean Journal of Applied Statistics / v.8 no.1 / pp.51-60 / 1995
  • Given an $n \times p$ data matrix, if we add $p_s$ variables of a somewhat different nature than the original $p$ variables, we obtain a new $n \times (p+p_s)$ data matrix. Because of these $p_s$ variables, the traditional principal component analysis cannot provide efficient results. In this study, to address this problem, we review the supplementary principal component analysis, which treats the $p_s$ variables as supplementary variables. The technique is based on the algebraic and geometric aspects of the traditional principal component analysis. We provide a statistical data analysis of the records of eight teams and fourteen fields of the 1982-1992 Korean Pro Baseball data based on the supplementary principal component analysis and the traditional principal component analysis, and we compare their results.
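The supplementary-variable idea can be sketched as follows: the components are built from the $p$ active variables only, and the $p_s$ supplementary variables are afterwards projected onto them (here via correlations with the scores) without ever influencing the decomposition. This is an illustrative toy, not the paper's baseball analysis; all names and data are made up.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, p_s = 8, 5, 2                  # e.g. 8 teams, 5 active + 2 supplementary fields
X = rng.normal(size=(n, p))          # active variables define the components
S = rng.normal(size=(n, p_s))        # supplementary variables are only projected

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                       # row (team) coordinates from active variables

# Supplementary variables: correlated with the scores, never used to build them
Sc = S - S.mean(axis=0)
sup_coords = np.array(
    [[np.corrcoef(Sc[:, j], scores[:, a])[0, 1] for a in range(2)]
     for j in range(p_s)]
)
```

This keeps the component structure driven by the homogeneous variables while still letting the heterogeneous ones be displayed on the same axes.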

Principal Component Transformation of the Satellite Image Data and Principal-Components-Based Image Classification (위성 영상데이터의 주성분변환 및 주성분 기반 영상분류)

  • Seo, Yong-Su
    • Journal of the Korean Association of Geographic Information Studies / v.7 no.4 / pp.24-33 / 2004
  • Advances in remote sensing technologies are resulting in a rapid increase in the number of spectral channels and, thus, growing data volumes, which creates a need for faster techniques for processing such data. One application in which such fast processing is needed is the dimension reduction of multispectral data. Principal component transformation is perhaps the most popular dimension reduction technique for multispectral data. In this paper, we discuss the processing procedure of the principal component transformation and present and discuss the results of applying it to multispectral data. Moreover, the principal component image data are classified by the maximum likelihood method and the multilayer perceptron method, and the performance of the two classification methods and the data reduction effects are evaluated and analyzed based on the experimental results.
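The principal component transformation of a multispectral image can be sketched in a few lines: flatten the image to a pixels-by-bands matrix, eigendecompose the band covariance, rotate, and keep the leading PC bands as input to a classifier. This is an illustrative toy with synthetic data, not the paper's satellite scenes; all names are made up.

```python
import numpy as np

rng = np.random.default_rng(7)
H, W, bands = 16, 16, 6
# Toy multispectral image: correlated bands generated from a low-rank scene
scene = rng.normal(size=(H, W, 2))
mix = rng.normal(size=(2, bands))
img = scene.reshape(-1, 2) @ mix + rng.normal(0, 0.1, (H * W, bands))

# Principal component transformation: eigenvectors of the band covariance
Xc = img - img.mean(axis=0)
C = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]             # sort components by variance
pcs = (Xc @ evecs[:, order]).reshape(H, W, bands)

# Keep the first k PC bands as the reduced input for classification
k = 2
reduced = pcs[:, :, :k]
var_kept = evals[order][:k].sum() / evals.sum()
```

A maximum likelihood or multilayer perceptron classifier would then be trained on `reduced` instead of all six bands, which is the data reduction effect the paper evaluates.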
