• Title/Summary/Keyword: Principal Component Model

Asymptotic Test for Dimensionality in Probabilistic Principal Component Analysis with Missing Values

  • Park, Chong-sun
    • Communications for Statistical Applications and Methods
    • /
    • v.11 no.1
    • /
    • pp.49-58
    • /
    • 2004
  • We propose an asymptotic test for dimensionality in the latent variable model for probabilistic principal component analysis with values missing at random. The proposed procedure is a sequential likelihood ratio test for an appropriate normal latent variable model underlying the principal component analysis. A modified EM algorithm is used to find the maximum likelihood estimates of the model parameters. Results from simulations and real data sets provide promising evidence that the proposed method is useful for finding the necessary number of components in principal component analysis with values missing at random.
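The sequential testing idea can be sketched for the complete-data case, where the PPCA maximum likelihood estimates have the closed form of Tipping and Bishop. The paper's modified EM step for missing values and its asymptotic reference distribution are not reproduced here; the cutoff `crit` below is a hypothetical placeholder, not the paper's critical value.

```python
import numpy as np

def ppca_loglik(X, q):
    """Profile log-likelihood of a q-component PPCA model (complete data).

    Uses the Tipping-Bishop closed form: the MLE of the noise variance is
    the mean of the d-q smallest eigenvalues of the sample covariance.
    """
    n, d = X.shape
    S = np.cov(X, rowvar=False, bias=True)           # ML sample covariance
    eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]   # descending order
    sigma2 = eigvals[q:].mean()                      # MLE of noise variance
    return -0.5 * n * (d * np.log(2 * np.pi)
                       + np.sum(np.log(eigvals[:q]))
                       + (d - q) * np.log(sigma2)
                       + d)

def sequential_lrt(X, max_q, crit):
    """Sequentially test q vs. q+1 components; stop once the likelihood
    ratio statistic falls below the chosen critical value."""
    q = 1
    while q < max_q:
        stat = 2 * (ppca_loglik(X, q + 1) - ppca_loglik(X, q))
        if stat < crit:
            break
        q += 1
    return q
```

On data with two strong latent components, the statistic for the step from one to two components is large, while the step from two to three is small, so the sequence stops at two.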

Predicting Korea Pro-Baseball Rankings by Principal Component Regression Analysis (주성분회귀분석을 이용한 한국프로야구 순위)

  • Bae, Jae-Young;Lee, Jin-Mok;Lee, Jea-Young
    • Communications for Statistical Applications and Methods
    • /
    • v.19 no.3
    • /
    • pp.367-379
    • /
    • 2012
  • In baseball, predicting the final rankings has long been a subject of interest for fans. To predict these rankings from 2011 Korea Professional Baseball records, the arithmetic mean method, the weighted average method, principal component analysis, and principal component regression analysis are presented. After standardizing by the arithmetic average, weighting by the correlation coefficients, and applying principal component analysis to predict rankings, a principal component regression model was selected as the final model. By running a regression analysis on the variables reduced by principal component analysis, we propose rank-prediction models based on pitching statistics, on batting statistics, and on the two combined, and estimate the 2011 rankings with the fitted models. The regression model is then used to predict the rankings for 2012.
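The pipeline described above (standardize, reduce by PCA, regress on the retained components, map the coefficients back to the original variables) can be sketched as follows; the data and the number of retained components `k` are hypothetical, not the paper's.

```python
import numpy as np

def pcr_fit_predict(X_train, y_train, X_new, k):
    """Principal component regression: standardize, project onto the top-k
    principal components, fit least squares in PC space, and map the
    coefficients back to the original (standardized) variables."""
    mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
    Z = (X_train - mu) / sd
    # right singular vectors of the standardized data = PC loadings
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    V = Vt[:k].T                          # p x k loading matrix
    T = Z @ V                             # scores of the k retained components
    design = np.column_stack([np.ones(len(T)), T])
    gamma, *_ = np.linalg.lstsq(design, y_train, rcond=None)
    beta = V @ gamma[1:]                  # coefficients on standardized X
    Z_new = (X_new - mu) / sd
    return gamma[0] + Z_new @ beta
```

With nearly collinear predictors, regressing on a few components keeps the fit stable where ordinary least squares on the raw variables would be ill-conditioned.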

Simple principal component analysis using Lasso (라소를 이용한 간편한 주성분분석)

  • Park, Cheolyong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.3
    • /
    • pp.533-541
    • /
    • 2013
  • In this study, a simple principal component analysis using the Lasso is proposed. The method consists of two steps. The first step computes the principal components by ordinary principal component analysis. The second step regresses each principal component on the original data matrix by the Lasso regression method. Each new principal component is then computed as a linear combination of the original data matrix, using the scaled Lasso regression coefficient estimates as the combination weights. Because the regression coefficient vector of each principal component on the original data matrix is the corresponding eigenvector, the properties of Lasso regression yield easily interpretable principal components with more zero coefficients. The method is applied to real and simulated data sets with the help of an R package for Lasso regression, and its usefulness is demonstrated.
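The two-step procedure can be sketched as follows. The paper uses an R package for the Lasso step, so the small coordinate-descent solver below is a stand-in, and the penalty `lam` is an illustrative choice.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimal coordinate-descent Lasso:
    minimize ||y - Xb||^2 / (2n) + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # residual excluding j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

def simple_pca_lasso(X, lam):
    """Two-step procedure: (1) ordinary PCA, (2) Lasso-regress each
    principal component score on the data matrix; the scaled coefficients
    become sparse, interpretable loadings."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt.T
    loadings = []
    for j in range(Vt.shape[0]):
        b = lasso_cd(Xc, scores[:, j], lam)
        nrm = np.linalg.norm(b)
        loadings.append(b / nrm if nrm > 0 else b)   # scale to unit length
    return np.array(loadings).T        # p x p sparse loading matrix
```

On block-structured data, the Lasso step zeroes the loadings on variables that contribute little to a component, which is exactly what makes the components easier to interpret.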

Bayesian Typhoon Track Prediction Using Wind Vector Data

  • Han, Minkyu;Lee, Jaeyong
    • Communications for Statistical Applications and Methods
    • /
    • v.22 no.3
    • /
    • pp.241-253
    • /
    • 2015
  • In this paper we predict typhoon tracks using a Bayesian principal component regression model based on wind field data. Data are obtained at each time point, and the Bayesian principal component regression model is applied to conduct the track prediction at each time point. Within the regression model, we apply a variable-selection prior and two kinds of prior distributions, normal and Laplace, and we show prediction results based on the Bayesian model averaging (BMA) estimator and the median probability model (MPM) estimator. We analyze 8 typhoons in 2006 using data from the previous 6 years (2000-2005) and compare our prediction results with the moving-nest typhoon model (MTM) proposed by the Korea Meteorological Administration. We posit that it is possible to predict the track of a typhoon accurately using only a statistical model, without a dynamical model.
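A minimal sketch of the normal-prior case: with an N(0, τ²) prior on each PC-space coefficient, the posterior mean has a closed ridge-type form. The Laplace prior, variable-selection prior, and the BMA/MPM estimators from the paper are not reproduced, and `tau2` and `sigma2` are assumed known here.

```python
import numpy as np

def bayes_pcr_posterior_mean(X, y, k, tau2=10.0, sigma2=1.0):
    """Bayesian principal component regression with a normal prior
    N(0, tau2) on each PC-space coefficient: the posterior mean is the
    ridge-type estimator (T'T/sigma2 + I/tau2)^{-1} T'y / sigma2."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = Xc @ Vt[:k].T                             # PC scores
    A = T.T @ T / sigma2 + np.eye(k) / tau2       # posterior precision
    m = np.linalg.solve(A, T.T @ y / sigma2)      # posterior mean in PC space
    return m, Vt[:k].T @ m                        # PC-space and X-space coefficients
```

When the data dominate the prior (large n relative to 1/τ²), the posterior mean approaches the least-squares PCR fit, so the prior mainly matters for the short, noisy series typical of track prediction.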

Incremental Eigenspace Model Applied To Kernel Principal Component Analysis

  • Kim, Byung-Joo
    • Journal of the Korean Data and Information Science Society
    • /
    • v.14 no.2
    • /
    • pp.345-354
    • /
    • 2003
  • An incremental kernel principal component analysis (IKPCA) is proposed for nonlinear feature extraction from data. One problem with batch kernel principal component analysis (KPCA) is that the computation becomes prohibitive when the data set is large. Another is that, in order to update the eigenvectors with new data, all of the eigenvectors must be recomputed. IKPCA overcomes these problems by incrementally updating the eigenspace model. IKPCA requires less memory than batch KPCA and can easily be improved by re-learning the data. Our experiments show that IKPCA is comparable in performance to batch KPCA on classification problems with nonlinear data sets.
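For context, a minimal batch KPCA in NumPy shows the n-by-n kernel matrix and full eigendecomposition whose recomputation IKPCA avoids when new data arrive; the RBF kernel and `gamma` are illustrative choices, and the incremental eigenspace update itself is not reproduced here.

```python
import numpy as np

def batch_kpca(X, n_components, gamma=1.0):
    """Batch kernel PCA with an RBF kernel: builds the full n x n kernel
    matrix, double-centers it in feature space, and takes the leading
    eigenvectors -- the costs that grow prohibitive for large n."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                        # n x n kernel matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                 # centering in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas                             # projections of the training data
```

Both storage and eigendecomposition here are superlinear in n, which is what motivates updating an eigenspace model incrementally instead.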

Global Covariance based Principal Component Analysis for Speaker Identification (화자식별을 위한 전역 공분산에 기반한 주성분분석)

  • Seo, Chang-Woo;Lim, Young-Hwan
    • Phonetics and Speech Sciences
    • /
    • v.1 no.1
    • /
    • pp.69-73
    • /
    • 2009
  • This paper proposes an efficient global covariance-based principal component analysis (GCPCA) for speaker identification. Principal component analysis (PCA) is a feature-extraction method that reduces the dimension of, and the correlation among, the feature vectors by projecting the original feature space onto a small subspace through a transformation. However, it requires a large amount of training data when PCA is performed with a full covariance matrix for each speaker to find the eigenvalue and eigenvector matrices. The proposed method first calculates a global covariance matrix using the training data of all speakers, and then finds the eigenvalue matrix and the corresponding eigenvector matrix from that global covariance matrix. Compared to conventional PCA and Gaussian mixture model (GMM) methods, the proposed method shows better performance while requiring less storage space and lower complexity in speaker identification.
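A minimal sketch of the global-covariance step, assuming per-speaker matrices of feature frames; the identification stage and the GMM comparison are not shown.

```python
import numpy as np

def gcpca_transform(speaker_data, n_components):
    """Global-covariance PCA: pool the training frames of all speakers into
    one covariance matrix, eigendecompose it once, and project every
    speaker's features onto the shared leading eigenvectors."""
    pooled = np.vstack(list(speaker_data.values()))
    mu = pooled.mean(axis=0)
    cov = np.cov(pooled - mu, rowvar=False)         # single global covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return {spk: (frames - mu) @ W for spk, frames in speaker_data.items()}
```

Because only one eigendecomposition is performed, storage and computation no longer scale with the number of speakers, which is the efficiency gain the abstract describes.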

Estimation of S&T Knowledge Production Function Using Principal Component Regression Model (주성분 회귀모형을 이용한 과학기술 지식생산함수 추정)

  • Park, Su-Dong;Sung, Oong-Hyun
    • Journal of Korea Technology Innovation Society
    • /
    • v.13 no.2
    • /
    • pp.231-251
    • /
    • 2010
  • The numbers of SCI papers or patents in science and technology are expected to be related to the number of researchers and to knowledge stock (R&D stock, paper stock, patent stock). Results of an ordinary regression model showed that severe multicollinearity existed, producing errors in the estimation and testing of the regression coefficients. To solve the multicollinearity problem and estimate the effects of the independent variables properly, a principal component regression model was applied to three cases of S&T knowledge production. The estimated principal component regression function was transformed back to the original independent variables so that their effects could be interpreted properly. The analysis indicated that the principal component regression model is useful for estimating the effects of highly correlated production factors, and showed that the number of researchers and the R&D, paper, and patent stocks all have a positive effect on the production of papers or patents.

Equivalence study of canonical correspondence analysis by weighted principal component analysis and canonical correspondence analysis by Gaussian response model (가중주성분분석을 활용한 정준대응분석과 가우시안 반응 모형에 의한 정준대응분석의 동일성 연구)

  • Jeong, Hyeong Chul
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.6
    • /
    • pp.945-956
    • /
    • 2021
  • In this study, we consider the algorithm of Legendre and Legendre (2012), which derives canonical correspondence analysis from weighted principal component analysis, and prove that canonical correspondence analysis based on weighted principal component analysis is exactly the same as Ter Braak's (1986) canonical correspondence analysis based on the Gaussian response model. Ter Braak's (1986) canonical correspondence analysis, built on a Gaussian response curve that explains the abundance of species in ecology well, starts from the basic assumption of the species packing model and is then derived by combining a generalized linear model with canonical correlation analysis. The algorithm of Legendre and Legendre (2012), however, is computed in a manner quite similar to Benzecri's correspondence analysis, without such assumptions. Therefore, canonical correspondence analysis based on weighted principal component analysis allows some flexibility in using the results. In conclusion, this study shows that the two methods, though starting from different models, yield the same site scores, species scores, and species-environment correlations.

Comprehensive studies of Grassmann manifold optimization and sequential candidate set algorithm in a principal fitted component model

  • Chaeyoung, Lee;Jae Keun, Yoo
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.6
    • /
    • pp.721-733
    • /
    • 2022
  • In this paper we compare parameter estimation by Grassmann manifold optimization and by a sequential candidate set algorithm in a structured principal fitted component (PFC) model. The structured PFC model extends the form of the covariance matrix of the random error to relieve the limitations that arise from an overly simple form of the matrix. However, unlike other PFC models, the structured PFC model has no closed form for the parameter estimates in dimension reduction, so numerical computation is required. This computation can be done through Grassmann manifold optimization or through a sequential candidate set algorithm. We conducted numerical studies comparing the two methods via sequential dimension testing and trace correlation values, which measure performance in determining the dimension and estimating the basis. We conclude that Grassmann manifold optimization outperforms the sequential candidate set algorithm in dimension determination, while the sequential candidate set algorithm is better at basis estimation in dimension reduction. Applying both methods to real data yielded the same conclusion.

A STUDY ON PREDICTION INTERVALS, FACTOR ANALYSIS MODELS AND HIGH-DIMENSIONAL EMPIRICAL LINEAR PREDICTION

  • Jee, Eun-Sook
    • Journal of applied mathematics & informatics
    • /
    • v.14 no.1_2
    • /
    • pp.377-386
    • /
    • 2004
  • A technique that provides prediction intervals based on a model called an empirical linear model is discussed. The technique, high-dimensional empirical linear prediction (HELP), involves principal component analysis, factor analysis, and model selection. HELP can be viewed as a technique that provides prediction (and confidence) intervals based on a factor analysis model. Although factor analysis models do not typically have justifiable theory due to nonidentifiability, we show that the intervals are justifiable asymptotically.