Title/Summary/Keyword: Principal Dimension

MBRDR: R-package for response dimension reduction in multivariate regression

  • Heesung Ahn;Jae Keun Yoo
    • Communications for Statistical Applications and Methods, v.31 no.2, pp.179-189, 2024
  • In multivariate regression with a high-dimensional response Y ∈ ℝ^r and a relatively low-dimensional predictor X ∈ ℝ^p (where r ≥ 2), statistical analysis is challenging because the number of parameters grows rapidly with the dimension of the response. Most existing dimension reduction techniques focus on reducing the dimension of the predictors (X) rather than that of the response (Y). Yoo and Cook (2008) introduced a response dimension reduction method that preserves information about the conditional mean E(Y | X). Building on this foundational work, Yoo (2018) proposed two semi-parametric methods, principal response reduction (PRR) and principal fitted response reduction (PFRR), which were later extended to unstructured principal fitted response reduction (UPFRR) (Yoo, 2019). This paper reviews these four response dimension reduction methodologies and introduces the mbrdr package in R. The mbrdr package is unique in the R community: it is specifically designed for response dimension reduction, setting it apart from existing dimension reduction packages that focus solely on predictors.
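
To make the fitted-response-reduction idea concrete, here is a minimal base-R sketch: regress Y on X, then take the leading eigenvectors of the covariance of the fitted responses as a response-reduction basis. The toy data and every name below are invented for illustration; this is not the mbrdr package's API, which may differ.

```r
## Minimal sketch of principal fitted response reduction (PFRR-style):
## reduce Y while preserving information about E(Y | X).
set.seed(1)
n <- 200; p <- 3; r <- 5                     # low-dim predictor, higher-dim response
X <- matrix(rnorm(n * p), n, p)
B <- matrix(c(1, -1, 0.5), ncol = 1)         # E(Y|X) varies along a single direction
Y <- tcrossprod(X %*% B, c(1, 2, 0, 0, 0)) + matrix(rnorm(n * r, sd = 0.1), n, r)

fit  <- lm(Y ~ X)                            # multivariate linear fit of Y on X
Yhat <- fitted(fit)
ev   <- eigen(cov(Yhat))                     # spectrum of the fitted-response covariance
round(ev$values, 3)                          # one dominant eigenvalue suggests d = 1
basis     <- ev$vectors[, 1]                 # estimated response-reduction basis
Y_reduced <- Y %*% basis                     # 5-dimensional Y compressed to 1 dimension
```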

Principal Component Regression by Principal Component Selection

  • Lee, Hosung;Park, Yun Mi;Lee, Seokho
    • Communications for Statistical Applications and Methods, v.22 no.2, pp.173-180, 2015
  • We propose a procedure for selecting principal components in principal component regression. Rather than retaining a small subset of leading components, our method selects principal components with variable selection procedures. The procedure consists of two steps designed to improve estimation and prediction. First, we reduce the number of principal components by conventional principal component regression to obtain a set of candidate components; we then select components from this candidate set using sparse regression techniques. The performance of our proposal is demonstrated numerically and compared with typical dimension reduction approaches, including principal component regression and partial least squares regression, on synthetic and real datasets.
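
A minimal sketch of the two-step procedure, assuming the lasso (via the glmnet package) as the sparse regression step and an arbitrary candidate-set size k = 10; the paper's exact selection rule may differ.

```r
## Step 1: keep a candidate set of leading principal components.
## Step 2: select among the candidate scores with the lasso.
library(glmnet)
set.seed(2)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - 2 * X[, 5] + rnorm(n)          # relevant directions need not be leading PCs

pc     <- prcomp(X, center = TRUE, scale. = TRUE)
k      <- 10                                 # candidate set: first k components
scores <- pc$x[, 1:k]

cvfit <- cv.glmnet(scores, y)                # sparse regression over candidate scores
coef(cvfit, s = "lambda.min")                # selected PCs have nonzero coefficients
```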

Comprehensive studies of Grassmann manifold optimization and sequential candidate set algorithm in a principal fitted component model

  • Chaeyoung Lee;Jae Keun Yoo
    • Communications for Statistical Applications and Methods, v.29 no.6, pp.721-733, 2022
  • In this paper we compare parameter estimation by Grassmann manifold optimization and by a sequential candidate set algorithm in a structured principal fitted component (PFC) model. The structured PFC model generalizes the covariance matrix of the random error to relax the limitations of overly simple covariance structures. Unlike other PFC models, however, the structured PFC model has no closed-form solution for the dimension reduction parameters, so numerical computation is required; it can be carried out by Grassmann manifold optimization or by a sequential candidate set algorithm. We conducted numerical studies comparing the two methods through sequential dimension tests and trace correlation values, which measure performance in determining the dimension and in estimating the basis, respectively. We conclude that Grassmann manifold optimization outperforms the sequential candidate set algorithm in dimension determination, while the sequential candidate set algorithm is better at basis estimation. Applying both methods to real data led to the same conclusion.
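
The trace correlation used above to compare basis estimates can be computed directly. Below is a minimal sketch under the usual definition from the sufficient dimension reduction literature, with made-up bases for illustration.

```r
## Trace correlation between the subspaces spanned by a true basis B and an
## estimate Bhat (both p x d): sqrt(trace(P_B %*% P_Bhat) / d).
proj <- function(A) A %*% solve(crossprod(A), t(A))   # projection onto span(A)
trace_cor <- function(B, Bhat) {
  d <- ncol(B)
  sqrt(sum(diag(proj(B) %*% proj(Bhat))) / d)
}
B    <- cbind(c(1, 0, 0), c(0, 1, 0))        # true 2-dimensional subspace of R^3
Bhat <- cbind(c(1, 0.1, 0), c(0, 1, 0.1))    # a slightly perturbed estimate
trace_cor(B, Bhat)                           # close to 1 means good basis estimation
```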

A concise overview of principal support vector machines and its generalization

  • Jungmin Shin;Seung Jun Shin
    • Communications for Statistical Applications and Methods, v.31 no.2, pp.235-246, 2024
  • In high-dimensional data analysis, sufficient dimension reduction (SDR) has been considered an attractive tool for reducing the dimensionality of predictors while preserving regression information. The principal support vector machine (PSVM) (Li et al., 2011) offers a unified approach to both linear and nonlinear SDR. This article comprehensively explores a variety of SDR methods based on the PSVM, which we call principal machines (PM) for SDR. The PM achieves SDR by solving a sequence of convex optimization problems akin to those of popular supervised learning methods, such as the support vector machine, logistic regression, and quantile regression. This makes the PM straightforward to handle and to extend in both theoretical and computational respects, as we show throughout this article.
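
A rough sample-level sketch of the PSVM recipe: dichotomize the response at several cutpoints, fit a linear SVM at each cut, and take leading eigenvectors of the aggregated normal vectors. The toy data and the use of the e1071 package are assumptions of this sketch; Li et al. (2011) formulate the method through a penalized population objective.

```r
library(e1071)
set.seed(3)
n <- 300; p <- 5
X <- scale(matrix(rnorm(n * p), n, p))
y <- X[, 1] + 0.5 * X[, 2]^2 + rnorm(n, sd = 0.2)   # central subspace: span(e1, e2)

M <- matrix(0, p, p)
for (q in c(0.25, 0.5, 0.75)) {              # cutpoints at response quantiles
  z   <- factor(y > quantile(y, q))
  fit <- svm(X, z, kernel = "linear", scale = FALSE)
  w   <- t(fit$coefs) %*% fit$SV             # normal vector of the separating hyperplane
  M   <- M + crossprod(w)                    # accumulate outer products
}
eigen(M)$vectors[, 1:2]                      # estimated basis of the SDR subspace
```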

Performance Improvement of Polynomial Adaline by Using Dimension Reduction of Independent Variables (독립변수의 차원감소에 의한 Polynomial Adaline의 성능개선)

  • Cho, Yong-Hyun
    • Journal of the Korean Society of Industry Convergence, v.5 no.1, pp.33-38, 2002
  • This paper proposes an efficient method for improving the performance of the polynomial Adaline through dimension reduction of its independent variables. Adaptive principal component analysis is applied to reduce the dimension by efficiently extracting the features of the given independent variables. Because principal component analysis converts the input data into a set of statistically independent features, it alleviates the problems caused by high-dimensional input data in the polynomial Adaline. The proposed polynomial Adaline has been applied to pattern classification. Simulation results show that it classifies test patterns better than the conventional polynomial Adaline and is less affected by the choice of the smoothing factor.
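
A minimal sketch of the pipeline just described, with ordinary PCA standing in for the adaptive variant and an LMS-trained linear unit on quadratic features playing the role of the polynomial Adaline; the data, learning rate, and feature set are assumptions of this illustration.

```r
set.seed(4)
n <- 200
L <- matrix(rnorm(n * 2), n, 2)              # two latent factors drive the inputs
X <- L %*% matrix(rnorm(12), 2, 6) + matrix(rnorm(n * 6, sd = 0.1), n, 6)
y <- ifelse(L[, 1]^2 + L[, 2] > 0.5, 1, -1)  # nonlinear two-class pattern

Z   <- prcomp(X)$x[, 1:2]                    # dimension reduction: keep 2 components
Phi <- cbind(1, Z, Z^2, Z[, 1] * Z[, 2])     # polynomial (quadratic) features

w   <- rep(0, ncol(Phi))                     # Adaline weights
eta <- 0.01                                  # learning rate
for (epoch in 1:50) {
  for (i in sample(n)) {                     # LMS rule: w <- w + eta * error * phi
    e <- y[i] - sum(w * Phi[i, ])
    w <- w + eta * e * Phi[i, ]
  }
}
mean(sign(Phi %*% w) == y)                   # training classification accuracy
```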

Principal Component Transformation of the Satellite Image Data and Principal-Components-Based Image Classification (위성 영상데이터의 주성분변환 및 주성분 기반 영상분류)

  • Seo, Yong-Su
    • Journal of the Korean Association of Geographic Information Studies, v.7 no.4, pp.24-33, 2004
  • Advances in remote sensing technologies are rapidly increasing the number of spectral channels and, with it, data volumes, creating a need for faster processing techniques. One application requiring such fast processing is the dimension reduction of multispectral data, for which the principal component transformation is perhaps the most popular technique. In this paper, we discuss the processing procedure of the principal component transformation and present and discuss the results of applying it to multispectral data. The principal component images are then classified by the maximum likelihood method and by a multilayer perceptron. Finally, the performance of the two classification methods and the effects of the data reduction are evaluated and analyzed on the basis of the experimental results.
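
A minimal sketch of the processing chain on synthetic stand-in data (real work would read the spectral bands from imagery): principal component transform, then Gaussian maximum likelihood classification of the leading components, here via MASS::qda.

```r
library(MASS)
set.seed(5)
n <- 600; bands <- 7                         # pixels x spectral bands
cls <- factor(sample(c("water", "forest", "urban"), n, replace = TRUE))
mu  <- list(water = rep(0, bands), forest = rep(2, bands), urban = rep(4, bands))
X   <- t(sapply(seq_len(n), function(i) mu[[as.character(cls[i])]] + rnorm(bands)))

pc <- prcomp(X, center = TRUE)
Z  <- pc$x[, 1:3]                            # keep 3 components: most variance, fewer channels
fit <- qda(Z, cls)                           # Gaussian maximum likelihood classifier
mean(predict(fit, Z)$class == cls)           # resubstitution accuracy
```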

PCA-SVM Based Vehicle Color Recognition (PCA-SVM 기법을 이용한 차량의 색상 인식)

  • Park, Sun-Mi;Kim, Ku-Jin
    • The KIPS Transactions: Part B, v.15B no.4, pp.285-292, 2008
  • Color histograms have been used as feature vectors to characterize the color content of images, but they are inefficient because the feature vectors are high dimensional. In this paper, we present a method that reduces the dimension of the feature vectors by applying PCA (principal component analysis) to the color histogram of a vehicle image. The dimension-reduced feature vectors are then used to recognize vehicle colors with an SVM (support vector machine). After reducing the dimension of the feature vector by a factor of 32, the recognition rate drops by only 1.42% compared with the original feature vectors, while the computation time for color recognition is reduced by a factor of 31, so colors are recognized efficiently.
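
A minimal sketch of the PCA-SVM pipeline on synthetic histograms; the bin count, class structure, and data are invented here, whereas the paper works with histograms of real vehicle images.

```r
library(e1071)
set.seed(6)
n <- 150; bins <- 64                         # 64-bin color histograms
color <- factor(sample(c("red", "blue", "white"), n, replace = TRUE))
peak  <- c(red = 5, blue = 25, white = 50)   # class-specific histogram peak
H <- t(sapply(seq_len(n), function(i) {
  h <- dnorm(1:bins, mean = peak[as.character(color[i])], sd = 6) + runif(bins, 0, 0.002)
  h / sum(h)                                 # histograms sum to one
}))

pc <- prcomp(H)
Z  <- pc$x[, 1:2]                            # 64-dimensional histogram -> 2 features
fit <- svm(Z, color)
mean(predict(fit, Z) == color)               # resubstitution accuracy
```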

Resistant Singular Value Decomposition and Its Statistical Applications

  • Park, Yong-Seok;Huh, Myung-Hoe
    • Journal of the Korean Statistical Society, v.25 no.1, pp.49-66, 1996
  • The singular value decomposition is one of the most useful methods in matrix computation. It provides the dimension reduction that is the central idea of many multivariate analyses. The method is not resistant, however: it is very sensitive to small changes in the input data. In this article, we derive a resistant version of the singular value decomposition for principal component analysis and apply it to the biplot, which resembles principal component analysis in reducing the dimension of an n × p data matrix. Based on the resistant singular value decomposition, we thus derive a resistant principal component analysis and a resistant biplot, graphical multivariate analyses that are relatively little influenced by outlying observations.
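
One classical route to a resistant SVD is an alternating L1 (weighted-median) rank-one fit. The sketch below shows how a single outlier drags the ordinary SVD while the resistant fit stays near the true direction; it is a generic construction for illustration, not necessarily the decomposition derived in the paper.

```r
wmedian <- function(x, w) {                  # weighted (lower) median
  o <- order(x); x <- x[o]; w <- w[o]
  x[which(cumsum(w) >= sum(w) / 2)[1]]
}
resistant_rank1 <- function(X, iter = 20) {  # alternating L1 fits of X ~ u v'
  v <- rep(1, ncol(X))
  for (k in seq_len(iter)) {
    u <- apply(X, 1, function(r) wmedian(r / v, abs(v)))
    u <- u / sqrt(sum(u^2))
    v <- apply(X, 2, function(s) wmedian(s / u, abs(u)))
  }
  list(u = u, d = sqrt(sum(v^2)), v = v / sqrt(sum(v^2)))
}
set.seed(7)
X <- outer(rnorm(30), c(2, 1, -1)) + matrix(rnorm(90, sd = 0.1), 30, 3)
X[1, ] <- c(50, -50, 50)                     # one gross outlier
svd(X)$v[, 1]                                # classical SVD: dragged by the outlier
resistant_rank1(X)$v                         # resistant fit: near c(2, 1, -1) normalized
```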

Comparison of Methods for Reducing the Dimension of Compositional Data with Zero Values

  • Song, Taeg-Youn;Choi, Byung-Jin
    • Communications for Statistical Applications and Methods, v.19 no.4, pp.559-569, 2012
  • Compositional data consist of compositions, non-negative vectors of proportions subject to the unit-sum constraint. In disciplines such as petrology and archaeometry, statistical analysis of this type of data is fundamental. Aitchison (1983) introduced log-contrast principal component analysis, which operates on log-ratio transformed data, as a dimension reduction technique for understanding and interpreting the structure of compositional data. The analysis is not usable, however, when zero values are present in the data. In this paper, we introduce four possible methods for reducing the dimension of compositional data with zero values. Two real data sets are analyzed with these methods, and the results are compared.
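
A minimal sketch of one such route: multiplicative zero replacement followed by log-contrast (centered log-ratio) principal components. The 0.65 × (smallest positive part) replacement value is one common convention assumed here; the paper compares four approaches.

```r
clr <- function(x) log(x) - rowMeans(log(x)) # centered log-ratio transform
set.seed(8)
comp <- matrix(rexp(100 * 4), 100, 4)
comp <- comp / rowSums(comp)                 # rows are compositions (unit sum)
comp[sample(length(comp), 10)] <- 0          # inject some zero values

delta <- 0.65 * min(comp[comp > 0])          # multiplicative zero replacement
z <- ifelse(comp == 0, delta, comp)
z <- z / rowSums(z)                          # renormalize to the simplex

pc <- prcomp(clr(z))                         # log-contrast principal components
summary(pc)
```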

A Classification Method Using Data Reduction

  • Uhm, Daiho;Jun, Sung-Hae;Lee, Seung-Joo
    • International Journal of Fuzzy Logic and Intelligent Systems, v.12 no.1, pp.1-5, 2012
  • Data reduction is widely used in data mining to simplify analysis. Principal component analysis (PCA) and factor analysis (FA) are popular techniques: both reduce the number of variables to avoid the curse of dimensionality, that is, the exponential growth of computing time with the number of variables, and many other dimension reduction methods have been published. Data augmentation is a complementary approach to analyzing data efficiently; the support vector machine (SVM) is a representative technique, mapping the original data to a high-dimensional feature space in order to obtain an optimal decision plane. Both reduction and augmentation have been used to solve diverse problems in data analysis. In this paper, we compare the strengths and weaknesses of dimension reduction and dimension augmentation for classification and propose a classification method based on data reduction. We carry out comparative experiments to verify the performance of the proposed method.
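
The reduction-versus-augmentation contrast can be sketched as follows: PCA plus a linear rule (reduction) against a radial-kernel SVM, whose kernel implicitly maps the data into a high-dimensional feature space (augmentation). The data and the comparison are illustrative only, not the paper's experimental design.

```r
library(e1071)
library(MASS)
set.seed(9)
n <- 200
X <- matrix(rnorm(n * 10), n, 10)
y <- factor(X[, 1]^2 + X[, 2]^2 > 2)         # nonlinear class boundary

Z <- prcomp(X, scale. = TRUE)$x[, 1:3]       # reduction: 10 -> 3 dimensions
fit_red <- lda(Z, y)                         # linear classifier on the reduced data
acc_red <- mean(predict(fit_red, Z)$class == y)

fit_aug <- svm(X, y, kernel = "radial")      # augmentation via an implicit feature map
acc_aug <- mean(predict(fit_aug, X) == y)
c(reduction = acc_red, augmentation = acc_aug)
```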