• Title/Summary/Keyword: Principal Dimension

Principal component regression for spatial data (공간자료 주성분분석)

  • Lim, Yaeji
    • The Korean Journal of Applied Statistics / v.30 no.3 / pp.311-321 / 2017
  • Principal component analysis is a popular statistical method for reducing the dimension of high-dimensional climate data and extracting meaningful climate patterns. Based on the principal component analysis, we can further apply a regression approach for the linear prediction of future climate, termed principal component regression (PCR). In this paper, we develop a new PCR method based on the regularized principal component analysis for spatial data proposed by Wang and Huang (2016) to account for the spatial features of the climate data. We apply the proposed method to temperature prediction in the East Asia region and compare the results with conventional PCR results.
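The PCR pipeline the abstract describes can be sketched in a few lines of NumPy. This is plain PCR, not the regularized spatial variant of Wang and Huang (2016), and the synthetic low-rank data standing in for a gridded temperature field is an assumption made purely for illustration.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: project X onto its first k
    principal components, then fit least squares on the PC scores."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)  # PCA via SVD
    W = Vt[:k].T                        # loadings, shape (p, k)
    Z = (X - mu) @ W                    # PC scores, shape (n, k)
    A = np.column_stack([np.ones(len(y)), Z])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return mu, W, beta

def pcr_predict(X, mu, W, beta):
    return beta[0] + (X - mu) @ W @ beta[1:]

rng = np.random.default_rng(0)
F = rng.normal(size=(200, 3))                    # 3 latent "climate patterns"
X = F @ rng.normal(size=(3, 50)) + 0.1 * rng.normal(size=(200, 50))
y = F @ np.array([1.0, 2.0, 3.0]) + 0.1 * rng.normal(size=200)

mu, W, beta = pcr_fit(X, y, k=5)
yhat = pcr_predict(X, mu, W, beta)
r2 = 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Because the predictors have low-rank structure, the leading components capture the signal and the regression on five scores fits well.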

SVM-Guided Biplot of Observations and Variables

  • Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods / v.20 no.6 / pp.491-498 / 2013
  • We consider support vector machines (SVM) to predict Y with p numerical variables X_1, ..., X_p. This paper aims to build a biplot of the p explanatory variables in which the first dimension indicates the direction of the SVM classification and/or regression fit. We use the geometric scheme of kernel principal component analysis, adapted to map n observations onto a two-dimensional projection plane of which one axis is determined a priori by an SVM model.
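The idea of fixing one biplot axis from an SVM fit a priori can be sketched with NumPy alone. This is a simplification of the paper's kernel-PCA scheme: it uses a hand-rolled linear SVM trained by hinge-loss subgradient descent (an assumption, not the authors' fitting procedure), takes the SVM weight vector as axis 1, and takes the top principal component of the remaining variation as axis 2.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two classes in 4 variables; only the first variable separates them.
X = rng.normal(size=(100, 4))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=100) > 0, 1.0, -1.0)

# Toy linear SVM: subgradient descent on the regularized hinge loss.
w = np.zeros(4)
lam = 0.01
for t in range(1, 2001):
    margins = y * (X @ w)
    viol = margins < 1
    grad = lam * w - (X[viol].T @ y[viol]) / len(y)
    w -= grad / np.sqrt(t)

# Biplot axes: axis 1 = SVM direction, axis 2 = top PC orthogonal to it.
a1 = w / np.linalg.norm(w)
Xc = X - X.mean(axis=0)
R = Xc - np.outer(Xc @ a1, a1)            # remove the SVM direction
_, _, Vt = np.linalg.svd(R, full_matrices=False)
a2 = Vt[0]
scores = np.column_stack([Xc @ a1, Xc @ a2])   # 2-D biplot coordinates
```

Axis 2 is orthogonal to axis 1 by construction, so the plane separates "what the SVM uses" from "what else varies".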

ECG based Personal Authentication using Principal Component Analysis (주성분 분석기법을 이용한 심전도 기반 개인인증)

  • Cho, Ju-Hee;Cho, Byeong-Jun;Lee, Dae-Jong;Chun, Myung-Geun
    • The Transactions of the Korean Institute of Electrical Engineers P / v.66 no.4 / pp.258-262 / 2017
  • The principal component analysis (PCA) algorithm is widely used to express the eigenvectors of the covariance matrix that best represent the characteristics of the data and to reduce a high-dimensional vector to a low-dimensional one. In this paper, we develop an ECG-based personal authentication method using principal component analysis. The proposed method showed an excellent recognition performance of 98.2% in experiments on electrocardiogram data acquired at weekly intervals. PCA is therefore useful for personal authentication: it reduces the dimension of the electrocardiogram data while preserving the variability and correlation structure present in it.
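The covariance-eigenvector route to PCA that the abstract names can be sketched as follows. The synthetic "heartbeat" vectors and the nearest-class-mean matcher are assumptions for illustration; the paper works with real electrocardiogram recordings and its own classifier.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy stand-in for ECG beats: each "person" is a distinct 64-sample
# template plus measurement noise.
templates = rng.normal(size=(5, 64))
X = np.repeat(templates, 20, axis=0) + 0.3 * rng.normal(size=(100, 64))
labels = np.repeat(np.arange(5), 20)

# PCA: eigenvectors of the covariance matrix, keep the top 10 components.
mu = X.mean(axis=0)
C = np.cov(X - mu, rowvar=False)
vals, vecs = np.linalg.eigh(C)
W = vecs[:, np.argsort(vals)[::-1][:10]]
Z = (X - mu) @ W                       # 64-D beats reduced to 10-D

# Authenticate by nearest class mean in the reduced space.
means = np.array([Z[labels == c].mean(axis=0) for c in range(5)])
pred = np.argmin(((Z[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == labels).mean()
```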

Global Covariance based Principal Component Analysis for Speaker Identification (화자식별을 위한 전역 공분산에 기반한 주성분분석)

  • Seo, Chang-Woo;Lim, Young-Hwan
    • Phonetics and Speech Sciences / v.1 no.1 / pp.69-73 / 2009
  • This paper proposes an efficient global covariance-based principal component analysis (GCPCA) for speaker identification. Principal component analysis (PCA) is a feature extraction method that reduces the dimension of the feature vectors and the correlation among them by projecting the original feature space onto a small subspace through a transformation. However, it requires a large amount of training data when PCA finds the eigenvalue and eigenvector matrices from the full covariance matrix of each speaker. The proposed method first calculates the global covariance matrix using the training data of all speakers. It then finds the eigenvalue matrix and the corresponding eigenvector matrix of the global covariance matrix. Compared to conventional PCA and Gaussian mixture model (GMM) methods, the proposed method shows better performance while requiring less storage space and lower complexity in speaker identification.
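The core idea, pooling every speaker's training frames into one covariance matrix and sharing a single eigenvector projection, can be sketched as below. The Gaussian feature vectors standing in for cepstral frames, and the choice of 6 retained directions, are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in feature frames (e.g. 12 cepstral coefficients) per speaker.
speakers = {s: rng.normal(loc=s, scale=1.0, size=(50, 12)) for s in range(4)}

# Global covariance: pool every speaker's frames into one matrix instead
# of estimating a full covariance matrix per speaker.
pooled = np.vstack(list(speakers.values()))
mu = pooled.mean(axis=0)
C_global = np.cov(pooled - mu, rowvar=False)

# One shared eigenvector matrix projects all speakers' features.
vals, vecs = np.linalg.eigh(C_global)
W = vecs[:, np.argsort(vals)[::-1][:6]]       # keep 6 principal directions
projected = {s: (f - mu) @ W for s, f in speakers.items()}
```

Only one 12x12 covariance and one eigendecomposition are stored, which is where the savings in storage and complexity over per-speaker PCA come from.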

A Study of Singular Value Decomposition in Data Reduction techniques

  • Shin, Yang-Kyu
    • Journal of the Korean Data and Information Science Society / v.9 no.1 / pp.63-70 / 1998
  • The singular value decomposition is a tool used to find a linear structure of reduced dimension and to interpret the lower-dimensional structure of multivariate data. In this paper the singular value decomposition is reviewed from both algebraic and geometric points of view, and the way the tool is used in multivariate techniques to find a simpler geometric structure for the data is illustrated.
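A minimal sketch of SVD as a data-reduction tool, on synthetic data assumed for illustration: a nearly rank-2 matrix is decomposed, and truncating to the two largest singular values recovers almost all of it.

```python
import numpy as np

rng = np.random.default_rng(4)
# A rank-2 matrix plus small noise: SVD should expose the 2-D structure.
A = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 8)) \
    + 0.01 * rng.normal(size=(30, 8))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-k reconstruction keeps the k largest singular values; by the
# Eckart-Young theorem this is the best rank-k approximation of A.
k = 2
A_k = (U[:, :k] * s[:k]) @ Vt[:k]
rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
```

Geometrically, the columns of `Vt[:k].T` span the low-dimensional subspace that the rows of `A` cluster around.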

DR-LSTM: Dimension reduction based deep learning approach to predict stock price

  • Ah-ram Lee;Jae Youn Ahn;Ji Eun Choi;Kyongwon Kim
    • Communications for Statistical Applications and Methods / v.31 no.2 / pp.213-234 / 2024
  • In recent decades, increasing research attention has been directed toward predicting stock prices in financial markets using deep learning methods. For instance, the recurrent neural network (RNN) is known to be competitive for time-series data. Long short-term memory (LSTM) further improves on the RNN by providing an alternative approach to the vanishing-gradient problem, and gains predictive accuracy by retaining memory over longer spans. In this paper, we combine both supervised and unsupervised dimension reduction methods with LSTM to enhance forecasting performance, and refer to this as the dimension reduction based LSTM (DR-LSTM) approach. As supervised dimension reduction methods, we use sliced inverse regression (SIR), sparse SIR, and kernel SIR. Furthermore, principal component analysis (PCA), sparse PCA, and kernel PCA are used as unsupervised dimension reduction methods. Using real stock market index datasets (S&P 500, STOXX Europe 600, and KOSPI), we present a comparative study of predictive accuracy between the six DR-LSTM methods and time series modeling.
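Of the methods listed, sliced inverse regression is the least standard, so here is a NumPy sketch of plain SIR alone, on synthetic data assumed for illustration; the LSTM stage that would consume the reduced predictors is omitted. SIR slices the sorted response, averages the whitened predictors within each slice, and eigendecomposes the weighted covariance of those slice means.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Sliced inverse regression: recover directions in X that carry the
    information about y, via the covariance of per-slice predictor means."""
    n, p = X.shape
    mu = X.mean(axis=0)
    C = np.cov(X - mu, rowvar=False)
    evals, evecs = np.linalg.eigh(C)
    C_isqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T   # whitening matrix
    Z = (X - mu) @ C_isqrt
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    w, v = np.linalg.eigh(M)
    dirs = C_isqrt @ v[:, np.argsort(w)[::-1][:n_dirs]]
    return dirs / np.linalg.norm(dirs, axis=0)

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 6))
y = X[:, 0] ** 3 + 0.1 * rng.normal(size=500)   # y depends only on X[:, 0]
B = sir_directions(X, y)
```

In the DR-LSTM setting, `X @ B` (rather than raw predictors) would be fed to the recurrent network.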

Comparative Study of Dimension Reduction Methods for Highly Imbalanced Overlapping Churn Data

  • Lee, Sujee;Koo, Bonhyo;Jung, Kyu-Hwan
    • Industrial Engineering and Management Systems / v.13 no.4 / pp.454-462 / 2014
  • Retaining customers who may churn is one of the most important issues in customer relationship management, so companies try to predict churning customers from their large-scale, high-dimensional data. This study focuses on dealing with large data sets by reducing their dimensionality. Using six different dimension reduction methods, namely principal component analysis (PCA), factor analysis (FA), locally linear embedding (LLE), local tangent space alignment (LTSA), locality preserving projections (LPP), and a deep auto-encoder, our experiments apply each method to the training data, build a classification model on the mapped data, and then measure performance by hit rate to compare the methods. In the results, PCA shows good performance despite its simplicity, and the deep auto-encoder gives the best overall performance. These results can be explained by the characteristics of the churn prediction data, which are highly correlated and overlap across the classes. We also propose a simple out-of-sample extension for the nonlinear dimension reduction methods LLE and LTSA that exploits the characteristics of the data.
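The experimental protocol (fit the reduction on training data, map both splits, classify, score by hit rate) can be sketched with PCA as the reduction step. The imbalanced synthetic data, the 5-component projection, and the nearest-class-mean classifier are assumptions for illustration; the study's actual classifiers and the deep auto-encoder are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
# Imbalanced two-class toy data: 5% "churn" overlapping the majority class.
n_major, n_minor = 950, 50
X = np.vstack([rng.normal(0.0, 1.0, size=(n_major, 20)),
               rng.normal(0.8, 1.0, size=(n_minor, 20))])
y = np.array([0] * n_major + [1] * n_minor)

# Fit the reduction on the training split only, then map both splits.
idx = rng.permutation(len(y))
train, test = idx[:700], idx[700:]
mu = X[train].mean(axis=0)
_, _, Vt = np.linalg.svd(X[train] - mu, full_matrices=False)
W = Vt[:5].T                                   # PCA mapping, 20 -> 5 dims
Ztr, Zte = (X[train] - mu) @ W, (X[test] - mu) @ W

# Nearest-class-mean classifier as a minimal stand-in.
m0 = Ztr[y[train] == 0].mean(axis=0)
m1 = Ztr[y[train] == 1].mean(axis=0)
pred = (np.linalg.norm(Zte - m1, axis=1)
        < np.linalg.norm(Zte - m0, axis=1)).astype(int)
hit_rate = (pred[y[test] == 1] == 1).mean()    # hit rate on the churn class
```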

Abnormality Detection to Non-linear Multivariate Process Using Supervised Learning Methods (지도학습기법을 이용한 비선형 다변량 공정의 비정상 상태 탐지)

  • Son, Young-Tae;Yun, Deok-Kyun
    • IE interfaces / v.24 no.1 / pp.8-14 / 2011
  • Principal component analysis (PCA) reduces the dimensionality of a process by creating a new set of variables, the principal components (PCs), which attempt to reflect the true underlying process dimension. However, for highly nonlinear processes this form of monitoring may not be efficient, since the process dimensionality cannot be represented by a small number of PCs; examples include semiconductor, pharmaceutical, and chemical processes. Nonlinearly correlated process variables can be reduced to a set of nonlinear principal components through kernel principal component analysis (KPCA). Support vector data description (SVDD), which has roots in supervised learning theory, is a training algorithm based on structural risk minimization; its control limit does not depend on a distributional assumption but adapts to the real data. This paper therefore proposes a nonlinear process monitoring technique based on supervised learning methods and KPCA. Simulated examples show that the proposed monitoring chart is more effective than the $T^2$ chart for nonlinear processes.
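The KPCA step can be sketched in NumPy as follows. The ring-shaped "in-control" process, the RBF bandwidth, and the distance-to-centroid control limit are assumptions for illustration; the paper's actual boundary comes from solving the SVDD optimization problem, which is not reproduced here.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(6)
# Nonlinear "in-control" process: points near a circle of radius 2,
# a pattern linear PCA cannot compress into few components.
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.column_stack([2 * np.cos(theta), 2 * np.sin(theta)]) \
    + 0.1 * rng.normal(size=(200, 2))

# Kernel PCA: eigendecompose the double-centered Gram matrix.
K = rbf_kernel(X, X)
n = len(X)
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H
vals, vecs = np.linalg.eigh(Kc)
idx = np.argsort(vals)[::-1][:2]
alphas = vecs[:, idx] / np.sqrt(vals[idx])   # normalized dual coefficients
scores = Kc @ alphas                         # nonlinear PC scores

# Simple control limit in the reduced space (a stand-in for SVDD).
center = scores.mean(axis=0)
r = np.quantile(np.linalg.norm(scores - center, axis=1), 0.99)
in_control = np.linalg.norm(scores - center, axis=1) <= r
```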

A Study of the Characteristics of Various Board Shapes for Use in the Development of Public Windsurfing Equipment (보급형 윈드서핑 장비 개발을 위한 보드형상 특성 연구)

  • Im, Jang-Gon;Suh, Sung-Bu
    • Journal of Ocean Engineering and Technology / v.24 no.4 / pp.47-52 / 2010
  • In this paper, the shapes of windsurfing boards are proposed for the promotion of their public utilization. Initially, we investigated the principal dimensions of 1,500 windsurfing boards that were produced in the last six years to categorize the characteristics of the boards. Then, model tests were performed in a circulating water channel to determine the resistance characteristics and the flow phenomena, including the wetness of the decks. After analyzing the principal dimensions and the results of the tests of existing windsurfing boards, we proposed four public board shapes that resulted from changing the shapes of the nose and rail and protecting the deck of free-ride boards from wetness. Finally, we predicted the performance of the four proposed windsurfing boards.

A Fuzzy Neural Network Combining Wavelet Denoising and PCA for Sensor Signal Estimation

  • Na, Man-Gyun
    • Nuclear Engineering and Technology / v.32 no.5 / pp.485-494 / 2000
  • In this work, a fuzzy neural network is used to estimate a sensor signal of interest from other sensor signals. Noise components in the inputs to the fuzzy neural network are removed by wavelet denoising. Principal component analysis (PCA) is used to reduce the dimension of the input space without losing a significant amount of information; a lower-dimensional input space also usually reduces the time needed to train the fuzzy neural network, and PCA simplifies the selection of its input signals. The fuzzy neural network parameters are optimized by two learning methods: a genetic algorithm optimizes the antecedent parameters, and a least-squares algorithm solves for the consequent parameters. The proposed algorithm was verified by application to pressurizer water level and hot-leg flowrate measurements in pressurized water reactors.
