• Title/Summary/Keyword: Principal Component Analysis (PCA)

Search Results: 1,378

Face Recognition using Modified Local Directional Pattern Image (Modified Local Directional Pattern 영상을 이용한 얼굴인식)

  • Kim, Dong-Ju;Lee, Sang-Heon;Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.3
    • /
    • pp.205-208
    • /
    • 2013
  • Binary pattern transforms are widely used in face recognition and facial expression analysis because they are robust to illumination. This paper proposes an illumination-robust face recognition system combining MLDP, which improves the texture component of LDP, with a 2D-PCA algorithm. Unlike previous approaches, in which binary pattern transforms such as LBP and LDP were used to extract histogram features, the proposed method uses the MLDP image directly for feature extraction by 2D-PCA. The performance of the proposed method was evaluated against algorithms such as PCA, 2D-PCA, and Gabor wavelet-based LBP on the Yale B and CMU-PIE databases, which were constructed under varying lighting conditions. The experimental results confirm that the proposed method achieved the best recognition accuracy.
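
The 2D-PCA step above works on image matrices directly instead of flattened vectors: it eigendecomposes an image covariance matrix built from row-wise products. A minimal NumPy sketch, using random arrays as stand-ins for MLDP images (the 32×24 size and the choice of five projection axes are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.standard_normal((50, 32, 24))   # 50 hypothetical MLDP images, 32x24 each

mean_img = images.mean(axis=0)

# Image covariance matrix: average of (A - mean)^T (A - mean) over all images
G = np.zeros((24, 24))
for A in images:
    D = A - mean_img
    G += D.T @ D
G /= len(images)

# Eigenvectors of G with the largest eigenvalues give the 2D-PCA projection axes
vals, vecs = np.linalg.eigh(G)               # eigh returns ascending eigenvalues
V = vecs[:, ::-1][:, :5]                     # top-5 projection axes

features = images[0] @ V                     # 32x5 feature matrix for one image
print(features.shape)
```

Recognition then compares these feature matrices (e.g. by a nearest-neighbour distance); the eigenproblem is only 24×24 here, far smaller than the pixel covariance classical PCA would need.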

Improvement of MLLR Algorithm for Rapid Speaker Adaptation and Reduction of Computation (빠른 화자 적응과 연산량 감소를 위한 MLLR 알고리즘 개선)

  • Kim, Ji-Un;Chung, Jae-Ho
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.1C
    • /
    • pp.65-71
    • /
    • 2004
  • We improved the MLLR speaker adaptation algorithm by reducing the order of the HMM parameters using PCA (Principal Component Analysis) or ICA (Independent Component Analysis). To find a smaller set of variables with less redundancy, we apply PCA and ICA, which give as good a representation as possible while minimizing the correlations between data elements, and remove the axes with low covariance or weak higher-order statistical dependencies. The ordinary MLLR algorithm needs more than 30 seconds of adaptation data before SD (Speaker Dependent) models achieve a higher word recognition rate than SI (Speaker Independent) models, whereas the proposed algorithm needs only about 10 seconds. Ten components for ICA or PCA give performance similar to 36 components in the ordinary MLLR framework, so the total computation required for speaker adaptation is reduced to about 1/167 of that of the ordinary MLLR algorithm.
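
The dimension reduction described above can be sketched with ordinary PCA in NumPy. The data here are random stand-ins for 36-dimensional parameter vectors and the sample count is an illustrative assumption; only the 36-to-10 reduction mirrors the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 36))        # hypothetical 36-dimensional parameter vectors

Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)            # sample covariance matrix

vals, vecs = np.linalg.eigh(cov)
order = np.argsort(vals)[::-1]            # sort eigenpairs by decreasing variance
W = vecs[:, order[:10]]                   # keep 10 of the 36 components

Z = Xc @ W                                # reduced, decorrelated representation
print(Z.shape)
```

Projections onto the retained axes are mutually uncorrelated, which is the redundancy-removal property the abstract relies on.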

Face recognition rate comparison using Principal Component Analysis in Wavelet compression image (Wavelet 압축 영상에서 PCA를 이용한 얼굴 인식률 비교)

  • 박장한;남궁재찬
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.5
    • /
    • pp.33-40
    • /
    • 2004
  • In this paper, we construct a face database using wavelet compression and compare face recognition rates using the principal component analysis (PCA) algorithm. A typical face recognition method builds a database of images at a normalized size and recognizes faces at that size. The proposed method instead compresses the normalized images (92×112) by one, two, and three levels of wavelet compression and constructs a database at each level. Input images are compressed with the same wavelet transform and recognized with the PCA algorithm. Experiments show that the proposed method reduces the information content of the face images and improves processing speed. The original images showed a recognition rate of about 99.05%, level 1 99.05%, level 2 98.93%, and level 3 98.54%, indicating that face recognition remains feasible while constructing a large face database.
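
The pipeline above (compress, then run PCA on the compressed faces) can be sketched with a crude Haar-style approximation, averaging 2×2 blocks, standing in for the paper's wavelet compression. The 92×112 size follows the paper, while the image count and the 20 retained components are illustrative assumptions:

```python
import numpy as np

def haar_approx(img):
    """One level of Haar-style compression: average 2x2 blocks (halves each side)."""
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2]   # crop odd dims to even
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

rng = np.random.default_rng(2)
faces = rng.standard_normal((30, 92, 112))   # hypothetical normalized 92x112 faces

# Compression levels 1 and 3, as in the paper
level1 = np.array([haar_approx(f) for f in faces])
level3 = np.array([haar_approx(haar_approx(haar_approx(f))) for f in faces])

# Vectorize the most compressed level and run classical PCA via SVD
X = level3.reshape(len(faces), -1)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:20].T                           # 20 retained components per face
print(level1.shape, level3.shape, Z.shape)
```

Each compression level shrinks the PCA input by roughly a factor of four, which is where the reported speed improvement comes from.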

Risk Evaluation of Slope Using Principal Component Analysis (PCA) (주성분분석을 이용한 사면의 위험성 평가)

  • Jung, Soo-Jung;Kim, Yong-Soo;Kim, Tae-Hyung
    • Journal of the Korean Geotechnical Society
    • /
    • v.26 no.10
    • /
    • pp.69-79
    • /
    • 2010
  • To detect abnormal events in slopes, Principal Component Analysis (PCA) is applied to a slope that collapsed during monitoring. Principal component analysis is a statistical method, often described as non-parametric modeling. In this analysis, the principal component score indicates abnormal behavior of the slope: in an abnormal event the score is markedly higher or lower than in a normal situation, so a large change in the score signals an anomaly. The results confirm that abnormal events and the collapse of the slope were detected using principal component analysis, suggesting that slope behavior and abnormal events can be predicted quantitatively with this technique.
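
The score-based detection described above amounts to fitting PCA on readings from the normal period and flagging samples whose principal component scores leave the normal range. The sensor count, readings, and injected event below are all simulated stand-ins, not the paper's monitoring data:

```python
import numpy as np

rng = np.random.default_rng(3)
# 100 simulated readings from 4 slope sensors during normal behaviour
normal = rng.standard_normal((100, 4))
# A simulated abnormal event: a large shift on every sensor
abnormal = normal[-1] + np.array([15.0, -12.0, 10.0, 14.0])

mu = normal.mean(axis=0)
cov = np.cov(normal - mu, rowvar=False)
vals, vecs = np.linalg.eigh(cov)

def pc_scores(x):
    """Project readings onto the principal axes fitted on normal data."""
    return (x - mu) @ vecs

# The abnormal event produces a far larger score than anything in the normal period
print(np.abs(pc_scores(abnormal)).max(), np.abs(pc_scores(normal)).max())
```

In practice a threshold on the score (or on a Hotelling-style statistic built from it) would turn this comparison into an automatic alarm.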

Blind Source Separation via Principal Component Analysis

  • Choi, Seung-Jin
    • Journal of KIEE
    • /
    • v.11 no.1
    • /
    • pp.1-7
    • /
    • 2001
  • Various methods for blind source separation (BSS) are based on independent component analysis (ICA), which can be viewed as a nonlinear extension of principal component analysis (PCA). Most existing ICA methods require certain nonlinear functions (which lead to higher-order statistics) depending on the probability distributions of the sources, whereas PCA is a linear learning method based on second-order statistics. In this paper we show that PCA can be applied to the task of BSS, provided that the sources are spatially uncorrelated but temporally correlated. Since the resulting method is based only on second-order statistics, it avoids nonlinear functions and is able to separate mixtures of several colored Gaussian sources, in contrast to conventional ICA methods.
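
The second-order idea can be sketched in the spirit of the AMUSE algorithm: whiten with ordinary PCA, then diagonalize a time-lagged covariance. This illustrates the principle rather than the paper's exact procedure, and the AR(1) sources and mixing matrix are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(4)

def ar1(phi, n):
    """Temporally correlated (colored) Gaussian source: an AR(1) process."""
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + e[i]
    return x

n = 5000
S = np.vstack([ar1(0.95, n), ar1(-0.5, n)])   # spatially uncorrelated, temporally correlated
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S                                      # observed mixtures

# Step 1: whiten with ordinary PCA (second-order statistics only)
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ Xc

# Step 2: diagonalize a symmetrized lag-1 covariance to resolve the leftover rotation
C1 = Z[:, :-1] @ Z[:, 1:].T / (n - 1)
_, U = np.linalg.eigh((C1 + C1.T) / 2)
Y = U.T @ Z                                    # recovered sources, up to order and sign
```

Because the two sources have different lag-1 autocorrelations, the lagged covariance has distinct eigenvalues, so the rotation left after whitening is identifiable from second-order statistics alone.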


Features for Figure Speech Recognition in Noise Environment (잡음환경에서의 숫자음 인식을 위한 특징파라메타)

  • Lee, Jae-Ki;Koh, Si-Young;Lee, Kwang-Suk;Hur, Kang-In
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.473-476
    • /
    • 2005
  • This paper proposes feature parameters that are robust in noisy environments. The MFCC (Mel Frequency Cepstral Coefficient) features used in conventional speech recognition perform well, but for greater robustness in noise we transform the MFCC feature space with PCA (Principal Component Analysis) and ICA (Independent Component Analysis) and compare the transformed features with conventional MFCC. The results show that the ICA-transformed features outperform both the PCA-transformed features and conventional MFCC.


Principal component analysis in the frequency domain: a review and their application to climate data (주파수공간에서의 주성분분석: 리뷰와 기상자료에의 적용)

  • Jo, You-Jung;Oh, Hee-Seok;Lim, Yaeji
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.3
    • /
    • pp.441-451
    • /
    • 2017
  • In this paper, we review principal component analysis (PCA) procedures in the frequency domain and apply them to sea surface temperature data. Classical PCA, defined in the time domain, is a popular dimension reduction technique. Extending it to the frequency domain makes it useful not only for dimension reduction but also for feature extraction from multiple time series. We focus on two frequency-domain PCA methods, Hilbert PCA (HPCA) and frequency domain PCA (FDPCA). We review both so that readers can easily grasp their insights, and perform a numerical study comparing them with conventional PCA. Furthermore, we apply the frequency-domain PCA methods to sea surface temperature data over the tropical Pacific Ocean. Results from the numerical experiments demonstrate that PCA in the frequency domain is effective for the analysis of time series data.
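
The Hilbert PCA idea can be sketched as follows: form the complex analytic signal of each series, then eigendecompose the Hermitian covariance. The synthetic travelling wave below is an invented stand-in for sea surface temperature series; the point is that a single complex mode can capture a propagating pattern that ordinary PCA would split across two real components:

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (discrete Hilbert transform); assumes even length."""
    n = len(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0                        # keep positive frequencies, doubled
    return np.fft.ifft(np.fft.fft(x) * h)

rng = np.random.default_rng(5)
t = np.arange(256)
# Eight noisy, phase-shifted copies of one oscillation: a travelling wave
series = np.array([np.sin(2 * np.pi * t / 64 + 0.3 * k)
                   + 0.1 * rng.standard_normal(256) for k in range(8)])

Z = np.array([analytic(s - s.mean()) for s in series])   # complex analytic signals
C = Z @ Z.conj().T / Z.shape[1]                          # Hermitian covariance
vals, _ = np.linalg.eigh(C)
explained = float(vals[-1] / vals.sum())                 # share of the leading complex mode
print(round(explained, 2))
```

The leading complex eigenvector's phases encode the propagation of the wave across the series, which is the kind of feature HPCA extracts from climate fields.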

Modified Recursive PCA (수정된 반복 주성분 분석 기법에 대한 연구)

  • Kim, Dong-Gyu;Kim, Ah-Hyoun;Kim, Hyun-Joong
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.5
    • /
    • pp.963-977
    • /
    • 2011
  • PCA (Principal Component Analysis) is a well-studied statistical technique and an important tool for handling multivariate data. Although many algorithms exist for PCA, most of them are unsuitable for real-time applications or high-dimensional problems. Since it is desirable to avoid extensive matrix operations in such cases, alternative solutions are required to calculate the eigenvalues and eigenvectors of the sample covariance matrix. Erdogmus et al. (2004) proposed Recursive PCA (RPCA), a fast adaptive on-line solution for PCA based on first-order perturbation theory. It facilitates the real-time implementation of PCA by recursively approximating the updated eigenvalues and eigenvectors. However, the performance of RPCA becomes questionable as the size of the newly added data increases. In this paper, we modify the RPCA method by taking advantage of the mathematical relation between the eigenvalues and eigenvectors of the sample covariance matrix. We compared the performance of the proposed algorithm with that of RPCA and found that the accuracy of the proposed method improved remarkably.
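
RPCA itself approximates the updated eigenpairs with first-order perturbation theory; the exact rank-one mean and covariance update it builds on can be sketched as follows (the data sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((500, 6))

# Start from batch statistics of the first 400 samples
n = 400
mu = X[:n].mean(axis=0)
C = np.cov(X[:n], rowvar=False)

# Rank-one recursive update for each newly arriving sample
for x in X[n:]:
    n += 1
    d = x - mu                               # deviation from the old mean
    mu = mu + d / n
    C = (n - 2) / (n - 1) * C + np.outer(d, d) / n

# The recursively maintained covariance matches the batch result
print(np.allclose(C, np.cov(X, rowvar=False)))
```

RPCA tracks the eigenvalues and eigenvectors of C across such updates instead of re-running a full eigendecomposition; when a large block of data arrives at once, the first-order approximation degrades, which is the weakness the paper addresses.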

Hierarchically penalized sparse principal component analysis (계층적 벌점함수를 이용한 주성분분석)

  • Kang, Jongkyeong;Park, Jaeshin;Bang, Sungwan
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.1
    • /
    • pp.135-145
    • /
    • 2017
  • Principal component analysis (PCA) describes the variation of multivariate data in terms of a set of uncorrelated variables. Since each principal component is a linear combination of all variables and the loadings are typically non-zero, it is difficult to interpret the derived principal components. Sparse principal component analysis (SPCA) is a specialized technique using the elastic net penalty function to produce sparse loadings in principal component analysis. When data are structured by groups of variables, it is desirable to select variables in a grouped manner. In this paper, we propose a new PCA method to improve variable selection performance when variables are grouped, which not only selects important groups but also removes unimportant variables within identified groups. To incorporate group information into model fitting, we consider a hierarchical lasso penalty instead of the elastic net penalty in SPCA. Real data analyses demonstrate the performance and usefulness of the proposed method.
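
The effect of sparse loadings can be sketched with a soft-thresholded power iteration for the first component. This is a crude illustration of sparsity in PCA, not SPCA's elastic-net solver nor the paper's hierarchical lasso penalty, and the grouped toy data are invented:

```python
import numpy as np

def sparse_pc(X, lam, iters=50):
    """First sparse principal loading via soft-thresholded power iteration."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(Xc)
    v = np.linalg.eigh(C)[1][:, -1]           # start from the ordinary first PC
    for _ in range(iters):
        w = C @ v
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)   # soft-threshold the loadings
        norm = np.linalg.norm(w)
        if norm == 0:
            break
        v = w / norm
    return v

rng = np.random.default_rng(7)
# Two groups of three variables; only the first group carries a shared signal
signal = rng.standard_normal((200, 1))
X = np.hstack([signal + 0.1 * rng.standard_normal((200, 3)),
               0.1 * rng.standard_normal((200, 3))])

v = sparse_pc(X, lam=0.3)
print(np.round(v, 2))    # loadings on the pure-noise group are driven to zero
```

The hierarchical penalty proposed in the paper goes further: it can shrink a whole group of loadings at once while still zeroing unimportant variables inside the groups it keeps.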