• Title/Abstract/Keywords: Principal component method

Search results: 982 items (processing time: 0.022 seconds)

주성분분석에 의한 재래종 옥수수의 해석 (Assessment and Classification of Korean Indigenous Corn Lines by Application of Principal Component Analysis)

  • 이인섭;박종옥
    • 생명과학회지 / Vol. 13, No. 3 / pp. 343-348 / 2003
  • This study was carried out on 49 indigenous corn lines collected in the Busan and Gyeongnam area in order to obtain breeding materials. Principal component analysis was applied to characterize and classify the indigenous lines, with the following results. In the principal component analysis of seven traits, the first four principal components explained 86.3% of the total variation and the first two explained 67.4%. The contribution of the traits to the principal components varied by trait, being large for the leading components and small for the later ones. The correlations between the principal components and the traits clarified the biological meaning of each component and the corresponding plant type: the first principal component was related to plant size and growth duration, and the second to the numbers of ears and tillers. For the third and fourth principal components, no significant relationships with the traits were found. A minimal sketch of this explained-variance calculation is given below.
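
A minimal sketch of the explained-variance computation described in the abstract, using scikit-learn; the 49 × 7 trait matrix is not reproduced here, so random numbers stand in for the measured traits:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the 49 indigenous corn lines x 7 measured traits.
rng = np.random.default_rng(0)
X = rng.normal(size=(49, 7))

# Standardize the traits so each contributes on a comparable scale,
# then extract the first four principal components.
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=4).fit(Z)

# Cumulative proportion of total variation explained by the leading components
# (the paper reports 67.4% for two components and 86.3% for four).
cum = np.cumsum(pca.explained_variance_ratio_)
print("PC1-PC2:", cum[1], "PC1-PC4:", cum[3])

# Correlations between traits and components (loadings scaled by the component
# standard deviations) help interpret the biological meaning of each component.
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print(loadings.round(2))
```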

A Penalized Principal Components using Probabilistic PCA

  • Park, Chong-Sun;Wang, Morgan
    • 한국통계학회:학술대회논문집 / 한국통계학회 2003년도 춘계 학술발표회 논문집 / pp. 151-156 / 2003
  • A variable selection algorithm for principal component analysis based on a penalized likelihood method is proposed. We adopt the probabilistic principal component idea to obtain a likelihood function for the problem and use the HARD penalty function to force the coefficients of irrelevant variables in each component to zero. Consistency and sparsity of the coefficient estimates are demonstrated on small simulated examples and an illustrative real example. A schematic illustration of the hard-thresholding idea is sketched below.
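
A rough illustration of the hard-thresholding idea only, not the authors' penalized probabilistic-PCA algorithm: ordinary PCA loadings are computed and any coefficient below a cut-off is set exactly to zero. All data and the threshold value are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: a few informative variables plus pure-noise columns.
rng = np.random.default_rng(1)
signal = rng.normal(size=(200, 3))
X = np.hstack([signal @ rng.normal(size=(3, 5)), rng.normal(size=(200, 5))])

pca = PCA(n_components=2).fit(X)

# Crude stand-in for the HARD penalty: loadings whose magnitude falls below a
# threshold are forced to zero, yielding sparse, easier-to-read components.
# The paper instead does this inside a penalized likelihood for probabilistic PCA.
threshold = 0.15
sparse_loadings = np.where(np.abs(pca.components_) < threshold, 0.0,
                           pca.components_)
print(sparse_loadings.round(2))
```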


주성분회귀분석을 이용한 한국프로야구 순위 (Predicting Korea Pro-Baseball Rankings by Principal Component Regression Analysis)

  • 배재영;이진목;이제영
    • Communications for Statistical Applications and Methods / Vol. 19, No. 3 / pp. 367-379 / 2012
  • Predicting the final rankings of baseball teams is of great interest to baseball fans. To predict these rankings, we apply the arithmetic mean method, the weighted mean method, principal component analysis, and principal component regression to the 2011 Korean professional baseball records. Rankings are predicted using standardized arithmetic means, correlation-based weighted means, and principal component analysis, and the principal component regression model is selected as the final model. By regressing on the variables condensed by principal component analysis, we propose ranking prediction models for the pitching part, the batting part, and the combined pitching and batting part. The fitted regression model can then be used to predict the 2012 rankings. A minimal principal component regression sketch is given below.
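
A minimal principal component regression sketch, assuming the usual condense-then-regress workflow described above; the actual 2011 KBO team statistics are not reproduced, so random numbers stand in for them:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for team-level pitching/batting statistics and the
# observed season ranks of 8 teams.
rng = np.random.default_rng(2)
X = rng.normal(size=(8, 10))          # 8 teams, 10 correlated statistics
y = rng.permutation(np.arange(1, 9))  # observed ranks 1..8

# Principal component regression: condense the correlated statistics into a
# few components, then regress the rank on the component scores.
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)

# Teams listed from best to worst according to their fitted rank values
# (indices are 1-based for readability); the same pipeline applied to the next
# season's statistics would give the predicted rankings.
order = np.argsort(pcr.predict(X)) + 1
print(order)
```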

Classification via principal differential analysis

  • Jang, Eunseong;Lim, Yaeji
    • Communications for Statistical Applications and Methods / Vol. 28, No. 2 / pp. 135-150 / 2021
  • We propose classification methods based on principal differential analysis. Computation of the squared multiple correlation function (RSQ) and of principal differential analysis (PDA) scores is reviewed; in addition, we combine the principal differential analysis results with logistic regression for binary classification. In the numerical study, we compare the PDA-based classification methods with classification based on functional principal component analysis. Various scenarios are considered in a simulation study, and the PDA-based methods classify the functional data well. Gene expression data are considered for the real data analysis, where the PDA score based method also performs well. A simplified sketch of the score-plus-logistic-regression step is given below.
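
A heavily simplified, discretized sketch of the PDA-plus-logistic-regression idea on made-up data: a second-order operator Lx = x'' + β1(t)x' + β0(t)x is fitted pointwise to each class by least squares, each curve is scored by its mean squared residual under each class operator, and the two scores feed a logistic regression. This only illustrates the workflow, not the authors' estimator of the RSQ or PDA scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 101)

# Hypothetical functional data: two classes of noisy curves with different dynamics.
def make_curves(freq, n):
    phase = rng.uniform(0, 2 * np.pi, size=(n, 1))
    return np.sin(2 * np.pi * freq * t + phase) + 0.05 * rng.normal(size=(n, t.size))

X0, X1 = make_curves(1.0, 50), make_curves(1.5, 50)
X, y = np.vstack([X0, X1]), np.array([0] * 50 + [1] * 50)

def pda_weights(curves):
    """Pointwise least-squares estimate of beta0(t), beta1(t) so that
    L x = x'' + beta1(t) x' + beta0(t) x is close to zero across the sample."""
    d1 = np.gradient(curves, t, axis=1)
    d2 = np.gradient(d1, t, axis=1)
    betas = np.empty((t.size, 2))
    for j in range(t.size):
        A = np.column_stack([curves[:, j], d1[:, j]])
        betas[j], *_ = np.linalg.lstsq(A, -d2[:, j], rcond=None)
    return betas

def pda_score(curves, betas):
    """Mean squared residual of each curve under a fitted operator."""
    d1 = np.gradient(curves, t, axis=1)
    d2 = np.gradient(d1, t, axis=1)
    resid = d2 + curves * betas[:, 0] + d1 * betas[:, 1]
    return (resid ** 2).mean(axis=1)

# One operator per class; the residual scores are the classification features.
scores = np.column_stack([pda_score(X, pda_weights(X0)),
                          pda_score(X, pda_weights(X1))])
clf = LogisticRegression().fit(scores, y)
print("training accuracy:", clf.score(scores, y))
```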

화자식별을 위한 전역 공분산에 기반한 주성분분석 (Global Covariance based Principal Component Analysis for Speaker Identification)

  • 서창우;임영환
    • 말소리와 음성과학 / Vol. 1, No. 1 / pp. 69-73 / 2009
  • This paper proposes an efficient global covariance-based principal component analysis (GCPCA) for speaker identification. Principal component analysis (PCA) is a feature extraction method that reduces the dimension of the feature vectors and the correlation among them by projecting the original feature space onto a small subspace through a transformation. However, it requires a large amount of training data when the eigenvalue and eigenvector matrices are found from a full covariance matrix estimated separately for each speaker. The proposed method first calculates a global covariance matrix using the training data of all speakers, and then finds the eigenvalue matrix and the corresponding eigenvector matrix from this global covariance matrix. Compared to conventional PCA and Gaussian mixture model (GMM) methods, the proposed method shows better performance while requiring less storage space and lower complexity in speaker identification. A minimal sketch of the global covariance projection is given below.
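
A minimal sketch of the global covariance projection, assuming generic fixed-length feature vectors (hypothetical stand-ins for the MFCC-style features used in speaker identification):

```python
import numpy as np

# Hypothetical training feature vectors for four speakers (500 frames x 13 dims each).
rng = np.random.default_rng(4)
speakers = {s: rng.normal(loc=s, size=(500, 13)) for s in range(4)}

# Global covariance PCA: pool the training vectors of all speakers and compute
# a single covariance matrix, instead of one full covariance matrix per speaker.
pooled = np.vstack(list(speakers.values()))
mean = pooled.mean(axis=0)
cov = np.cov(pooled - mean, rowvar=False)

# Eigen-decomposition of the global covariance gives one shared projection.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
W = eigvecs[:, order[:8]]            # keep the 8 leading eigenvectors

# Every speaker's features are projected with the same transform before the
# speaker models (e.g., GMMs) are trained, which saves storage and computation.
projected = {s: (x - mean) @ W for s, x in speakers.items()}
print({s: p.shape for s, p in projected.items()})
```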


Motion Recognition using Principal Component Analysis

  • Kwon, Yong-Man;Kim, Jong-Min
    • Journal of the Korean Data and Information Science Society / Vol. 15, No. 4 / pp. 817-823 / 2004
  • This paper describes a three-dimensional motion recognition algorithm and a system that adopts the algorithm for non-contact human-computer interaction. From a sequence of stereo images, five feature regions are extracted with a simple color segmentation algorithm and then used in a three-dimensional locus calculation process. However, the resulting loci are noisy and unstable, so a principal component analysis method is introduced to obtain more robust motion recognition results. This method can overcome the weakness of conventional algorithms because it uses the three-dimensional information directly for motion recognition. A minimal denoising sketch is given below.
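
A minimal sketch of the PCA smoothing step on a single hypothetical 3-D locus (the stereo extraction and gesture classification stages are outside the scope of this snippet):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical noisy 3-D locus of one tracked feature region over 100 frames.
rng = np.random.default_rng(5)
s = np.linspace(0, 1, 100)
locus = np.column_stack([s, s ** 2, 0.5 * s]) + 0.05 * rng.normal(size=(100, 3))

# Project the trajectory onto its leading principal components and reconstruct:
# directions dominated by jitter are discarded, leaving a smoother locus that
# is more robust for the subsequent motion-recognition step.
pca = PCA(n_components=2).fit(locus)
denoised = pca.inverse_transform(pca.transform(locus))
print("max correction applied:", np.abs(denoised - locus).max())
```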


주요성분분석과 고정점 알고리즘 독립성분분석에 의한 얼굴인식 (Face Recognition by Using Principal Component Analysis and Fixed-Point Independent Component Analysis)

  • 조용현
    • 한국산업융합학회 논문집 / Vol. 8, No. 3 / pp. 143-148 / 2005
  • This paper presents a hybrid method for recognizing faces by using principal component analysis (PCA) and fixed-point independent component analysis (FP-ICA). PCA is used to whiten the data, which removes the second-order statistics before the nonlinear ICA stage. FP-ICA is then applied to extract statistically independent features of the face images. The proposed method was applied to recognizing 20 face images (10 persons × 2 scenes) of 324 × 243 pixels from the Yale face database. Three distances, the city-block, Euclidean, and negative angle measures, are used when matching probe images to the nearest gallery images. The experimental results show that the proposed method has superior recognition performance in both speed and rate, and that the negative angle measure yields relatively more accurate similarity than the city-block or Euclidean distances. A minimal PCA-whitening plus FastICA sketch is given below.
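
A minimal sketch of the PCA-whitening, fixed-point ICA, and negative-angle matching pipeline with random stand-ins for the Yale face images (image size 324 × 243 as in the abstract; all other numbers are hypothetical):

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical flattened gallery and probe face images (one per person).
rng = np.random.default_rng(6)
gallery = rng.normal(size=(10, 324 * 243))
probes = gallery + 0.1 * rng.normal(size=gallery.shape)

# Step 1: PCA whitening reduces the dimension and removes second-order statistics.
pca = PCA(n_components=9, whiten=True).fit(gallery)
g_white, p_white = pca.transform(gallery), pca.transform(probes)

# Step 2: fixed-point ICA (FastICA) extracts statistically independent features.
ica = FastICA(n_components=9, max_iter=1000, random_state=0).fit(g_white)
g_feat, p_feat = ica.transform(g_white), ica.transform(p_white)

# Step 3: match each probe to the gallery image with the largest cosine
# similarity, i.e. the smallest "negative angle" distance.
matches = cosine_similarity(p_feat, g_feat).argmax(axis=1)
print(matches)
```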


Classification for intraclass correlation pattern by principal component analysis

  • Chung, Hie-Choon;Han, Chien-Pai
    • Journal of the Korean Data and Information Science Society / Vol. 21, No. 3 / pp. 589-595 / 2010
  • In discriminant analysis, we consider an intraclass correlation pattern by principal component analysis. We assume that the two populations are equally likely and that the costs of misclassification are equal. In this situation, we consider two procedures, the test procedure and the proportion procedure, for selecting the principal components used in classification. We compare the regular classification method with the two proposed procedures, and consider two methods for estimating the error rate, the leave-one-out method and the bootstrap method. A minimal error-rate estimation sketch is given below.
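
A minimal sketch of estimating the error rate of a principal-component-based classifier by the leave-one-out method, with a crude bootstrap estimate alongside; the equicorrelated (intraclass) covariance pattern and all sample sizes are hypothetical:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.utils import resample

# Two equally likely populations whose variables follow an intraclass
# (equicorrelation) covariance pattern.
rng = np.random.default_rng(7)
p, rho = 6, 0.6
cov = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)
X = np.vstack([rng.multivariate_normal(np.zeros(p), cov, 50),
               rng.multivariate_normal(0.8 * np.ones(p), cov, 50)])
y = np.array([0] * 50 + [1] * 50)

# Classify on a reduced number of principal components and estimate the error
# rate by leave-one-out cross-validation.
clf = make_pipeline(PCA(n_components=2), LinearDiscriminantAnalysis())
loo_error = 1 - cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

# Crude bootstrap estimate: refit on resamples, evaluate on the original sample.
boot_errors = [1 - clf.fit(*resample(X, y)).score(X, y) for _ in range(100)]
print("leave-one-out error:", round(loo_error, 3),
      "bootstrap error:", round(float(np.mean(boot_errors)), 3))
```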

계층적 벌점함수를 이용한 주성분분석 (Hierarchically penalized sparse principal component analysis)

  • 강종경;박재신;방성완
    • 응용통계연구 / Vol. 30, No. 1 / pp. 135-145 / 2017
  • Principal component analysis (PCA) is a representative technique for reducing the dimension of correlated multivariate data and is used in many multivariate analyses. However, because each principal component is a linear combination of all the variables, its results can be hard to interpret. Sparse PCA (SPCA) uses an elastic-net-type penalty function to produce modified principal components with sparser loadings, but it cannot exploit a group structure among the variables. In this study, we improve on the existing SPCA and propose a new principal component analysis method that, when the data are grouped, selects significant groups and simultaneously removes unnecessary variables within each group. To incorporate the group structure and the within-group variable structure into the model fit, a hierarchical penalty function is considered in place of the elastic-net penalty of sparse PCA. The performance and usefulness of the proposed method are demonstrated through the analysis of real data. A sparse PCA baseline is sketched below.
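
The hierarchical group penalty itself is not available in scikit-learn, so the sketch below only shows the elastic-net-style sparse PCA baseline that the proposed method extends, on hypothetical grouped data:

```python
import numpy as np
from sklearn.decomposition import SparsePCA

# Hypothetical grouped data: three groups of four variables, with only the
# first two groups carrying signal.
rng = np.random.default_rng(8)
f = rng.normal(size=(300, 2))
X = np.hstack([f[:, [0]] @ rng.normal(size=(1, 4)),
               f[:, [1]] @ rng.normal(size=(1, 4)),
               rng.normal(size=(300, 4))])

# Sparse PCA: many loadings are exactly zero, which is what makes the modified
# components easier to interpret than ordinary PCA loadings.  The paper's
# hierarchical penalty would additionally zero out whole groups at once.
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X)
print(np.round(spca.components_, 2))
```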

Probabilistic penalized principal component analysis

  • Park, Chongsun;Wang, Morgan C.;Mo, Eun Bi
    • Communications for Statistical Applications and Methods / Vol. 24, No. 2 / pp. 143-154 / 2017
  • A variable selection method based on probabilistic principal component analysis (PCA) using a penalized likelihood method is proposed. The proposed method is a two-step variable reduction method. The first step uses the probabilistic principal component idea to identify principal components, with a penalty function used to identify the important variables in each component. We then build a model on the original data space instead of on the rotated data space of latent variables (principal components), because the proposed method achieves dimension reduction by identifying important observed variables; consequently, the method is of more practical use. The proposed estimators perform like the oracle procedure and are root-n consistent with a proper choice of regularization parameters. The method can be successfully applied to high-dimensional PCA problems in which a relatively large portion of irrelevant variables is included in the data set. It is straightforward to extend the likelihood method to handle missing observations using EM algorithms, so it can also be applied effectively when some data vectors exhibit one or more values missing at random. The two-step idea is sketched below.
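
A rough sketch of the two-step idea only (not the authors' penalized-likelihood estimator): the loadings of an ordinary PCA fit are thresholded to pick out important observed variables, and the model is then rebuilt on the original data space restricted to those variables. The threshold and all data are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical high-dimensional data with many irrelevant variables.
rng = np.random.default_rng(9)
latent = rng.normal(size=(150, 3))
X = np.hstack([latent @ rng.normal(size=(3, 10)),   # 10 relevant variables
               rng.normal(size=(150, 40))])         # 40 irrelevant variables

# Step 1 (stand-in for the penalized probabilistic-PCA fit): keep variables
# whose loadings on the leading components exceed a cut-off.
pca = PCA(n_components=3).fit(X)
importance = np.abs(pca.components_).max(axis=0)
selected = np.flatnonzero(importance > 0.1)

# Step 2: rebuild the model on the original data space restricted to the
# selected observed variables rather than on rotated latent scores.
pca_reduced = PCA(n_components=3).fit(X[:, selected])
print("selected variables:", selected)
print("variance explained:", pca_reduced.explained_variance_ratio_.sum().round(3))
```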