• Title/Summary/Keyword: Linear feature analysis

The Application of SVD for Feature Extraction (특징추출을 위한 특이값 분할법의 응용)

  • Lee Hyun-Seung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.2 s.308
    • /
    • pp.82-86
    • /
    • 2006
  • The design of a pattern recognition system generally involves three aspects: preprocessing, feature extraction, and decision making. Among them, the feature extraction method determines an appropriate lower-dimensional subspace of the original feature space, which reduces the complexity of the system and helps to improve the recognition rate. Linear transforms such as principal component analysis, factor analysis, and linear discriminant analysis have been widely used for feature extraction in pattern recognition. This paper shows that singular value decomposition (SVD) can be usefully applied in the feature extraction stage of pattern recognition. As an application, a remote sensing problem is used to verify the usefulness of SVD. The experimental results indicate that feature extraction using SVD can improve the recognition rate by about 25% compared with PCA.
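Below is a minimal Python sketch of the SVD-based projection this abstract describes at a high level: the data are projected onto the top-k right singular vectors of the data matrix. The variable names and the choice of k are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def svd_features(X, k):
    """Project an n x d data matrix X onto its top-k right singular vectors."""
    # economy-size SVD: X = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                      # n x k reduced feature matrix

# toy usage on random data standing in for the remote-sensing features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
print(svd_features(X, k=5).shape)            # (100, 5)
```

If the data matrix is column-centered first, its top right singular vectors coincide with the PCA directions; differences in practice typically stem from how the matrix is preprocessed before the decomposition.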

Non-linear PLS based on non-linear principal component analysis and neural network (비선형 주성분해석과 신경망에 기반한 비선형 PLS)

  • 손정현;정신호;송상옥;윤인섭
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.394-394
    • /
    • 2000
  • This paper proposes a new nonlinear partial least squares (PLS) method that extends linear PLS. The proposed nonlinear PLS uses a self-organizing feature map as the PLS outer relation and a multilayer neural network as the PLS inner regression method.
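As a rough, hedged illustration of the inner-regression idea only: the sketch below computes linear PLS latent scores with scikit-learn and then fits a small neural network as the inner relation on those scores. The paper's self-organizing-feature-map outer relation is not reproduced here, and the data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)      # nonlinear target

pls = PLSRegression(n_components=2).fit(X, y)
T = pls.transform(X)                                   # outer relation: latent scores

inner = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
inner.fit(T, y)                                        # nonlinear inner regression on the scores
print(round(inner.score(T, y), 3))
```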

Design of Lazy Classifier based on Fuzzy k-Nearest Neighbors and Reconstruction Error (퍼지 k-Nearest Neighbors 와 Reconstruction Error 기반 Lazy Classifier 설계)

  • Roh, Seok-Beom;Ahn, Tae-Chon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.1
    • /
    • pp.101-108
    • /
    • 2010
  • In this paper, we propose a new lazy classifier that combines a fuzzy k-nearest neighbors approach with feature selection based on reconstruction error, the performance index of locally linear reconstruction. When a new query point is given, the fuzzy k-nearest neighbors approach defines the local area in which the local classifier operates and assigns weighting values to the data patterns within that area. After the local area is defined and the weighting values are assigned, feature selection is carried out to reduce the dimensionality of the feature space. Once features are selected according to the reconstruction error, the local classifier, a polynomial model, is built using weighted least-squares estimation. The experimental study includes a comparative analysis against several commonly used methods such as standard neural networks, support vector machines, linear discriminant analysis, and C4.5 decision trees.
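A minimal sketch of the local, query-time step described here: select the k nearest neighbors, weight them in a fuzzy inverse-distance fashion, and fit a first-order polynomial by weighted least squares. The membership exponent m, the omission of the reconstruction-error feature selection, and the toy data are assumptions for illustration.

```python
import numpy as np

def lazy_local_classify(X, y, x_query, k=15, m=2.0):
    """Weighted local linear (first-order polynomial) classifier around x_query.

    y is assumed to be a numeric class coding (e.g., -1/+1 for two classes).
    """
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]                              # local area: k nearest neighbors
    w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + 1e-12)      # fuzzy inverse-distance weights
    w /= w.sum()
    A = np.hstack([np.ones((k, 1)), X[idx]])             # affine (first-order) design matrix
    W = np.diag(w)
    theta = np.linalg.pinv(A.T @ W @ A) @ A.T @ W @ y[idx]   # weighted least squares
    return np.array([1.0, *x_query]) @ theta

# toy usage: two Gaussian blobs coded as -1/+1
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]
print(np.sign(lazy_local_classify(X, y, np.array([0.8, 1.1]))))   # expected: 1.0
```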

The extension of the largest generalized-eigenvalue based distance metric $D_{ij}^{(1)}$ in arbitrary feature spaces to classify composite data points

  • Daoud, Mosaab
    • Genomics & Informatics
    • /
    • v.17 no.4
    • /
    • pp.39.1-39.20
    • /
    • 2019
  • Analyzing patterns in data points embedded in linear and non-linear feature spaces is one of the common research problems across different areas such as data mining, machine learning, pattern recognition, and multivariate analysis. In this paper, the data points are heterogeneous sets of biosequences (composite data points), where a composite data point is a set of ordinary data points (e.g., a set of feature vectors). We theoretically extend the derivation of the largest generalized-eigenvalue based distance metric $D_{ij}^{(1)}$ to arbitrary linear and non-linear feature spaces, and prove that $D_{ij}^{(1)}$ is a metric under any linear and non-linear feature transformation function. We show the sufficiency and efficiency of using the decision rule $\bar{\delta}_{\Xi_i}$ (i.e., the mean of $D_{ij}^{(1)}$) for classifying heterogeneous sets of biosequences, compared with the decision rules $\min_{\Xi_i}$ and $\mathrm{median}_{\Xi_i}$. We analyze the impact of linear and non-linear transformation functions on classifying and clustering collections of heterogeneous sets of biosequences, and show empirically how the sequence length in simulated heterogeneous sequence sets affects the classification and clustering results in linear and non-linear feature spaces. We also propose a new concept, the limiting dispersion map of the existing clusters in heterogeneous sets of biosequences embedded in linear and non-linear feature spaces, based on the limiting distribution of nucleotide compositions estimated from real data sets. Finally, empirical conclusions and scientific evidence are drawn from the experiments to support the theoretical results stated in this paper.
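The decision rule in this abstract, which assigns a composite query to the class whose composite points are closest on average, can be written in a few lines. The sketch below leaves the distance itself as a pluggable callable, so nothing about the generalized-eigenvalue construction of $D_{ij}^{(1)}$ is assumed; the toy distance used at the end is only a placeholder.

```python
import numpy as np

def classify_composite(query_set, class_prototypes, dist):
    """query_set: array of feature vectors; class_prototypes: {label: list of composite points}."""
    scores = {}
    for label, composites in class_prototypes.items():
        scores[label] = np.mean([dist(query_set, c) for c in composites])  # delta-bar: mean distance
    return min(scores, key=scores.get)

# toy usage with a placeholder distance (Euclidean distance between set means),
# NOT the paper's D_ij^(1)
toy_dist = lambda A, B: float(np.linalg.norm(A.mean(axis=0) - B.mean(axis=0)))
rng = np.random.default_rng(0)
protos = {"c0": [rng.normal(0, 1, (5, 4)) for _ in range(3)],
          "c1": [rng.normal(3, 1, (5, 4)) for _ in range(3)]}
print(classify_composite(rng.normal(3, 1, (5, 4)), protos, toy_dist))      # -> c1
```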

An Improved method of Two Stage Linear Discriminant Analysis

  • Chen, Yarui;Tao, Xin;Xiong, Congcong;Yang, Jucheng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1243-1263
    • /
    • 2018
  • Two-stage linear discriminant analysis (TSLDA) is a feature extraction technique for the small sample size problem in image recognition. TSLDA retains all subspace information of the between-class scatter and the within-class scatter. However, the feature information in the four subspaces may not be entirely beneficial for classification, and the regularization procedure for eliminating singular matrices in TSLDA has high time complexity. To address these drawbacks, this paper proposes an improved two-stage linear discriminant analysis (Improved TSLDA). The Improved TSLDA uses a selection-and-compression method to extract superior feature information from the four subspaces and form an optimal projection space, defining a single Fisher criterion to measure the importance of each individual feature vector. The Improved TSLDA also applies an approximation matrix method to eliminate the singular matrices and reduce the time complexity. Comparative experiments on five face databases and one handwritten digit database validate the effectiveness of the Improved TSLDA.
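A hedged sketch of the single-vector Fisher criterion mentioned above, used here to rank candidate projection directions against the usual between-class and within-class scatter matrices. The way candidate directions are generated below is an assumption for illustration, not the paper's subspace construction.

```python
import numpy as np

def scatter_matrices(X, y):
    """Standard between-class (Sb) and within-class (Sw) scatter matrices."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def fisher_score(w, Sb, Sw, eps=1e-8):
    """Single-vector Fisher criterion: between-class over within-class variance along w."""
    return float(w @ Sb @ w) / (float(w @ Sw @ w) + eps)

# toy usage: rank stand-in candidate directions by the criterion
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 6)), rng.normal(2, 1, (40, 6))])
y = np.r_[np.zeros(40), np.ones(40)]
Sb, Sw = scatter_matrices(X, y)
candidates = np.linalg.eigh(Sb + Sw)[1].T              # rows are candidate directions
ranked = sorted(candidates, key=lambda w: fisher_score(w, Sb, Sw), reverse=True)
print(round(fisher_score(ranked[0], Sb, Sw), 3))
```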

A Study on Feature Projection Methods for a Real-Time EMG Pattern Recognition (실시간 근전도 패턴인식을 위한 특징투영 기법에 관한 연구)

  • Chu, Jun-Uk;Kim, Shin-Ki;Mun, Mu-Seong;Moon, In-Hyuk
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.9
    • /
    • pp.935-944
    • /
    • 2006
  • EMG pattern recognition is essential for the control of a multifunction myoelectric hand. The main goal of this study is to develop an efficient feature projection method for EMG pattern recognition. To this end, we propose a linear supervised feature projection that utilizes linear discriminant analysis (LDA). We first perform a wavelet packet transform (WPT) to extract the feature vector from four-channel EMG signals. For dimensionality reduction and clustering of the WPT features, LDA incorporates class information into the learning procedure and finds a linear projection matrix that maximizes the class separability of the projected features. Finally, a multilayer perceptron classifies the LDA-reduced features into nine hand motions. To evaluate the performance of LDA on the WPT features, we compare LDA with three other feature projection methods. Through visual and quantitative comparisons, we show that LDA yields better class separability and that the LDA-projected features improve the classification accuracy with a short processing time. We implemented a real-time pattern recognition system for a multifunction myoelectric hand. In experiments, the proposed method achieves 97.2% recognition accuracy, and all processes, including the generation of control commands for the myoelectric hand, are completed within 97 ms. These results confirm that our method is applicable to real-time EMG pattern recognition for myoelectric hand control.
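The dimensionality-reduction and classification stages of this pipeline can be sketched with scikit-learn as below: LDA projection followed by a multilayer perceptron. The wavelet-packet feature extraction from the four EMG channels is not reproduced; the random matrix X merely stands in for those WPT feature vectors, and the layer sizes are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))                 # placeholder for WPT feature vectors
y = rng.integers(0, 9, size=300)               # nine hand-motion classes

clf = make_pipeline(
    LinearDiscriminantAnalysis(n_components=8),      # LDA allows at most n_classes - 1 components
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
)
clf.fit(X, y)
print(round(clf.score(X, y), 3))
```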

Datawise Discriminant Analysis For Feature Extraction (자료별 분류분석(DDA)에 의한 특징추출)

  • Park, Myoung-Soo;Choi, Jin-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.90-95
    • /
    • 2009
  • This paper presents a new feature extraction algorithm that addresses the problems of linear discriminant analysis, which is widely used for linear dimensionality reduction. The scatter matrices used in linear discriminant analysis are defined by the distances between each datum and its class mean, and between the class means and the mean of the whole data. Using these scatter matrices can cause computational problems and limits the number of extractable features. In addition, their definition assumes that the data distribution is unimodal and normal; when this assumption is not satisfied, appropriate features are not obtained. In this paper we define a new scatter matrix based on differently weighted distances between individual data points, and present a feature extraction algorithm using this scatter matrix. With this new method, the aforementioned problems of linear discriminant analysis can be avoided, and features appropriate for discriminating the data can be obtained. The performance of the new method is demonstrated by experiments.
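A minimal sketch of a pairwise, datawise-weighted scatter construction in the spirit of this abstract. The specific weighting scheme (one weight for same-class pairs, another for different-class pairs) is an assumption made for illustration; the paper's weighting of individual pairs may differ.

```python
import numpy as np

def pairwise_scatter(X, y, w_within=1.0, w_between=1.0):
    """Scatter matrices built from weighted pairwise differences between individual data points."""
    n, d = X.shape
    S_w, S_b = np.zeros((d, d)), np.zeros((d, d))
    for i in range(n):
        for j in range(i + 1, n):
            diff = np.outer(X[i] - X[j], X[i] - X[j])
            if y[i] == y[j]:
                S_w += w_within * diff       # same-class pair
            else:
                S_b += w_between * diff      # different-class pair
    return S_w, S_b

rng = np.random.default_rng(0)
X, y = rng.normal(size=(60, 4)), rng.integers(0, 3, size=60)
S_w, S_b = pairwise_scatter(X, y)
print(S_w.shape, S_b.shape)                  # (4, 4) (4, 4)
```

Because no class means appear in this construction, multimodal class distributions do not break the assumption the abstract criticizes in the standard scatter matrices.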

Performance Comparison of Deep Feature Based Speaker Verification Systems (깊은 신경망 특징 기반 화자 검증 시스템의 성능 비교)

  • Kim, Dae Hyun;Seong, Woo Kyeong;Kim, Hong Kook
    • Phonetics and Speech Sciences
    • /
    • v.7 no.4
    • /
    • pp.9-16
    • /
    • 2015
  • In this paper, several experiments are performed with deep neural network (DNN) based features to compare the performance of speaker verification (SV) systems. To this end, input features for a DNN, such as the mel-frequency cepstral coefficient (MFCC), linear-frequency cepstral coefficient (LFCC), and perceptual linear prediction (PLP), are first compared from the viewpoint of SV performance. Next, the effect of the DNN training method and the structure of the DNN hidden layers on SV performance is investigated for each feature type. The SV system is then evaluated using i-vector or probabilistic linear discriminant analysis (PLDA) scoring. The SV experiments show that a tandem feature combining the DNN bottleneck feature and the MFCC feature gives the best performance when the DNNs are configured with rectangular hidden layers and trained with a supervised training method.
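Forming the tandem feature reported here as best amounts to concatenating per-frame MFCCs with the activations of the DNN's bottleneck layer. In the sketch below, `bottleneck` is a hypothetical callable standing in for that trained layer; no specific speech toolkit or layer size is assumed.

```python
import numpy as np

def tandem_features(mfcc, bottleneck):
    """mfcc: (frames, n_mfcc) array; bottleneck: maps (frames, n_mfcc) -> (frames, n_bn)."""
    bn = bottleneck(mfcc)
    return np.concatenate([bn, mfcc], axis=1)          # tandem = [bottleneck | MFCC] per frame

# toy usage with a random nonlinear map standing in for the trained bottleneck layer
rng = np.random.default_rng(0)
W = rng.normal(size=(13, 5))
feats = tandem_features(rng.normal(size=(100, 13)), lambda m: np.tanh(m @ W))
print(feats.shape)                                     # (100, 18)
```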

UFKLDA: An unsupervised feature extraction algorithm for anomaly detection under cloud environment

  • Wang, GuiPing;Yang, JianXi;Li, Ren
    • ETRI Journal
    • /
    • v.41 no.5
    • /
    • pp.684-695
    • /
    • 2019
  • In a cloud environment, performance degradation, or even downtime, of virtual machines (VMs) usually appears gradually along with anomalous states of VMs. To better characterize the state of a VM, all possible performance metrics are collected. For such high-dimensional datasets, this article proposes a feature extraction algorithm based on unsupervised fuzzy linear discriminant analysis with kernel (UFKLDA). By introducing the kernel method, UFKLDA can not only effectively deal with non-Gaussian datasets but also implement nonlinear feature extraction. Two sets of experiments were undertaken. In discriminability experiments, this article introduces quantitative criteria to measure discriminability among all classes of samples. The results show that UFKLDA improves discriminability compared with other popular feature extraction algorithms. In detection accuracy experiments, this article computes accuracy measures of an anomaly detection algorithm (i.e., C-SVM) on the original performance metrics and extracted features. The results show that anomaly detection with features extracted by UFKLDA improves the accuracy of detection in terms of sensitivity and specificity.
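The detection-accuracy experiment described above (C-SVM on the original metrics versus the extracted features, reported as sensitivity and specificity) can be outlined as follows. The UFKLDA extraction itself is not reproduced; `extract` below is only a placeholder, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

def sensitivity_specificity(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))                          # stands in for collected VM metrics
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 1.0).astype(int)   # toy anomaly labels
extract = lambda X: X[:, :5]                            # placeholder for UFKLDA feature extraction

for name, feats in [("original metrics", X), ("extracted features", extract(X))]:
    pred = SVC(C=1.0, kernel="rbf").fit(feats, y).predict(feats)
    print(name, sensitivity_specificity(y, pred))
```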

Speaker Adaptation Using ICA-Based Feature Transformation

  • Jung, Ho-Young;Park, Man-Soo;Kim, Hoi-Rin;Hahn, Min-Soo
    • ETRI Journal
    • /
    • v.24 no.6
    • /
    • pp.469-472
    • /
    • 2002
  • Speaker adaptation techniques are generally used to reduce speaker differences in speech recognition. In this work, we focus on features suited to linear regression-based speaker adaptation. These are obtained by a feature transformation based on independent component analysis (ICA), where the feature transformation matrices are estimated from the training data and the adaptation data. Since the adaptation data are not sufficient to reliably estimate the ICA-based feature transformation matrix, it is necessary to adjust the matrix estimated from a new speaker's utterances. To cope with this problem, we propose a smoothing method based on linear interpolation between the speaker-independent (SI) feature transformation matrix and the speaker-dependent (SD) feature transformation matrix. Our experiments show that the proposed method is especially effective in the mismatched case, where the adaptation performance improves because the smoothed feature transformation matrix makes speaker adaptation with noisy speech more robust.
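The smoothing step amounts to a convex combination of the SI and SD transformation matrices. The sketch below uses scikit-learn's FastICA as a stand-in for the ICA estimation and an illustrative interpolation weight; the paper's actual weighting and estimation details are not assumed, and mean subtraction is omitted for brevity.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 12))       # stands in for SI training features
X_adapt = rng.normal(size=(60, 12))         # small adaptation set from the new speaker

W_si = FastICA(n_components=12, random_state=0).fit(X_train).components_
W_sd = FastICA(n_components=12, random_state=0).fit(X_adapt).components_

alpha = 0.3                                  # illustrative weight: lean on the reliable SI estimate
W_smooth = alpha * W_sd + (1.0 - alpha) * W_si
features = X_adapt @ W_smooth.T              # ICA-transformed adaptation features
print(features.shape)                        # (60, 12)
```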
