• Title/Summary/Keyword: kernel discriminant analysis

25 search results

A Kernel Approach to Discriminant Analysis for Binary Classification

  • Shin, Yang-Kyu
    • Journal of the Korean Data and Information Science Society, v.12 no.2, pp.83-93, 2001
  • We investigate a kernel approach to discriminant analysis for binary classification from a machine learning point of view. Our view of the kernel approach follows the support vector method, one of the most promising techniques in machine learning. Like conventional discriminant analysis, the kernel method can determine the class to which an object most likely belongs; moreover, it offers advantages over conventional discriminant analysis in terms of data compression and computing time.
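
A minimal sketch of a two-class kernel Fisher discriminant of the kind discussed above, assuming an RBF kernel, a small regularizer, and toy two-moons data (none of which are taken from the paper):

```python
# Minimal two-class kernel Fisher discriminant (Mika-style) with an RBF kernel.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
gamma, reg = 2.0, 1e-3                            # kernel width and regularizer (chosen ad hoc)

K = rbf_kernel(X, X, gamma=gamma)                 # N x N Gram matrix
m = [K[:, y == c].mean(axis=1) for c in (0, 1)]   # kernel-space class means (expansion coefficients)
N = np.zeros_like(K)
for c in (0, 1):
    Kc = K[:, y == c]
    lc = Kc.shape[1]
    N += Kc @ (np.eye(lc) - np.full((lc, lc), 1.0 / lc)) @ Kc.T   # within-class scatter
alpha = np.linalg.solve(N + reg * np.eye(len(X)), m[1] - m[0])    # discriminant coefficients

# Project training points and place a threshold midway between the projected class means.
proj = K @ alpha
thresh = 0.5 * (proj[y == 0].mean() + proj[y == 1].mean())
pred = (proj > thresh).astype(int)
print("training accuracy:", (pred == y).mean())
```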

Kernel Fisher Discriminant Analysis for Indoor Localization

  • Ngo, Nhan V.T.;Park, Kyung Yong;Kim, Jeong G.
    • International journal of advanced smart convergence, v.4 no.2, pp.177-185, 2015
  • In this paper we introduce Kernel Fisher Discriminant Analysis (KFDA) to transform a database of received signal strength (RSS) measurements into a lower-dimensional space that maximizes the separation between reference points (RPs). With KFDA, the RSS data are utilized more efficiently than with other methods, yielding better localization performance.
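
A hedged sketch of the fingerprinting pipeline the abstract describes, with kernel PCA followed by LDA used as a common stand-in for KFDA and with simulated RSS scans; the parameters and data are illustrative assumptions only:

```python
# RSS fingerprints per reference point -> kernel discriminant-style reduction -> nearest-RP matching.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_rp, n_ap, n_scan = 8, 12, 40                         # reference points, access points, scans per RP
rp_profile = rng.uniform(-90, -40, size=(n_rp, n_ap))  # mean RSS per RP in dBm
X = np.repeat(rp_profile, n_scan, axis=0) + rng.normal(0, 4, size=(n_rp * n_scan, n_ap))
y = np.repeat(np.arange(n_rp), n_scan)                 # RP label of each scan

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
pipe = make_pipeline(
    KernelPCA(n_components=20, kernel="rbf", gamma=1e-3),  # nonlinear kernel expansion
    LinearDiscriminantAnalysis(n_components=n_rp - 1),     # maximize separation between RPs
    KNeighborsClassifier(n_neighbors=3),                    # assign a new scan to the closest RP
)
pipe.fit(Xtr, ytr)
print("reference-point identification accuracy:", pipe.score(Xte, yte))
```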

Sonar Target Classification using Generalized Discriminant Analysis (일반화된 판별분석 기법을 이용한 능동소나 표적 식별)

  • Kim, Dong-wook;Kim, Tae-hwan;Seok, Jong-won;Bae, Keun-sung
    • Journal of the Korea Institute of Information and Communication Engineering, v.22 no.1, pp.125-130, 2018
  • Linear discriminant analysis (LDA) is a statistical method generally used for dimensionality reduction of feature vectors or for classification. For a data set that is not linearly separable, linear separation can still be achieved by mapping the feature vectors into a higher-dimensional space through a nonlinear function; this method is called generalized discriminant analysis (GDA) or kernel discriminant analysis. In this paper, we carried out target classification experiments on active sonar target signals available on the Internet using both linear and generalized discriminant analysis, and analyzed and compared the results. On 104 test samples, LDA achieved a correct recognition rate of 73.08%, while GDA achieved 95.19%, which is also better than a conventional MLP or a kernel-based SVM.
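
A small sketch of the core idea restated in the abstract: data that are not linearly separable can become separable after an explicit nonlinear map to a higher-dimensional space, where ordinary LDA then works. The quadratic map and toy data are assumptions for illustration, not the paper's sonar features:

```python
# Linear separation fails on concentric circles; a nonlinear feature map fixes it.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = make_circles(n_samples=400, noise=0.05, factor=0.4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

def phi(X):
    """Map (x1, x2) -> (x1, x2, x1^2, x2^2, x1*x2): a fixed nonlinear feature map."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

lda_linear = LinearDiscriminantAnalysis().fit(Xtr, ytr)
lda_mapped = LinearDiscriminantAnalysis().fit(phi(Xtr), ytr)
print("LDA on raw 2-D features :", lda_linear.score(Xte, yte))       # typically near chance
print("LDA after nonlinear map :", lda_mapped.score(phi(Xte), yte))  # typically near 1.0
```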

Kernel Fisher Discriminant Analysis for Natural Gait Cycle Based Gait Recognition

  • Huang, Jun;Wang, Xiuhui;Wang, Jun
    • Journal of Information Processing Systems, v.15 no.4, pp.957-966, 2019
  • This paper studies a novel approach to natural-gait-cycle-based gait recognition via kernel Fisher discriminant analysis (KFDA), which can effectively compute features from gait sequences and accelerate the recognition process. The proposed approach first extracts gait silhouettes from each gait video through moving object detection and segmentation. Second, a gait energy image (GEI) is computed for each video and used as the gait feature. Third, KFDA refines the extracted gait features, yielding a low-dimensional feature vector for each video. Finally, a nearest neighbor classifier performs the classification. The method is evaluated on the CASIA and USF gait databases, and the results show that the proposed algorithm achieves better recognition performance than existing algorithms.
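
A hedged sketch of the pipeline above: silhouette frames are averaged into a gait energy image, the GEI vectors are reduced discriminatively, and a 1-NN classifier identifies the subject. Random silhouettes stand in for CASIA/USF sequences, and PCA followed by LDA stands in for KFDA:

```python
# GEI computation + discriminative reduction + 1-nearest-neighbor identification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def gait_energy_image(silhouettes):
    """Average aligned binary silhouette frames of one gait cycle into a flat GEI vector."""
    return silhouettes.mean(axis=0).ravel()            # (T, H, W) -> (H*W,)

rng = np.random.default_rng(0)
n_subjects, seqs_per_subject, T, H, W = 5, 6, 30, 32, 22
X, y = [], []
for subject in range(n_subjects):
    shape = rng.random((H, W)) < 0.3                   # crude subject-specific silhouette shape
    for _ in range(seqs_per_subject):
        frames = (rng.random((T, H, W)) < 0.2) | shape # noisy binary silhouettes
        X.append(gait_energy_image(frames.astype(float)))
        y.append(subject)
X, y = np.array(X), np.array(y)

clf = make_pipeline(PCA(n_components=20),              # compress raw GEI pixels first
                    LinearDiscriminantAnalysis(n_components=n_subjects - 1),
                    KNeighborsClassifier(n_neighbors=1))
clf.fit(X, y)
print("training identification rate:", clf.score(X, y))
```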

An Adaptive Face Recognition System Based on a Novel Incremental Kernel Nonparametric Discriminant Analysis

  • SOULA, Arbia;SAID, Salma BEN;KSANTINI, Riadh;LACHIRI, Zied
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.4, pp.2129-2147, 2019
  • This paper introduces an adaptive face recognition method based on a novel Incremental Kernel Nonparametric Discriminant Analysis (IKNDA) that is able to learn over time. More precisely, IKNDA incrementally reduces the data dimension, in a discriminative manner, as new samples are added asynchronously, and thus handles dynamic and large data sets better. To perform face recognition effectively, Gabor features and ordinal measures are combined to extract facial features coded across local parts as visual primitives: ordinal measures are extracted from the Gabor filtering responses, and histograms of these primitives over a variety of facial zones are concatenated into a feature vector, whose dimension is then reduced using PCA. The resulting vector is the input to IKNDA. A comparative evaluation is performed for face recognition, as well as for other classification tasks, against relevant state-of-the-art incremental and batch discriminant models. Experimental results show that IKNDA outperforms these models and improves face recognition performance.
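
IKNDA itself is not publicly available, so the following sketch only illustrates the incremental-update pattern the abstract emphasizes (dimension reduction updated batch by batch as samples arrive), using IncrementalPCA as a non-discriminative stand-in and synthetic feature batches:

```python
# Asynchronous batches of face feature vectors update the projection without full retraining.
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_batches, batch_size, dim = 5, 40, 256          # e.g. PCA-reduced Gabor/ordinal feature vectors
ipca = IncrementalPCA(n_components=32)

seen_X, seen_y = [], []
for t in range(n_batches):                       # face batches arriving over time
    Xb = rng.normal(size=(batch_size, dim)) + 0.1 * t
    yb = rng.integers(0, 10, size=batch_size)    # 10 hypothetical identities
    ipca.partial_fit(Xb)                         # incremental subspace update
    seen_X.append(Xb)
    seen_y.append(yb)

X, y = np.vstack(seen_X), np.concatenate(seen_y)
knn = KNeighborsClassifier(n_neighbors=1).fit(ipca.transform(X), y)
print("projected dimension:", ipca.transform(X).shape[1])
```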

UFKLDA: An unsupervised feature extraction algorithm for anomaly detection under cloud environment

  • Wang, GuiPing;Yang, JianXi;Li, Ren
    • ETRI Journal, v.41 no.5, pp.684-695, 2019
  • In a cloud environment, performance degradation, or even downtime, of virtual machines (VMs) usually appears gradually along with anomalous states of VMs. To better characterize the state of a VM, all possible performance metrics are collected. For such high-dimensional datasets, this article proposes a feature extraction algorithm based on unsupervised fuzzy linear discriminant analysis with kernel (UFKLDA). By introducing the kernel method, UFKLDA can not only effectively deal with non-Gaussian datasets but also implement nonlinear feature extraction. Two sets of experiments were undertaken. In discriminability experiments, this article introduces quantitative criteria to measure discriminability among all classes of samples. The results show that UFKLDA improves discriminability compared with other popular feature extraction algorithms. In detection accuracy experiments, this article computes accuracy measures of an anomaly detection algorithm (i.e., C-SVM) on the original performance metrics and extracted features. The results show that anomaly detection with features extracted by UFKLDA improves the accuracy of detection in terms of sensitivity and specificity.
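
A hedged sketch of the detection-accuracy experiment described above: nonlinear feature extraction followed by a C-SVM, scored by sensitivity and specificity. Kernel PCA stands in for UFKLDA, and the VM performance metrics are simulated:

```python
# Nonlinear feature extraction + C-SVM anomaly classification, scored by sensitivity/specificity.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(400, 30))                    # healthy-VM performance metrics
anom = rng.normal(0.0, 1.0, size=(60, 30)) + rng.choice([-3, 3], size=(60, 1))
X = np.vstack([normal, anom])
y = np.concatenate([np.zeros(400), np.ones(60)])                 # 1 = anomalous state

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.05).fit(Xtr)
svm = SVC(C=1.0, kernel="rbf").fit(kpca.transform(Xtr), ytr)

tn, fp, fn, tp = confusion_matrix(yte, svm.predict(kpca.transform(Xte))).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```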

Palatability Grading Analysis of Hanwoo Beef using Sensory Properties and Discriminant Analysis (관능특성 및 판별함수를 이용한 한우고기 맛 등급 분석)

  • Cho, Soo-Hyun;Seo, Gu-Reo-Un-Dal-Nim;Kim, Dong-Hun;Kim, Jae-Hee
    • Food Science of Animal Resources, v.29 no.1, pp.132-139, 2009
  • The objective of this study was to identify the most effective analysis method for palatability grading of Hanwoo beef by comparing the results of discriminant analyses on sensory data. The sensory data were obtained from 1,300 consumers who evaluated the tenderness, juiciness, flavor-likeness, and overall acceptability of Hanwoo beef samples prepared by boiling, roasting, and grilling. For discriminant analysis with the single factor of overall acceptability, linear discriminant functions and a non-parametric discriminant function with a Gaussian kernel were estimated. The linear discriminant functions were simple and easy to interpret, whereas the non-parametric discriminant functions were not explicit and required the selection of a kernel function and bandwidth. With the three palatability factors of tenderness, juiciness, and flavor-likeness, canonical discriminant analysis was used, and classification ability was measured by the correct classification rate and the error rate. Canonical discriminant analysis required no specific distributional assumptions, relying only on principal components and canonical correlations; it incorporated all three factors, and its correct classification rate was similar to that of the other discriminant methods. Therefore, canonical discriminant analysis was the most suitable method for palatability grading of Hanwoo beef.
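
A brief sketch of canonical discriminant analysis on three sensory factors; scikit-learn's LDA transform yields the canonical variates. The sensory scores and grade labels are simulated, not the Hanwoo consumer data:

```python
# Canonical discriminant analysis on three palatability factors with classification/error rates.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
grades = np.repeat([0, 1, 2], 100)                        # palatability grade: low / mid / high
centers = np.array([[3.0, 3.2, 3.1], [4.2, 4.0, 4.1], [5.5, 5.3, 5.4]])
X = centers[grades] + rng.normal(0, 0.6, size=(300, 3))   # tenderness, juiciness, flavor-likeness

cda = LinearDiscriminantAnalysis(n_components=2).fit(X, grades)
canonical = cda.transform(X)                              # canonical discriminant scores
correct = cda.score(X, grades)
print("canonical variates shape:", canonical.shape)
print("correct classification rate: %.3f, error rate: %.3f" % (correct, 1 - correct))
```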

Nonlinear Feature Extraction using Class-augmented Kernel PCA (클래스가 부가된 커널 주성분분석을 이용한 비선형 특징추출)

  • Park, Myoung-Soo;Oh, Sang-Rok
    • Journal of the Institute of Electronics Engineers of Korea SC, v.48 no.5, pp.7-12, 2011
  • In this paper, we propose a new feature extraction method, named Class-augmented Kernel Principal Component Analysis (CA-KPCA), which can extract nonlinear features for classification. Among the subspace methods widely used for feature extraction, Class-augmented Principal Component Analysis (CA-PCA) is a recent one that extracts features for accurate classification without the computational difficulties of methods such as Linear Discriminant Analysis (LDA). However, the features extracted by CA-PCA are still restricted to a linear subspace of the original data space, which limits its use for problems requiring nonlinear features. To resolve this limitation, we apply the kernel trick to develop a nonlinear version of CA-PCA and evaluate its performance in experiments on data sets from the UCI Machine Learning Repository.
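
A rough sketch of the class-augmentation idea behind CA-KPCA, under simplifying assumptions: a scaled one-hot label block is appended to each training vector before kernel PCA, and test vectors use a zero label block. This illustrates the concept only and is not the paper's exact formulation:

```python
# Class-augmented kernel PCA, simplified: concatenate label information, then apply the kernel trick.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

n_classes, scale = 3, 2.0                          # label-block weight (assumed)
def augment(X, y=None):
    block = np.zeros((len(X), n_classes))
    if y is not None:
        block[np.arange(len(X)), y] = scale        # one-hot class information
    return np.hstack([X, block])

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1).fit(augment(Xtr, ytr))
Ztr, Zte = kpca.transform(augment(Xtr)), kpca.transform(augment(Xte))
clf = KNeighborsClassifier(n_neighbors=5).fit(Ztr, ytr)
print("test accuracy with class-augmented kernel features:", clf.score(Zte, yte))
```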

Speaker Identification Using an Ensemble of Feature Enhancement Methods (특징 강화 방법의 앙상블을 이용한 화자 식별)

  • Yang, IL-Ho;Kim, Min-Seok;So, Byung-Min;Kim, Myung-Jae;Yu, Ha-Jin
    • Phonetics and Speech Sciences, v.3 no.2, pp.71-78, 2011
  • In this paper, we propose an approach that constructs classifier ensembles from various channel compensation and feature enhancement methods. CMN and CMVN are used for channel compensation, and PCA, kernel PCA, greedy kernel PCA, and kernel multimodal discriminant analysis are used for feature enhancement. The proposed ensemble combines 15 classifiers, covering three channel compensation options (including 'no compensation') and five feature enhancement options (including 'no enhancement'). Experimental results show that the proposed ensemble gives the highest average speaker identification rate across various environments (channels, noises, and sessions).
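
A hedged sketch of the ensemble construction described above: classifiers built on different feature-enhancement front ends (none, PCA, kernel PCA) are combined by majority vote. Toy features replace the speech features, and greedy kernel PCA and KMDA are omitted because they have no standard scikit-learn implementation:

```python
# Majority-vote ensemble over classifiers with different feature-enhancement front ends.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, KernelPCA
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=40, n_informative=15,
                           n_classes=4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

members = [
    ("raw",  make_pipeline(StandardScaler(), SVC())),                                      # no enhancement
    ("pca",  make_pipeline(StandardScaler(), PCA(n_components=20), SVC())),                # PCA
    ("kpca", make_pipeline(StandardScaler(), KernelPCA(n_components=20, kernel="rbf"), SVC())),  # kernel PCA
]
ensemble = VotingClassifier(estimators=members, voting="hard").fit(Xtr, ytr)
for name, member in members:
    print(name, member.fit(Xtr, ytr).score(Xte, yte))
print("majority-vote ensemble:", ensemble.score(Xte, yte))
```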

Improvement in Supervector Linear Kernel SVM for Speaker Identification Using Feature Enhancement and Training Length Adjustment (특징 강화 기법과 학습 데이터 길이 조절에 의한 Supervector Linear Kernel SVM 화자식별 개선)

  • So, Byung-Min;Kim, Kyung-Wha;Kim, Min-Seok;Yang, Il-Ho;Kim, Myung-Jae;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea, v.30 no.6, pp.330-336, 2011
  • In this paper, we propose a new method to improve the performance of the supervector linear kernel SVM (Support Vector Machine) for speaker identification. The method is based on splitting each training utterance into several shorter segments. We evaluate performance on four different databases and use PCA (Principal Component Analysis), GKPCA (Greedy Kernel PCA), and KMDA (Kernel Multimodal Discriminant Analysis) for feature enhancement. The results show that the proposed method improves speaker identification performance with the supervector linear kernel SVM.
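
A hedged sketch of the training-length adjustment idea: each speaker's long training utterance is split into shorter segments, a simple per-segment vector (the frame mean, standing in for a GMM supervector) is computed, and a linear-kernel SVM is trained on the resulting supervectors:

```python
# Split long utterances into segments, build one supervector per segment, train a linear SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_speakers, frames_per_utt, feat_dim, seg_len = 6, 3000, 20, 500

def supervector(frames):
    """Collapse a (T, D) frame sequence into one fixed-length vector (here: the frame mean)."""
    return frames.mean(axis=0)

X, y = [], []
for spk in range(n_speakers):
    offset = rng.normal(0, 1, size=feat_dim)                  # crude speaker-specific shift
    utt = rng.normal(0, 1, size=(frames_per_utt, feat_dim)) + offset
    for start in range(0, frames_per_utt, seg_len):           # split one utterance into segments
        X.append(supervector(utt[start:start + seg_len]))
        y.append(spk)

svm = SVC(kernel="linear").fit(np.array(X), np.array(y))       # one supervector per segment
print("segments per speaker:", frames_per_utt // seg_len,
      "| training accuracy:", svm.score(np.array(X), np.array(y)))
```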