• Title/Summary/Keyword: feature vector classification

A Study on Face Recognition and Reliability Improvement Using Classification Analysis Technique

  • Kim, Seung-Jae
    • International Journal of Advanced Smart Convergence / v.9 no.4 / pp.192-197 / 2020
  • In this study, we seek ways to perform face recognition more stably and to improve its effectiveness and reliability. Improving the face recognition rate generally requires a large amount of data, but more data does not by itself guarantee a higher recognition rate. Another determinant of the recognition rate is how accurately and precisely the data to be used are classified. Among the various methods for classification analysis, this study uses a support vector machine (SVM). Feature information is extracted from a normalized image with rotation information and then projected onto the eigenspace, and the relationships among the feature values are investigated through SVM classification analysis. Verification through classification analysis can improve the effectiveness and reliability of various recognition tasks, such as object recognition as well as face recognition, and should be of great help in improving recognition rates.
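
The pipeline this abstract describes, eigenspace projection followed by SVM classification, can be sketched roughly as below. This is not the authors' code: the image size, component count, kernel choice, and the randomly generated "face" data are illustrative assumptions only.

```python
# A minimal sketch (not the authors' code): project face images onto a PCA
# eigenspace and classify the projected feature vectors with an SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 64 * 64))      # placeholder: 200 normalized face images, flattened
y = rng.integers(0, 5, size=200)    # placeholder: 5 subject identities

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# PCA plays the role of the eigenspace projection; the SVM separates the projected features.
model = make_pipeline(PCA(n_components=40), SVC(kernel="rbf", C=10.0))
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```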

Gait Recognition Algorithm Based on Feature Fusion of GEI Dynamic Region and Gabor Wavelets

  • Huang, Jun;Wang, Xiuhui;Wang, Jun
    • Journal of Information Processing Systems / v.14 no.4 / pp.892-903 / 2018
  • The paper proposes a novel gait recognition algorithm based on feature fusion of the gait energy image (GEI) dynamic region and Gabor wavelets, which consists of four steps. First, the gait contour images are extracted through object detection, binarization, and morphological processing. Second, GEI features at different angles and Gabor features with multiple orientations are extracted from the dynamic part of the GEI. Then an averaging method is adopted to fuse the GEI dynamic-region features with the Gabor wavelet features at the feature layer, and the feature space dimension is reduced by an improved Kernel Principal Component Analysis (KPCA). Finally, the fused feature vectors are input into a multi-class support vector machine (SVM) to classify and recognize the gait. The primary contributions of the paper are: a novel gait recognition algorithm based on feature fusion of GEI and Gabor features; an improved KPCA method to reduce the feature matrix dimension; and an SVM to identify the gait sequences. The experimental results show that the proposed algorithm yields over 90% correct classification, indicating that it distinguishes different human gaits better and achieves better recognition performance than other existing algorithms.
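
As a rough illustration of the fuse-reduce-classify structure described in this abstract, the sketch below average-fuses two feature blocks, reduces them with standard kernel PCA (standing in for the paper's improved KPCA), and trains a multi-class SVM. All data, dimensions, and parameters are placeholder assumptions.

```python
# A minimal sketch of the fusion-then-reduce-then-classify pipeline; the GEI and
# Gabor features themselves are mocked with random arrays.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_samples = 120
gei_feats = rng.random((n_samples, 256))      # placeholder for GEI dynamic-region features
gabor_feats = rng.random((n_samples, 256))    # placeholder for Gabor features (same dimension)
labels = rng.integers(0, 10, size=n_samples)  # 10 gait classes (illustrative)

# Feature-layer fusion by averaging, as in the paper's description.
fused = 0.5 * (gei_feats + gabor_feats)

# Standard kernel PCA stands in for the paper's improved KPCA variant.
reduced = KernelPCA(n_components=30, kernel="rbf").fit_transform(fused)

clf = SVC(decision_function_shape="ovr").fit(reduced, labels)
print("training accuracy:", clf.score(reduced, labels))
```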

Binary classification by the combination of Adaboost and feature extraction methods (특징 추출 알고리즘과 Adaboost를 이용한 이진분류기)

  • Ham, Seaung-Lok;Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.4 / pp.42-53 / 2012
  • In the pattern recognition and machine learning communities, classification has been a classical problem and one of the most widely researched areas. Adaptive boosting, also known as Adaboost, has been successfully applied to binary classification problems. It is a boosting algorithm that constructs a strong classifier through a weighted combination of weak classifiers. On the other hand, the PCA and LDA algorithms are the most popular linear feature extraction methods, used mainly for dimensionality reduction. In this paper, a combination of Adaboost and feature extraction methods is proposed for efficient classification of two-class data. Conventionally, feature extraction and classification play distinct roles: a feature extraction method and a classifier are applied sequentially to assign input variables to categories. In this paper, these two steps are combined into one, resulting in good classification performance. More specifically, each projection vector is treated as a weak classifier in the Adaboost algorithm, and together they constitute a strong classifier for binary classification problems. The proposed algorithm was applied to the UCI and FRGC datasets and showed better recognition rates than the sequential application of feature extraction and classification methods.
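
A hedged sketch of the general idea (not the authors' implementation): PCA and LDA projections are stacked and boosted with depth-1 decision stumps, so each weak learner effectively thresholds a single projection direction. The dataset, the number of projections, and the stump-based weak learner are illustrative assumptions.

```python
# A minimal sketch: stack PCA/LDA projection features and boost depth-1 stumps over them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=400, n_features=30, n_informative=10, random_state=0)

# Feature extraction step: a few PCA directions plus the single LDA direction.
pca_proj = PCA(n_components=5).fit_transform(X)
lda_proj = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)
projections = np.hstack([pca_proj, lda_proj])

# Boosting step: the default weak learner is a depth-1 decision stump, so each
# boosting round thresholds one projection at a time.
booster = AdaBoostClassifier(n_estimators=50).fit(projections, y)
print("training accuracy:", booster.score(projections, y))
```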

Seabed Sediment Feature Extraction Algorithm using Attenuation Coefficient Variation According to Frequency (주파수에 따른 감쇠계수 변화량을 이용한 해저 퇴적물 특징 추출 알고리즘)

  • Lee, Kibae;Kim, Juho;Lee, Chong Hyun;Bae, Jinho;Lee, Jaeil;Cho, Jung Hong
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.111-120 / 2017
  • In this paper, we propose a novel feature extraction algorithm for the classification of seabed sediment. In previous research, the acoustic reflection coefficient, which is constant with respect to frequency, has been used to classify seabed sediments. However, the attenuation of seabed sediment is a function of frequency and is in general strongly influenced by the sediment type. Hence, we develop a feature vector based on the attenuation variation with respect to frequency. The attenuation variation is obtained from the signal reflected by the second sediment layer, generated by a broadband chirp. The proposed feature vector has an advantage in the number of dimensions over the classical scalar feature (the reflection coefficient) for classifying seabed sediment. To compare the proposed feature with the classical scalar feature, the dimension of the proposed feature vector is reduced by linear discriminant analysis (LDA). Synthesized acoustic amplitudes reflected by seabed sediments are generated using the Biot model, and the performance of the proposed feature is evaluated by Fisher scoring and by the classification accuracy of a maximum likelihood decision (MLD). As a result, the proposed feature shows higher discrimination performance and more robustness against measurement errors than the classical feature.
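
A minimal sketch of the evaluation chain described above, under simplifying assumptions: simulated (not Biot-model) attenuation-versus-frequency features are reduced by LDA and classified with a Gaussian maximum likelihood rule, here realized as quadratic discriminant analysis.

```python
# A minimal sketch: frequency-dependent attenuation features -> LDA -> Gaussian ML decision.
# The sediment data are simulated placeholders with class-dependent attenuation slopes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis

rng = np.random.default_rng(2)
n_per_class, n_freqs, n_classes = 60, 16, 4
freqs = np.linspace(1.0, 16.0, n_freqs)
# Each class gets a different slope of attenuation vs. frequency (illustrative).
X = np.vstack([
    c * 0.3 * freqs + rng.normal(scale=0.5, size=(n_per_class, n_freqs))
    for c in range(n_classes)
])
y = np.repeat(np.arange(n_classes), n_per_class)

X_lda = LinearDiscriminantAnalysis(n_components=n_classes - 1).fit_transform(X, y)
ml = QuadraticDiscriminantAnalysis().fit(X_lda, y)   # one Gaussian per class, pick max likelihood
print("training accuracy of the ML decision:", ml.score(X_lda, y))
```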

Feature Vector Extraction and Classification Performance Comparison According to Various Settings of Classifiers for Fault Detection and Classification of Induction Motor (유도 전동기의 고장 검출 및 분류를 위한 특징 벡터 추출과 분류기의 다양한 설정에 따른 분류 성능 비교)

  • Kang, Myeong-Su;Nguyen, Thu-Ngoc;Kim, Yong-Min;Kim, Cheol-Hong;Kim, Jong-Myon
    • The Journal of the Acoustical Society of Korea / v.30 no.8 / pp.446-460 / 2011
  • The use of induction motors has recently been increasing with automation in the aeronautical and automotive industries, where they play a significant role. This has motivated many researchers to study fault detection and classification systems for induction motors in order to minimize the economic damage caused by their faults. For this reason, this paper proposes feature vector extraction methods based on STE (short-time energy)+SVD (singular value decomposition) and DCT (discrete cosine transform)+SVD techniques to detect and diagnose induction motor faults early, and classifies the faults into different types by using the extracted features as inputs to a BPNN (back-propagation neural network) and a multi-layer SVM (support vector machine). When a BPNN and a multi-layer SVM are used as classifiers for fault classification, many settings affect classification performance: the number of input layers, the number of hidden layers, and the learning algorithms for the BPNN, and the standard deviation values of the Gaussian radial basis function for the multi-layer SVM. Therefore, this paper performs quantitative simulations to find settings for those classifiers that yield higher classification performance than others.
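
The STE+SVD feature idea can be sketched roughly as follows: frame-wise short-time energies are arranged into a matrix whose singular values form the feature vector, which then feeds a back-propagation (MLP) classifier. Signal lengths, frame sizes, and the random placeholder signals are assumptions, not the paper's setup.

```python
# A minimal sketch (with placeholder signals) of STE+SVD feature extraction plus a BPNN classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def ste_svd_features(signal, frame_len=64, n_rows=8):
    frames = signal[: (len(signal) // frame_len) * frame_len].reshape(-1, frame_len)
    ste = np.sum(frames ** 2, axis=1)                 # short-time energy per frame
    mat = ste[: (len(ste) // n_rows) * n_rows].reshape(n_rows, -1)
    return np.linalg.svd(mat, compute_uv=False)       # singular values as the feature vector

rng = np.random.default_rng(3)
n_signals = 80
signals = rng.normal(size=(n_signals, 4096))          # placeholder motor signals
labels = rng.integers(0, 4, size=n_signals)           # e.g. normal condition + 3 fault types

X = np.array([ste_svd_features(s) for s in signals])
bpnn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, labels)
print("training accuracy:", bpnn.score(X, labels))
```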

The Comparison of Speech Feature Parameters for Emotion Recognition (감정 인식을 위한 음성의 특징 파라메터 비교)

  • Kim, Won-Gu
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.04a / pp.470-473 / 2004
  • In this paper, speech feature parameters are compared for emotion recognition using the speech signal. For this purpose, a corpus of emotional speech data, recorded and classified by emotion through subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy. MFCC parameters and their derivatives, with or without cepstral mean subtraction, are also used to evaluate the performance of conventional pattern matching algorithms. Pitch and energy parameters were used as prosodic information and MFCC parameters as phonetic information. In the experiments, a vector quantization based emotion recognition system is used for speaker- and context-independent emotion recognition. Experimental results showed that the vector quantization based emotion recognizer using MFCC parameters performed better than the one using pitch and energy parameters. The vector quantization based emotion recognizer achieved a recognition rate of 73.3% for speaker- and context-independent classification.
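
A minimal sketch of a vector quantization recognizer of the kind described here: one k-means codebook per emotion, with a test utterance assigned to the emotion whose codebook yields the lowest average quantization distortion. The "MFCC" frames are random placeholders and the codebook size is an assumption.

```python
# A minimal sketch of VQ-based emotion recognition with mocked MFCC frames.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
emotions = ["neutral", "happy", "sad", "angry"]
# Placeholder "MFCC" frames per emotion, shape (n_frames, 13).
train_frames = {e: rng.normal(loc=i, size=(300, 13)) for i, e in enumerate(emotions)}

codebooks = {e: KMeans(n_clusters=16, n_init=10, random_state=0).fit(f)
             for e, f in train_frames.items()}

def recognize(utterance_frames):
    # Average distance to the nearest codeword, per emotion; pick the minimum distortion.
    def distortion(km):
        d = km.transform(utterance_frames)   # distances to all codewords
        return d.min(axis=1).mean()
    return min(codebooks, key=lambda e: distortion(codebooks[e]))

test = rng.normal(loc=2, size=(120, 13))     # frames drawn near the "sad" training cluster
print("recognized emotion:", recognize(test))
```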

Assessment of Classification Accuracy of fNIRS-Based Brain-computer Interface Dataset Employing Elastic Net-Based Feature Selection (Elastic net 기반 특징 선택을 적용한 fNIRS 기반 뇌-컴퓨터 인터페이스 데이터셋 분류 정확도 평가)

  • Shin, Jaeyoung
    • Journal of Biomedical Engineering Research / v.42 no.6 / pp.268-276 / 2021
  • Functional near-infrared spectroscopy-based brain-computer interfaces (fNIRS-based BCI) have been receiving much attention. However, it is practically difficult to obtain a large amount of fNIRS data because of the inherent hemodynamic delay. For this reason, when machine learning techniques are employed, problems caused by the high-dimensional feature vector, such as deteriorated classification accuracy, may be encountered. In this study, we employ elastic net-based feature selection, one of the embedded methods, and demonstrate its utility by analyzing the results. Using an fNIRS dataset obtained from 18 participants for classifying brain activation induced by mental arithmetic versus the idle state, we calculated classification accuracies after performing feature selection while changing the parameter α (the weight of lasso vs. ridge regularization). Grand averages of classification accuracy are 80.0 ± 9.4%, 79.3 ± 9.6%, 79.0 ± 9.2%, 79.7 ± 10.1%, 77.6 ± 10.3%, 79.2 ± 8.9%, and 80.0 ± 7.8% for α = 0.001, 0.005, 0.01, 0.05, 0.1, 0.2, and 0.5, respectively, and are not statistically different from the grand average of classification accuracy estimated with all features (80.1 ± 9.5%). Thus, no difference in classification accuracy is found for any of the considered values of α. In particular, for α = 0.5 we achieve a statistically equivalent level of classification accuracy with only 16.4% of the total features. Since elastic net-based feature selection can be easily applied to other cases without complicated initialization and parameter fine-tuning, we expect it to be actively applied to fNIRS data.
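
A hedged sketch of embedded elastic net feature selection followed by classification. The fNIRS features are synthetic placeholders; l1_ratio plays the role of the parameter α (the lasso-vs-ridge weight) discussed above, and the shrinkage-LDA classifier and regularization strength are illustrative assumptions rather than the study's actual settings.

```python
# A minimal sketch: elastic net as an embedded feature selector, then a classifier.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.feature_selection import SelectFromModel
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 300))                 # placeholder: 60 trials, 300-dim feature vectors
w = np.zeros(300); w[:10] = 1.0                # only the first 10 features carry signal
y = (X @ w + rng.normal(scale=0.5, size=60) > 0).astype(int)  # mental arithmetic vs. idle state

for l1_ratio in (0.001, 0.01, 0.1, 0.5):       # stands in for the parameter alpha above
    selector = SelectFromModel(
        ElasticNet(alpha=0.01, l1_ratio=l1_ratio, max_iter=5000), threshold=1e-5)
    pipe = make_pipeline(selector, LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"))
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"l1_ratio={l1_ratio}: mean CV accuracy = {acc:.3f}")
```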

Autonomous Feeding Robot and its Ultrasonic Obstacle Classification System (자동 사료 급이 로봇과 초음파 장애물 분류 시스템)

  • Kim, Seung-Gi;Lee, Yong-Chan;Ahn, Sung-Su;Lee, Yun-Jung
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.8 / pp.1089-1098 / 2018
  • In this paper, we propose an autonomous feeding robot and an obstacle classification system based on ultrasonic sensors to secure the driving safety of the robot and efficient feeding operation. The developed feeding robot is verified by operational experiments in a cattle shed. In the proposed classification algorithm, not only the maximum amplitude of the ultrasonic echo signal but also two gradients of the signal and the variation of its amplitude are considered as feature parameters for object classification. The experimental results show the effectiveness of the proposed classification method based on the support vector machine, which is able to classify objects and obstacles such as a human, a cow, a fence, and a wall.
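
A rough sketch of the feature idea described in this abstract: from each echo envelope we take the maximum amplitude, gradients before and after the peak, and the amplitude variation, then train an SVM. The simulated Gaussian-shaped echoes and the exact gradient definitions are assumptions, not the authors' signal model.

```python
# A minimal sketch: echo-envelope features (max amplitude, two gradients, variation) + SVM.
import numpy as np
from sklearn.svm import SVC

def echo_features(envelope):
    peak = int(np.argmax(envelope))
    max_amp = envelope[peak]
    rise = (envelope[peak] - envelope[0]) / max(peak, 1)                         # gradient before the peak
    fall = (envelope[-1] - envelope[peak]) / max(len(envelope) - 1 - peak, 1)    # gradient after the peak
    variation = float(np.std(envelope))
    return np.array([max_amp, rise, fall, variation])

rng = np.random.default_rng(6)
t = np.arange(200.0)
X, y = [], []
for name, width in zip(["human", "cow", "fence", "wall"], (5.0, 12.0, 3.0, 20.0)):
    for _ in range(50):                                 # simulated echoes with class-dependent widths
        center = rng.uniform(80, 120)
        env = np.exp(-((t - center) ** 2) / (2 * width ** 2)) + rng.normal(scale=0.02, size=200)
        X.append(echo_features(env)); y.append(name)

clf = SVC(kernel="rbf", C=10.0).fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```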

Classification Performance Analysis of Silicon Wafer Micro-Cracks Based on SVM (SVM 기반 실리콘 웨이퍼 마이크로크랙의 분류성능 분석)

  • Kim, Sang Yeon;Kim, Gyung Bum
    • Journal of the Korean Society for Precision Engineering / v.33 no.9 / pp.715-721 / 2016
  • In this paper, the classification rate of micro-cracks in silicon wafers was improved using an SVM. In case I, we investigated how the feature data of the micro-cracks and the SVM parameters affect the classification rate. As a result, the weighting vector and bias did not affect the classification rate, which improved with a high cost value and a sigmoid kernel function. Case II was performed using higher-quality images than case I, and it was identified that the learning data and input data had a large effect on the classification rate. Finally, images from cases I and II and from another illumination system were used in case III. In spite of the different image conditions, good classification rates were achieved. The critical points for improving micro-crack classification are the SVM parameters, the kernel function, clustered feature data, and the experimental conditions. In the future, excellent results could be obtained through SVM parameter tuning and clustered feature data.
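
As a hedged illustration of comparing SVM cost values and kernel functions (the factors this study found critical), the sketch below runs a small cross-validated grid search on placeholder wafer-image features. The grid values and synthetic data are assumptions.

```python
# A minimal sketch: grid search over SVM cost and kernel on placeholder features.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=8, random_state=0)

param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__kernel": ["rbf", "sigmoid", "poly"]}
search = GridSearchCV(make_pipeline(StandardScaler(), SVC()), param_grid, cv=5)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```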

Feature Extraction Method Using the Bhattacharyya Distance (Bhattacharyya distance 기반 특징 추출 기법)

  • Choi, Eui-Sun;Lee, Chul-Hee
    • Journal of the Institute of Electronics Engineers of Korea SP / v.37 no.6 / pp.38-47 / 2000
  • In pattern classification, the Bhattacharyya distance has been used as a class separability measure. Furthermore, it has recently been reported that the Bhattacharyya distance can be used to estimate the error of the Gaussian ML classifier within a 1-2% margin. In this paper, we propose a feature extraction method utilizing the Bhattacharyya distance. In the proposed method, we first predict the classification error with an error estimation equation based on the Bhattacharyya distance. Then we find the feature vector that minimizes the classification error using two search algorithms: sequential search and global search. Experimental results show that the proposed method compares favorably with conventional feature extraction methods. In addition, it is possible to determine how many feature vectors are needed to achieve the same classification accuracy as in the original space.
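
A minimal sketch of the underlying idea: the Bhattacharyya distance between two Gaussian classes yields an error estimate (here the classical Bhattacharyya bound), and a greedy sequential search keeps the features that minimize that estimate. The synthetic data, the greedy search, and the bound used in place of the paper's error estimation equation are simplifying assumptions.

```python
# A minimal sketch: Bhattacharyya distance between two Gaussian classes, an error bound
# derived from it, and a greedy sequential feature search minimizing the bound.
import numpy as np

def bhattacharyya(X1, X2):
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = np.atleast_2d(np.cov(X1, rowvar=False))
    S2 = np.atleast_2d(np.cov(X2, rowvar=False))
    S = 0.5 * (S1 + S2)
    dm = m2 - m1
    term1 = 0.125 * dm @ np.linalg.solve(S, dm)
    term2 = 0.5 * np.log(np.linalg.det(S) / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

def error_bound(B, p1=0.5, p2=0.5):
    return np.sqrt(p1 * p2) * np.exp(-B)      # Bhattacharyya bound on the Bayes error

rng = np.random.default_rng(7)
X1 = rng.normal(loc=0.0, size=(200, 6))
X2 = rng.normal(loc=0.8, size=(200, 6))
X2[:, 3:] = rng.normal(size=(200, 3))         # last three dimensions carry no class information

# Greedy sequential search: add the feature that most reduces the estimated error.
selected, remaining = [], list(range(6))
for _ in range(3):
    def score(f):
        cols = selected + [f]
        return error_bound(bhattacharyya(X1[:, cols], X2[:, cols]))
    best = min(remaining, key=score)
    selected.append(best)
    remaining.remove(best)

print("selected features:", selected)
print("estimated error bound:", round(error_bound(bhattacharyya(X1[:, selected], X2[:, selected])), 4))
```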
