• Title/Summary/Keyword: facial recognition


Feature Extraction Method of 2D-DCT for Facial Expression Recognition (얼굴 표정인식을 위한 2D-DCT 특징추출 방법)

  • Kim, Dong-Ju;Lee, Sang-Heon;Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering / v.3 no.3 / pp.135-138 / 2014
  • This paper devises a facial expression recognition method, robust to overfitting, that uses 2D-DCT and the EHMM algorithm. In particular, it achieves enhanced recognition performance by using a large window size for the 2D-DCT feature extraction that produces the EHMM observation vectors. Experimental results on the CK and JAFFE facial expression databases showed that recognition accuracy improved as the window size grew. The proposed method reached a recognition accuracy of 87.79%, an improvement of 46.01% to 50.05% over previous histogram-feature-based approaches, when the CK database was used for training and the JAFFE database for testing.
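
As a rough illustration of the block-wise 2D-DCT feature extraction described above, here is a minimal Python sketch; the window size, step, and number of retained coefficients are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.fft import dct

def dct2(block):
    """2D type-II DCT with orthonormal scaling."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def extract_2d_dct_observations(face, window=16, step=8, n_coeffs=6):
    """Slide a window over the face image and keep the low-frequency
    n_coeffs x n_coeffs corner of each block's DCT as one observation
    vector; the sequence of vectors would feed the EHMM."""
    obs = []
    h, w = face.shape
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            block = face[y:y + window, x:x + window].astype(np.float64)
            obs.append(dct2(block)[:n_coeffs, :n_coeffs].ravel())
    return np.array(obs)
```

The paper's central observation is that enlarging the window improves accuracy, so `window` is the natural parameter to sweep.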

Development of Facial Emotion Recognition System Based on Optimization of HMM Structure by using Harmony Search Algorithm (Harmony Search 알고리즘 기반 HMM 구조 최적화에 의한 얼굴 정서 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.3 / pp.395-400 / 2011
  • In this paper, we study facial emotion recognition that considers the dynamic variation of emotional state across facial image sequences. The proposed system consists of two main steps: facial-image-based emotional feature extraction and emotional state classification/recognition. First, we propose a method for extracting and analyzing the emotional feature region using a combination of the Active Shape Model (ASM) and Facial Action Units (FAUs). We then propose an emotional state classification and recognition method based on a Hidden Markov Model (HMM), a type of dynamic Bayesian network. We also adopt a heuristic optimization procedure based on the Harmony Search (HS) algorithm for HMM parameter learning, in order to classify emotional states more accurately. Using these methods, we construct an emotion recognition system driven by the variation of dynamic facial image sequences and attempt to improve its recognition performance.
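
Harmony Search itself is a simple population-based optimizer. The sketch below shows the generic HS loop over a flat parameter vector; applying it to HMM learning, as the paper does, would require a fitness function that reshapes and renormalizes the vector into valid HMM matrices and scores the training data, which is omitted here. All parameter values are conventional defaults, not the paper's:

```python
import numpy as np

def harmony_search(fitness, dim, low, high, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iters=2000, seed=0):
    """Generic Harmony Search: evolve a memory of candidate vectors
    toward higher fitness (for HMM learning, fitness could be the
    training-set log-likelihood of the decoded model)."""
    rng = np.random.default_rng(seed)
    memory = rng.uniform(low, high, size=(hms, dim))
    scores = np.array([fitness(h) for h in memory])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:              # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:           # pitch adjustment
                    new[j] += bandwidth * rng.uniform(-1.0, 1.0)
            else:                                # random selection
                new[j] = rng.uniform(low, high)
        new = np.clip(new, low, high)
        score = fitness(new)
        worst = scores.argmin()
        if score > scores[worst]:                # replace the worst harmony
            memory[worst], scores[worst] = new, score
    return memory[scores.argmax()]
```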

Facial Expression Recognition with Fuzzy C-Means Clustering Algorithm and Neural Network Based on Gabor Wavelets

  • Youngsuk Shin;Chansup Chung;Lee, Yillbyung
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.126-132 / 2000
  • This paper presents a facial expression recognition method based on Gabor wavelets that uses a fuzzy C-means (FCM) clustering algorithm and a neural network. Features of facial expressions are extracted in two steps. In the first step, a Gabor wavelet representation extracts the edges of the major face components, using the average value of the image's 2-D Gabor wavelet coefficient histogram. In the next step, we extract sparse features of facial expressions from the extracted edge information using the FCM clustering algorithm. The facial expression recognition results are compared with dimensional values of internal states derived from semantic ratings of emotion-related words. The dimensional model can recognize not only the six facial expressions related to Ekman's basic emotions, but also expressions of various internal states.
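
The Gabor-based edge extraction step might look roughly like the following OpenCV sketch, which keeps the per-pixel maximum response over a bank of orientations and thresholds it at the mean, as a stand-in for the paper's coefficient-histogram rule; the kernel size and orientation count are assumptions. FCM clustering of the surviving edge pixels (e.g. via scikit-fuzzy's `cmeans`) would then yield the sparse feature points:

```python
import cv2
import numpy as np

def gabor_edge_map(gray, n_orient=8, ksize=17, sigma=3.0, lambd=8.0):
    """Filter with a bank of Gabor kernels, take the per-pixel maximum
    magnitude over orientations, and threshold at the mean response to
    keep edges of the major face components."""
    responses = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                  lambd, gamma=0.5, psi=0.0)
        resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
        responses.append(np.abs(resp))
    mag = np.max(responses, axis=0)
    return (mag > mag.mean()).astype(np.uint8)  # binary edge map
```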


Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.;Rahmat, Rahmita O.K.;Khalid, Fatimah;Taufik, Muhamad
    • Journal of Information Processing Systems / v.9 no.1 / pp.173-188 / 2013
  • In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to these areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. A one-vs.-rest SVM, a popular multi-class classification method, is employed with the Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate the approach.
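
A condensed sketch of that pipeline in Python follows. For brevity it crops fixed eye/nose/mouth bands from the Haar-detected face instead of running the separate per-component cascades the paper uses, and the band proportions, LBP settings, and SVM hyperparameters are all assumptions:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def lbp_hist(region, P=8, R=1):
    """Normalized uniform-LBP histogram of one facial region."""
    lbp = local_binary_pattern(region, P, R, method='uniform')
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def face_descriptor(gray):
    """Detect the face (assumes one is found), crop eye/nose/mouth
    bands, and concatenate their LBP histograms into one vector."""
    x, y, w, h = face_cascade.detectMultiScale(gray, 1.1, 5)[0]
    face = gray[y:y + h, x:x + w]
    eyes  = face[int(0.15 * h):int(0.45 * h), :]
    nose  = face[int(0.40 * h):int(0.70 * h), int(0.25 * w):int(0.75 * w)]
    mouth = face[int(0.65 * h):int(0.95 * h), int(0.15 * w):int(0.85 * w)]
    return np.concatenate([lbp_hist(r) for r in (eyes, nose, mouth)])

# One-vs.-rest SVM with an RBF kernel, as the paper describes.
clf = OneVsRestClassifier(SVC(kernel='rbf', gamma='scale'))
```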

Local Feature Based Facial Expression Recognition Using Adaptive Decision Tree (적응형 결정 트리를 이용한 국소 특징 기반 표정 인식)

  • Oh, Jihun;Ban, Yuseok;Lee, Injae;Ahn, Chunghyun;Lee, Sangyoun
    • The Journal of Korean Institute of Communications and Information Sciences / v.39A no.2 / pp.92-99 / 2014
  • This paper proposes a facial expression recognition method based on a decision tree structure. From facial expression images, the Active Shape Model (ASM) and Local Binary Pattern (LBP) are used to extract local features. Discriminant features derived from these local features are used to classify every pairwise combination of facial expressions. Based on the count of correct classifications, the pairing of facial expressions with local regions is decided, and integrating these branch classifiers generates the decision tree. Facial expression recognition based on the resulting decision tree shows better recognition performance than the same method without it.
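
The core bookkeeping, scoring every (expression pair, local region) combination so the best one can sit at each tree node, could be sketched as below; `LinearSVC` stands in for the paper's discriminant classifier, and the `region_feats` layout is an assumption:

```python
import numpy as np
from itertools import combinations
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def rank_pairwise_splits(region_feats, labels, cv=5):
    """region_feats: dict mapping a region name (e.g. 'eyes', 'mouth')
    to an (n_samples, n_features) array of ASM/LBP local features.
    Returns (pair, region) combinations sorted by pairwise accuracy;
    the best-scoring combination would occupy a decision-tree node."""
    labels = np.asarray(labels)
    scores = {}
    for a, b in combinations(np.unique(labels), 2):
        mask = np.isin(labels, [a, b])
        for region, X in region_feats.items():
            acc = cross_val_score(LinearSVC(dual=False),
                                  X[mask], labels[mask], cv=cv).mean()
            scores[(a, b, region)] = acc
    return sorted(scores.items(), key=lambda kv: -kv[1])
```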

A Local Feature-Based Robust Approach for Facial Expression Recognition from Depth Video

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1390-1403 / 2016
  • Facial expression recognition (FER) plays a very significant role in computer vision, pattern recognition, and image processing applications such as human-computer interaction, as it provides rich information about people's emotions. For video-based facial expression recognition, depth cameras can be better candidates than RGB cameras: a person's identity cannot easily be recognized from distance-based depth video, so depth cameras also resolve some of the privacy issues that arise with RGB faces. A good FER system relies heavily on robust feature extraction as well as on its recognition engine. In this work, a novel, efficient approach is proposed to recognize facial expressions from time-sequential depth videos. First, Local Binary Pattern (LBP) features are obtained from the time-sequential depth faces and made more robust by Generalized Discriminant Analysis (GDA); finally, the LBP-GDA features are fed into Hidden Markov Models (HMMs) to train and recognize the different facial expressions. The proposed depth-based facial expression recognition approach is compared to conventional approaches such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and it outperforms them with better recognition rates.
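
The train/recognize skeleton could look like this sketch using `hmmlearn`; scikit-learn's LDA stands in for the paper's Generalized Discriminant Analysis (its kernelized variant), and the state count is an assumption:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from hmmlearn.hmm import GaussianHMM

def train_expression_hmms(sequences, labels, n_states=3):
    """sequences: list of (T, D) arrays of per-frame LBP features,
    one per depth video; labels: the expression of each video."""
    lda = LinearDiscriminantAnalysis()
    frame_labels = np.repeat(labels, [len(s) for s in sequences])
    lda.fit(np.vstack(sequences), frame_labels)
    models = {}
    for c in set(labels):
        seqs = [lda.transform(s) for s, y in zip(sequences, labels) if y == c]
        X, lengths = np.vstack(seqs), [len(s) for s in seqs]
        models[c] = GaussianHMM(n_components=n_states,
                                covariance_type='diag',
                                n_iter=50).fit(X, lengths)
    return lda, models

def recognize(lda, models, sequence):
    """Pick the expression whose HMM assigns the highest log-likelihood."""
    Z = lda.transform(sequence)
    return max(models, key=lambda c: models[c].score(Z))
```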

Analogical Face Generation based on Feature Points

  • Yoon, Andy Kyung-yong;Park, Ki-cheul;Oh, Duck-kyo;Cho, Hye-young;Jang, Jung-hyuk
    • Journal of Multimedia Information System / v.6 no.1 / pp.15-22 / 2019
  • There are many ways to perform face recognition, but its first step is always face detection: if no face is found, recognition fails. Face detection is difficult because the face varies with size, left-right and up-down rotation, profile versus frontal view, facial expression, and lighting conditions. In this study, facial features are extracted and then geometrically reconstructed in order to improve the recognition rate in the extracted face region. The method also aims to adjust the face angle using the reconstructed facial feature vector and thereby improve the recognition rate at each face angle. In recognition experiments using the geometrically reconstructed results, recognition performance improved for both up-down and left-right facial angles.
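
As a minimal stand-in for the geometric reconstruction step, the following sketch computes a least-squares similarity transform (Procrustes alignment) from detected feature points to a frontal template; the paper's actual per-angle reconstruction of feature vectors is more involved:

```python
import numpy as np

def align_to_frontal(points, template):
    """Least-squares scale/rotation/translation mapping detected
    landmarks onto a frontal template (reflections ignored for brevity).
    points, template: (N, 2) arrays of corresponding feature points."""
    mu_p, mu_t = points.mean(axis=0), template.mean(axis=0)
    P, T = points - mu_p, template - mu_t
    U, S, Vt = np.linalg.svd(P.T @ T)     # orthogonal Procrustes
    R = U @ Vt
    scale = S.sum() / (P ** 2).sum()
    return lambda q: scale * (q - mu_p) @ R + mu_t
```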

Improvement of Face Recognition Rate by Normalization of Facial Expression (표정 정규화를 통한 얼굴 인식율 개선)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.15B no.5 / pp.477-486 / 2008
  • Facial expression, which changes face geometry, usually has an adverse effect on the performance of a face recognition system. To improve the face recognition rate, we propose a normalization method that diminishes the difference in facial expression between probe and gallery faces. Two approaches are used for facial expression modeling and normalization from single still images, using a generic facial muscle model and without the need for large image databases. The first approach estimates the geometry parameters of linear muscle models to obtain a biologically inspired model of the facial expression, which can afterwards be changed intuitively. The second approach uses RBF (Radial Basis Function) based interpolation and warping to normalize the facial muscle model to an unexpressed face according to the given expression. As a preprocessing stage for face recognition, these approaches achieve significantly higher recognition rates than the un-normalized case for the eigenface approach, local binary patterns, and a grey-scale correlation measure.
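
The second approach's RBF warping could be sketched with SciPy's `RBFInterpolator` as below: landmark correspondences from the expressive face (`src_pts`) to its estimated neutral geometry (`dst_pts`) are interpolated over the whole image via backward mapping. This is a minimal sketch of RBF-based warping, not the paper's muscle-model pipeline, and all names are assumptions:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expression_normalize(image, src_pts, dst_pts):
    """Warp an expressive face toward its neutral geometry: for each
    output pixel (at neutral geometry), an RBF interpolant over the
    landmark pairs says where to sample in the expressive image."""
    h, w = image.shape[:2]
    rbf = RBFInterpolator(dst_pts, src_pts, kernel='thin_plate_spline')
    grid = np.stack(np.meshgrid(np.arange(w), np.arange(h)), -1).reshape(-1, 2)
    src = rbf(grid.astype(float)).round().astype(int)
    src[:, 0] = src[:, 0].clip(0, w - 1)
    src[:, 1] = src[:, 1].clip(0, h - 1)
    return image[src[:, 1], src[:, 0]].reshape(h, w, *image.shape[2:])
```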

Difference of Facial Emotion Recognition and Discrimination between Children with Attention-Deficit Hyperactivity Disorder and Autism Spectrum Disorder (주의력결핍과잉행동장애 아동과 자폐스펙트럼장애 아동에서 얼굴 표정 정서 인식과 구별의 차이)

  • Lee, Ji-Seon;Kang, Na-Ri;Kim, Hui-Jeong;Kwak, Young-Sook
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.27 no.3 / pp.207-215 / 2016
  • Objectives: This study aimed to investigate the differences in facial emotion recognition and discrimination ability between children with attention-deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). Methods: Fifty-three children aged 7 to 11 years participated in this study. Among them, 43 were diagnosed with ADHD and 10 with ASD. The parents of the participants completed the Korean version of the Child Behavior Checklist, the ADHD Rating Scale, and Conners' scale. The participants completed the Korean Wechsler Intelligence Scale for Children, fourth edition, the Advanced Test of Attention (ATA), the Penn Emotion Recognition Task, and the Penn Emotion Discrimination Task. Group differences in facial emotion recognition and discrimination ability were analyzed using analysis of covariance, controlling for the visual omission error index of the ATA. Results: The children with ADHD showed better recognition of happy and sad faces and fewer false-positive neutral responses than those with ASD. The children with ADHD also recognized emotions better than those with ASD on female faces and for extreme facial expressions, but not on male faces or for mild facial expressions. We found no differences in facial emotion discrimination between the children with ADHD and ASD. Conclusion: Our results suggest that children with ADHD recognize facial emotions better than children with ASD, but still have deficits. Interventions that consider these differing emotion recognition and discrimination abilities are needed.

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.2 / pp.105-110 / 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals; computers need technologies that recognize emotion the way humans do, from combined information. In this paper, we recognize five emotions (normal, happiness, anger, surprise, sadness) from the speech signal and the facial image, and we propose a multimodal method that fuses the two recognition results into a single emotion decision. Emotion recognition on the speech signal and on the facial image each uses Principal Component Analysis (PCA), and the multimodal stage fuses the results by applying a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; the speech signal thus offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion offers a better emotion recognition rate than either the facial image or the speech signal alone.
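
The decision fusion step can be illustrated with the standard S-shaped fuzzy membership function; the breakpoints and the equal weighting of the two modalities are assumptions, not the paper's tuned values:

```python
import numpy as np

def s_membership(x, a, b):
    """Standard S-shaped fuzzy membership function rising on [a, b]."""
    x = np.asarray(x, dtype=float)
    m = (a + b) / 2.0
    return np.where(x <= a, 0.0,
           np.where(x >= b, 1.0,
           np.where(x <= m, 2 * ((x - a) / (b - a)) ** 2,
                    1 - 2 * ((x - b) / (b - a)) ** 2)))

def fuse(speech_scores, face_scores, a=0.2, b=0.8):
    """Decision fusion: map each modality's per-emotion scores through
    the S-function, average them (equal weighting is an assumption),
    and pick the emotion with the highest fused membership."""
    fused = (s_membership(speech_scores, a, b)
             + s_membership(face_scores, a, b)) / 2.0
    return int(np.argmax(fused))
```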