• Title/Summary/Keyword: Facial Image Classification

Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed; Kabir, Md. Hasanul; Chae, Oksam
    • ETRI Journal / v.32 no.5 / pp.784-794 / 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction. However, it is not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in eight directions at each pixel and encoding them into an 8-bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost savings and classification accuracy. Two well-known machine learning methods, template matching and support vector machine, are used for classification on the Cohn-Kanade and Japanese Female Facial Expression (JAFFE) databases. The higher classification accuracy demonstrates the superiority of the LDP descriptor over other appearance-based feature descriptors.
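
For readers unfamiliar with the encoding, the LDP computation described above can be illustrated with a short NumPy/OpenCV sketch. This is not the authors' implementation; the Kirsch masks and the choice of k = 3 significant directions follow the common LDP formulation and are assumptions here.

```python
import numpy as np
import cv2

# Eight Kirsch edge masks (E, NE, N, NW, W, SW, S, SE directions).
KIRSCH = [
    np.array([[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]]),
    np.array([[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]]),
    np.array([[5, 5, 5], [-3, 0, -3], [-3, -3, -3]]),
    np.array([[5, 5, -3], [5, 0, -3], [-3, -3, -3]]),
    np.array([[5, -3, -3], [5, 0, -3], [5, -3, -3]]),
    np.array([[-3, -3, -3], [5, 0, -3], [5, 5, -3]]),
    np.array([[-3, -3, -3], [-3, 0, -3], [5, 5, 5]]),
    np.array([[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]]),
]

def ldp_codes(gray, k=3):
    """Encode each pixel by setting the bits of its k strongest edge responses."""
    responses = np.stack(
        [np.abs(cv2.filter2D(gray.astype(np.float32), -1, m)) for m in KIRSCH]
    )                                          # shape (8, H, W)
    order = np.argsort(responses, axis=0)      # direction ranks per pixel (ascending)
    top_k = order[-k:]                         # indices of the k strongest directions
    codes = np.zeros(gray.shape, dtype=np.uint8)
    for bits in top_k:
        codes |= (1 << bits).astype(np.uint8)  # set one bit per selected direction
    return codes

def ldp_histogram(gray, k=3):
    """Descriptor = normalized histogram of LDP codes over an image or patch."""
    hist, _ = np.histogram(ldp_codes(gray, k), bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```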

Landmark Selection Using CNN-Based Heat Map for Facial Age Prediction (안면 연령 예측을 위한 CNN기반의 히트 맵을 이용한 랜드마크 선정)

  • Hong, Seok-Mi; Yoo, Hyun
    • Journal of Convergence for Information Technology / v.11 no.7 / pp.1-6 / 2021
  • The purpose of this study is to improve the performance of an artificial neural network system for facial image analysis through an image landmark selection technique. For landmark selection, a CNN-based multi-layer ResNet model for classifying facial age is required. From the configured ResNet model, a heat map that captures how the output node changes in response to changes in the input is extracted. By combining multiple extracted heat maps, facial landmarks relevant to age classification are created. The importance of each pixel location can be analyzed through these facial landmarks. In addition, by removing pixels with low weights, a significant amount of input data can be reduced.
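
The heat map described above, which detects how the output node changes with changes in the input, corresponds to an input-gradient saliency map. Below is a minimal PyTorch sketch under that interpretation; the ResNet variant, the number of age classes, and the keep_ratio threshold are placeholders, not the paper's settings.

```python
import torch
from torchvision import models

# Placeholder backbone; the paper's multi-layer ResNet configuration is not reproduced.
model = models.resnet18(num_classes=8)   # e.g. 8 age bins (assumed)
model.eval()

def input_gradient_heatmap(image, target_class):
    """|d output[target_class] / d input| per pixel, summed over channels."""
    x = image.clone().unsqueeze(0).requires_grad_(True)   # (1, 3, H, W)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().sum(dim=1).squeeze(0)             # (H, W) heat map

def landmark_mask(heatmaps, keep_ratio=0.2):
    """Combine several heat maps and keep only the highest-weight pixels."""
    combined = torch.stack(heatmaps).mean(dim=0)
    threshold = torch.quantile(combined.flatten(), 1 - keep_ratio)
    return combined >= threshold    # Boolean mask of retained "landmark" pixels
```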

Hybrid Neural Classifier Combined with H-ART2 and F-LVQ for Face Recognition

  • Kim, Do-Hyeon; Cha, Eui-Young; Kim, Kwang-Baek
    • Institute of Control, Robotics and Systems Conference Proceedings / 2005.06a / pp.1287-1292 / 2005
  • This paper presents an effective pattern classification model by designing artificial neural network-based pattern classifiers for face recognition. First, an RGB image captured from a frame grabber is converted into an HSV image, which is closer to the human visual system. Then, the coarse facial region is extracted using the hue (H) and saturation (S) components while excluding the intensity (V) component, which is sensitive to environmental illumination. Next, the fine facial region is extracted by matching against edge- and gray-based templates. To obtain an illumination-invariant, well-conditioned facial image, histogram equalization and intensity compensation using an illumination plane are performed. The extracted and enhanced facial images are then used to train the pattern classification models. The proposed H-ART2 model, which has hierarchical ART2 layers, and the F-LVQ model, which is optimized by fuzzy membership, make it possible to classify facial patterns by optimizing the relations of clusters and searching clustered reference patterns effectively. Experimental results show that the proposed face recognition system matches the SVM model, which is widely used in face recognition, in recognition rate and surpasses it in classification speed. Moreover, a higher recognition rate can be acquired by combining the proposed neural classification models.
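
The preprocessing chain described (HSV conversion, coarse face extraction from hue and saturation only, histogram equalization) can be sketched with OpenCV as below. The skin-tone thresholds are illustrative assumptions, and the H-ART2 and F-LVQ classifiers themselves are not reproduced.

```python
import cv2
import numpy as np

def coarse_face_region(bgr):
    """Coarse facial region from hue/saturation only; V is illumination-sensitive."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Illustrative skin-tone band: H in [0, 179], S in [0, 255], V unconstrained.
    lower = np.array([0, 40, 0], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Remove small speckles before template-based fine extraction.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def enhance_face(gray_face):
    """Histogram equalization as a simple stand-in for the illumination compensation step."""
    return cv2.equalizeHist(gray_face)
```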

Cold sensitivity classification using facial image based on convolutional neural network

  • Ilkoo Ahn; Younghwa Baek; Kwang-Ho Bae; Bok-Nam Seo; Kyoungsik Jung; Siwoo Lee
    • The Journal of Korean Medicine / v.44 no.4 / pp.136-149 / 2023
  • Objectives: Facial diagnosis is an important part of clinical diagnosis in traditional East Asian Medicine. In this paper, we propose a model to quantitatively classify cold sensitivity using a fully automated facial image analysis system. Methods: We investigated cold sensitivity in 452 subjects. Cold sensitivity was determined using a questionnaire, and the Cold Pattern Score (CPS) was used for analysis. Subjects with a CPS score below the first quartile (low CPS group) belonged to the cold non-sensitivity group, and subjects with a CPS score above the third quartile (high CPS group) belonged to the cold sensitivity group. After splitting the facial images into train/validation/test sets, the train and validation sets were input into a convolutional neural network to train the model, and the classification accuracy was then calculated on the test set. Results: The classification accuracy between the low and high CPS groups using facial images of all subjects was 76.17%. The classification accuracy by sex was 69.91% for females and 62.86% for males. It is presumed that the deep learning model used facial color or facial shape to separate the low and high CPS groups, but it is difficult to determine specifically which feature was more important. Conclusions: The experimental results show that the low and high CPS groups can be classified with a modest level of accuracy using only facial images. More advanced models need to be developed to increase classification accuracy.
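
The quartile-based labeling and the train/validation/test split can be sketched as follows; the column name CPS, the split ratios, and the random seed are assumptions for illustration, not the study's protocol.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def make_cps_labels(df, cps_col="CPS"):
    """Keep only the lower- and upper-quartile subjects and label them 0 / 1."""
    q1, q3 = df[cps_col].quantile([0.25, 0.75])
    low = df[df[cps_col] <= q1].assign(label=0)    # cold non-sensitivity group
    high = df[df[cps_col] >= q3].assign(label=1)   # cold sensitivity group
    return pd.concat([low, high], ignore_index=True)

def split_sets(df, seed=42):
    """Illustrative 70/15/15 train/validation/test split, stratified by label."""
    train, rest = train_test_split(df, test_size=0.3, stratify=df["label"], random_state=seed)
    val, test = train_test_split(rest, test_size=0.5, stratify=rest["label"], random_state=seed)
    return train, val, test
```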

An Explainable Deep Learning-Based Classification Method for Facial Image Quality Assessment

  • Kuldeep Gurjar; Surjeet Kumar; Arnav Bhavsar; Kotiba Hamad; Yang-Sae Moon; Dae Ho Yoon
    • Journal of Information Processing Systems / v.20 no.4 / pp.558-573 / 2024
  • Considering factors such as illumination, camera quality variations, and background-specific variations, identifying a face using a smartphone-based facial image capture application is challenging. Face Image Quality Assessment refers to the process of taking a face image as input and producing some form of "quality" estimate as output. Typically, quality assessment techniques use deep learning methods to categorize images, but such models behave as black boxes, which raises the question of their trustworthiness. Several explainability techniques have gained importance in building this trust. Explainability techniques provide visual evidence of the active regions within an image on which the deep learning model bases its prediction. Here, we developed a technique for reliable prediction of facial image quality before medical analysis and security operations. A combination of gradient-weighted class activation mapping and local interpretable model-agnostic explanations was used to explain the model. This approach has been applied to the preselection of facial images for skin feature extraction, which is important in critical medical science applications. We demonstrate that the combined explanations provide better visual evidence for the model, where both the saliency-map-based and perturbation-based explainability techniques verify the predictions.
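
The gradient-weighted class activation mapping (Grad-CAM) half of the explanation pipeline can be sketched with plain PyTorch hooks, as below; the LIME half would come from the lime package's LimeImageExplainer and is omitted here. The backbone and target layer are placeholders, not the paper's model.

```python
import torch
from torchvision import models

model = models.resnet18(num_classes=2)   # placeholder quality / non-quality classifier
model.eval()
target_layer = model.layer4              # last convolutional block

def grad_cam(image, target_class):
    """Grad-CAM: gradient-weighted sum of the target layer's activation maps."""
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    try:
        score = model(image.unsqueeze(0))[0, target_class]
        model.zero_grad()
        score.backward()
    finally:
        fwd.remove()
        bwd.remove()
    acts, grads = activations[0], gradients[0]            # each (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)        # global-average-pooled gradients
    cam = torch.relu((weights * acts).sum(dim=1)).squeeze(0)
    return cam / (cam.max() + 1e-8)                       # normalized (h, w) heat map
```

The resulting map can then be compared against a perturbation-based LIME mask over the same image, which is the combination the abstract refers to.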

Region-Based Facial Expression Recognition in Still Images

  • Nagi, Gawed M.; Rahmat, Rahmita O.K.; Khalid, Fatimah; Taufik, Muhamad
    • Journal of Information Processing Systems / v.9 no.1 / pp.173-188 / 2013
  • In facial expression recognition systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to these areas results in an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. Then, LBP is applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs.-rest SVM, a popular multi-class classification method, is employed with the radial basis function (RBF) kernel for facial expression classification. Experimental results show that this approach yields good performance for both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
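
A compressed sketch of the described pipeline follows: Haar cascade detection of informative regions, an LBP histogram per region, concatenation into a feature vector, and a one-vs.-rest RBF SVM. The cascade file is the standard OpenCV eye cascade (nose and mouth cascades vary by installation), and the LBP parameters are illustrative, not the paper's.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Standard cascade shipped with OpenCV; nose/mouth cascades would be added similarly.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def lbp_histogram(region, P=8, R=1):
    """Uniform LBP histogram for one facial region."""
    codes = local_binary_pattern(region, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / max(hist.sum(), 1)

def region_feature_vector(gray_face):
    """Concatenate LBP histograms of the detected informative regions."""
    feats = []
    for (x, y, w, h) in eye_cascade.detectMultiScale(gray_face, 1.1, 5):
        feats.append(lbp_histogram(gray_face[y:y + h, x:x + w]))
    return np.concatenate(feats) if feats else np.zeros(10)

# One-vs.-rest multi-class SVM with an RBF kernel, as in the abstract.
clf = OneVsRestClassifier(SVC(kernel="rbf", C=10.0, gamma="scale"))
# clf.fit(np.vstack(train_features), train_labels)
```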

Study of Emotion Recognition based on Facial Image for Emotional Rehabilitation Biofeedback (정서재활 바이오피드백을 위한 얼굴 영상 기반 정서인식 연구)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.957-962 / 2010
  • To recognize a human's emotion from a facial image, we first need to extract emotional features from the image using a feature extraction algorithm and then classify the emotional status using a pattern classification method. The AAM (Active Appearance Model) is a well-known method for representing a non-rigid object such as a face or facial expression. The Bayesian network is a probability-based classifier that can represent the probabilistic relationships among a set of facial features. In this paper, our approach to facial feature extraction is a proposed method that combines AAM with FACS (Facial Action Coding System) to automatically model and extract facial emotional features. To recognize the facial emotion, we use DBNs (Dynamic Bayesian Networks) to model and understand the temporal phases of facial expressions in image sequences. The emotion recognition results can be used for biofeedback-based rehabilitation of the emotionally disabled.
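
The temporal modeling step, a dynamic Bayesian network over sequences of AAM/FACS-derived features, is hard to reproduce in a few lines. As a rough stand-in only, the sketch below trains one Gaussian hidden Markov model per emotion and classifies a sequence by likelihood; this substitutes an HMM for the paper's DBN and assumes the per-frame feature vectors are already extracted.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # simple temporal model used here instead of the paper's DBN

def train_emotion_models(sequences_by_emotion, n_states=3):
    """Fit one HMM per emotion on its AAM/FACS feature sequences."""
    models = {}
    for emotion, seqs in sequences_by_emotion.items():
        X = np.vstack(seqs)                       # stacked frames: (total_frames, n_features)
        lengths = [len(s) for s in seqs]          # frame count per sequence
        models[emotion] = GaussianHMM(n_components=n_states,
                                      covariance_type="diag",
                                      n_iter=50).fit(X, lengths)
    return models

def classify_sequence(models, seq):
    """Pick the emotion whose model gives the highest log-likelihood for the sequence."""
    return max(models, key=lambda e: models[e].score(seq))
```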

Facial Expression Recognition by Combining Adaboost and Neural Network Algorithms (에이다부스트와 신경망 조합을 이용한 표정인식)

  • Hong, Yong-Hee; Han, Young-Joon; Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.6 / pp.806-813 / 2010
  • Human facial expression conveys emotion most precisely, so it can be used as an efficient tool for delivering a person's intention to a computer. For fast and accurate recognition of facial expression in a 2D image, this paper proposes a new method that integrates a Discrete AdaBoost classification algorithm and a neural-network-based recognition algorithm. In the first step, the AdaBoost algorithm finds the position and size of a face in the input image. Second, the detected face image is fed into five AdaBoost strong classifiers, each trained for one facial expression. Finally, a neural-network-based recognition algorithm trained on the outputs of the AdaBoost strong classifiers determines the final facial expression. The proposed algorithm achieves real-time performance and enhanced accuracy by combining the speed and accuracy of the AdaBoost classification algorithm with the reliability of the neural-network-based recognition algorithm. The proposed algorithm recognizes five facial expressions, namely neutral, happiness, sadness, anger, and surprise, and achieves 86-95% accuracy in real time, depending on the expression type.
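
The three-stage structure (AdaBoost face detection, five per-expression strong classifiers, a neural network over their outputs) can be sketched as follows. OpenCV ships only the Viola-Jones face-detection cascade, so the five per-expression classifiers are represented abstractly as score functions; the face size and MLP settings are assumptions, not the paper's.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

EXPRESSIONS = ["neutral", "happiness", "sadness", "anger", "surprise"]

# Stage 1: AdaBoost cascade face detector (Viola-Jones) from OpenCV.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray, size=(64, 64)):
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])    # keep the largest detection
    return cv2.resize(gray[y:y + h, x:x + w], size)

# Stage 2: five per-expression strong classifiers (placeholders), one score each.
def strong_classifier_scores(face, expression_classifiers):
    """expression_classifiers: dict mapping each expression to a callable returning a confidence."""
    return np.array([expression_classifiers[e](face) for e in EXPRESSIONS])

# Stage 3: a small neural network maps the five scores to the final expression.
final_classifier = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
# final_classifier.fit(score_matrix, expression_labels)   # trained on stage-2 outputs
```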

Facial Image Type Classification and Shape Differences focus on 20s Korean Women (20대 한국여성의 얼굴이미지 유형과 형태적 특성)

  • Baek, Kyoung-Jin; Kim, Young-In
    • Journal of the Korean Society of Costume / v.64 no.3 / pp.62-76 / 2014
  • The purpose of this study is to classify the facial images of Korean women in their 20s and analyze their shape characteristics. Previous research and a survey were used for the study; the survey targeted 220 university students in their 20s, and the subjects of the experiment were 20-24-year-old Korean women. The SPSS 12.0 statistics program was used to analyze the results, and factor analysis, Cronbach's α reliability analysis, and multidimensional scaling (MDS) were performed. The results of the study are as follows. First, the facial image types of Korean women in their 20s were classified into four categories: 'Youthfulness', 'Classiness', 'Friendliness', and 'Activeness'. Second, multidimensional scaling suggested two orthogonal dimensions for the facial image of Korean women: strong-soft and classy-friendly. Third, analysis of the basic statistics concerning the structural characteristics of the facial images showed differences in the structural characteristics that form the facial images; in particular, significant differences appeared in items related to the forehead, eyebrows, eyes, and jaw.
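
For reference, the two statistical steps named above, Cronbach's α reliability and multidimensional scaling, can be reproduced with standard Python tooling as sketched below; the data shapes are illustrative and do not reflect the study's survey data.

```python
import numpy as np
from sklearn.manifold import MDS

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of survey ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

def image_dimensions(dissimilarity_matrix, n_dims=2, seed=0):
    """Project the facial-image terms onto two orthogonal dimensions via MDS."""
    mds = MDS(n_components=n_dims, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(dissimilarity_matrix)
```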

Study for Classification of Facial Expression using Distance Features of Facial Landmarks (얼굴 랜드마크 거리 특징을 이용한 표정 분류에 대한 연구)

  • Bae, Jin Hee; Wang, Bo Hyeon; Lim, Joon S.
    • Journal of IKEEE / v.25 no.4 / pp.613-618 / 2021
  • Facial expression recognition has long been a subject of continuous research in various fields. In this paper, the relationship between landmarks is analyzed using features obtained by calculating the distances between facial landmarks in an image, and five facial expressions are classified. We increased data and label reliability through a labeling process involving multiple observers. In addition, faces were detected in the original data, and landmark coordinates were extracted and used as features. A genetic algorithm was used to select the features that are relatively more helpful for classification. We performed facial expression classification and analysis with the proposed method, and the results show its validity and effectiveness.
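
A minimal sketch of the two feature steps, pairwise landmark distances and genetic-algorithm feature selection, is given below. It assumes landmark coordinates are already extracted (e.g. by dlib or MediaPipe), and the GA is a bare-bones illustration rather than the paper's configuration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def distance_features(landmarks):
    """All pairwise Euclidean distances between landmark points, shape (n_points, 2)."""
    return pdist(landmarks)            # length n_points * (n_points - 1) / 2

def ga_select(X, y, n_gen=20, pop_size=30, rng=np.random.default_rng(0)):
    """Toy genetic algorithm: evolve boolean feature masks that maximize CV accuracy."""
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat)) < 0.5          # random initial masks

    def fitness(mask):
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

    for _ in range(n_gen):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the best half
        children = parents.copy()
        children ^= rng.random(children.shape) < 0.02        # bit-flip mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(m) for m in pop])]          # best feature mask
```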