• Title/Summary/Keyword: Recognition of Facial Expressions


A Study on Emotion Recognition Systems based on the Probabilistic Relational Model Between Facial Expressions and Physiological Responses (생리적 내재반응 및 얼굴표정 간 확률 관계 모델 기반의 감정인식 시스템에 관한 연구)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.19 no.6 / pp.513-519 / 2013
  • Current vision-based approaches to emotion recognition, such as facial expression analysis, have many technical limitations in real circumstances and are not suitable as the sole basis for practical applications. In this paper, we propose an approach to emotion recognition that combines the extrinsic representations and intrinsic activities found among the natural responses of humans given specific stimuli for inducing emotional states. The intrinsic activities can be used to compensate for the uncertainty of extrinsic representations of emotional states. This combination is performed using PRMs (Probabilistic Relational Models), an extended version of Bayesian networks, which are learned with greedy-search and expectation-maximization algorithms. Facial expression-related extrinsic emotion features and physiological signal-based intrinsic emotion features from previous research are combined as the attributes of the PRMs in the emotion recognition domain. Maximum likelihood estimation with the given dependency structure and estimated parameter set is used to classify the label of the target emotional state. (A minimal classification sketch follows below.)
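The full PRM pipeline (structure learning by greedy search, parameters by expectation-maximization) is beyond a short snippet, but the final step, maximum likelihood classification over a fixed dependency structure, can be illustrated with a naive Bayes-style sketch in Python. This is a minimal sketch assuming the facial and physiological features have already been discretized; the variable cardinalities and data values are illustrative, not from the paper.

```python
import numpy as np

# Toy training data: each row is (facial_code, physio_code, emotion_label),
# with features already discretized. Values are illustrative, not from the paper.
data = np.array([
    [0, 0, 0], [0, 1, 0], [1, 1, 1], [1, 0, 1],
    [2, 1, 2], [2, 2, 2], [0, 0, 0], [2, 2, 2],
])
n_f, n_p, n_e = 3, 3, 3  # cardinalities of facial, physio, emotion variables

# MLE of P(E), P(F|E), P(P|E) with Laplace smoothing, assuming the
# dependency structure E -> F and E -> P.
prior = np.bincount(data[:, 2], minlength=n_e) + 1.0
cond_f = np.ones((n_e, n_f))
cond_p = np.ones((n_e, n_p))
for f, p, e in data:
    cond_f[e, f] += 1
    cond_p[e, p] += 1
prior /= prior.sum()
cond_f /= cond_f.sum(axis=1, keepdims=True)
cond_p /= cond_p.sum(axis=1, keepdims=True)

def classify(f, p):
    """Return the emotion label maximizing P(E) * P(f|E) * P(p|E)."""
    return int(np.argmax(prior * cond_f[:, f] * cond_p[:, p]))

print(classify(f=2, p=1))  # -> most likely emotion given both cues
```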

Face recognition using a sparse population coding model for receptive field formation of the simple cells in the primary visual cortex (주 시각피질에서의 단순세포 수용영역 형성에 대한 성긴 집단부호 모델을 이용한 얼굴인식)

  • 김종규;장주석;김영일
    • Journal of the Korean Institute of Telematics and Electronics C / v.34C no.10 / pp.43-50 / 1997
  • In this paper, we present a method that recognizes face images using a sparse population code, a learning model of the receptive fields of the simple cells in the primary visual cortex. Twenty front-view facial images from twenty persons were used for training, and 200 varied facial images, 20 per person, were used for testing. The correct recognition rate was 100% for the front-view test images, which included images with spectacles or various expressions, and 90% on average for the total set of input images, which included rotated faces. We analyzed the effect of the nonlinear function that determines the sparseness, and compared the recognition rate using the sparse population code with that using eigenvectors (eigenfaces), a compact code that contrasts with the sparse population code. (A rough comparison sketch follows below.)
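As a rough illustration of the comparison the abstract describes, the sketch below encodes faces with both an eigenface-style compact code and a sparse overcomplete dictionary code, then runs nearest-neighbour recognition on each. scikit-learn's MiniBatchDictionaryLearning stands in for the paper's receptive-field learning model, and the random data is only a placeholder.

```python
import numpy as np
from sklearn.decomposition import PCA, MiniBatchDictionaryLearning
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Stand-in data: 40 "face" vectors (e.g., flattened 16x16 crops), 2 per person.
X_train = rng.normal(size=(40, 256))
y = np.repeat(np.arange(20), 2)
X_test = X_train + rng.normal(scale=0.5, size=X_train.shape)  # noisy variants

# Compact code: projections onto leading eigenvectors (eigenface-style).
pca = PCA(n_components=20).fit(X_train)

# Sparse population code: an overcomplete dictionary with few active
# coefficients per face (a crude stand-in for the paper's learning model).
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   transform_algorithm="lasso_lars",
                                   random_state=0).fit(X_train)

# Nearest-neighbour recognition on either code.
for name, coder in [("eigenface", pca), ("sparse", dico)]:
    knn = KNeighborsClassifier(n_neighbors=1).fit(coder.transform(X_train), y)
    print(name, knn.score(coder.transform(X_test), y))
```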


Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents / v.18 no.3 / pp.11-20 / 2022
  • Human emotion recognition is an exciting topic that has attracted many researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Explorations in psychology show that emotional perception is influenced by facial expressions as well as by contextual information from the scene, such as human activities, interactions, and body poses. These explorations initiated a trend in computer vision of exploring the critical role of context, treating it as an additional modality alongside facial expressions when inferring emotion. However, contextual information has not been fully exploited. The scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, additive fusion in multimodal training is not practical, because the modalities do not contribute equally to the final prediction. The purpose of this paper is to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes emotions, emotional feelings, and the actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features according to their impact on the target emotional state (a minimal fusion sketch follows below). We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
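The following PyTorch sketch illustrates the general attention-based fusion idea, weighting each modality's features by a learned score rather than adding them equally. It is not the paper's network; the dimensions and the face/body/scene modality split are assumptions.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Weight each modality by a learned score instead of adding them equally."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one scalar score per modality feature

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_modalities, dim), e.g. face / body / scene features
        weights = torch.softmax(self.score(feats), dim=1)  # (batch, n_mod, 1)
        return (weights * feats).sum(dim=1)                # (batch, dim)

face, body, scene = torch.randn(3, 8, 128).unbind(0)  # three dummy modalities
fused = AttentionFusion(dim=128)(torch.stack([face, body, scene], dim=1))
print(fused.shape)  # torch.Size([8, 128])
```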

Discriminative Effects of Social Skills Training on Facial Emotion Recognition among Children with Attention-Deficit/Hyperactivity Disorder and Autism Spectrum Disorder

  • Lee, Ji-Seon;Kang, Na-Ri;Kim, Hui-Jeong;Kwak, Young-Sook
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.29 no.4 / pp.150-160 / 2018
  • Objectives: This study investigated the effect of social skills training (SST) on facial emotion recognition and discrimination in children with attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). Methods: Twenty-three children aged 7 to 10 years participated in our SST: 15 children diagnosed with ADHD and 8 with ASD. The participants' parents completed the Korean version of the Child Behavior Checklist (K-CBCL), the ADHD Rating Scale, and the Conners' Scale at baseline and post-treatment. The participants completed the Korean Wechsler Intelligence Scale for Children-IV (K-WISC-IV) and the Advanced Test of Attention at baseline, and the Penn Emotion Recognition and Discrimination Task at baseline and post-treatment. Results: Neither group showed significant changes in facial emotion recognition and discrimination between baseline and post-treatment. However, when controlling for the processing speed of the K-WISC and the social subscale of the K-CBCL, the ADHD group showed more improvement than the ASD group in total (p=0.049), female (p=0.039), sad (p=0.002), mild (p=0.015), female extreme (p=0.005), male mild (p=0.038), and Caucasian (p=0.004) facial expressions. Conclusion: SST improved facial expression recognition more effectively for children with ADHD than for children with ASD, who need additional training to support emotion recognition and discrimination.

Study of Facial Expression Recognition using Variable-sized Block (가변 크기 블록(Variable-sized Block)을 이용한 얼굴 표정 인식에 관한 연구)

  • Cho, Youngtak;Ryu, Byungyong;Chae, Oksam
    • Convergence Security Journal / v.19 no.1 / pp.67-78 / 2019
  • Most existing facial expression recognition methods use a uniform grid that divides the entire facial image into equal-sized blocks when describing facial features. This approach has two problems: blocks may include non-face background, which interferes with discriminating facial expressions, and the facial features contained in each block vary with the position, size, and orientation of the face in the input image. In this paper, we propose a variable-sized block method that determines the size and position of the blocks that best represent meaningful facial expression changes. As part of this effort, we propose a way to determine the optimal number, position, and size of each block based on facial feature points. For evaluation, we generate facial feature vectors using LDTP and construct a facial expression recognition system based on SVM (a simplified sketch of this pipeline follows below). Experimental results show that the proposed method outperforms the conventional uniform grid-based method. In particular, it adapts more effectively to changes in the input environment, showing relatively better performance than existing methods on images with large shape and orientation changes.
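A simplified sketch of the pipeline, with landmark-centred blocks standing in for the paper's variable-sized blocks and a plain gradient-orientation histogram standing in for LDTP (which is not reproduced here); the landmark positions and data are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def block_histogram(img, cx, cy, half, bins=8):
    """Orientation histogram over a landmark-centred block (stand-in for LDTP)."""
    h, w = img.shape
    y0, y1 = max(cy - half, 1), min(cy + half, h - 1)
    x0, x1 = max(cx - half, 1), min(cx + half, w - 1)
    gy, gx = np.gradient(img[y0:y1, x0:x1].astype(float))
    ang = np.arctan2(gy, gx)  # gradient orientation in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def describe(img, landmarks, half=8):
    # One variable-position block per facial feature point; the block size
    # could likewise be varied per landmark (fixed here for brevity).
    return np.concatenate([block_histogram(img, x, y, half) for x, y in landmarks])

# Toy usage with random images and fixed landmark positions.
rng = np.random.default_rng(0)
landmarks = [(30, 40), (70, 40), (50, 60), (50, 80)]  # eyes, nose, mouth (assumed)
X = np.stack([describe(rng.normal(size=(100, 100)), landmarks) for _ in range(20)])
y = rng.integers(0, 3, size=20)  # 3 expression classes
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```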

Person-Independent Facial Expression Recognition with Histograms of Prominent Edge Directions

  • Makhmudkhujaev, Farkhod;Iqbal, Md Tauhid Bin;Arefin, Md Rifat;Ryu, Byungyong;Chae, Oksam
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.12 / pp.6000-6017 / 2018
  • This paper presents a new descriptor, named Histograms of Prominent Edge Directions (HPED), for the recognition of facial expressions in a person-independent environment. We raise the issue of sampling error in generating code-histograms from spatial regions of the face image, as observed with existing descriptors. HPED describes facial appearance changes through the statistical distribution of the top two prominent edge directions (the primary and secondary directions) captured over small spatial regions of the face. Compared to existing descriptors, HPED uses fewer code-bins to describe each spatial region, which helps avoid sampling error when samples are few while preserving valuable spatial information. In contrast to the existing Histogram of Oriented Gradients (HOG), which uses only the histogram of the primary edge direction (i.e., gradient orientation), we additionally consider the histogram of the secondary edge direction, which provides more meaningful shape information about the local texture. Experiments on popular facial expression datasets demonstrate the superior performance of the proposed HPED against existing descriptors in a person-independent environment. (A simplified sketch of the top-two-directions idea follows below.)
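The sketch below illustrates the core idea with a common choice of compass operator (Kirsch-style masks; the paper's exact operator and parameters may differ): at each pixel the two strongest direction responses are found, and each small region contributes a primary-direction histogram and a secondary-direction histogram.

```python
import numpy as np
from scipy.ndimage import convolve

def kirsch_masks():
    """Eight compass masks, 45 degrees apart (one common choice of operator)."""
    ring = np.array([5, 5, 5, -3, -3, -3, -3, -3], float)
    pos = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = []
    for k in range(8):
        m = np.zeros((3, 3))
        for (i, j), v in zip(pos, np.roll(ring, k)):
            m[i, j] = v
        masks.append(m)
    return masks

MASKS = kirsch_masks()

def hped_like(region):
    """Histogram the two strongest edge directions at each pixel of a region."""
    resp = np.stack([convolve(region, m) for m in MASKS])  # (8, H, W)
    order = np.argsort(-resp, axis=0)                      # strongest first
    h1 = np.bincount(order[0].ravel(), minlength=8)        # primary direction
    h2 = np.bincount(order[1].ravel(), minlength=8)        # secondary direction
    return np.concatenate([h1, h2]) / (2 * region.size)

rng = np.random.default_rng(0)
face = rng.normal(size=(64, 64))
# Descriptor: concatenation of 16-bin histograms over small spatial regions.
desc = np.concatenate([hped_like(face[i:i + 16, j:j + 16])
                       for i in range(0, 64, 16) for j in range(0, 64, 16)])
print(desc.shape)  # 16 regions x 16 bins = (256,)
```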

An Efficient Facial Expression Recognition by Measuring Histogram Distance Based on Preprocessing (전처리 기반 히스토그램 거리측정에 의한 효율적인 표정인식)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.5 / pp.667-673 / 2009
  • This paper presents an efficient facial expression recognition method that measures histogram distance after preprocessing. The preprocessing, which combines a centroid shift and histogram equalization, is applied to improve recognition performance, and distance measurement is used to estimate the similarity between facial expressions. The centroid shift, based on the first-moment balance technique, not only makes recognition robust to variations in position and size but also reduces the distance-measurement load by excluding the background from recognition. Histogram equalization provides robustness to the poor contrast caused by varying light intensity. The proposed method was applied to recognizing 72 facial expression images (4 persons × 18 scenes) of 320×243 pixels. Three distances, city-block, Euclidean, and ordinal, were used as similarity measures between histograms (a sketch of the pipeline follows below). The experimental results show that the proposed method outperforms the same method without preprocessing, and that the ordinal distance outperforms both the city-block and Euclidean distances.
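A minimal sketch of the pipeline: centroid shift by the first moment, classic histogram equalization, and the three histogram distances. The paper does not spell out its "ordinal" distance, so the cumulative (match) distance is used here as one common reading of the term; the random images are placeholders.

```python
import numpy as np

def centroid_shift(img):
    """Shift the image so its intensity centroid (first moment) is centred."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = img.sum()
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    return np.roll(np.roll(img, h // 2 - int(cy), axis=0), w // 2 - int(cx), axis=1)

def equalize(img, levels=256):
    """Classic histogram equalization to reduce lighting variation."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * (levels - 1)).astype(np.uint8)

def histogram(img, levels=256):
    return np.bincount(img.ravel(), minlength=levels) / img.size

# Three histogram distances; "ordinal" here is the cumulative (match) distance.
city_block = lambda p, q: np.abs(p - q).sum()
euclidean  = lambda p, q: np.sqrt(((p - q) ** 2).sum())
ordinal    = lambda p, q: np.abs(np.cumsum(p) - np.cumsum(q)).sum()

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(243, 320)).astype(np.uint8)
b = rng.integers(0, 256, size=(243, 320)).astype(np.uint8)
pa, pb = histogram(equalize(centroid_shift(a))), histogram(equalize(centroid_shift(b)))
print(city_block(pa, pb), euclidean(pa, pb), ordinal(pa, pb))
```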

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions: Part B / v.12B no.7 s.103 / pp.795-802 / 2005
  • In this paper, we present a facial expression recognition-synthesis system that automatically recognizes 7 basic emotions and renders a face in a non-photorealistic style on a PDA. To recognize facial expressions, we first detect the face area within the image acquired from the camera, then apply a normalization procedure for geometric and illumination corrections. To classify a facial expression, we found that combining Gabor wavelets with an enhanced Fisher model gives the best results; in our case, the output is a set of 7 emotional weightings (a rough sketch of this classification stage follows below). This weighting information, transmitted to the PDA via a mobile network, is used for non-photorealistic facial expression animation. To render a 3-D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves expresses the timing of an expression more effectively than the linear interpolation method.
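The classification stage can be sketched as a Gabor filter bank followed by Fisher discriminant analysis. Plain LDA from scikit-learn stands in for the paper's enhanced Fisher model, and the filter bank, image sizes, and data are assumptions; the class posteriors play the role of the 7 emotional weightings.

```python
import numpy as np
from skimage.filters import gabor_kernel
from scipy.ndimage import convolve
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Small Gabor bank: 4 orientations x 2 frequencies (the paper's exact bank
# and its "enhanced" Fisher variant are not reproduced here).
kernels = [np.real(gabor_kernel(frequency=f, theta=t))
           for f in (0.1, 0.25)
           for t in np.arange(4) * np.pi / 4]

def gabor_features(img):
    """Mean absolute response per filter, a crude Gabor-jet summary."""
    return np.array([np.abs(convolve(img, k)).mean() for k in kernels])

rng = np.random.default_rng(0)
X = np.stack([gabor_features(rng.normal(size=(48, 48))) for _ in range(70)])
y = rng.integers(0, 7, size=70)  # 7 basic emotions

lda = LinearDiscriminantAnalysis().fit(X, y)
# Per-class posterior probabilities serve as the "7 emotional weightings"
# that would be sent to the PDA to drive the avatar animation.
print(lda.predict_proba(X[:1]).round(3))
```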

Analysis of children's Reaction in Facial Expression of Emotion (얼굴표정에서 나타나는 감정표현에 대한 어린이의 반응분석)

  • Yoo, Dong-Kwan
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.70-80 / 2013
  • The purpose of this study is to serve as basic material for research on facial expressions, by investigating and analyzing children's visual recognition responses to facial expressions of emotion and by surveying the verbal responses of boys and girls to each expression of emotion. The subjects were 108 children aged 6-8 (55 boys, 53 girls) who were able to understand the presented research tool; data were collected over two rounds of responses, using individual interviews and self-administered questionnaires. The research tool used in the questionnaires was classified into six expression types, joy, sadness, anger, surprise, disgust, and fear, chosen to elicit specific and accurate responses. Regarding visual recognition, both boys and girls responded with high frequency to the facial expressions of joy, sadness, anger, and surprise, and with low frequency to fear and disgust. Regarding verbal responses, heuristic responses, which either explored the face or reacted to its most striking parts, were frequent across all of joy, sadness, anger, surprise, disgust, and fear, while imaginative responses that created new stories prompted by the facial expression appeared for surprise, disgust, and fear.

Facial Expression Recognition using Face Alignment and AdaBoost (얼굴정렬과 AdaBoost를 이용한 얼굴 표정 인식)

  • Jeong, Kyungjoong;Choi, Jaesik;Jang, Gil-Jin
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.11 / pp.193-201 / 2014
  • This paper presents a facial expression recognition system built from face detection, face alignment, facial-unit extraction, and training and testing algorithms based on AdaBoost classifiers. First, the face region is found by a face detector; from this result, a face alignment algorithm extracts feature points. The facial units are a subset of action units generated by combining the obtained feature points. Facial units are generally more effective for smaller databases, represent facial expressions more efficiently, and reduce computation time, and hence can be applied to real-time scenarios (a toy sketch of the classification stage follows below). Experimental results in real scenarios showed that the proposed system performs excellently, with recognition rates over 90%.
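A toy sketch of the classification stage: facial units are approximated here by pairwise distances between aligned feature points, then fed to an AdaBoost classifier. The landmark model, feature construction, and data are placeholders, not the paper's exact action-unit combinations.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def facial_units(landmarks):
    """Pairwise distances between feature points, a simple stand-in for the
    action-unit combinations described in the paper."""
    n = len(landmarks)
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i in range(n) for j in range(i + 1, n)])

# Toy data: landmark sets would come from the face alignment step; here
# random perturbations of a mean shape stand in for real detections.
mean_shape = rng.normal(size=(10, 2))
X = np.stack([facial_units(mean_shape + rng.normal(scale=0.1, size=(10, 2)))
              for _ in range(60)])
y = rng.integers(0, 4, size=60)  # 4 expression classes

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=3).mean())
```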