• Title/Summary/Keyword: Facial emotion

Search Result 309

Difference of Facial Emotion Recognition and Discrimination between Children with Attention-Deficit Hyperactivity Disorder and Autism Spectrum Disorder (주의력결핍과잉행동장애 아동과 자폐스펙트럼장애 아동에서 얼굴 표정 정서 인식과 구별의 차이)

  • Lee, Ji-Seon;Kang, Na-Ri;Kim, Hui-Jeong;Kwak, Young-Sook
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.27 no.3 / pp.207-215 / 2016
  • Objectives: This study aimed to investigate differences in facial emotion recognition and discrimination ability between children with attention-deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). Methods: Fifty-three children aged 7 to 11 years participated in this study: 43 were diagnosed with ADHD and 10 with ASD. The parents of the participants completed the Korean version of the Child Behavior Checklist, the ADHD Rating Scale and the Conners scale. The participants completed the Korean Wechsler Intelligence Scale for Children-fourth edition, the Advanced Test of Attention (ATA), the Penn Emotion Recognition Task and the Penn Emotion Discrimination Task. Group differences in facial emotion recognition and discrimination ability were analyzed using analysis of covariance, controlling for the visual omission error index of the ATA. Results: The children with ADHD showed better recognition of happy and sad faces and fewer false-positive neutral responses than those with ASD. The children with ADHD also recognized emotions better than those with ASD on female faces and in extreme facial expressions, but not on male faces or in mild facial expressions. We found no differences in facial emotion discrimination between the children with ADHD and ASD. Conclusion: Our results suggest that children with ADHD recognize facial emotions better than children with ASD, but still show deficits. Interventions that take these different emotion recognition and discrimination abilities into account are needed.
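The covariate-controlled group comparison described above can be sketched in a few lines. This is a minimal illustration of regression-based covariate adjustment on made-up numbers, not the authors' ANCOVA procedure or data; the group sizes, effect sizes, and covariate are all assumptions:

```python
import numpy as np

# Hypothetical data: emotion-recognition scores for two groups, with an
# attention covariate (e.g. a visual omission error index) to control for.
rng = np.random.default_rng(0)
covariate = rng.normal(size=20)            # attention index (assumed)
group = np.array([0] * 10 + [1] * 10)      # 0 = group A, 1 = group B (toy labels)
score = 5.0 - 1.5 * group + 0.8 * covariate + rng.normal(scale=0.3, size=20)

def adjusted_group_means(score, covariate, group):
    """Remove the linear effect of the covariate, then compare group means."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    residual = score - X @ beta            # covariate-adjusted scores
    return residual[group == 0].mean(), residual[group == 1].mean()

m_a, m_b = adjusted_group_means(score, covariate, group)
print(m_a > m_b)   # group A keeps its higher mean after adjustment
```

A full ANCOVA would also test the significance of the group effect; this sketch only shows the adjustment step.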

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.2 / pp.105-110 / 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals; computers need technologies that combine such information in the same way. In this paper, we recognize five emotions (neutral, happiness, anger, surprise, sadness) from the speech signal and the facial image, and propose a multimodal method that fuses the two recognition results. Emotion recognition from both the speech signal and the facial image uses Principal Component Analysis (PCA), and the multimodal fusion combines the two results with a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image alone. To raise the recognition rate further, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion outperforms either the facial image or the speech signal alone.
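The decision-fusion step can be illustrated with a toy sketch. The standard Zadeh S-type membership function and the additive fusion rule below are assumptions for illustration; the paper's actual parameters and fusion rule are not reproduced here:

```python
def s_membership(x, a=0.3, b=0.7):
    """S-shaped membership function on [0, 1]: 0 below a, 1 above b,
    with a smooth quadratic transition and crossover at (a + b) / 2."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    m = (a + b) / 2
    if x <= m:
        return 2 * ((x - a) / (b - a)) ** 2
    return 1 - 2 * ((x - b) / (b - a)) ** 2

def fuse(speech_scores, face_scores):
    """Decision-level fusion: weight each modality's per-emotion score
    by its S-type membership and pick the highest fused score."""
    fused = {e: s_membership(speech_scores[e]) + s_membership(face_scores[e])
             for e in speech_scores}
    return max(fused, key=fused.get)

# Hypothetical per-emotion confidences from the two unimodal recognizers
speech = {"happiness": 0.8, "anger": 0.4, "sadness": 0.2}
face = {"happiness": 0.5, "anger": 0.6, "sadness": 0.3}
print(fuse(speech, face))  # → happiness
```

The S-function suppresses weak, unreliable scores and saturates strong ones, which is one plausible reason decision fusion can beat either modality alone.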

Emotion Training: Image Color Transfer with Facial Expression and Emotion Recognition (감정 트레이닝: 얼굴 표정과 감정 인식 분석을 이용한 이미지 색상 변환)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society / v.24 no.4 / pp.1-9 / 2018
  • We propose an emotional training framework that can detect initial symptoms of schizophrenia by analyzing emotion through changes in facial expression. We use Microsoft's Emotion API to obtain facial expressions and emotion values at the current moment, analyze these values, and recognize subtle facial expressions that change over time. To measure the emotions appearing in facial expressions over time, emotion states are classified with a peak analysis-based variance method. The proposed method analyzes deficits in emotion recognition and expressive ability by using characteristics that differ from the emotional state changes classified according to the six basic emotions proposed by Ekman. Finally, the analyzed values are integrated into an image color transfer framework so that users can easily recognize and train their own emotional changes.
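A peak/variance analysis of a per-frame emotion score trace can be sketched as below. The thresholds and the output labels are hypothetical, chosen only to illustrate the idea of classifying emotional change over time rather than the paper's actual method:

```python
import statistics

def classify_emotion_trace(trace, peak_height=0.6, var_threshold=0.02):
    """Toy peak/variance analysis of a per-frame emotion score trace
    (values in [0, 1]): count local peaks above peak_height and check
    how much the signal varies over time."""
    peaks = sum(
        1
        for i in range(1, len(trace) - 1)
        if trace[i] > trace[i - 1] and trace[i] > trace[i + 1]
        and trace[i] >= peak_height
    )
    var = statistics.pvariance(trace)
    if var < var_threshold:
        return "flat"          # little expressive change over time
    return "reactive" if peaks else "drifting"

# Hypothetical "happiness" scores over seven frames
happy_trace = [0.1, 0.2, 0.8, 0.3, 0.1, 0.7, 0.2]
print(classify_emotion_trace(happy_trace))   # → reactive
print(classify_emotion_trace([0.3] * 10))    # → flat
```

A "flat" trace here would correspond to the lack of expressive ability the framework is designed to surface.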

Research on Micro-Movement Responses of Facial Muscles by Intimacy, Empathy, Valence (친밀도, 공감도, 긍정도에 따른 얼굴 근육의 미세움직임 반응 차이)

  • Cho, Ji Eun;Park, Sang-In;Won, Myoung Ju;Park, Min Ji;Whang, Min-Cheol
    • The Journal of the Korea Contents Association / v.17 no.2 / pp.439-448 / 2017
  • Facial expression is an important factor in social interaction: facial muscle movement conveys emotional information that helps develop social relationships. However, facial movement has rarely been used to recognize social emotions. This study analyzes facial micro-movements to recognize social emotions such as intimacy, empathy, and valence. Seventy-six university students were presented with stimuli for social emotions while their facial expressions were recorded on camera. As a result, facial micro-movements showed significant differences across social emotions. After extracting the amount of movement of 3 unconscious muscles and 18 conscious muscles, the dominant frequency band was identified. Muscles around the nose and cheek differed significantly with intimacy, those around the mouth with empathy, and those around the jaw with valence. The results suggest new facial movements for expressing social emotion in virtual avatars and for recognizing social emotion.
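Extracting a dominant frequency band from a micro-movement signal can be sketched with an FFT. The 30 Hz frame rate and the synthetic two-component signal below are assumptions for illustration, not the study's data:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the dominant (highest-power, non-DC) frequency in Hz of a
    movement signal sampled at fs Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[0] = 0.0                      # ignore the DC offset
    return freqs[np.argmax(spectrum)]

fs = 30.0                                  # camera frame rate (assumed)
t = np.arange(0, 10, 1 / fs)               # 10 s of samples
# Synthetic micro-movement: a strong 1.2 Hz component plus a weak 5 Hz one
movement = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.sin(2 * np.pi * 5.0 * t)
print(dominant_frequency(movement, fs))    # → 1.2
```

Comparing such dominant frequencies per muscle region across conditions is one way the reported intimacy/empathy/valence differences could be quantified.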

Difficulty in Facial Emotion Recognition in Children with ADHD (주의력결핍 과잉행동장애의 이환 여부에 따른 얼굴표정 정서 인식의 차이)

  • An, Na Young;Lee, Ju Young;Cho, Sun Mi;Chung, Young Ki;Shin, Yun Mi
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.24 no.2 / pp.83-89 / 2013
  • Objectives : It is known that children with attention-deficit hyperactivity disorder (ADHD) experience significant difficulty in recognizing facial emotion, which involves processing of emotional facial expressions rather than speech, compared to children without ADHD. The objective of this study was to investigate the differences in facial emotion recognition between children with ADHD and normal children used as controls. Methods : The children for our study were recruited from the Suwon Project, a cohort comprising a non-random convenience sample of 117 nine-year-old ethnic Koreans. The parents of the study participants completed questionnaires including the Korean version of the Child Behavior Checklist, the ADHD Rating Scale, and the Kiddie-Schedule for Affective Disorders and Schizophrenia-Present and Lifetime Version. The Facial Expression Recognition Test of the Emotion Recognition Test was used to evaluate facial emotion recognition, and the ADHD Rating Scale was used to assess ADHD. Results : Children with ADHD (N=10) showed impaired recognition in Emotional Differentiation and Contextual Understanding compared with normal controls (N=24). We found no statistically significant difference in the recognition of positive facial emotions (happy and surprise) and negative facial emotions (anger, sadness, disgust and fear) between the children with ADHD and normal children. Conclusion : The results of our study suggest that facial emotion recognition may be closely associated with ADHD, after controlling for covariates, although more research is needed.

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society / v.8 no.4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in the human-robot interaction (HRI) field, for tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm. Inaccurate facial feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework using ASM and Lucas-Kanade (LK) optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
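The k-NN classification stage at the end of such a pipeline can be sketched as follows. The 2-D feature vectors and labels are invented for illustration (the paper tracks many ASM feature points, not two numbers):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """k-NN over facial-feature vectors: label the query by majority
    vote among its k nearest training samples (Euclidean distance)."""
    ranked = sorted(train, key=lambda sample: math.dist(sample[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D features (e.g. mouth-corner and brow displacements)
train = [
    ((0.9, 0.1), "joy"), ((0.8, 0.2), "joy"), ((0.85, 0.15), "joy"),
    ((0.1, 0.9), "anger"), ((0.2, 0.8), "anger"), ((0.15, 0.85), "anger"),
    ((0.1, 0.1), "disgust"), ((0.2, 0.1), "disgust"), ((0.15, 0.2), "disgust"),
]
print(knn_classify(train, (0.82, 0.18)))   # → joy
```

In practice the feature vectors would be the tracked ASM point coordinates (or displacements) produced by the ASM + LK stage, normalized for head pose and scale.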

Difference in reading facial expressions as the empathy-systemizing type - focusing on emotional recognition and emotional discrimination - (공감-체계화 유형에 따른 얼굴 표정 읽기의 차이 - 정서읽기와 정서변별을 중심으로 -)

  • Tae, Eun-Ju;Cho, Kyung-Ja;Park, Soo-Jin;Han, Kwang-Hee;Ghim, Hei-Rhee
    • Science of Emotion and Sensibility / v.11 no.4 / pp.613-628 / 2008
  • Mind reading is an essential part of normal social functioning, and empathy plays a key role in social understanding. This study investigated how individual differences affect reading emotions in facial expressions, focusing on empathizing and systemizing. Two experiments were conducted. In study 1, participants performed an emotion recognition test on facial expressions to investigate how emotion recognition differs by empathy-systemizing type, facial area, and emotion type. Study 2 examined the same factors using an emotion discrimination test instead, with every other condition the same as in study 1. Study 2 mostly replicated study 1: there were significant differences among facial areas and emotion types, as well as an interaction between facial area and emotion type. In addition, study 2 found an interaction between empathy-systemizing type and emotion type; that is, how much people empathize and systemize makes a difference in emotion discrimination. These results suggest that the empathy-systemizing type is more appropriate for explaining emotion discrimination than emotion recognition.

Effects of the facial expression presenting types and facial areas on the emotional recognition (얼굴 표정의 제시 유형과 제시 영역에 따른 정서 인식 효과)

  • Lee, Jung-Hun;Park, Soo-Jin;Han, Kwang-Hee;Ghim, Hei-Rhee;Cho, Kyung-Ja
    • Science of Emotion and Sensibility / v.10 no.1 / pp.113-125 / 2007
  • The experimental studies described in this paper investigate the effects of the face/eye/mouth areas, using dynamic and static facial expressions, on emotion recognition. Using seven-second displays, experiment 1 covered basic emotions and experiment 2 complex emotions. The results of the two experiments support that dynamic facial expressions yield higher emotion recognition than static ones, and indicate a stronger recognition effect for the eye area than the mouth area in dynamic images. These results suggest that dynamic properties should be considered in emotion research with facial expressions, for complex as well as basic emotions. However, the properties of each emotion should be considered, because not all emotions benefited equally from dynamic images. Furthermore, this study shows that which facial area conveys an emotional state most accurately depends on the particular emotion.

The Accuracy of Recognizing Emotion From Korean Standard Facial Expression (한국인 표준 얼굴 표정 이미지의 감성 인식 정확률)

  • Lee, Woo-Ri;Whang, Min-Cheol
    • The Journal of the Korea Contents Association / v.14 no.9 / pp.476-483 / 2014
  • The purpose of this study was to create suitable images for Korean emotional expressions. KSFI (Korean Standard Facial Image)-AUs were produced from the standard Korean appearance and FACS (Facial Action Coding System) AUs. To ensure the objectivity of the KSFI, a survey examined the emotion recognition rate and the contribution of facial elements to emotion recognition for the six basic emotional expression images (sadness, happiness, disgust, fear, anger and surprise). The images of happiness, surprise, sadness and anger showed higher accuracy, and the emotion recognition rate was mainly determined by the facial elements of the eyes and the mouth. Based on these results, KSFI content that combines AU images is proposed; in the future, the KSFI should be helpful content for improving emotion recognition rates.