• Title/Summary/Keyword: Facial expression recognition


Facial expression recognition-based contents preference inference system (얼굴 표정 인식 기반 컨텐츠 선호도 추론 시스템)

  • Lee, Yeon-Gon;Cho, Durkhyun;Jang, Jun Ik;Suh, Il Hong
    • Proceedings of the Korean Society of Computer Information Conference / 2013.01a / pp.201-204 / 2013
  • As the variety and volume of digital content have grown explosively, content preference voting has come to carry strong influence. However, the current approach, in which content consumers must cast votes themselves, suffers from low participation rates and a high risk of manipulation. This paper therefore proposes a system that automatically infers content preference by recognizing the emotions revealed in consumers' facial expressions. The proposed system aims to provide a more convenient, efficient, and reliable service by removing the burden, inconvenience, and manipulation risk of existing manual preference-voting systems. Accordingly, this paper presents a concrete method for building such a content preference inference system and demonstrates its practicality and efficiency through experiments.


Detection of Face Expression Based on Deep Learning (딥러닝 기반의 얼굴영상에서 표정 검출에 관한 연구)

  • Won, Chulho;Lee, Bub-ki
    • Journal of Korea Multimedia Society / v.21 no.8 / pp.917-924 / 2018
  • Recently, research using LBP and SVM has been performed as one of the image-based methods for facial emotion recognition. LBP, introduced by Ojala et al., is widely used in the field of image recognition due to its high discriminative power, robustness to illumination change, and simple computation. In addition, CS (Center-Symmetric)-LBP, a modified form of LBP, has been widely used for face recognition. In this paper, we propose a method to detect four facial expressions, namely expressionless, happiness, surprise, and anger, using a deep neural network. The validity of the proposed method is verified in terms of accuracy. Based on the existing LBP feature parameters, it was confirmed that the method using the deep neural network is superior to the method using AdaBoost and SVM classifiers.
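
As an illustration of the baseline feature this abstract compares against, the following is a minimal sketch (not the authors' code) of the basic 3×3 LBP operator and the histogram feature it yields; the function names and the 256-bin histogram are illustrative assumptions.

```python
import numpy as np

def lbp_3x3(image):
    """Basic 3x3 LBP: each interior pixel is encoded as an 8-bit pattern
    by comparing its 8 neighbors against the center value."""
    img = np.asarray(image, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    # neighbor offsets, clockwise from the top-left neighbor
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbor >= center).astype(np.uint8) << bit
    return out

def lbp_histogram(image, bins=256):
    """Feature vector: normalized histogram of the LBP codes."""
    codes = lbp_3x3(image)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)
```

Such a histogram would typically be fed to an SVM (or, as the paper argues, replaced by a deep network operating on the same LBP parameters).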

Discriminative Effects of Social Skills Training on Facial Emotion Recognition among Children with Attention-Deficit/Hyperactivity Disorder and Autism Spectrum Disorder

  • Lee, Ji-Seon;Kang, Na-Ri;Kim, Hui-Jeong;Kwak, Young-Sook
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.29 no.4 / pp.150-160 / 2018
  • Objectives: This study investigated the effect of social skills training (SST) on facial emotion recognition and discrimination in children with attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). Methods: Twenty-three children aged 7 to 10 years participated in our SST. They included 15 children diagnosed with ADHD and 8 with ASD. The participants' parents completed the Korean version of the Child Behavior Checklist (K-CBCL), the ADHD Rating Scale, and the Conners' Scale at baseline and post-treatment. The participants completed the Korean Wechsler Intelligence Scale for Children-IV (K-WISC-IV) and the Advanced Test of Attention at baseline, and the Penn Emotion Recognition and Discrimination Task at baseline and post-treatment. Results: No significant changes in facial emotion recognition and discrimination occurred in either group before and after SST. However, when controlling for the processing speed of the K-WISC and the social subscale of the K-CBCL, the ADHD group showed more improvement than the ASD group in total (p=0.049), female (p=0.039), sad (p=0.002), mild (p=0.015), female extreme (p=0.005), male mild (p=0.038), and Caucasian (p=0.004) facial expressions. Conclusion: SST improved facial expression recognition more effectively for children with ADHD than for children with ASD, who need additional training to support emotion recognition and discrimination.

Detection of Face and Facial Features in Complex Background from Color Images (복잡한 배경의 칼라영상에서 Face and Facial Features 검출)

  • 김영구;노진우;고한석
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.69-72
    • /
    • 2002
  • Human face detection has many applications such as face recognition, face or facial feature tracking, pose estimation, and expression recognition. We present a new method for automatic segmentation and face detection in color images. Skin color alone is usually not sufficient to detect faces, so we combine color segmentation with shape analysis. The algorithm consists of two stages. First, skin color regions are segmented based on the chrominance component of the input image. Then regions with elliptical shape are selected as face hypotheses, and each hypothesis is verified by searching for facial features in its interior. Experimental results demonstrate successful detection over a wide variety of facial variations in scale, rotation, pose, and lighting conditions.
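
The two-stage pipeline described above can be sketched as follows. This is an illustrative toy version, not the authors' implementation: the Cb/Cr skin thresholds and the ellipse heuristic are assumed values of the kind commonly cited in the literature.

```python
import numpy as np

def skin_mask(rgb):
    """Stage 1: threshold the chrominance (Cb, Cr) channels to find
    skin-colored pixels. The Cb/Cr ranges below are common illustrative
    values, not the thresholds used in the paper."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def is_elliptical(mask, min_fill=0.6, min_ratio=1.0, max_ratio=2.0):
    """Stage 2 (crude face hypothesis): the region's bounding box should
    be roughly upright and well filled, as an ellipse is."""
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return False
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    fill = len(ys) / (h * w)  # an ellipse fills ~pi/4 of its bounding box
    return min_ratio <= h / w <= max_ratio and fill >= min_fill
```

A surviving hypothesis would then be verified by searching its interior for facial features, as the abstract describes.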


Transfer Learning for Face Emotions Recognition in Different Crowd Density Situations

  • Amirah Alharbi
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.26-34 / 2024
  • Most human emotions are conveyed through facial expressions, which represent the predominant source of emotional data. This research investigates the impact of crowds on human emotions by analysing facial expressions. It examines how crowd behaviour, face recognition technology, and deep learning algorithms contribute to understanding how emotions change at different crowd levels. The study identifies common emotions expressed during congestion, differences between crowded and less crowded areas, and changes in facial expressions over time. The findings can inform urban planning and crowd event management by providing insights for developing coping mechanisms for affected individuals. However, limitations and challenges in reliable facial expression analysis are also discussed, including age- and context-related differences.

Song Player by Distance Measurement from Face (얼굴에서 거리 측정에 의한 노래 플레이어)

  • Shin, Seong-Yoon;Lee, Min-Hye;Shin, Kwang-Seong;Lee, Hyun-Chang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.667-669 / 2022
  • In this paper, Face Song Player, a system that recognizes an individual's facial expression and plays music appropriate for that person, is presented. The system learns information on facial contour lines, extracts an average, and thereby acquires facial shape information. The MUCT DB was used as the training database. For facial expression recognition, an algorithm was designed using the differences in the characteristics of each expression relative to expressionless images.


Development of Emotional Feature Extraction Method based on Advanced AAM (Advanced AAM 기반 정서특징 검출 기법 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.6 / pp.834-839 / 2009
  • The extraction of emotional features from facial images is a key element in recognizing a person's emotional state. In this paper, we propose an Advanced AAM, an improved version of previously proposed facial expression recognition systems based on Bayesian networks using FACS and AAM. This is a study of the most efficient way to find optimal facial feature areas for emotion recognition of arbitrary users in generalized HCI system environments. To this end, we apply statistical shape analysis to the normalized input image using the Advanced AAM, employ FACS for facial expression and emotional state analysis, and study automatic emotional feature extraction for arbitrary users.
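
One building block of the AAM-style statistical shape analysis mentioned above is shape normalization of facial landmarks. The sketch below shows a Procrustes-style landmark alignment; this is an assumed illustrative step, not the paper's actual procedure.

```python
import numpy as np

def align_shape(shape, reference):
    """Procrustes-style alignment: translate, scale, and rotate a set of
    2-D landmarks onto a reference shape, the normalization step that
    precedes building an AAM-style statistical shape model."""
    # remove translation and scale from both shapes
    s = shape - shape.mean(axis=0)
    r = reference - reference.mean(axis=0)
    s = s / np.linalg.norm(s)
    r = r / np.linalg.norm(r)
    # optimal rotation via SVD of the cross-covariance of the two shapes
    u, _, vt = np.linalg.svd(s.T @ r)
    return s @ (u @ vt)
```

Once all training shapes are aligned this way, PCA over the stacked landmark coordinates yields the statistical shape model that an AAM fits to a new face.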

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.7-15 / 2014
  • This paper proposes a recognition algorithm for human facial expressions using PCA and template matching. First, the face image is acquired from an input image using a Haar-like feature mask. The face image is divided into two parts: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. To extract the facial components, an eigenface is first produced by PCA training on the learning images, and an eigeneye and an eigenmouth are derived from it. The eye image is then obtained by template-matching the upper image against the eigeneye, and the mouth image by template-matching the lower image against the eigenmouth. Expression recognition uses geometric properties of the eyes and mouth. Simulation results show that the proposed method achieves a higher extraction ratio than previous results; in particular, the extraction ratio for mouth images reaches 99%. A facial expression recognition system using the proposed method achieves a recognition ratio greater than 80% for three facial expressions: fright, anger, and happiness.
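
The PCA-training and template-matching steps described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's code: the function names, the use of normalized cross-correlation as the matching score, and the component count are assumptions.

```python
import numpy as np

def train_pca(images, n_components=8):
    """Learn a mean patch and eigen-templates from flattened training patches."""
    X = np.asarray([im.ravel() for im in images], dtype=np.float64)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]  # mean patch + principal components

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def match_template(region, template):
    """Slide the (eigen-)template over the region; return the top-left
    corner of the best-scoring window."""
    th, tw = template.shape
    best, best_pos = -np.inf, (0, 0)
    for y in range(region.shape[0] - th + 1):
        for x in range(region.shape[1] - tw + 1):
            score = ncc(region[y:y + th, x:x + tw], template)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

In the paper's setup, the eigeneye template would be matched against the upper half-image and the eigenmouth against the lower half-image to localize each component.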

Gender Differences in Empathic Ability and Facial Emotion Recognition of Schizophrenic Patients (성별에 따른 조현병 환자의 공감 능력 및 얼굴 정서 인식 능력의 차이)

  • Kim, Ki-Chang;Son, Jung-Woo;Ghim, Hei-Rhee;Lee, Sang-Ick;Shin, Chul-Gin;Kim, Sie-Kyeong;Ju, Gawon;Eom, Jin-Sup;Jung, Myung-Sook;Park, Min;Moon, Eunok;Cheon, Young-Un
    • Korean Journal of Biological Psychiatry / v.21 no.1 / pp.21-27 / 2014
  • Objectives: The aim of the present study was to investigate gender differences in empathic ability and facial emotion recognition in schizophrenic patients. Methods: Twenty-two schizophrenic outpatients (11 men and 11 women) and controls (10 men and 12 women) completed both the Empathy Quotient (EQ) scale and a facial emotion recognition test. We compared the EQ and facial emotion recognition scores across groups according to diagnosis and gender. Results: We found a significant sex difference in the EQ and facial emotion recognition scores among the schizophrenic patients. There was also a significant negative correlation between the facial emotion recognition score and the Positive and Negative Syndrome Scale (PANSS) scores in female schizophrenic patients, whereas in male schizophrenic patients no significant correlations were found between the test scores and the PANSS scores. Conclusions: This study suggests that sex differences in empathic ability and facial emotion recognition may be very important in chronic schizophrenic patients. Investigating sex effects on empathic ability and facial emotion recognition in chronic schizophrenic patients could provide an important basis for constructing optimal rehabilitation programs.