• Title/Summary/Keyword: facial expression analysis


Children's Interpretation of Facial Expression onto Two-Dimension Structure of Emotion (정서의 이차원 구조에서 유아의 얼굴표정 해석)

  • Shin, Young-Suk;Chung, Hyun-Sook
    • Korean Journal of Cognitive Science / v.18 no.1 / pp.57-68 / 2007
  • This study explores children's understanding of emotion from facial expressions mapped onto a two-dimensional structure of emotion. Eighty-nine children aged 3 to 5 were asked to match facial expressions to fourteen emotion terms. The facial expressions used in the experiment were photographs whose degree of expression on each of the two dimensions (pleasure-displeasure and arousal-sleep) had been rated on a nine-point scale by 54 university students. The results showed that the children were more stable on the arousal dimension than on the pleasure-displeasure dimension. Sadness, sleepiness, anger, and surprise were understood well on both dimensions, whereas fear and boredom showed instability on the pleasure-displeasure dimension. In particular, 3-year-old children perceived degrees of arousal-sleep better than degrees of pleasure-displeasure.
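As a simple illustration of the rating scheme described above, the following sketch places one photograph on the pleasure-displeasure and arousal-sleep plane by averaging nine-point ratings; the rating values are hypothetical, not the study's materials or data.

```python
# Illustrative only: hypothetical nine-point ratings for one photograph,
# one row per rater, columns = (pleasure-displeasure, arousal-sleep).
import numpy as np

ratings = np.array([
    [7.0, 8.0],
    [6.5, 7.5],
    [8.0, 8.5],
])

# The photograph's position on the two-dimensional emotion structure is taken
# as the mean rating over raters on each dimension.
pleasure, arousal = ratings.mean(axis=0)
print(f"pleasure-displeasure: {pleasure:.2f}, arousal-sleep: {arousal:.2f}")
```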


Detection of Face and Facial Features in Complex Background from Color Images (복잡한 배경의 칼라영상에서 Face and Facial Features 검출)

  • 김영구;노진우;고한석
    • Proceedings of the IEEK Conference / 2002.06d / pp.69-72 / 2002
  • Human face detection has many applications, such as face recognition, face or facial feature tracking, pose estimation, and expression recognition. We present a new method for automatic segmentation and face detection in color images. Skin color alone is usually not sufficient to detect faces, so we combine color segmentation with shape analysis. The algorithm consists of two stages. First, skin color regions are segmented based on the chrominance components of the input image. Then regions with an elliptical shape are selected as face hypotheses, and each hypothesis is verified by searching for facial features in its interior. Experimental results demonstrate successful detection over a wide variety of facial variations in scale, rotation, pose, and lighting conditions.
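A rough sketch of the two-stage idea (chrominance-based skin segmentation, then elliptical-shape hypotheses), written with OpenCV; the YCrCb thresholds, area cut-off, and aspect-ratio test are illustrative assumptions, not the authors' parameters.

```python
import cv2

def face_candidates(bgr_image):
    # Stage 1: segment skin-coloured pixels on the chrominance (Cr, Cb) components.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Stage 2: keep connected regions whose fitted ellipse is roughly face-shaped.
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for contour in contours:
        if len(contour) < 5 or cv2.contourArea(contour) < 500:   # fitEllipse needs >= 5 points
            continue
        (cx, cy), axes, angle = cv2.fitEllipse(contour)
        minor, major = sorted(axes)
        if minor > 0 and 1.1 <= major / minor <= 2.0:             # roughly face-like proportions
            candidates.append(((cx, cy), (minor, major), angle))
    # The paper then verifies each hypothesis by searching for facial features inside it.
    return candidates
```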


Transfer Learning for Face Emotions Recognition in Different Crowd Density Situations

  • Amirah Alharbi
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.26-34 / 2024
  • Most human emotions are conveyed through facial expressions, which represent the predominant source of emotional data. This research investigates the impact of crowds on human emotions by analysing facial expressions. It examines how crowd behaviour, face recognition technology, and deep learning algorithms contribute to understanding how emotions change at different levels of crowd density. The study identifies common emotions expressed during congestion, differences between crowded and less crowded areas, and changes in facial expressions over time. The findings can inform urban planning and crowd event management by providing insights for developing coping mechanisms for affected individuals. Limitations and challenges in obtaining reliable facial expression analysis, including age- and context-related differences, are also discussed.
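A minimal transfer-learning setup of the kind the abstract describes, sketched with PyTorch; the ResNet-18 backbone, the label set, and the hyperparameters are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn
from torchvision import models

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]   # placeholder label set

# Pretrained ImageNet backbone; only the new classification head is trained.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                                 # freeze the feature extractor
model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))       # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of face crops (3x224x224).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(EMOTIONS), (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```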

CREATING JOYFUL DIGESTS BY EXPLOITING SMILE/LAUGHTER FACIAL EXPRESSIONS PRESENT IN VIDEO

  • Kowalik, Uwe;Hidaka, Kota;Irie, Go;Kojima, Akira
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.267-272 / 2009
  • Video digests provide an effective way of reviewing video content rapidly thanks to their very compact form. By watching a digest, users can easily check whether a specific piece of content is worth seeing in full. The impression created by the digest greatly influences the user's choice when selecting video content. We propose a novel method of automatic digest creation that evokes a joyful impression by exploiting smile/laughter facial expressions as emotional cues of joy in the video. We assume that a digest presenting smiling/laughing faces appeals to the user, since he or she is assured that the smile/laughter expression is caused by joyful events in the video. For detecting smile/laughter faces we have developed a neural-network-based method for classifying facial expressions. Video segmentation is performed by automatic shot detection. To create joyful digests, appropriate shots are automatically selected by shot ranking based on the smile/laughter detection results. We report the results of user trials conducted to assess the visual impression made by 'joyful' digests created automatically by our system. The results show that users tend to prefer emotional digests containing laughing faces, suggesting that the attractiveness of automatically created video digests can be improved by extracting emotional cues from the content through automatic facial expression analysis as proposed in this paper.
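A minimal sketch of the shot-ranking step described above, assuming shot boundaries and per-shot smile/laughter scores are already available from shot detection and the facial expression classifier; the Shot structure and the example values are illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Shot:
    start: float          # seconds
    end: float
    smile_score: float    # e.g. fraction of frames classified as smile/laughter

def build_digest(shots: List[Shot], target_length: float) -> List[Shot]:
    """Pick the highest smile-scored shots until the digest reaches the length budget."""
    digest, total = [], 0.0
    for shot in sorted(shots, key=lambda s: s.smile_score, reverse=True):
        duration = shot.end - shot.start
        if total + duration > target_length:
            continue
        digest.append(shot)
        total += duration
    return sorted(digest, key=lambda s: s.start)   # play back in original temporal order

# Example: three shots from shot detection, 30-second digest budget.
shots = [Shot(0.0, 12.0, 0.1), Shot(12.0, 30.0, 0.8), Shot(30.0, 45.0, 0.5)]
print([(s.start, s.end) for s in build_digest(shots, 30.0)])
```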


An Emotion Recognition Method using Facial Expression and Speech Signal (얼굴표정과 음성을 이용한 감정인식)

  • 고현주;이대종;전명근
    • Journal of KIISE:Software and Applications / v.31 no.6 / pp.799-807 / 2004
  • In this paper, we deal with an emotion recognition method using facial images and speech signals. Six basic human emotions, including happiness, sadness, anger, surprise, fear, and dislike, are investigated. Emotion recognition from facial expressions is performed with a multi-resolution analysis based on the discrete wavelet transform, after which the feature vectors are extracted by the linear discriminant analysis method. Emotion recognition from the speech signal, on the other hand, runs the recognition algorithm independently for each wavelet subband, and the final recognition is obtained from a multi-decision-making scheme.
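A rough sketch of the facial-expression branch, using PyWavelets and scikit-learn as stand-ins for the authors' implementation; the wavelet, decomposition level, and synthetic data are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(gray_face, wavelet="db2", level=2):
    """Multi-resolution analysis: return the flattened low-frequency subband."""
    coeffs = pywt.wavedec2(gray_face, wavelet=wavelet, level=level)
    return coeffs[0].ravel()            # approximation (LL) coefficients at the deepest level

# Hypothetical training data: 60 grey-level face images (64x64), 10 per emotion class.
rng = np.random.default_rng(0)
faces = rng.random((60, 64, 64))
labels = np.repeat(np.arange(6), 10)    # six emotion classes

X = np.stack([wavelet_features(face) for face in faces])
lda = LinearDiscriminantAnalysis(n_components=5).fit(X, labels)
features = lda.transform(X)             # discriminant feature vectors for the classifier
print(features.shape)                   # (60, 5)
```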

A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.1 / pp.1-6 / 2005
  • In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions, including happiness, sadness, anger, surprise, fear, and dislike, are investigated. Facial expression recognition is performed using a multi-resolution analysis based on the discrete wavelet transform; here, we obtain the feature vectors through ICA (Independent Component Analysis). Emotion recognition from the speech signal, on the other hand, runs the recognition algorithm independently for each wavelet subband, and the final recognition is obtained from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous approaches.
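As a sketch of the feature-extraction step mentioned in this abstract, the following applies ICA to a matrix of flattened wavelet-subband features; the data, dimensions, and number of components are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# Hypothetical design matrix: 100 face samples, each a flattened wavelet subband.
X = rng.random((100, 256))

ica = FastICA(n_components=20, random_state=0, max_iter=1000)
features = ica.fit_transform(X)          # 100 x 20 independent-component features
print(features.shape)
# A classifier, and later fusion with the speech branch, would operate on `features`.
```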

Analysis of children's Reaction in Facial Expression of Emotion (얼굴표정에서 나타나는 감정표현에 대한 어린이의 반응분석)

  • Yoo, Dong-Kwan
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.70-80 / 2013
  • This study aims to provide basic material for research on facial expressions by analyzing children's visual recognition of facial expressions of emotion and by surveying boys' and girls' verbal reactions to the individual expressions. The subjects were 108 children aged 6 to 8 (55 boys, 53 girls) who could understand the research tool, and data were collected in two response surveys using individual interviews and self-administered questionnaires. The facial expressions used in the questionnaires were classified into six types, joy, sadness, anger, surprise, disgust, and fear, chosen to elicit specific and accurate responses. In visual recognition, both boys and girls recognized joy, sadness, anger, and surprise with high frequency, and fear and disgust with low frequency. In verbal reactions, heuristic responses, which explored or responded to the most striking parts of the face, appeared with high frequency for all of joy, sadness, anger, surprise, disgust, and fear, while imaginative responses, which created new stories from the facial expression, appeared for surprise, disgust, and fear.

The Effect of Cognitive Movement Therapy on Emotional Rehabilitation for Children with Affective and Behavioral Disorder Using Emotional Expression and Facial Image Analysis (감정표현 표정의 영상분석에 의한 인지동작치료가 정서·행동장애아 감성재활에 미치는 영향)

  • Byun, In-Kyung;Lee, Jae-Ho
    • The Journal of the Korea Contents Association / v.16 no.12 / pp.327-345 / 2016
  • The purpose of this study was to carry out a cognitive movement therapy program for children with affective and behavioral disorders, grounded in neuroscience, psychology, motor learning, muscle physiology, biomechanics, human motion analysis, and movement control, and to quantify the characteristics of expressions and gestures as facial expressions change with emotional state. We observed the problematic expressions of children with affective disorders and estimated the effectiveness of the movement therapy program from changes in their facial expressions. By quantifying emotion and behavior therapy analysis together with kinematic analysis, this converged measurement and analysis approach is expected to accumulate data for the early detection and treatment of developmental disorders. The results of this study could therefore be extended to people with disabilities, the elderly, and patients as well as children.

Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin;Park, Sangwook;Lee, Yongkwi;Han, Mikyong;Jang, Jong-Hyun
    • ETRI Journal / v.37 no.6 / pp.1199-1210 / 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received considerable attention in the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework, where the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components: multiple AU detection, AU detection fusion, and AU-to-emotion mapping. The experimental results on two real-world face databases demonstrate an improved performance over the previous two-phase method using a single AU detector in terms of both AU detection accuracy and correct emotion recognition rate.
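A minimal sketch of the two-phase idea, assuming hypothetical AU detector outputs and a simplified AU-to-emotion rule table (not the paper's detectors or mapping): multiple detectors vote on each AU, and the fused decisions are matched against emotion rules.

```python
from typing import Dict, List

def fuse_au_decisions(detector_outputs: List[Dict[str, bool]]) -> Dict[str, bool]:
    """Phase 1: majority vote over the decisions of multiple AU detectors."""
    fused = {}
    for au in detector_outputs[0]:
        votes = sum(output[au] for output in detector_outputs)
        fused[au] = votes * 2 > len(detector_outputs)
    return fused

# Illustrative AU-to-emotion rules (simplified, FACS-style); not the paper's mapping.
EMOTION_RULES = {
    "happiness": {"AU6", "AU12"},
    "surprise": {"AU1", "AU2", "AU26"},
    "sadness": {"AU1", "AU4", "AU15"},
}

def infer_emotion(fused_aus: Dict[str, bool]) -> str:
    """Phase 2: pick the emotion whose required AUs are best covered."""
    present = {au for au, on in fused_aus.items() if on}
    return max(EMOTION_RULES,
               key=lambda emo: len(EMOTION_RULES[emo] & present) / len(EMOTION_RULES[emo]))

# Hypothetical outputs of three AU detectors for one face image.
outputs = [
    {"AU1": False, "AU2": False, "AU4": False, "AU6": True,  "AU12": True, "AU15": False, "AU26": False},
    {"AU1": False, "AU2": True,  "AU4": False, "AU6": True,  "AU12": True, "AU15": False, "AU26": False},
    {"AU1": True,  "AU2": False, "AU4": False, "AU6": False, "AU12": True, "AU15": False, "AU26": False},
]
print(infer_emotion(fuse_au_decisions(outputs)))   # -> happiness
```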

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.2 / pp.105-110 / 2008
  • Human beings recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals. Computers need technologies that recognize emotion as humans do, using such combined information. In this paper, we recognize five emotions (normal, happiness, anger, surprise, sadness) from the speech signal and the facial image, and we propose a multimodal method that fuses the two recognition results. Emotion recognition from the speech signal and from the facial image is each performed with the Principal Component Analysis (PCA) method, and the multimodal result is obtained by fusing the two outputs with a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better emotion recognition rate than the facial image. To raise the recognition rate, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion offers a better emotion recognition rate than either the facial image or the speech signal alone.
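The decision-fusion step could look roughly like the following sketch: per-emotion scores from the speech and face classifiers are mapped through an S-type (S-shaped) fuzzy membership function and summed. The membership parameters and the example scores are assumptions for illustration, not the paper's values.

```python
import numpy as np

EMOTIONS = ["normal", "happiness", "anger", "surprise", "sadness"]

def s_membership(x, a=0.2, b=0.8):
    """Classic S-shaped fuzzy membership function rising from 0 (at a) to 1 (at b)."""
    x = np.asarray(x, dtype=float)
    mid = (a + b) / 2.0
    return np.where(x <= a, 0.0,
           np.where(x <= mid, 2 * ((x - a) / (b - a)) ** 2,
           np.where(x <= b, 1 - 2 * ((x - b) / (b - a)) ** 2, 1.0)))

def fuse(speech_scores, face_scores):
    """Decision fusion: map each modality's scores to memberships, then combine."""
    fused = s_membership(speech_scores) + s_membership(face_scores)
    return EMOTIONS[int(np.argmax(fused))]

# Hypothetical per-emotion confidences (e.g. normalised outputs of the PCA classifiers).
speech = [0.10, 0.55, 0.15, 0.10, 0.10]
face   = [0.20, 0.40, 0.25, 0.05, 0.10]
print(fuse(speech, face))   # -> happiness
```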