• Title/Summary/Keyword: Facial expressions

Effect of Depressive Mood on Identification of Emotional Facial Expression (우울감이 얼굴 표정 정서 인식에 미치는 영향)

  • Ryu, Kyoung-Hi;Oh, Kyung-Ja
    • Science of Emotion and Sensibility / v.11 no.1 / pp.11-21 / 2008
  • This study examined the effect of depressive mood on the identification of emotional facial expressions. Participants were screened from 305 college students on the basis of BDI-II scores: students scoring above 14 (upper 20%) were assigned to the Depression Group, and those scoring below 5 (lower 20%) to the Control Group. A final sample of 20 students per group was presented with facial expression stimuli of increasing emotional intensity, slowly changing from a neutral expression to a full-intensity happy, sad, angry, or fearful expression. The results showed a significant Group × Emotion interaction (especially for happy and sad expressions), suggesting that depressive mood affects the processing of emotional stimuli such as facial expressions. Implications of this result for mood-congruent information processing are discussed.
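
A morph sequence of the kind used for these stimuli can be approximated by a linear cross-dissolve between a neutral and a full-intensity photograph of the same face. A minimal sketch with stand-in arrays for the images (the study's actual morphing software is not named here, and real morphs also warp facial geometry):

```python
# Toy approximation of intensity-graded expression stimuli: blend a neutral face
# into a full-intensity emotional face of the same person.
import numpy as np

def intensity_sequence(neutral, emotional, steps=10):
    """Return `steps` frames going from neutral (0%) to full intensity (100%)."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1 - a) * neutral + a * emotional for a in alphas]

neutral = np.zeros((64, 64))        # stand-in for a grayscale neutral face
emotional = np.ones((64, 64))       # stand-in for the full-intensity expression
frames = intensity_sequence(neutral, emotional, steps=5)
print([f.mean() for f in frames])   # 0.0, 0.25, 0.5, 0.75, 1.0
```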

Analysis of Understanding Using Deep Learning Facial Expression Recognition for Real Time Online Lectures (딥러닝 표정 인식을 활용한 실시간 온라인 강의 이해도 분석)

  • Lee, Jaayeon;Jeong, Sohyun;Shin, You Won;Lee, Eunhye;Ha, Yubin;Choi, Jang-Hwan
    • Journal of Korea Multimedia Society / v.23 no.12 / pp.1464-1475 / 2020
  • Due to the spread of COVID-19, online lectures have become more prevalent, yet many students and professors report a lack of communication. This study was therefore designed to improve interactive communication between professors and students in real-time online lectures. To do so, we explore deep learning approaches for automatically recognizing students' facial expressions and classifying their understanding into three classes (Understand / Neutral / Not Understand). We use the 'BlazeFace' model for face detection and a 'ResNet-GRU' model for facial expression recognition (FER), and we name the entire process the 'Degree of Understanding (DoU)' algorithm. The DoU algorithm can analyze a multitude of students collectively and present the results as visualized statistics. To our knowledge, this is the first study to offer statistics of lecture understanding using FER. The algorithm achieved a speed of 0.098 sec/frame with a high accuracy of 94.3% in a CPU environment, demonstrating its potential for application to real-time online lectures. The DoU algorithm can also be extended to other fields where facial expressions play an important role in communication, such as interaction with hearing-impaired people.
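
A minimal sketch of such a detection-then-classification pipeline, assuming a ResNet-18 frame encoder and a single-layer GRU (the paper's exact layer sizes and trained weights are not given here); in practice the input clips would be face crops from a BlazeFace-style detector:

```python
# CNN-RNN sketch: ResNet encodes each face frame, a GRU aggregates over time,
# and a linear head scores the three understanding classes.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ResNetGRUClassifier(nn.Module):
    def __init__(self, num_classes=3, hidden=128):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                 # expose 512-dim frame features
        self.encoder = backbone
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)  # Understand / Neutral / Not Understand

    def forward(self, clips):                       # clips: (batch, time, 3, 224, 224)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).view(b, t, -1)
        _, last = self.gru(feats)                   # final hidden state summarizes the clip
        return self.head(last[-1])

model = ResNetGRUClassifier()
logits = model(torch.randn(2, 16, 3, 224, 224))     # two 16-frame face clips
print(logits.argmax(dim=1))                         # predicted class per student
```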

Effects of Working Memory Load on Negative Facial Emotion Processing: an ERP study (작업기억 부담이 부적 얼굴정서 처리에 미치는 영향: ERP 연구)

  • Park, Taejin;Kim, Junghee
    • Korean Journal of Cognitive Science / v.29 no.1 / pp.39-59 / 2018
  • To elucidate the effect of working memory (WM) load on negative facial emotion processing, we examined ERP components (P1 and N170) elicited by fearful and neutral expressions, each presented during 0-back (low WM load) or 2-back (high WM load) tasks. During the N-back tasks, visual objects were presented one by one as targets, and each facial expression was presented as a passively observed stimulus during the intervals between targets. Behavioral results showed more accurate and faster responses in the low-load condition than in the high-load condition. Mean P1 amplitudes over the occipital region showed a significant WM load effect (high load > low load) but no significant facial emotion effect. Mean N170 amplitudes over the posterior occipito-temporal region showed a significant overall facial emotion effect (fearful > neutral); in detail, the effect appeared only in the low-load condition on the left hemisphere, but in both load conditions on the right hemisphere. In short, the facial emotion effect indexed by N170 amplitude was modulated by WM load only on the left hemisphere. These results show that early emotional processing of negative facial expressions can be eliminated or reduced by high WM load on the left hemisphere but not on the right, suggesting right-hemispheric lateralization of negative facial emotion processing.
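
The mean-amplitude analysis described reduces to averaging the waveform inside a latency window. A sketch using conventional P1 and N170 windows (assumed here; the paper's exact electrodes and windows are not reproduced):

```python
# Mean amplitude of an averaged ERP waveform inside a latency window.
import numpy as np

def mean_amplitude(erp, times, window):
    """Average voltage where window[0] <= t <= window[1] (seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

times = np.linspace(-0.1, 0.5, 601)              # epoch sampled at 1 ms
erp = np.random.randn(601)                       # stand-in for a real waveform
p1 = mean_amplitude(erp, times, (0.08, 0.13))    # typical P1 window
n170 = mean_amplitude(erp, times, (0.13, 0.20))  # typical N170 window
print(p1, n170)
```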

Accurate Visual Working Memory under a Positive Emotional Expression in Face (얼굴표정의 긍정적 정서에 의한 시각작업기억 향상 효과)

  • Han, Ji-Eun;Hyun, Joo-Seok
    • Science of Emotion and Sensibility / v.14 no.4 / pp.605-616 / 2011
  • The present study examined memory accuracy for faces with positive, negative, and neutral emotional expressions to test whether emotional content affects visual working memory (VWM) performance. Participants remembered a set of face pictures whose expressions were randomly drawn from pleasant, unpleasant, and neutral emotional categories. Their task was to report the presence or absence of an expression change by comparing the remembered set against a test set displayed after a short delay. Change-detection accuracies for the pleasant, unpleasant, and neutral conditions were compared under two memory exposure durations, 500 ms vs. 1000 ms. At 500 ms, accuracy in the pleasant condition was higher than in both the unpleasant and neutral conditions; the difference disappeared when the duration was extended to 1000 ms. The results indicate that a positive facial expression can improve VWM accuracy relative to negative or neutral expressions, especially when there is not enough time to form durable VWM representations.
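
Change-detection accuracy in such a task is the proportion of correct "change"/"no change" reports per condition. A toy scoring sketch over hypothetical trial records (not the study's data):

```python
# Score change-detection accuracy per emotion condition from trial records.
from collections import defaultdict

trials = [  # invented records for illustration
    {"emotion": "pleasant",   "changed": True,  "response": True},
    {"emotion": "pleasant",   "changed": False, "response": False},
    {"emotion": "unpleasant", "changed": True,  "response": False},
    {"emotion": "neutral",    "changed": False, "response": False},
]

correct = defaultdict(list)
for t in trials:
    correct[t["emotion"]].append(t["changed"] == t["response"])

for emotion, results in correct.items():
    print(emotion, sum(results) / len(results))   # accuracy per condition
```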

Development of Content for the Robot that Relieves Depression in the Elderly Using Music Therapy (음악요법을 이용한 노인의 우울증 완화 로봇 'BOOGI'의 콘텐츠 개발)

  • Jung, Yu-Hwa;Jeong, Seong-Won
    • The Journal of the Korea Contents Association / v.15 no.2 / pp.74-85 / 2015
  • Playing percussion instruments can increase self-esteem and decrease depression in the elderly. Based on this, we developed content for a percussion-instrument robot that the elderly can use to play music, and extracted the elements of elderly-robot interaction enabled by this content. Music that arouses positive memories in the elderly was selected as part of the music-therapy content to relieve depression, and a scoring system for playing music was constructed. In addition, the robot's facial-expression interaction components, which stimulate emotion and sensibility in the elderly, were designed so that actively playing the instrument changes the robot's expression across three degrees of emotion: neutral-happy, happy, and very happy. The robot is thus not merely a music game machine: it maximizes relief of depression through interactions in which the elderly person listens to what the robot plays, becomes involved, and plays music along with it.
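
The three-degree expression mapping described above amounts to thresholding the play score. A minimal sketch with invented thresholds (the paper does not list the actual values):

```python
# Map a music-play score to the robot's three facial-expression degrees.
def expression_for_score(score, max_score=100):
    ratio = score / max_score
    if ratio >= 0.8:
        return "very happy"
    if ratio >= 0.5:
        return "happy"
    return "neutral-happy"

for s in (30, 60, 95):
    print(s, expression_for_score(s))
```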

The Accuracy of Recognizing Emotion From Korean Standard Facial Expression (한국인 표준 얼굴 표정 이미지의 감성 인식 정확률)

  • Lee, Woo-Ri;Whang, Min-Cheol
    • The Journal of the Korea Contents Association / v.14 no.9 / pp.476-483 / 2014
  • The purpose of this study was to create suitable images of Korean emotional expressions. KSFI (Korean Standard Facial Image) AUs were produced from the Korean standard facial appearance and FACS (Facial Action Coding System) AUs. To establish the objectivity of the KSFI, a survey examined the emotion recognition rate and the contribution of individual facial elements to recognition for the six basic emotional expression images (sadness, happiness, disgust, fear, anger, and surprise). The images of happiness, surprise, sadness, and anger showed higher recognition accuracy, and the recognition rate was determined mainly by the eyes and mouth. Based on these results, KSFI content that can be combined with AU images is proposed; in the future, the KSFI should be helpful content for improving emotion recognition rates.
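
A recognition rate of this kind is the per-emotion proportion of respondents who named the intended emotion. A toy computation over invented survey responses:

```python
# Per-emotion recognition rate: fraction of answers matching the intended emotion.
responses = {  # invented data for illustration
    "happiness": ["happiness", "happiness", "happiness", "surprise"],
    "fear":      ["fear", "surprise", "fear", "disgust"],
}

for intended, answers in responses.items():
    rate = answers.count(intended) / len(answers)
    print(f"{intended}: {rate:.0%}")
```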

Image-based Realistic Facial Expression Animation

  • Yang, Hyun-S.;Han, Tae-Woo;Lee, Ju-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.133-140 / 1999
  • In this paper, we propose an image-based three-dimensional modeling method for realistic facial expressions. In the proposed method, real human facial images are used to deform a generic three-dimensional mesh model, and the deformed model is animated to generate facial expression animation. First, we take several pictures of the same person from several view angles. We then project a three-dimensional face model onto the plane of each facial image and match the projected model with each image; the results are combined to generate a deformed three-dimensional model. Feature-based image metamorphosis is used to match the projected models with the images. We then create a synthetic image from the two-dimensional images of a specific person's face, and this synthetic image is texture-mapped onto the cylindrical projection of the three-dimensional model. We also propose a muscle-based animation technique that facilitates control when generating realistic facial expression animations. Lastly, we show animation results for the six representative facial expressions.
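
The cylindrical projection used for the texture map sends each mesh vertex to an (angle, height) coordinate pair. A minimal sketch, assuming a y-up head mesh centered on the cylinder axis:

```python
# Cylindrical projection of 3D head-mesh vertices to 2D texture coordinates.
import numpy as np

def cylindrical_uv(vertices):
    """vertices: (N, 3) x/y/z array -> (N, 2) u/v texture coordinates in [0, 1]."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    u = (np.arctan2(x, z) + np.pi) / (2 * np.pi)   # angle around the vertical axis
    span = y.max() - y.min()
    v = (y - y.min()) / (span if span else 1.0)    # height along the axis
    return np.stack([u, v], axis=1)

verts = np.random.randn(100, 3)   # stand-in for a face mesh
print(cylindrical_uv(verts)[:3])
```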

Study for Classification of Facial Expression using Distance Features of Facial Landmarks (얼굴 랜드마크 거리 특징을 이용한 표정 분류에 대한 연구)

  • Bae, Jin Hee;Wang, Bo Hyeon;Lim, Joon S.
    • Journal of IKEEE / v.25 no.4 / pp.613-618 / 2021
  • Facial expression recognition has long been a subject of continuous research in various fields. In this paper, the relationship between facial landmarks is analyzed using features obtained by calculating the distances between landmarks in an image, and five facial expressions are classified. We increased data and label reliability by having multiple observers perform the labeling. Faces were detected in the original data, and the extracted landmark coordinates were used as features; a genetic algorithm was then used to select the features that are relatively more helpful for classification. We performed facial expression classification and analysis with the proposed method, demonstrating its validity and effectiveness.
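
The distance features described are pairwise Euclidean distances between landmark coordinates. A minimal sketch of the feature extraction (the genetic-algorithm selection step is omitted):

```python
# Pairwise Euclidean distances between facial landmarks as expression features.
from itertools import combinations
import numpy as np

def distance_features(landmarks):
    """landmarks: (N, 2) array of points -> vector of N*(N-1)/2 distances."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

lm = np.random.rand(68, 2)    # e.g. a standard 68-point landmark set
feats = distance_features(lm)
print(feats.shape)            # (2278,) candidate features for selection
```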

A Comic Facial Expression Using Cheeks and Jaws Movements for Intelligent Avatar Communications (지적 아바타 통신에서 볼과 턱 움직임을 사용한 코믹한 얼굴 표정)

  • Yoshinao Aoki
    • Proceedings of the IEEK Conference / 2001.06c / pp.121-124 / 2001
  • In this paper, a method of generating facial-gesture CG animation on different avatar models is provided. First, to edit emotional expressions efficiently, a comic expression is regenerated on different polygonal mesh models, where the movements of the cheeks and jaws are generated using numerical methods. Experimental results show that the method could be used for intelligent avatar communications between Korea and Japan.
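
Regenerating an expression on a different polygonal mesh reduces, at its simplest, to displacing the vertex groups for the cheek and jaw regions. A toy sketch with invented region indices and offsets:

```python
# Displace cheek/jaw vertex groups of a face mesh to form an expression.
import numpy as np

def apply_expression(vertices, regions, offsets, weight=1.0):
    """Shift each named region's vertices by its offset, scaled by `weight`."""
    out = vertices.copy()
    for name, idx in regions.items():
        out[idx] += weight * offsets[name]
    return out

verts = np.random.rand(500, 3)                              # stand-in face mesh
regions = {"left_cheek": [10, 11, 12], "jaw": [400, 401]}   # invented indices
offsets = {"left_cheek": np.array([0.0, 0.02, 0.01]),       # invented displacements
           "jaw": np.array([0.0, -0.05, 0.0])}
smiling = apply_expression(verts, regions, offsets, weight=0.7)
print(smiling.shape)
```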

Tracking of Facial Feature Points related to Facial Expressions (표정변화에 따른 얼굴 표정요소의 특징점 추적)

  • 최명근;정현숙;신영숙;이일병
    • Proceedings of the Korean Information Science Society Conference / 2000.10b / pp.425-427 / 2000
  • Facial expressions are an important means both of expressing a person's emotions and of understanding them. Recently, much research has been conducted on the automatic recognition and tracking of facial expressions. In this study, we set an approximate face region, locate the expression components that convey the facial expression, and then extract and track the feature points of each component. The proposed system finds the face region and feature points in the first frame of the input image sequence and tracks them repeatedly over subsequent frames. Template matching, the Canny edge detector, and the Gabor wavelet transform are used for feature point extraction and tracking.
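
All three operations named in the abstract are available in OpenCV. A minimal sketch on a synthetic frame, with illustrative parameters:

```python
# Feature-point building blocks: template matching, Canny edges, a Gabor filter.
import cv2
import numpy as np

frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # stand-in gray frame
template = frame[100:120, 150:170].copy()                      # e.g. an eye-corner patch

# Template matching: best location of the patch in a frame.
scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, best, _, loc = cv2.minMaxLoc(scores)

edges = cv2.Canny(frame, 50, 150)                 # edge map for contour features

# Gabor kernel: ksize, sigma, theta, lambd, gamma, psi.
kernel = cv2.getGaborKernel((21, 21), 4.0, 0.0, 10.0, 0.5, 0)
gabor = cv2.filter2D(frame, cv2.CV_32F, kernel)   # oriented texture response

print(loc, best, edges.sum(), gabor.mean())
```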
