• Title/Abstract/Keywords: Facial Emotions

Search results: 159 items (processing time: 0.026 s)

Transfer Learning for Face Emotions Recognition in Different Crowd Density Situations

  • Amirah Alharbi
    • International Journal of Computer Science & Network Security
    • /
    • Vol.24 No.4
    • /
    • pp.26-34
    • /
    • 2024
  • Most human emotions are conveyed through facial expressions, which represent the predominant source of emotional data. This research investigates the impact of crowds on human emotions by analysing facial expressions. It examines how crowd behaviour, face recognition technology, and deep learning algorithms contribute to understanding how emotions change across different levels of crowding. The study identifies common emotions expressed during congestion, differences between crowded and less crowded areas, and changes in facial expressions over time. The findings can inform urban planning and crowd event management by providing insights for developing coping mechanisms for affected individuals. However, limitations and challenges in obtaining reliable facial expression analysis are also discussed, including age- and context-related differences.

Developmental Changes in Emotional-States and Facial Expression

  • 박수진;송인혜;김혜리;조경자
    • 감성과학 (Science of Emotion and Sensibility)
    • /
    • Vol.10 No.1
    • /
    • pp.127-133
    • /
    • 2007
  • This study investigated how the ability to judge another person's emotional state from facial expressions differs by age (3-year-olds, 5-year-olds, college students), sex (male, female), presented facial area (whole face vs. eyes only), and emotion type (basic vs. complex). Thirty-two emotional states with relatively clear mappings between facial expressions and emotion words were used as stimuli; the expression photographs were produced by having an actor pose the facial expressions corresponding to the 32 emotional states. In the task, each participant heard a story describing an emotion-eliciting situation, judged what facial expression the story's protagonist would make, and then selected the appropriate one from four facial expressions. The results showed that the ability to judge facial expressions improved with age, and that performance was better when the whole face was presented than when only the eyes were, and better for basic emotions than for complex emotions. In addition, whereas females showed no performance difference by presented area, males performed better in the whole-face condition than in the eyes-only condition. These results suggest that age, presented facial area, and emotion type all influence judging others' emotions from facial expressions.


Artificial Intelligence for Assistance of Facial Expression Practice Using Emotion Classification

  • 김동규;이소화;봉재환
    • 한국전자통신학회논문지 (The Journal of the Korea Institute of Electronic Communication Sciences)
    • /
    • Vol.17 No.6
    • /
    • pp.1137-1144
    • /
    • 2022
  • In this study, we developed an artificial intelligence that assists facial expression practice for expressing emotions. The developed AI feeds a multimodal input consisting of a descriptive sentence and a facial expression image into deep neural networks, computes the similarity between the emotion predicted from the sentence and the emotion predicted from the image, and outputs it. The user practices the facial expression appropriate to the situation given by the descriptive sentence, and the AI provides feedback by outputting the similarity between the sentence and the user's expression as a numeric value. To predict emotion from facial expression images, a ResNet34 architecture was used and trained on the public FER2013 dataset. To predict emotion from natural-language descriptive sentences, a KoBERT model was fine-tuned via transfer learning on AIHub's conversational speech dataset for emotion classification. The deep neural network predicting emotion from facial images achieved 65% accuracy, demonstrating human-level emotion classification ability. The network predicting emotion from descriptive sentences achieved 90% accuracy. The performance of the developed AI was validated through facial expression practice experiments with ordinary adults who had no difficulty expressing emotion.
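  As a rough illustration of the feedback step this abstract describes, the sketch below compares two emotion probability distributions with cosine similarity. The emotion label set, the probability values, and the use of cosine similarity are illustrative assumptions on my part; in the paper, the image vector would come from the ResNet34 model (trained on FER2013) and the text vector from the fine-tuned KoBERT model, and the exact similarity measure is not specified here.

  ```python
  import numpy as np

  # Hypothetical emotion label set shared by both classifiers.
  EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

  def cosine_similarity(p, q):
      """Cosine similarity between two emotion probability vectors."""
      p, q = np.asarray(p, float), np.asarray(q, float)
      return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

  # Stand-ins for the two model outputs (hard-coded for illustration):
  face_probs = [0.05, 0.02, 0.03, 0.80, 0.04, 0.04, 0.02]  # image model: "happiness"
  text_probs = [0.03, 0.02, 0.05, 0.75, 0.05, 0.06, 0.04]  # text model: "happiness"

  # The scalar similarity is the numeric feedback shown to the practicing user.
  score = cosine_similarity(face_probs, text_probs)
  ```

  When the two models agree on the dominant emotion, the score approaches 1; disagreement pushes it toward 0, giving the user a graded practice signal.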

Alexithymia and the Recognition of Facial Emotion in Schizophrenic Patients

  • 노진찬;박성혁;김경희;김소율;신성웅;이건석
    • 생물정신의학 (Korean Journal of Biological Psychiatry)
    • /
    • Vol.18 No.4
    • /
    • pp.239-244
    • /
    • 2011
  • Objectives Schizophrenic patients have been shown to be impaired both in emotional self-awareness and in recognizing others' facial emotions. Alexithymia refers to deficits in emotional self-awareness. The relationship between alexithymia and recognition of others' facial emotions needs to be explored to better understand the characteristics of emotional deficits in schizophrenic patients. Methods Thirty control subjects and 31 schizophrenic patients completed the Toronto Alexithymia Scale-20-Korean version (TAS-20K) and a facial emotion recognition task. The stimuli in the facial emotion recognition task consisted of six emotions (happiness, sadness, anger, fear, disgust, and neutral). Recognition accuracy was calculated within each emotion category, and correlations between TAS-20K scores and recognition accuracy were analyzed. Results The schizophrenic patients showed higher TAS-20K scores and lower recognition accuracy than the control subjects. Unlike the control subjects, the schizophrenic patients showed no significant correlations between TAS-20K scores and recognition accuracy. Conclusions The data suggest that, although schizophrenia may impair both emotional self-awareness and recognition of others' facial emotions, the degree of deficit can differ between the two. This indicates that the emotional deficits in schizophrenia may assume more complex features.
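  The correlation analysis described in the Methods can be sketched as follows. The per-subject TAS-20K totals and accuracy values below are invented for illustration, and the helper function is a plain Pearson correlation, not the study's actual statistics pipeline.

  ```python
  import numpy as np

  def pearson_r(x, y):
      """Pearson correlation coefficient between two equal-length score arrays."""
      x, y = np.asarray(x, float), np.asarray(y, float)
      xm, ym = x - x.mean(), y - y.mean()
      return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

  # Hypothetical per-subject data: TAS-20K totals (higher = more alexithymic)
  # and facial emotion recognition accuracy (proportion correct on the task).
  tas20k   = [38, 52, 61, 45, 70, 55]
  accuracy = [0.92, 0.75, 0.60, 0.85, 0.55, 0.70]

  # In the study's control group, a correlation like this was significant;
  # in the patient group it was not.
  r = pearson_r(tas20k, accuracy)
  ```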

Effects of the facial expression presenting types and facial areas on the emotional recognition

  • 이정헌;박수진;한광희;김혜리;조경자
    • 감성과학 (Science of Emotion and Sensibility)
    • /
    • Vol.10 No.1
    • /
    • pp.113-125
    • /
    • 2007
  • This study examined how effectively emotional states are conveyed by facial area (whole face / eye region / mouth region) using dynamic (video) versus static (still image) stimuli. Video stimuli were presented for seven seconds. Experiment 1 examined emotion recognition by presentation type and facial area for 12 basic emotions, and Experiment 2 did the same for 12 complex emotions. The results showed that the video condition yielded higher emotion recognition than the still-image condition, and that, compared with the mouth region, the eye region benefited more from video than from still images, suggesting that eye movement is important for emotion recognition. This pattern was observed to some extent for complex as well as basic emotions. Nevertheless, because the effect of video may vary with the type of emotion, emotion-by-emotion analyses are needed, and the emotions conveyed relatively well may also differ by facial area.


Affective Computing in Education: Platform Analysis and Academic Emotion Classification

  • So, Hyo-Jeong;Lee, Ji-Hyang;Park, Hyun-Jin
    • International journal of advanced smart convergence
    • /
    • Vol.8 No.2
    • /
    • pp.8-17
    • /
    • 2019
  • The main purpose of this study is to explore the potential of affective computing (AC) platforms in education through two phases of research: Phase I - platform analysis and Phase II - classification of academic emotions. In Phase I, the results indicate that the existing affective analysis platforms can be largely classified into four types according to the emotion detecting methods: (a) facial expression-based platforms, (b) biometric-based platforms, (c) text/verbal tone-based platforms, and (d) mixed methods platforms. In Phase II, we conducted an in-depth analysis of the emotional experience that a learner encounters in online video-based learning in order to establish the basis for a new classification system of online learners' emotions. Overall, positive emotions were shown more frequently and for longer than negative emotions. We categorized positive emotions into three groups based on the facial expression data: (a) confidence; (b) excitement, enjoyment, and pleasure; and (c) aspiration, enthusiasm, and expectation. The same method was used to categorize negative emotions into four groups: (a) fear and anxiety, (b) embarrassment and shame, (c) frustration and alienation, and (d) boredom. Drawing on the results, we proposed a new classification scheme that can be used to measure and analyze how learners in online learning environments experience various positive and negative emotions, with facial expressions as the indicators.

3D Emotional Avatar Creation and Animation using Facial Expression Recognition

  • 조태훈;정중필;최수미
    • 한국멀티미디어학회논문지 (Journal of Korea Multimedia Society)
    • /
    • Vol.17 No.9
    • /
    • pp.1076-1083
    • /
    • 2014
  • We propose an emotional facial avatar that portrays the user's facial expressions with an emotional emphasis, while achieving visual and behavioral realism. This is achieved by unifying automatic analysis of facial expressions and animation of realistic 3D faces with details such as facial hair and hairstyles. To augment facial appearance according to the user's emotions, we use emotional templates representing typical emotions in an artistic way, which can be easily combined with the skin texture of the 3D face at runtime. Hence, our interface gives the user vision-based control over facial animation of the emotional avatar, easily changing its moods.

Emotional Expression of the Virtual Influencer "Luo Tianyi (洛天依)" in Digital Media

  • Guangtao Song;Albert Young Choi
    • International Journal of Advanced Culture Technology
    • /
    • Vol.12 No.2
    • /
    • pp.375-385
    • /
    • 2024
  • In the context of contemporary digital media, virtual influencers have become an increasingly important form of socialization and entertainment, in which emotional expression is a key factor in attracting viewers. In this study, we take Luo Tianyi, a Chinese virtual influencer, as an example to explore how emotions are expressed and perceived through facial expressions in different types of videos. Using Paul Ekman's Facial Action Coding System (FACS) and six basic emotion classifications, the study systematically analyzes Luo Tianyi's emotional expressions in three types of videos, namely Music show, Festivals and Brand Cooperation. During the study, Luo Tianyi's facial expressions and emotional expressions were analyzed through rigorous coding and categorization, as well as matching the context of the video content. The results show that Enjoyment is the most frequently expressed emotion by Luo Tianyi, reflecting the centrality of positive emotions in content creation. Meanwhile, the presence of other emotion types reveals the virtual influencer's efforts to create emotionally rich and authentic experiences. The frequency and variety of emotions expressed in different video genres indicate Luo Tianyi's diverse strategies for communicating and connecting with viewers in different contexts. The study provides an empirical basis for understanding and utilizing virtual influencers' emotional expressions, and offers valuable insights for digital media content creators to design emotional expression strategies. Overall, this study is valuable for understanding the complexity of virtual influencer emotional expression and its importance in digital media strategy.

Mood Suggestion Framework Using Emotional Relaxation Matching Based on Emotion Meshes

  • Kim, Jong-Hyun
    • 한국컴퓨터정보학회논문지 (Journal of the Korea Society of Computer and Information)
    • /
    • Vol.23 No.8
    • /
    • pp.37-43
    • /
    • 2018
  • In this paper, we propose a framework that automatically suggests a mood using an emotion analysis method based on changes in facial expression. We use Microsoft's Emotion API to calculate and analyze emotion values in facial expressions, recognizing emotions that change over time. In this step, we use standard deviations based on peak analysis to measure and classify emotional changes. The difference between the classified emotion and the normal emotion is calculated, and this difference is used to recognize emotional abnormality. We then match the user's emotions to relatively relaxed emotions using histograms and emotion meshes, and provide the relaxed emotions to users through images. The proposed framework helps users recognize emotional changes easily and train their emotions through emotional relaxation.
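  The standard-deviation-based peak analysis this abstract describes might look roughly like the sketch below. The per-frame happiness scores, the z-score threshold, and the function name are all illustrative assumptions, not the paper's implementation; in the actual framework, the per-frame values would come from Microsoft's Emotion API.

  ```python
  import numpy as np

  def emotion_anomalies(values, z_thresh=2.0):
      """Flag time steps where an emotion score deviates from its mean
      by more than z_thresh standard deviations (peak-based change detection)."""
      v = np.asarray(values, float)
      z = (v - v.mean()) / v.std()
      return [i for i, s in enumerate(z) if abs(s) > z_thresh]

  # Hypothetical per-frame "happiness" scores over time; frame 5 is a sudden
  # spike, i.e. the kind of emotional change the framework would flag.
  happiness = [0.10, 0.12, 0.11, 0.09, 0.10, 0.95, 0.11, 0.10, 0.12, 0.10]
  flagged = emotion_anomalies(happiness)
  ```

  Frames flagged this way would then be compared against the user's baseline emotion before matching to a more relaxed target mood.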

Facial Emotion Recognition in Older Adults With Cognitive Complaints

  • YongSoo Shim
    • 대한치매학회지 (Dementia and Neurocognitive Disorders)
    • /
    • Vol.22 No.4
    • /
    • pp.158-168
    • /
    • 2023
  • Background and Purpose: Facial emotion recognition deficits affect daily life, particularly for Alzheimer's disease patients. We aimed to assess these deficits in the following three groups: subjective cognitive decline (SCD), mild cognitive impairment (MCI), and mild Alzheimer's dementia (AD). Additionally, we explored the associations between facial emotion recognition and cognitive performance. Methods: We used the Korean version of the Florida Facial Affect Battery (K-FAB) in 72 SCD, 76 MCI, and 76 mild AD subjects. The comparison was conducted using analysis of covariance (ANCOVA), with adjustments made for age and sex. The Mini-Mental State Examination (MMSE) was utilized to gauge overall cognitive status, while the Seoul Neuropsychological Screening Battery (SNSB) was employed to evaluate performance in the following five cognitive domains: attention, language, visuospatial abilities, memory, and frontal executive functions. Results: The ANCOVA results showed significant differences in K-FAB subtests 3, 4, and 5 (p=0.001, p=0.003, and p=0.004, respectively), especially for angry and fearful emotions. Recognition of 'anger' in K-FAB subtest 5 declined from SCD to MCI to mild AD. Correlations were observed with age and education, and after controlling for these factors, MMSE and frontal executive function were associated with the K-FAB tests, particularly subtest 5 (r=0.507, p<0.001 and r=-0.288, p=0.026, respectively). Conclusions: Emotion recognition deficits worsened from SCD to MCI to mild AD, especially for negative emotions. Complex tasks, such as matching, selection, and naming, showed greater deficits, with a connection to cognitive impairment, especially frontal executive dysfunction.