• Title/Summary/Keyword: artificial emotion

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents
    • /
    • v.18 no.3
    • /
    • pp.11-20
    • /
    • 2022
  • Human emotion recognition is an exciting topic that has attracted researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Explorations in psychology show that emotional perception is influenced not only by facial expressions but also by contextual information from the scene, such as human activities, interactions, and body poses. These findings initiated a trend in computer vision of treating contexts as additional modalities, alongside facial expressions, for inferring emotion. However, contextual information has not been fully exploited: the scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, additive fusion in multimodal training is impractical, because the modalities do not contribute equally to the final prediction. The purpose of this paper is to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes emotions, emotional feelings, and the actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features according to their impact on the target's emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
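
A minimal sketch of the attention-based fusion idea, in PyTorch, may help make this concrete. Everything here (the 512-dimensional embeddings, the single-layer scorer, the three modalities) is an assumption for illustration, not the authors' network: each modality embedding is scored, and the fused feature is the softmax-weighted sum instead of a plain additive fusion.

```python
# Hedged sketch: attention-weighted fusion of modality embeddings.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one scalar score per modality embedding

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_modalities, dim)
        weights = torch.softmax(self.score(feats), dim=1)  # (batch, n_modalities, 1)
        return (weights * feats).sum(dim=1)                # (batch, dim)

# Toy usage: face, body, and scene-context embeddings for a batch of 8 images.
face, body, scene = (torch.randn(8, 512) for _ in range(3))
fused = AttentionFusion()(torch.stack([face, body, scene], dim=1))
print(fused.shape)  # torch.Size([8, 512])
```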

A Design of Artificial Emotion Model (인공 감정 모델의 설계)

  • Lee, In-Geun;Seo, Seok-Tae;Jeong, Hye-Cheon;Gwon, Sun-Hak
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.04a
    • /
    • pp.58-62
    • /
    • 2007
  • Alongside research on recognizing human emotional states from human-generated speech, facial-expression images, and text, research is also being conducted on artificial emotion, which mimics human emotion by generating emotional states from various external stimuli. Existing artificial emotion research, however, varies the emotional state linearly or exponentially in response to external emotional stimuli, so the emotional state changes abruptly. In this paper, we propose an emotion generation model that reflects not only the intensity and frequency of external emotional stimuli but also their repetition period, and that expresses the change of emotion over time as a sigmoid curve. We also propose an artificial emotion system that can generate emotions even in the absence of external stimuli, through recollection of past emotional stimuli.
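
The sigmoid-shaped dynamics can be sketched in a few lines of Python. The functional form below (a logistic curve shifted so the level is near zero at stimulus onset, with repeated stimuli summed over their onsets) is an assumed reading of the abstract, not the paper's exact equations.

```python
# Hedged sketch: emotion level that rises along a sigmoid after each stimulus,
# so repeated stimuli raise the state smoothly instead of linearly or abruptly.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def emotion_level(t: float, intensity: float, onset: float, rate: float = 1.0) -> float:
    # Shifted by -4 so the contribution is near zero right at stimulus onset.
    return intensity * sigmoid(rate * (t - onset) - 4.0)

# Two repetitions of the same stimulus: the level climbs smoothly over time.
onsets = [0.0, 5.0]
for t in range(0, 11, 2):
    level = sum(emotion_level(t, intensity=0.5, onset=o) for o in onsets)
    print(f"t={t}: {level:.2f}")
```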

Implementation of Intelligent Virtual Character Based on Reinforcement Learning and Emotion Model (강화학습과 감정모델 기반의 지능적인 가상 캐릭터의 구현)

  • Woo Jong-Ha;Park Jung-Eun;Oh Kyung-Whan
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.3
    • /
    • pp.259-265
    • /
    • 2006
  • Learning and emotion are essential components of intelligent robots. In this paper, we implement an intelligent virtual character, based on reinforcement learning, that interacts with the user and has an internal emotion model. The virtual character acts autonomously in a 3D virtual environment according to its internal state, and the user can teach it specific behaviors through repeated directions. Mouse gestures, recognized with an artificial neural network, are used to perceive these directions. An Emotion-Mood-Personality model is proposed to express emotions, and we examine how the character's emotions and learned behaviors change as it interacts with the user.
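
As a rough illustration of the learning half of such a character, here is a tabular Q-learning loop in Python. The states, actions, and reward are toy stand-ins (the paper derives rewards from the user's repeated mouse-gesture directions), so this is a sketch of the technique, not the authors' implementation.

```python
# Hedged sketch: a virtual character learning which action a user rewards.
import random
from collections import defaultdict

ACTIONS = ["approach", "play", "rest"]
Q = defaultdict(float)             # Q[(state, action)] -> estimated value
alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def choose(state: str) -> str:
    if random.random() < eps:
        return random.choice(ACTIONS)                 # explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Toy loop: the user rewards "play" whenever the character's mood is happy.
for _ in range(500):
    mood = random.choice(["happy", "sad"])
    action = choose(mood)
    reward = 1.0 if (mood == "happy" and action == "play") else 0.0
    update(mood, action, reward, mood)

print(max(ACTIONS, key=lambda a: Q[("happy", a)]))  # most likely "play"
```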

Adjusting Personality Types to Character with Affective Features in Artificial Emotion (캐릭터의 성격 유형별로 정서적 특징을 반영한 인공감정)

  • Yeo, Ji-Hye;Ham, Jun-Seok;Jo, Yu-Young;Lee, Kyoung-Mi;Ko, Il-Ju
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2011.01a
    • /
    • pp.133-136
    • /
    • 2011
  • This paper designs and applies an artificial emotion for characters that reflects the affective characteristics of each personality type, extending a previously studied artificial emotion model. The character's artificial emotion expresses per-personality affective characteristics as the duration and magnitude of an emotion, from its generation to its extinction. Accordingly, the duration of an emotion, the maximum magnitude felt from a single emotional experience, and the timing of emotional expression, divided into two stages, are set according to the MBTI personality types. The artificial emotion designed in this way is applied to an actual character, and its emotional expressions and changes are analyzed by personality type.
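
One plausible way to encode such per-personality affective parameters is sketched below; the parameter values and the triangular rise-and-decay envelope are invented for illustration (the paper assigns duration, peak magnitude, and two-stage expression timing per MBTI type).

```python
# Hedged sketch: emotion intensity envelopes parameterized by personality type.
from dataclasses import dataclass

@dataclass
class EmotionProfile:
    peak: float         # maximum intensity from a single emotional experience
    duration: float     # seconds from onset until the emotion fades out
    onset_delay: float  # when expression begins (the two-stage expression timing)

PROFILES = {
    "ENFP": EmotionProfile(peak=1.0, duration=4.0, onset_delay=0.2),  # invented values
    "ISTJ": EmotionProfile(peak=0.6, duration=8.0, onset_delay=1.0),  # invented values
}

def intensity(p: EmotionProfile, t: float) -> float:
    """Triangular envelope: zero before onset, rising to `peak`, decaying to zero."""
    if t < p.onset_delay or t > p.onset_delay + p.duration:
        return 0.0
    half, x = p.duration / 2, t - p.onset_delay
    return p.peak * (x / half if x <= half else (p.duration - x) / half)

for mbti, p in PROFILES.items():
    print(mbti, [round(intensity(p, float(t)), 2) for t in range(10)])
```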

An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia pacific journal of information systems
    • /
    • v.27 no.1
    • /
    • pp.38-53
    • /
    • 2017
  • As sensor and image processing technologies make it easy to collect information on users' behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among other cues. In the multimodal case combining facial and body expressions, most studies have used normal cameras, which generally produce only two-dimensional images and therefore provide a limited amount of information. In the present research, we propose an artificial neural network-based model that uses a high-definition webcam and a Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory. The results of this research should help broaden the use of emotion recognition models in advertisements, exhibitions, and interactive shows.
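
A minimal sketch of the fusion step such a model might use follows; the feature dimensions (e.g. 25 Kinect joints times 3 coordinates) and network sizes are assumptions, not the authors' architecture. Facial and bodily features are simply concatenated and classified by a small feed-forward network.

```python
# Hedged sketch: concatenating webcam facial features with Kinect body features.
import torch
import torch.nn as nn

face_dim, body_dim, n_emotions = 128, 75, 6  # 75 ~ 25 joints x 3 coords (assumed)
model = nn.Sequential(
    nn.Linear(face_dim + body_dim, 64),
    nn.ReLU(),
    nn.Linear(64, n_emotions),
)

face = torch.randn(1, face_dim)  # stand-in for extracted facial features
body = torch.randn(1, body_dim)  # stand-in for extracted skeletal features
logits = model(torch.cat([face, body], dim=1))
print(logits.argmax(dim=1))      # predicted emotion index
```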

A Review of Public Datasets for Keystroke-based Behavior Analysis

  • Kolmogortseva Karina;Soo-Hyung Kim;Aera Kim
    • Smart Media Journal
    • /
    • v.13 no.7
    • /
    • pp.18-26
    • /
    • 2024
  • One of the newest trends in AI is emotion recognition based on keystroke dynamics, which leverages biometric data to identify users and assess emotional states. This work compares four datasets frequently used in keystroke dynamics research: BB-MAS, Buffalo, Clarkson II, and CMU. The datasets contain different types of behavioral and physiological biometric data, gathered in settings ranging from controlled labs to real work environments. We consider the benefits and drawbacks of each dataset, paying particular attention to its suitability for tasks such as emotion recognition and behavioral analysis. Our findings show how user attributes, task conditions, and environmental factors affect typing behavior. This comparative analysis aims to guide future research and development of emotion detection and biometric applications, emphasizing the importance of collecting diverse data and the possibility of integrating keystroke dynamics with other biometric measurements.
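
The two features these datasets most commonly support can be computed as below; the event format is invented for illustration. Dwell time is how long a key is held, and flight time is the gap between releasing one key and pressing the next.

```python
# Hedged sketch: standard keystroke-dynamics features from timestamped events.
def keystroke_features(events):
    """events: list of (key, press_time, release_time), sorted by press_time."""
    dwell = [release - press for _, press, release in events]
    flight = [
        events[i + 1][1] - events[i][2]  # next press minus current release
        for i in range(len(events) - 1)
    ]
    return dwell, flight

events = [("h", 0.00, 0.09), ("i", 0.15, 0.22), ("!", 0.40, 0.51)]
dwell, flight = keystroke_features(events)
print([round(d, 2) for d in dwell])   # [0.09, 0.07, 0.11]
print([round(f, 2) for f in flight])  # [0.06, 0.18]
```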

A Study on Visual Emotion Classification using Balanced Data Augmentation (균형 잡힌 데이터 증강 기반 영상 감정 분류에 관한 연구)

  • Jeong, Chi Yoon;Kim, Mooseop
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.7
    • /
    • pp.880-889
    • /
    • 2021
  • Recognizing people's emotions from images is important in everyday life and is a popular research domain in computer vision. Visual emotion data show a severe class imbalance, with most samples concentrated in a few categories. Existing methods do not account for this imbalance and use accuracy as the performance metric, which is unsuitable for evaluating imbalanced datasets. We therefore propose a method for visual emotion recognition that uses balanced data augmentation to address the class imbalance. The proposed method generates a balanced dataset through random over-sampling and image transformations. It also uses the focal loss as its loss function, which mitigates class imbalance by down-weighting well-classified samples. EfficientNet, a state-of-the-art image classification model, is used to recognize visual emotion. We compare the performance of the proposed method with that of conventional methods on a public dataset. The experimental results show that the proposed method increases the F1 score by 40% compared with the method without data augmentation, mitigating class imbalance without loss of classification accuracy.
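
The focal loss the abstract refers to (Lin et al., 2017) down-weights well-classified samples by (1 - p_t)^gamma so that training concentrates on hard, usually minority-class, examples. A compact PyTorch rendering, with an invented two-sample example:

```python
# Hedged sketch of the focal loss: easy samples contribute almost nothing.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma: float = 2.0):
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-sample -log p_t
    p_t = torch.exp(-ce)                                     # recover p_t
    return ((1 - p_t) ** gamma * ce).mean()

logits = torch.tensor([[4.0, -2.0], [0.3, 0.2]])  # one easy, one hard sample
targets = torch.tensor([0, 0])
print(focal_loss(logits, targets))  # the hard sample dominates the loss
```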

Practical BioSignal analysis for Nausea detection in VR environment (가상현실환경에서 멀미 측정을 위한 생리신호 분석)

  • Park, M.J.;Kim, H.T.;Park, K.S.
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2002.11a
    • /
    • pp.267-268
    • /
    • 2002
  • We developed a system that detects nausea, which is caused by a disorder of the autonomic nervous system, using bio-signal analysis and an artificial neural network in a virtual reality environment. We used 16 bio-signals (9 EEG channels plus EOG, ECG, SKT, PPG, GSR, RSP, and EGG), each with its own analysis method, and estimated the nausea level with an artificial neural network.
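
A rough sketch of the estimation step, with synthetic data standing in for the 16 per-trial features (a real pipeline would first extract per-signal measures), might look like this in scikit-learn; nothing here reproduces the paper's actual network.

```python
# Hedged sketch: regressing a nausea level from 16 bio-signal features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                        # 200 trials x 16 features
y = 0.8 * X[:, 10] + rng.normal(scale=0.1, size=200)  # toy nausea target

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(round(model.score(X, y), 2))  # in-sample R^2 on the toy data
```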

Developing and Adopting an Artificial Emotion by Technological Approaching Based on Psychological Emotion Model (심리학 기반 감정 모델의 공학적 접근에 의한 인공감정의 제안과 적용)

  • Ham, Jun-Seok;Ryeo, Ji-Hye;Ko, Il-Ju
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2008.06a
    • /
    • pp.331-336
    • /
    • 2008
  • Even in the same situation, the emotions people feel differ from person to person, so there are limits to generalizing emotion and expressing the current emotional state quantitatively. To represent the current emotional state, this paper takes an engineering approach to a psychological model of human emotion and proposes a psychology-based, engineered artificial emotion. The proposed artificial emotion is constructed, on a psychological basis, to reflect the causal relationships of emotion generation and the differences in emotion that arise from personality, from the passage of time, from successive emotional stimuli, and from the interrelations among emotions. An emotion field is proposed to represent the current emotional state as a position, and the current state is expressed by its position on the emotion field and by a color determined by that position. To visualize changes of emotional state with the proposed artificial emotion, the emotional changes of Hamlet, the protagonist of Shakespeare's 'Hamlet', are visualized through the model.
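
One plausible reading of the emotion field, assumed purely for illustration, is a valence-arousal plane whose angle maps to hue and whose radius maps to saturation; the paper's actual construction may differ.

```python
# Hedged sketch: placing an emotional state on a 2D field and coloring it.
import colorsys
import math

def emotion_to_color(valence: float, arousal: float):
    """valence, arousal in [-1, 1] -> (x, y) position plus an RGB color."""
    angle = math.atan2(arousal, valence)           # direction within the field
    hue = (angle % (2 * math.pi)) / (2 * math.pi)  # map angle to hue in [0, 1)
    sat = min(1.0, math.hypot(valence, arousal))   # stronger emotion = more vivid
    r, g, b = colorsys.hsv_to_rgb(hue, sat, 1.0)
    return (valence, arousal), (round(r, 2), round(g, 2), round(b, 2))

print(emotion_to_color(0.8, 0.5))    # a joy-like region of the field
print(emotion_to_color(-0.7, -0.3))  # a sadness-like region of the field
```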

Happy Applicants Achieve More: Expressed Positive Emotions Captured Using an AI Interview Predict Performances

  • Shin, Ji-eun;Lee, Hyeonju
    • Science of Emotion and Sensibility
    • /
    • v.24 no.2
    • /
    • pp.75-80
    • /
    • 2021
  • Do happy applicants achieve more? Although it is well established that happiness predicts desirable work-related outcomes, previous findings were primarily obtained in social settings. In this study, we extended the scope of the "happiness premium" effect to the artificial intelligence (AI) context. Specifically, we examined whether an applicant's happiness signal captured by an AI system effectively predicts his or her objective performance. Data from 3,609 job applicants showed that verbally expressed happiness (the frequency of positive words) during an AI interview predicts cognitive task scores, and this tendency was more pronounced among women than men. However, facially expressed happiness (the frequency of smiling) recorded by the AI could not predict performance. Thus, when AI is involved in a hiring process, verbal rather than facial cues of happiness provide the more valid marker of applicants' hiring chances.
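
The verbal signal is essentially a normalized positive-word count; a toy version follows, with an invented English lexicon (the study would rely on a validated sentiment lexicon applied to interview transcripts).

```python
# Hedged sketch: frequency of positive words in an interview transcript.
POSITIVE_WORDS = {"happy", "glad", "enjoy", "excited", "great", "love"}

def positive_word_rate(transcript: str) -> float:
    tokens = transcript.lower().split()
    if not tokens:
        return 0.0
    hits = sum(t.strip(".,!?;") in POSITIVE_WORDS for t in tokens)
    return hits / len(tokens)

print(positive_word_rate("I am happy and excited to join; I love this work."))
# -> 3 positive words out of 11 tokens, about 0.27
```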