• Title/Summary/Keyword: artificial emotion

Search results: 159

Weighted Soft Voting Classification for Emotion Recognition from Facial Expressions on Image Sequences (이미지 시퀀스 얼굴표정 기반 감정인식을 위한 가중 소프트 투표 분류 방법)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1175-1186
    • /
    • 2017
  • Human emotion recognition is one of the promising applications in the era of artificial super intelligence. Thus far, facial expression traits are considered the most widely used information cues for automated emotion recognition. This paper proposes a novel facial expression recognition (FER) method that works well for recognizing emotion from image sequences. To this end, we develop the so-called weighted soft voting classification (WSVC) algorithm. In the proposed WSVC, a number of classifiers are first constructed using multiple, distinct feature representations. Next, these classifiers generate a recognition result (namely, a soft vote) for each face image within a face sequence, yielding multiple soft voting outputs. Finally, these soft voting outputs are combined through a weighted combination to decide the emotion class (e.g., anger) of the given face sequence. The combination weights are determined by measuring the quality of each face image, namely its "peak expression intensity" and "frontal-pose degree". To test the proposed WSVC, the CK+ FER database was used to perform extensive comparative experiments. The feasibility of the WSVC algorithm was demonstrated by comparison with recently developed FER algorithms.
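
The abstract does not give the exact weight formula, so the sketch below simply assumes per-frame quality weights (e.g., derived from peak expression intensity and frontal-pose degree) are supplied, normalizes them, and fuses the per-frame class probabilities:

```python
def weighted_soft_voting(frame_probs, frame_weights):
    """Fuse per-frame soft-voting outputs with quality weights.

    frame_probs:   list of per-frame class-probability lists (one per face image)
    frame_weights: one quality weight per frame (assumed non-negative)
    Returns (predicted_class_index, fused_probabilities).
    """
    total = sum(frame_weights)
    weights = [w / total for w in frame_weights]  # normalize to sum to 1
    n_classes = len(frame_probs[0])
    fused = [0.0] * n_classes
    for probs, w in zip(frame_probs, weights):
        for c in range(n_classes):
            fused[c] += w * probs[c]
    return fused.index(max(fused)), fused
```

For example, a high-quality frame (weight 3.0) favoring class 1 outvotes a low-quality frame (weight 1.0) favoring class 0.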

LSTM Hyperparameter Optimization for an EEG-Based Efficient Emotion Classification in BCI (BCI에서 EEG 기반 효율적인 감정 분류를 위한 LSTM 하이퍼파라미터 최적화)

  • Aliyu, Ibrahim;Mahmood, Raja Majid;Lim, Chang-Gyoon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.6
    • /
    • pp.1171-1180
    • /
    • 2019
  • Emotion is a psycho-physiological process that plays an important role in human interactions. Affective computing is centered on the development of human-aware artificial intelligence that can understand and regulate emotions. This field of study is also critical because mental disorders such as depression, autism, attention deficit hyperactivity disorder, and game addiction are associated with emotion. Despite efforts in emotion recognition and detection from nonstationary signals, detecting emotions from abnormal EEG signals requires sophisticated learning algorithms because a high level of abstraction is needed. In this paper, we investigated LSTM hyperparameters for optimal emotion classification from EEG. Results of several experiments are presented, from which an optimal LSTM hyperparameter configuration was obtained.
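
The abstract does not list the searched ranges, so the values below are hypothetical; a minimal grid-search harness of the kind used for such LSTM hyperparameter tuning might look like:

```python
from itertools import product

# Hypothetical search space; the paper's actual ranges are not given in the abstract.
SEARCH_SPACE = {
    "hidden_units": [32, 64, 128],
    "learning_rate": [1e-2, 1e-3],
    "batch_size": [32, 64],
    "dropout": [0.2, 0.5],
}

def grid_search(evaluate):
    """Score every hyperparameter combination and return the best one.

    `evaluate` maps a config dict to a score, e.g., the validation
    accuracy of an LSTM trained with that configuration.
    """
    best_score, best_cfg = float("-inf"), None
    keys = list(SEARCH_SPACE)
    for values in product(*(SEARCH_SPACE[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)
        if score > best_score:
            best_score, best_cfg = score, cfg
    return best_cfg, best_score
```

In practice `evaluate` would train and validate the EEG classifier; exhaustive search is feasible here because the space is small.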

Effects of LED on Emotion-Like Feedback of a Single-Eyed Spherical Robot

  • Onchi, Eiji;Cornet, Natanya;Lee, SeungHee
    • Science of Emotion and Sensibility
    • /
    • v.24 no.3
    • /
    • pp.115-124
    • /
    • 2021
  • Non-verbal communication is important in human interaction. It provides a layer of information that complements the message being transmitted, and this type of information is not limited to human speakers. In human-robot communication, increasing the animacy of the robotic agent by using non-verbal cues can aid the expression of abstract concepts such as emotions. Considering the physical limitations of artificial agents, robots can use light and movement to express equivalent emotional feedback. This study analyzes the effects of LED and motion animation of a spherical robot on the emotion expressed by the robot. A within-subjects experiment was conducted at the University of Tsukuba in which participants rated 28 video samples of a robot interacting with a person; the robot displayed different motions with and without light animations. The results indicated that adding LED animations changes the emotional impression of the robot along the valence, arousal, and dominance dimensions. Furthermore, people associated various situations with the robot's behavior. These stimuli can be used to modulate the intensity of the expressed emotion and enhance the interaction experience. This paper opens the possibility of designing more affective robots in the future using simple feedback.

Unraveling Emotions in Speech: Deep Neural Networks for Emotion Recognition (음성을 통한 감정 해석: 감정 인식을 위한 딥 뉴럴 네트워크 예비 연구)

  • Edward Dwijayanto Cahyadi;Mi-Hwa Song
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.05a
    • /
    • pp.411-412
    • /
    • 2023
  • Speech emotion recognition (SER) is one of the interesting topics in the machine learning field, and developing it offers numerous benefits. An SER system can be built by combining a convolutional neural network (CNN) with the Long Short-Term Memory (LSTM) method as part of an artificial intelligence pipeline.
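
The proceedings abstract gives no architecture details; as a toy illustration of the convolutional front-end such a CNN+LSTM SER model applies to audio features before the LSTM models the temporal sequence, here is a pure-Python valid 1-D convolution with max pooling (the kernel values are placeholders, not learned weights):

```python
def conv1d(signal, kernel):
    """Valid 1-D convolution: slide a kernel over a feature sequence."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool(feature_map, size=2):
    """Downsample by keeping the strongest activation in each window."""
    return [max(feature_map[i:i + size])
            for i in range(0, len(feature_map) - size + 1, size)]
```

In a real SER system the kernels are learned, the input is typically a spectrogram or MFCC sequence, and the pooled feature maps are fed frame-by-frame into the LSTM.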

Deep Reinforcement Learning-Based Cooperative Robot Using Facial Feedback (표정 피드백을 이용한 딥강화학습 기반 협력로봇 개발)

  • Jeon, Haein;Kang, Jeonghun;Kang, Bo-Yeong
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.3
    • /
    • pp.264-272
    • /
    • 2022
  • Human-robot cooperative tasks are increasingly required in our daily life with the development of robotics and artificial intelligence technology. Interactive reinforcement learning strategies suggest that robots learn a task by receiving feedback from an experienced human trainer during the training process. However, most previous studies on interactive reinforcement learning have required an extra feedback input device, such as a mouse or keyboard, in addition to the robot itself, and the scenarios in which a robot can interactively learn a task with a human have been limited to virtual environments. To address these limitations, this paper studies training strategies for a robot that learns table-balancing tasks interactively using deep reinforcement learning with feedback from human facial expressions. In the proposed system, the robot learns a cooperative table-balancing task using Deep Q-Network (DQN), a deep reinforcement learning technique, with human facial emotion expression feedback. In the experiment, the proposed system achieved an optimal policy convergence rate of up to 83.3% in training and a success rate of up to 91.6% in testing, showing improved performance compared to the model without facial expression feedback.
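
The paper's actual reward design is not specified in the abstract. The following tabular Q-learning sketch (a stand-in for the paper's DQN) illustrates the general idea of shaping the reward with recognized facial feedback; the `FACE_REWARD` mapping is hypothetical:

```python
# Hypothetical mapping from a recognized facial expression to a shaping reward.
FACE_REWARD = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}

def q_update(Q, state, action, env_reward, face, next_state,
             alpha=0.1, gamma=0.9):
    """One Q-learning step whose reward is shaped by facial feedback.

    Q is a nested dict: Q[state][action] -> value. Returns the updated value.
    """
    reward = env_reward + FACE_REWARD[face]          # shape with human feedback
    best_next = max(Q[next_state].values()) if next_state in Q else 0.0
    Q.setdefault(state, {}).setdefault(action, 0.0)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q[state][action]
```

A DQN replaces the table with a neural network, but the feedback-shaped reward enters the temporal-difference target in the same way.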

Analysis of Users' Emotions on Lighting Effect of Artificial Intelligence Devices (인공지능 디바이스의 조명효과에 대한 사용자의 감정 평가 분석)

  • Hyeon, Yuna;Pan, Young-hwan;Yoo, Hoon-Sik
    • Science of Emotion and Sensibility
    • /
    • v.22 no.3
    • /
    • pp.35-46
    • /
    • 2019
  • Artificial intelligence (AI) technology has been evolving to recognize and learn the language, voice tone, and facial expressions of users so that it can respond to users' emotions in various contexts. Many AI-based services in which communication with users is particularly important provide emotional interaction. However, research on nonverbal interaction as a means of expressing emotion in AI systems is still insufficient. We studied the effect of lighting on users' emotional interaction with an AI device, focusing on color and flickering motion. The AI device used in this study expresses emotions with six colors of light (red, yellow, green, blue, purple, and white) and with a three-level flickering effect (high, middle, and low velocity). We studied the responses of 50 men and women in their 20s and 30s to the emotions expressed by the light colors and flickering effects of the device. We found that each light color represented an emotion largely similar to the emotional image reported in a previous color-sensibility study. The rate of flickering produced changes in emotional arousal and balance. The change in arousal patterns was of similar intensity across all colors, whereas changes in balance patterns were somewhat related to the emotional image in the previous color-sensibility study, although the colors differed. As AI systems and devices become more diverse, our findings are expected to contribute to designing users' emotional interactions with AI devices through lighting.

Development of Elementary School AI Education Contents using Entry Text Model Learning (엔트리 텍스트 모델 학습을 활용한 초등 인공지능 교육 내용 개발)

  • Kim, Byungjo;Kim, Hyenbae
    • Journal of The Korean Association of Information Education
    • /
    • v.26 no.1
    • /
    • pp.65-73
    • /
    • 2022
  • In this study, educational content for the artificial intelligence education of elementary school students is developed using Entry text model learning and applied in actual classes. Based on the elementary and secondary artificial intelligence content table, the achievement standards of practical software education and artificial intelligence education are reconstructed. Among text, images, and sounds capable of machine learning, "production of emotion recognition programs using text model learning" is selected as the educational content, which elementary school students can easily understand while keeping data preparation time short. Entry artificial intelligence is selected as the education platform, and content in which students create emotion recognition programs using text model learning is developed and applied in actual elementary school classes. As a result of the classroom application, students showed positive responses to and interest in the Entry AI class. As a follow-up study, quantitative research on the effectiveness of such classes for elementary school students is suggested.

KE-T5-Based Text Emotion Classification in Korean Conversations (KE-T5 기반 한국어 대화 문장 감정 분류)

  • Lim, Yeongbeom;Kim, San;Jang, Jin Yea;Shin, Saim;Jung, Minyoung
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.496-497
    • /
    • 2021
  • Emotion classification is an important key to distinguishing people's ways of thinking and patterns of behavior, and various studies related to emotion analysis have been conducted over the past several decades. As one way to improve the quality and accuracy of emotion classification, the use of multi-labeled data sets instead of single labeling has been proposed. In this paper, based on the KE-T5 model, a T5 model trained on Korean and English corpora, we compare the emotion classification performance on Korean utterance data under single labeling and multi-labeling, and confirm that the multi-label data set yields 23.3% higher accuracy than the single-label data set.
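
As an illustration of the multi-label evaluation such a comparison rests on (the paper's exact metric may differ), example-level subset accuracy over predicted emotion tags can be computed as:

```python
def multilabel_accuracy(preds, golds, threshold=0.5):
    """Example-level subset accuracy for multi-label emotion tags.

    preds: list of dicts mapping emotion label -> score per utterance
    golds: list of gold label sets per utterance
    A prediction counts as correct only if the thresholded label set
    matches the gold set exactly.
    """
    correct = 0
    for scores, gold in zip(preds, golds):
        predicted = {label for label, s in scores.items() if s >= threshold}
        correct += predicted == gold
    return correct / len(golds)
```

This is the strictest multi-label metric; per-label F1 is a common, more forgiving alternative.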


Convolutional Neural Network Model Using Data Augmentation for Emotion AI-based Recommendation Systems

  • Ho-yeon Park;Kyoung-jae Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.12
    • /
    • pp.57-66
    • /
    • 2023
  • In this study, we propose a novel research framework for a recommendation system that can estimate the user's emotional state and reflect it in the recommendation process by applying deep learning techniques and emotion AI (artificial intelligence). To this end, we build an emotion classification model that classifies each of seven emotions (anger, disgust, fear, happiness, sadness, surprise, and neutral) and propose a model that reflects these results in the recommendation process. However, in typical emotion classification data the distribution ratios of the labels differ widely, so generalized classification results can be difficult to obtain. Because the number of samples for emotions such as disgust is often insufficient in emotion image data, we correct for this through augmentation. Finally, we propose a method to incorporate the emotion prediction model, trained on image-augmented data, into recommendation systems.
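
The abstract does not name the specific augmentation operations. One common, label-preserving choice for face images is horizontal flipping; the sketch below (not the paper's pipeline) uses it to oversample minority emotion classes such as disgust until the label distribution is balanced:

```python
import random

def hflip(img):
    """Horizontal flip: a label-preserving augmentation for face images
    represented as row-major nested lists of pixels."""
    return [row[::-1] for row in img]

def balance_classes(data, target=None, rng=random.Random(0)):
    """Oversample minority emotion classes with flipped copies until every
    class reaches the size of the largest (or a given target) class.

    data: list of (image, label) pairs. Returns a balanced list of pairs.
    """
    by_label = {}
    for img, label in data:
        by_label.setdefault(label, []).append(img)
    target = target or max(len(v) for v in by_label.values())
    for label, imgs in by_label.items():
        while len(imgs) < target:
            imgs.append(hflip(rng.choice(imgs)))
    return [(img, label) for label, imgs in by_label.items() for img in imgs]
```

Real pipelines usually combine several transforms (small rotations, shifts, brightness jitter); flipping alone is shown here for brevity.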

A Study on Emotion Classification using 4-Channel EEG Signals (4채널 뇌파 신호를 이용한 감정 분류에 관한 연구)

  • Kim, Dong-Jun;Lee, Hyun-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.2
    • /
    • pp.23-28
    • /
    • 2009
  • This study describes an emotion classification method using two different feature parameters of four-channel EEG signals. One parameter set is the linear prediction coefficients based on AR modelling; the other is the cross-correlation coefficients of the θ, α, and β frequency bands of the FFT spectra. Using the linear prediction coefficients and the frequency cross-correlation coefficients, an emotion classification test for four emotions (anger, sadness, joy, and relaxation) is performed with an artificial neural network. The results showed that the linear prediction coefficients produced better emotion classification than the cross-correlation coefficients of the FFT spectra.
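
The AR-modelling feature extraction can be sketched with the standard Levinson-Durbin recursion (a generic implementation, not the paper's code); each EEG channel's linear prediction coefficients then serve as input features for the neural network:

```python
def autocorr(x, max_lag):
    """Biased autocorrelation of a signal up to max_lag."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k))
            for k in range(max_lag + 1)]

def lpc(x, order):
    """Linear prediction (AR) coefficients via the Levinson-Durbin
    recursion, so that x[n] is approximated by sum_j a[j] * x[n-1-j]."""
    r = autocorr(x, order)
    a = [0.0] * order
    err = r[0]
    for i in range(order):
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / err                       # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a = new_a
        err *= (1 - k * k)                  # remaining prediction error
    return a
```

For a noiseless AR(1) sequence decaying by a factor of 0.5, the recovered first coefficient is close to 0.5, as expected.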
