• Title/Summary/Keyword: Emotion Detection

Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin; Park, Sangwook; Lee, Yongkwi; Han, Mikyong; Jang, Jong-Hyun
    • ETRI Journal / v.37 no.6 / pp.1199-1210 / 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received considerable attention over the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework, where the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components: multiple AU detection, AU detection fusion, and AU-to-emotion mapping. Experimental results on two real-world face databases demonstrate improved performance over the previous two-phase method using a single AU detector, in terms of both AU detection accuracy and correct emotion recognition rate.
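
To make the fusion step concrete, here is a minimal sketch of majority-vote AU fusion followed by a rule-based AU-to-emotion lookup. The AU set, the detector outputs, and the EMFACS-style mapping table are illustrative assumptions, not the paper's actual components:

```python
import numpy as np

# Hypothetical mapping from AU combinations to basic emotions
# (a simplified EMFACS-style table; the paper's actual mapping may differ).
AU_TO_EMOTION = {
    frozenset({6, 12}): "happiness",
    frozenset({1, 4, 15}): "sadness",
    frozenset({1, 2, 5, 26}): "surprise",
    frozenset({4, 5, 7, 23}): "anger",
}

def fuse_au_decisions(detector_outputs: np.ndarray) -> np.ndarray:
    """Majority-vote fusion over multiple AU detectors.

    detector_outputs: (n_detectors, n_aus) binary matrix, one row per detector.
    Returns an (n_aus,) binary vector of fused AU decisions.
    """
    votes = detector_outputs.sum(axis=0)
    return (votes > detector_outputs.shape[0] / 2).astype(int)

def map_aus_to_emotion(fused: np.ndarray, au_ids: list) -> str:
    """Pick the emotion whose AU set best overlaps the detected AUs."""
    detected = {au for au, on in zip(au_ids, fused) if on}
    best = max(AU_TO_EMOTION, key=lambda s: len(s & detected) / len(s))
    return AU_TO_EMOTION[best]

# Three detectors voting on AUs 1, 2, 4, 5, 6, 7, 12, 15, 23, 26
au_ids = [1, 2, 4, 5, 6, 7, 12, 15, 23, 26]
outputs = np.array([
    [0, 0, 0, 0, 1, 0, 1, 0, 0, 0],   # detector A sees AU6 + AU12
    [0, 0, 0, 0, 1, 0, 1, 0, 0, 0],   # detector B agrees
    [0, 0, 1, 0, 0, 0, 1, 0, 0, 0],   # detector C partially disagrees
])
fused = fuse_au_decisions(outputs)
print(map_aus_to_emotion(fused, au_ids))  # -> happiness
```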

Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan; Joo, Young-Hoon; Park, Jin-Bae
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2005.06a / pp.2373-2378 / 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: an image processing stage, a facial feature extraction stage, and an emotion detection stage. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and histogram analysis. In the facial feature extraction stage, the features for emotion detection are extracted from the facial components. In the emotion detection stage, a fuzzy classifier is adopted to recognize emotion from the extracted features. Experimental results show that the proposed algorithm detects emotion well.
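
The fuzzy-classifier stage can be pictured with triangular membership functions and a small rule base. The two features, the membership parameters, and the rules below are assumptions for illustration, not the paper's actual design:

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_emotion(mouth_openness: float, eyebrow_raise: float) -> str:
    """Toy fuzzy rules over two hypothetical facial features: each rule's
    firing strength is the min of its antecedents' membership degrees."""
    rules = {
        "surprise": min(tri(mouth_openness, 0.5, 1.0, 1.5),
                        tri(eyebrow_raise, 0.5, 1.0, 1.5)),
        "happiness": min(tri(mouth_openness, 0.2, 0.6, 1.0),
                         tri(eyebrow_raise, 0.0, 0.3, 0.8)),
        "neutral": min(tri(mouth_openness, -0.5, 0.0, 0.5),
                       tri(eyebrow_raise, -0.5, 0.0, 0.5)),
    }
    return max(rules, key=rules.get)

print(classify_emotion(mouth_openness=0.9, eyebrow_raise=0.95))  # surprise
```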

Identification and Detection of Emotion Using Probabilistic Output SVM (확률출력 SVM을 이용한 감정식별 및 감정검출)

  • Cho, Hoon-Young; Jung, Gue-Jun
    • The Journal of the Acoustical Society of Korea / v.25 no.8 / pp.375-382 / 2006
  • This paper addresses how to identify emotional information and how to detect a specific emotion from speech signals. For the emotion identification and detection tasks, we use long-term acoustic feature parameters and select the optimal parameters using a feature selection technique based on the F-score. We transform the conventional SVM into a probabilistic-output SVM for our emotion identification and detection system. We propose three approximation methods for the log-likelihoods in a hypothesis test and compare their performance. Experimental results on the SUSAS database showed the effectiveness of both the feature selection and the probabilistic-output SVM in the emotion identification task. The proposed methods detected the anger emotion with 91.3% accuracy.
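
Both ingredients are easy to sketch: the F-score ranks each feature by between-class separation over within-class variance, and scikit-learn's `SVC(probability=True)` provides Platt-style probabilistic outputs. A minimal sketch; the synthetic data and the top-5 cutoff are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def fisher_f_score(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Per-feature F-score for a binary task: squared distance of each
    class mean from the overall mean, over the within-class variances."""
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / (den + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))           # 200 utterances, 20 acoustic features
y = (X[:, 3] + X[:, 7] > 0).astype(int)  # label driven by two features

# Keep the top-k features by F-score, then fit an SVM with
# probabilistic (Platt-scaled) outputs.
top = np.argsort(fisher_f_score(X, y))[::-1][:5]
clf = SVC(kernel="rbf", probability=True).fit(X[:, top], y)
print(clf.predict_proba(X[:5][:, top]))  # class posterior estimates
```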

Emotion Detecting Method Based on Various Attributes of Human Voice

  • MIYAJI Yutaka; TOMIYAMA Ken
    • Science of Emotion and Sensibility / v.8 no.1 / pp.1-7 / 2005
  • This paper reports several emotion detecting methods based on various attributes of the human voice, developed at our Engineering Systems Laboratory. Note that all of the proposed methods use only prosodic information in the voice for emotion recognition; semantic information is not used. Different types of neural networks (NNs) are used for detection, depending on the type of voice parameters. Earlier approaches used linear prediction coefficients (LPCs) and time series data of pitch separately, but later studies combined them. The proposed methods are explained first, and then evaluation experiments of the individual methods and their emotion detection performance are presented and compared.
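
A rough sketch of the feature side, assuming librosa for signal analysis (an analogue of, not a reproduction of, the paper's pipeline): LPC coefficients capture the spectral envelope, pitch statistics summarize the prosodic contour, and a small NN classifies the combined vector:

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def prosodic_features(y: np.ndarray, sr: int) -> np.ndarray:
    """Combine LPC coefficients with pitch-contour statistics,
    mirroring the idea of fusing both cues (details assumed)."""
    lpc = librosa.lpc(y, order=12)                 # spectral envelope
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)  # pitch time series
    pitch_stats = np.array([f0.mean(), f0.std(), f0.max() - f0.min()])
    return np.concatenate([lpc[1:], pitch_stats])  # drop the leading 1.0

sr = 16000
tone = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # 1 s, 220 Hz
print(prosodic_features(tone, sr).shape)  # (15,) = 12 LPC + 3 pitch stats

# Train a small NN on pre-extracted feature vectors (synthetic here).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 15))
labels = rng.integers(0, 4, size=100)  # 4 emotion classes, demo labels
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)
print(nn.predict(X[:3]))
```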

Jointly Image Topic and Emotion Detection using Multi-Modal Hierarchical Latent Dirichlet Allocation

  • Ding, Wanying; Zhu, Junhuan; Guo, Lifan; Hu, Xiaohua; Luo, Jiebo; Wang, Haohong
    • Journal of Multimedia Information System / v.1 no.1 / pp.55-67 / 2014
  • Image topic and emotion analysis is an important component of online image retrieval, which has become very popular in the rapidly growing social media community. However, due to the gaps between images and texts, there is very limited work in the literature that detects an image's topics and emotions in a unified framework, even though topics and emotions are two levels of semantics that often work together to comprehensively describe an image. In this work, a unified model, the Joint Topic/Emotion Multi-Modal Hierarchical Latent Dirichlet Allocation (JTE-MMHLDA) model, is proposed; it extends the previous LDA, mmLDA, and JST models to capture topic and emotion information from heterogeneous data at the same time. Specifically, a two-level graphical model is built to share topics and emotions across the whole document collection. Experimental results on a Flickr dataset indicate that the proposed model efficiently discovers images' topics and emotions, significantly outperforming the text-only system by 4.4% and the vision-only system by 18.1% in topic detection, and outperforming the text-only system by 7.1% and the vision-only system by 39.7% in emotion detection.
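
The collapsed Gibbs sampler for plain LDA is the building block that JTE-MMHLDA extends with a second (emotion) level and a shared text/visual-word vocabulary. A minimal sketch of that building block on a toy corpus (standard LDA, not the paper's full inference):

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_vocab, iters=200, alpha=0.1, beta=0.01):
    """Collapsed Gibbs sampling for plain LDA: resample each token's topic
    from p(z=k) proportional to (n_dk + alpha)(n_kw + beta)/(n_k + V*beta)."""
    rng = np.random.default_rng(0)
    ndk = np.zeros((len(docs), n_topics))   # doc-topic counts
    nkw = np.zeros((n_topics, n_vocab))     # topic-word counts
    nk = np.zeros(n_topics)                 # topic totals
    z = [rng.integers(0, n_topics, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk / ndk.sum(1, keepdims=True)  # doc-topic proportions

# Tokens 0-3 vs 4-7 form two artificial "topics"; in the multimodal setting,
# text words and quantized visual words would share one vocabulary like this.
docs = [[0, 1, 2, 3, 0, 1], [4, 5, 6, 7, 4, 5], [0, 1, 4, 5, 2, 6]]
print(lda_gibbs(docs, n_topics=2, n_vocab=8).round(2))
```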

Sound-based Emotion Estimation and Growing HRI System for an Edutainment Robot (에듀테인먼트 로봇을 위한 소리기반 사용자 감성추정과 성장형 감성 HRI시스템)

  • Kim, Jong-Cheol; Park, Kui-Hong
    • The Journal of Korea Robotics Society / v.5 no.1 / pp.7-13 / 2010
  • This paper presents a sound-based emotion estimation method and a growing HRI (human-robot interaction) system for the Mon-E robot. The emotion estimation method uses musical elements based on the laws of harmony and counterpoint: emotion is estimated from sound using musical elements that include chord, tempo, volume, harmonics, and compass. The estimated emotions cover twelve standard emotions: Ekman's six emotions (anger, disgust, fear, happiness, sadness, surprise) and their six opposites (calmness, love, confidence, unhappiness, gladness, comfortableness). The growing HRI system analyzes sensing information, the estimated emotion, and the service log of the edutainment robot, and commands the robot's behavior accordingly. The growing HRI system consists of an emotion client and an emotion server. The emotion client estimates the emotion from sound; it not only transmits the estimated emotion and sensing information to the emotion server but also delivers responses from the emotion server to the robot's main program. The emotion server not only updates the HRI rule table using the information transmitted from the emotion client but also transmits the HRI response back to the emotion client. The proposed system was applied to a Mon-E robot and can provide friendly HRI services to users.
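
The mapping from musical elements to emotions can be pictured as a rule table. The thresholds and rules below are illustrative assumptions, not the paper's actual law-of-harmony rules:

```python
# A toy rule table in the spirit of the paper's harmony-based mapping:
# tempo, volume, and mode jointly select one of the 12 target emotions.
def estimate_emotion(tempo_bpm: float, volume: float, is_major: bool) -> str:
    if is_major:
        if tempo_bpm > 120:
            return "happiness" if volume > 0.6 else "gladness"
        return "comfortableness" if volume < 0.4 else "confidence"
    if tempo_bpm > 120:
        return "anger" if volume > 0.6 else "fear"
    return "sadness" if volume < 0.4 else "unhappiness"

print(estimate_emotion(tempo_bpm=140, volume=0.8, is_major=False))  # anger
```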

Human Emotion Recognition based on Variance of Facial Features (얼굴 특징 변화에 따른 휴먼 감성 인식)

  • Lee, Yong-Hwan; Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.16 no.4 / pp.79-85 / 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize a human's emotion is through facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas with a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion characteristics using the Hausdorff distance. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying and distinguishing facial expressions and emotions.
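
Step (3) is straightforward to sketch with SciPy: sample each Bezier curve into a point set and pick the emotion template with the smallest symmetric Hausdorff distance. The mouth-curve templates and control points below are hypothetical:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two 2-D point sets
    (e.g., sampled Bezier curves of the eyes and mouth)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def bezier(ctrl: np.ndarray, n: int = 50) -> np.ndarray:
    """Sample a quadratic Bezier curve from 3 control points."""
    t = np.linspace(0, 1, n)[:, None]
    return (1 - t) ** 2 * ctrl[0] + 2 * (1 - t) * t * ctrl[1] + t ** 2 * ctrl[2]

# Hypothetical mouth-curve templates: upturned (happy) vs downturned (sad).
templates = {
    "happiness": bezier(np.array([[0.0, 0.2], [0.5, -0.1], [1.0, 0.2]])),
    "sadness": bezier(np.array([[0.0, 0.0], [0.5, 0.3], [1.0, 0.0]])),
}
observed = bezier(np.array([[0.0, 0.18], [0.5, -0.05], [1.0, 0.19]]))
print(min(templates, key=lambda e: hausdorff(observed, templates[e])))
# -> happiness
```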

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun; Yang, Seong-Hun; Oh, Seung-Jin; Kang, Jinbeom
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.89-106 / 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Due to the lack of skilled manpower to analyze videos in many industries, machine learning and artificial intelligence are actively used to assist human analysts. In this situation, the demand for various computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has also increased rapidly. However, object detection and tracking faces many difficulties that degrade performance, such as an object re-appearing after leaving the recording location, and occlusion. Accordingly, action and emotion detection models built on object detection and tracking also have difficulty extracting data for each object. In addition, deep learning architectures consisting of various models suffer from performance degradation due to bottlenecks and a lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition as the emotion recognition service. The proposed model uses single-linkage hierarchical clustering based Re-ID and processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model using simple metrics, offers near real-time processing performance, and prevents tracking failures due to object departure and re-emergence, occlusion, and so on. By continuously linking the action and facial emotion detection results of each object to the same identity, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box of each object detected by the tracking model in each frame, and applies single-linkage hierarchical clustering against past frames using the extracted feature vectors to identify objects whose tracks were lost. Through this process, an object that re-appears after occlusion or after leaving the recording location can be re-tracked as the same identity, so the action and facial emotion detection results of a newly recognized object can be linked to those of the object that appeared in the past. To improve processing performance, we introduce a per-object Bounding Box Queue and a Feature Queue that reduce RAM requirements while maximizing GPU memory throughput. We also introduce the IoF (Intersection over Face) algorithm, which links facial emotions recognized through AWS Rekognition with object tracking information. The academic significance of this study is that, through the proposed processing techniques, the two-stage re-identification model achieves real-time performance even in a high-cost setting that performs action and facial emotion detection, without sacrificing accuracy by resorting to simple metrics. The practical implication is that industrial fields that require action and facial emotion detection, but struggle with object tracking failures, can analyze videos effectively through the proposed model.
    The proposed model, with its high re-tracking accuracy and processing performance, can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where the integration of tracking information and extracted metadata creates great industrial and business value. In the future, to measure object tracking performance more precisely, experiments should be conducted on the MOT Challenge dataset, which is used by many international conferences. We will also investigate the cases that the IoF algorithm cannot handle in order to develop a complementary algorithm, and we plan to conduct additional research applying this model to datasets from various fields related to intelligent video analysis.
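
Two of the central ideas are compact enough to sketch: single-linkage matching of a new track's feature vector against past identity clusters, and the IoF score that attaches a face detection to a tracked body box. Both are minimal sketches under assumed data formats; the paper's exact formulations may differ:

```python
import numpy as np

def single_linkage_match(query, clusters, threshold):
    """Assign a new track's feature vector to a past identity using the
    single-linkage criterion: the distance to a cluster is the minimum
    distance to any of its members. Returns None if nothing is close enough."""
    best_id, best_d = None, threshold
    for obj_id, feats in clusters.items():
        d = min(np.linalg.norm(query - f) for f in feats)
        if d < best_d:
            best_id, best_d = obj_id, d
    return best_id

def iof(face_box, obj_box):
    """Intersection over Face: intersection area divided by the face-box
    area, used to attach a face (emotion) detection to a tracked body box.
    Boxes are (x1, y1, x2, y2)."""
    x1 = max(face_box[0], obj_box[0]); y1 = max(face_box[1], obj_box[1])
    x2 = min(face_box[2], obj_box[2]); y2 = min(face_box[3], obj_box[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    face_area = (face_box[2] - face_box[0]) * (face_box[3] - face_box[1])
    return inter / face_area if face_area else 0.0

# A lost track re-appears: match its Re-ID embedding to past identities.
clusters = {7: [np.array([0.1, 0.9]), np.array([0.2, 0.8])],
            9: [np.array([0.9, 0.1])]}
print(single_linkage_match(np.array([0.15, 0.85]), clusters, threshold=0.3))  # 7
print(iof((40, 20, 80, 60), (30, 10, 120, 200)))  # face inside body box -> 1.0
```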

A study on the enhancement of emotion recognition through facial expression detection in user's tendency (사용자의 성향 기반의 얼굴 표정을 통한 감정 인식률 향상을 위한 연구)

  • Lee, Jong-Sik; Shin, Dong-Hee
    • Science of Emotion and Sensibility / v.17 no.1 / pp.53-62 / 2014
  • Despite the huge potential of practical applications of emotion recognition technologies, their enhancement still remains a challenge, mainly due to the difficulty of recognizing emotion. Although not perfectly, human emotions can be recognized from human images and sounds, and emotion recognition has been researched extensively in image-based studies, sound-based studies, and studies combining image and sound. Emotion recognition through facial expression detection is especially effective, as emotions are primarily expressed in the human face. However, differences in user environments and users' familiarity with the technologies may cause significant disparities and errors. To enhance the accuracy of real-time emotion recognition, it is crucial to have a mechanism for understanding and analyzing users' personality traits that contributes to improved emotion recognition. This study focuses on analyzing users' personality traits and applying them in the emotion recognition system to reduce errors in emotion recognition through facial expression detection and improve the accuracy of the results. In particular, the study offers a practical solution for users with subtle facial expressions or a low degree of emotional expressiveness by providing an enhanced emotion recognition function.
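
One plausible reading of the proposed enhancement (an assumption, since the abstract does not give the formula) is a per-user rescaling of raw per-emotion scores by the user's habitual expression intensity, so that subtle expressors still reach decision thresholds:

```python
import numpy as np

def calibrate(scores: np.ndarray, user_baseline: np.ndarray) -> np.ndarray:
    """Rescale raw per-emotion scores by a user's habitual expression
    intensity. This is one plausible reading of the paper's idea,
    not its actual method."""
    adjusted = scores / (user_baseline + 1e-9)
    return adjusted / adjusted.sum()

raw = np.array([0.30, 0.25, 0.45])        # happy, sad, neutral scores
baseline = np.array([0.4, 0.9, 1.0])      # this user rarely shows happiness
print(calibrate(raw, baseline).round(2))  # happiness boosted after scaling
```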

A Review of Public Datasets for Keystroke-based Behavior Analysis

  • Kolmogortseva Karina; Soo-Hyung Kim; Aera Kim
    • Smart Media Journal / v.13 no.7 / pp.18-26 / 2024
  • One of the newest trends in AI is emotion recognition utilizing keystroke dynamics, which leverages biometric data to identify users and assess emotional states. This work offers a comparison of four datasets frequently used in keystroke dynamics research: BB-MAS, Buffalo, Clarkson II, and CMU. The datasets contain different types of behavioral and physiological biometric data gathered in a range of environments, from controlled labs to real work settings. We consider the benefits and drawbacks of each dataset, paying particular attention to its suitability for tasks like emotion recognition and behavioral analysis. Our findings demonstrate how user attributes, task conditions, and ambient factors affect typing behavior. This comparative analysis aims to guide future research and development of applications for emotion detection and biometrics, emphasizing the importance of collecting diverse data and the possibility of integrating keystroke dynamics with other biometric measurements.
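
The behavioral signals these datasets capture reduce to a few standard keystroke-dynamics features, chiefly dwell time and flight time. A minimal extractor, with an assumed event format rather than any dataset's actual schema:

```python
# Standard keystroke-dynamics features: dwell time (how long a key is held)
# and flight time (gap between releasing one key and pressing the next).
def keystroke_features(events):
    """events: list of (key, press_time, release_time), sorted by press_time."""
    feats = []
    for i, (key, down, up) in enumerate(events):
        f = {"key": key, "dwell": up - down}
        if i > 0:
            f["flight"] = down - events[i - 1][2]  # negative if keys overlap
        feats.append(f)
    return feats

session = [("h", 0.00, 0.09), ("i", 0.15, 0.22), ("!", 0.40, 0.47)]
for f in keystroke_features(session):
    print(f)
```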