• Title/Summary/Keyword: Automatic emotion recognition

Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin; Park, Sangwook; Lee, Yongkwi; Han, Mikyong; Jang, Jong-Hyun
    • ETRI Journal / Vol. 37, No. 6 / pp. 1199-1210 / 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received huge attention in the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework, where the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components - multiple AU detection, AU detection fusion, and AU-to-emotion mapping. The experimental results on two real-world face databases demonstrate an improved performance over the previous two-phase method using a single AU detector in terms of both AU detection accuracy and correct emotion recognition rate.
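
The three components named above (multiple AU detection, decision fusion, AU-to-emotion mapping) suggest a simple ensemble structure. The sketch below is only an illustration of that idea, not the authors' implementation: the majority-vote fusion rule and the EMFACS-style AU sets for each emotion are assumptions.

```python
import numpy as np

def fuse_au_decisions(decisions: np.ndarray) -> np.ndarray:
    """Majority-vote fusion over several AU detectors.

    decisions: (n_detectors, n_aus) binary matrix, one row per detector.
    Returns a (n_aus,) fused binary decision vector.
    """
    votes = decisions.sum(axis=0)
    return (2 * votes >= decisions.shape[0]).astype(int)

# Illustrative AU-to-emotion rules (EMFACS-style; the paper's exact
# AU-to-emotion mapping is not reproduced here).
EMOTION_RULES = {
    "happiness": {6, 12},          # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},
    "sadness":   {1, 4, 15},
}

def infer_emotion(fused_aus: np.ndarray) -> str:
    present = {i + 1 for i, on in enumerate(fused_aus) if on}
    # Pick the emotion whose required AU set is best covered.
    scores = {emo: len(req & present) / len(req) for emo, req in EMOTION_RULES.items()}
    return max(scores, key=scores.get)
```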

An Analysis of Formants Extracted from Emotional Speech and Acoustical Implications for the Emotion Recognition System and Speech Recognition System (독일어 감정음성에서 추출한 포먼트의 분석 및 감정인식 시스템과 음성인식 시스템에 대한 음향적 의미)

  • 이서배
    • 말소리와 음성과학 / Vol. 3, No. 1 / pp. 45-50 / 2011
  • Formant structure of speech associated with five different emotions (anger, fear, happiness, neutral, sadness) was analysed. Acoustic separability of vowels (or emotions) associated with a specific emotion (or vowel) was estimated using the F-ratio. According to the results, neutral showed the highest separability of vowels, followed by anger, happiness, fear, and sadness in descending order. The vowel /A/ showed the highest separability of emotions, followed by /U/, /O/, /I/, and /E/ in descending order. The acoustic results were interpreted and explained in the context of previous articulatory and perceptual studies, and suggestions were made for improving the performance of automatic emotion recognition and automatic speech recognition systems.
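
The F-ratio used here contrasts between-group and within-group variance. One common form of the statistic (the paper's exact formulation may differ) can be computed as below, with made-up F1 values standing in for real formant measurements.

```python
import numpy as np

def f_ratio(groups):
    """F-ratio = variance of group means / mean within-group variance.

    groups: list of 1-D arrays, e.g. F1 values of one vowel under each emotion.
    A higher ratio means the groups (emotions) are more acoustically separable.
    """
    means = np.array([g.mean() for g in groups])
    grand_mean = np.concatenate(groups).mean()
    between = ((means - grand_mean) ** 2).mean()
    within = np.mean([g.var() for g in groups])
    return between / within

# Usage with made-up F1 measurements (Hz) for one vowel under three emotions:
rng = np.random.default_rng(0)
anger, neutral, sadness = (rng.normal(mu, 40, 100) for mu in (780, 700, 690))
print(f_ratio([anger, neutral, sadness]))
```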

Automatic Recognition of the Level of Arousal Using SOM (SOM을 이용한 각성수준의 자동인식)

  • 정찬순; 함준석; 고일주
    • 감성과학 / Vol. 14, No. 2 / pp. 197-206 / 2011
  • This paper proposes automatic recognition of a subject's arousal level as either high or low using SOM (self-organizing map) neural-network learning. The automatic recognition consists of three stages. First, in the ECG measurement and analysis stage, ECG is recorded while the subject plays a shooting game, and features are extracted for SOM learning. Second, in the SOM learning stage, the feature-extracted input vectors are learned. Finally, in the arousal recognition stage, once SOM learning is complete, the subject's arousal level is recognized whenever a new input vector arrives. The experiments report the SOM learning results, the recognition results for new input vectors, and the arousal level in numerical and graphical form. Evaluated by comparing the SOM's automatic recognition results against the emotion-assessment results of a previous study, the SOM achieved an average accuracy of 86%. This study shows that the arousal level, which differs from subject to subject, can be recognized automatically using a SOM.
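
As a rough illustration of the three stages, the sketch below trains a minimal SOM on ECG-derived feature vectors and then labels map nodes as high or low arousal; the grid size, decay schedules, and feature dimensions are all assumptions, not the paper's settings. A new vector is then classified by the label of its best-matching unit.

```python
import numpy as np
from collections import defaultdict

def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM training on (n_samples, n_features) feature vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit (BMU) for this sample
        bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        g = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * sigma**2))[..., None]
        weights += lr * g * (x - weights)   # pull the BMU neighborhood toward x
    return weights

def bmu_of(weights, x):
    return np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), weights.shape[:2])

def label_nodes(weights, data, labels):
    """Give each node the majority arousal label ('high'/'low') of its samples."""
    buckets = defaultdict(list)
    for x, y in zip(data, labels):
        buckets[bmu_of(weights, x)].append(y)
    return {node: max(set(ys), key=ys.count) for node, ys in buckets.items()}
```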

Automatic Human Emotion Recognition from Speech and Face Display - A New Approach (인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법)

  • 딩�E령; 이영구; 이승룡
    • 한국정보과학회:학술대회논문집 / 한국정보과학회 2011년도 한국컴퓨터종합학술대회논문집 Vol. 38, No. 1(B) / pp. 231-234 / 2011
  • Audiovisual-based human emotion recognition can be considered a good approach for multimodal human-computer interaction. However, optimal multimodal information fusion remains a challenge. In order to overcome these limitations and bring robustness to the interface, we propose a framework for automatic human emotion recognition from speech and face display. In this paper, we develop a new approach for fusing information at the model level, based on the relationship between speech and facial expression, to automatically detect temporal segments and perform multimodal information fusion.
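
Model-level fusion as described exploits the relationship between the two modalities; the exact mechanism is not spelled out in this abstract, so the snippet below shows only a simplified stand-in: weighted log-linear fusion of per-modality class posteriors.

```python
import numpy as np

EMOTIONS = ["anger", "happiness", "neutral", "sadness"]   # assumed label set

def fuse_posteriors(p_speech, p_face, w=0.5):
    """Weighted log-linear fusion of per-modality class posteriors.

    p_speech, p_face: (n_emotions,) probability vectors from unimodal
    classifiers. The weight w and this fusion rule are illustrative only.
    """
    logp = w * np.log(p_speech + 1e-12) + (1 - w) * np.log(p_face + 1e-12)
    p = np.exp(logp - logp.max())          # renormalize for stability
    return p / p.sum()

p = fuse_posteriors(np.array([0.6, 0.2, 0.1, 0.1]), np.array([0.3, 0.4, 0.2, 0.1]))
print(EMOTIONS[int(np.argmax(p))])
```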

A Study on the Automatic Monitoring System for the Contact Center Using Emotion Recognition and Keyword Spotting Method (감성인식과 핵심어인식 기술을 이용한 고객센터 자동 모니터링 시스템에 대한 연구)

  • 윤원중; 김태홍; 박규식
    • 인터넷정보학회논문지 / Vol. 13, No. 3 / pp. 107-114 / 2012
  • In this paper, we study an automatic monitoring system for contact centers, aimed at managing customer complaints and the counseling quality of agents. The proposed system uses speech emotion recognition for two emotions (neutral/angry) together with keyword spotting, enabling more accurate monitoring of consultation records and specialized counseling and management of customers who engage in verbal abuse such as profanity and sexual harassment. We developed an algorithm that operates reliably on query speech from unspecified customers by using heterogeneous speech databases built in different environments, and verified its performance on actual contact-center consultation data.
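
A monitoring rule combining the two recognizers could look like the sketch below; the keyword list, the per-call anger threshold, and the segment format are all assumptions for illustration, not the paper's policy.

```python
ABUSE_KEYWORDS = {"keyword_a", "keyword_b"}   # placeholder spotting vocabulary

def flag_call(segments):
    """segments: iterable of (emotion_label, spotted_keywords) per utterance.

    Flags a call for specialist handling when anger recurs or an abusive
    keyword is spotted; the threshold of 2 angry segments is an assumption.
    """
    segments = list(segments)
    angry = sum(1 for emo, _ in segments if emo == "angry")
    abusive = any(ABUSE_KEYWORDS & set(kws) for _, kws in segments)
    return angry >= 2 or abusive

print(flag_call([("neutral", []), ("angry", ["keyword_a"]), ("angry", [])]))  # True
```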

Human Emotion Recognition Based on Variance of Facial Features (얼굴 특징 변화에 따른 휴먼 감성 인식)

  • 이용환; 김영섭
    • 반도체디스플레이기술학회지 / Vol. 16, No. 4 / pp. 79-85 / 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize a human's emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas using a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion using the Hausdorff distance between characteristic curves. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying facial expression and emotion.
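
Step (3) compares the extracted curves against reference shapes with the Hausdorff distance. A minimal version of that comparison, with hypothetical sampled curves and templates, might look as follows.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets (curves)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def classify_emotion(curve, templates):
    """Nearest-template classification of a sampled eye/mouth Bezier curve.

    curve: (n, 2) points sampled from the extracted curve.
    templates: dict emotion -> (m, 2) template points. Both are assumptions;
    the paper's exact feature curves and templates are not reproduced here.
    """
    return min(templates, key=lambda e: hausdorff(curve, templates[e]))
```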

Design of Hybrid Unsupervised-Supervised Classifier for Automatic Emotion Recognition (자동 감성 인식을 위한 비교사-교사 분류기의 복합 설계)

  • 이지은; 유선국
    • 전기학회논문지 / Vol. 63, No. 9 / pp. 1294-1299 / 2014
  • Emotion is deeply tied to human behavior and cognitive processes, so research on emotion is important. However, emotion is difficult to characterize because life patterns differ with individual characteristics. To address this problem, we use physiological signals for objective analysis together with a hybrid unsupervised-supervised learning classifier for automatic emotion detection. The hybrid emotion classifier is composed of K-means, a genetic algorithm, and a support vector machine. We acquire four kinds of physiological signals, namely electroencephalography (EEG), electrocardiography (ECG), galvanic skin response (GSR), and skin temperature (SKT), and extract 15 features for the hybrid emotion classifier. As a result, the hybrid emotion classifier (80.6%) shows better performance than the SVM alone (31.3%).
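
One plausible wiring of the three components (the abstract does not specify how the paper combines them) is: K-means providing an unsupervised grouping, a genetic algorithm selecting features, and an SVM as the final classifier. The sketch below uses made-up data and hypothetical GA settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 15))   # 15 physiological features, as in the paper

# Unsupervised stage: K-means pseudo-labels (a stand-in for the paper's usage).
y = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

def fitness(mask):
    """SVM cross-validation accuracy on the selected feature subset."""
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean() if mask.any() else 0.0

def ga_feature_selection(pop=20, gens=15):
    """Tiny GA over binary feature masks (selection, crossover, mutation)."""
    masks = rng.integers(0, 2, (pop, X.shape[1])).astype(bool)
    cut = X.shape[1] // 2
    for _ in range(gens):
        scores = np.array([fitness(m) for m in masks])
        top = masks[np.argsort(scores)[-pop // 2:]]            # selection
        kids = top[rng.permutation(len(top))].copy()
        kids[:, :cut] = top[rng.permutation(len(top)), :cut]   # crossover
        kids ^= rng.random(kids.shape) < 0.05                  # mutation
        masks = np.vstack([top, kids])
    return masks[np.argmax([fitness(m) for m in masks])]

best = ga_feature_selection()
clf = SVC().fit(X[:, best], y)   # supervised stage on the selected features
```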

An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim; Won Kuk Park; Il Young Choi
    • Asia Pacific Journal of Information Systems / Vol. 27, No. 1 / pp. 38-53 / 2017
  • As sensor technologies and image processing technologies make it easy to collect information on users' behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. Many studies of the multimodal case using facial and body expressions have relied on normal cameras, and therefore used limited information, because normal cameras generally produce only two-dimensional images. In the present research, we propose an artificial neural network-based model that uses a high-definition webcam and Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research should help widen the use of emotion recognition models in advertisements, exhibitions, and interactive shows.
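
Feature-level concatenation into a small neural network is one straightforward reading of the model; the actual architecture and features are not given in this abstract, so the dimensions and the generic MLP below are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
face = rng.normal(size=(200, 20))   # hypothetical webcam facial features
body = rng.normal(size=(200, 30))   # hypothetical Kinect skeletal features
y = rng.integers(0, 4, 200)         # four emotion classes (assumed)

X = np.hstack([face, body])         # concatenate the two modalities
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:3]))
```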

Speech Emotion Recognition Based on Deep Networks: A Review (딥네트워크 기반 음성 감정인식 기술 동향)

  • 무스타킴; 권순일
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2021년도 춘계학술발표대회 / pp. 331-334 / 2021
  • In recent years, a significant amount of research and development has been devoted to the use of Deep Learning (DL) for speech emotion recognition (SER), particularly based on Convolutional Neural Networks (CNNs). These techniques usually focus on applying CNNs to applications associated with emotion recognition. Numerous DL-based mechanisms have also been considered, which matter for SER-based human-computer interaction (HCI) applications. Compared with other methods, DL-based methods have produced quite promising results in many fields, including automatic speech recognition, and have therefore attracted many studies and investigations. This article reviews and evaluates the improvements that have taken place in the SER domain, while also discussing existing studies on DL- and CNN-based SER.
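
The typical CNN-based SER setup this review covers feeds log-mel spectrograms to a small convolutional network. A minimal, generic example (not any specific reviewed model) is sketched below in PyTorch.

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Minimal CNN over log-mel spectrograms, the common SER input."""
    def __init__(self, n_emotions=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),     # pool to one vector per clip
        )
        self.classifier = nn.Linear(32, n_emotions)

    def forward(self, x):                # x: (batch, 1, n_mels, n_frames)
        return self.classifier(self.features(x).flatten(1))

logits = SpectrogramCNN()(torch.randn(2, 1, 64, 128))
print(logits.shape)                      # torch.Size([2, 4])
```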

An Emotion Recognition Technique Using Speech Signals (음성신호를 이용한 감정인식)

  • 정병욱; 천성표; 김연태; 김성신
    • 한국지능시스템학회논문지 / Vol. 18, No. 4 / pp. 494-500 / 2008
  • Interaction between humans and machines is an important part of the development of human-interface technology, and research on emotion recognition supports this interaction. This study proposes an emotion recognition algorithm for personalized speech signals. PLP (perceptual linear prediction) analysis was used to extract features from the speech signal. PLP analysis was originally used to remove speaker-dependent components of the speech signal in speech recognition, but later work on speaker recognition showed that PLP analysis is also effective for extracting speaker characteristics. Accordingly, this paper proposes an algorithm that evaluates emotion from speech signals easily and in real time using personalized emotion patterns built with PLP analysis. The algorithm achieved a maximum recognition rate above 90% and an average recognition rate of 75%. The system is simple but efficient.
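
A nearest-template matcher over personalized emotion patterns can be sketched as below. Note the swap: PLP extraction is not readily available in common Python audio libraries, so librosa MFCCs stand in for the paper's PLP features, and the per-speaker template store is hypothetical.

```python
import numpy as np
import librosa

def utterance_features(path):
    """Mean spectral feature vector for one utterance.

    MFCCs are used here only as a readily available stand-in for the
    PLP analysis used in the paper.
    """
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def recognize(path, personal_templates):
    """Match against one speaker's emotion templates (mean vector per emotion)."""
    f = utterance_features(path)
    return min(personal_templates,
               key=lambda emo: np.linalg.norm(f - personal_templates[emo]))
```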