• Title/Abstract/Keywords: Emotion recognition

Search results: 641

IoT를 위한 음성신호 기반의 톤, 템포 특징벡터를 이용한 감정인식 (Emotion Recognition Using Tone and Tempo Based on Voice for IoT)

  • 변성우;이석필
    • 전기학회논문지 / Vol. 65, No. 1 / pp.116-121 / 2016
  • In the Internet of Things (IoT) area, research on recognizing human emotion has been increasing recently. Generally, multimodal features such as facial images, bio-signals, and voice signals are used for emotion recognition. Among these, voice signals are the most convenient to acquire. This paper proposes an emotion recognition method using tone and tempo features derived from voice. For this, we build voice databases from broadcast media content. Emotion recognition tests are carried out with tone and tempo features extracted from the voice databases. The results show a noticeable improvement in accuracy compared to conventional methods that use only pitch.
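
As a rough illustration of the tone/tempo idea, the sketch below extracts a pitch track and a tempo estimate from a voice clip. It is a minimal sketch only: librosa is an assumed dependency, and the paper's exact tone and tempo feature definitions are not given in the abstract.

```python
# Minimal sketch (not the paper's exact features): tone statistics from a
# YIN pitch track plus a tempo estimate from the onset envelope, via librosa.
import numpy as np
import librosa

def tone_tempo_features(path):
    y, sr = librosa.load(path, sr=16000)                # mono, 16 kHz
    # "Tone": summary statistics of the fundamental-frequency track.
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)
    tone = np.array([f0.mean(), f0.std(), f0.min(), f0.max()])
    # "Tempo": BPM estimate from the onset envelope, used here as a rough
    # speaking-rate proxy (librosa >= 0.10 calls this librosa.feature.tempo).
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)
    tempo = librosa.beat.tempo(onset_envelope=onset_env, sr=sr)
    return np.concatenate([tone, tempo])                # 5-dim feature vector
```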

혼합형 특징점 추출을 이용한 얼굴 표정의 감성 인식 (Emotion Recognition of Facial Expression using the Hybrid Feature Extraction)

  • 변광섭;박창현;심귀보
    • 대한전기학회:학술대회논문집 / Proceedings of the KIEE 2004 Symposium, Information and Control Section / pp.132-134 / 2004
  • Emotion recognition between humans is performed using a combination of various cues, such as the face, voice, and gestures. Among these, it is the face that reveals emotional expression most clearly. Humans express and recognize emotion using the complex and varied features of the face. This paper proposes a hybrid feature extraction method for emotion recognition from facial expressions. The hybrid feature extraction imitates the human emotion recognition system by combining geometry-based feature extraction with a color-distributed histogram. As a result, it can perform emotion recognition robustly by extracting many features of the facial expression.
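
A rough sketch of the hybrid idea, under assumptions of our own: geometric measurements over facial landmark points concatenated with a color histogram of the face region. The landmark index pairs below are hypothetical, not the paper's geometry.

```python
# Illustrative hybrid feature vector: landmark geometry + hue histogram.
import numpy as np
import cv2

def hybrid_features(face_bgr, landmarks):
    """face_bgr: cropped face image (BGR); landmarks: (N, 2) array of points."""
    # Geometric part: distances between a few hypothetical landmark pairs
    # (e.g. eye corners, brow to eye, mouth width), scale-normalized.
    pairs = [(0, 1), (2, 3), (4, 5)]
    geo = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                    for i, j in pairs])
    geo /= np.linalg.norm(landmarks[0] - landmarks[1]) + 1e-8
    # Color part: 32-bin hue histogram over the face crop.
    hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180]).ravel()
    hist /= hist.sum() + 1e-8
    return np.concatenate([geo, hist])
```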


음성신호기반의 감정인식의 특징 벡터 비교 (A Comparison of Effective Feature Vectors for Speech Emotion Recognition)

  • 신보라;이석필
    • 전기학회논문지 / Vol. 67, No. 10 / pp.1364-1369 / 2018
  • Speech emotion recognition (SER), which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of a lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim of this paper is to discuss and compare various approaches to feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
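
The comparison the survey describes can be prototyped along these lines: extract several candidate feature sets and score each with the same classifier. The particular features and the SVM settings below are assumptions for illustration only.

```python
# Sketch: score several speech feature sets with one classifier and compare.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def feature_sets(y, sr):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)
    pitch = np.array([f0.mean(), f0.std()])
    energy = librosa.feature.rms(y=y).mean(axis=1)
    return {"mfcc": mfcc, "pitch": pitch, "energy": energy,
            "mfcc+pitch": np.concatenate([mfcc, pitch])}

def compare_feature_sets(clips, labels, sr=16000):
    for name in feature_sets(clips[0], sr):
        X = np.vstack([feature_sets(y, sr)[name] for y in clips])
        acc = cross_val_score(SVC(), X, np.asarray(labels), cv=5).mean()
        print(f"{name:>10s}: {acc:.3f}")  # mean 5-fold accuracy per feature set
```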

감정 상호작용 로봇을 위한 신뢰도 평가를 이용한 화자독립 감정인식 (Speech Emotion Recognition Using Confidence Level for Emotional Interaction Robot)

  • 김은호
    • 한국지능시스템학회논문지 / Vol. 19, No. 6 / pp.755-759 / 2009
  • Technology for recognizing human emotion is one of the important research topics in the field of human-robot interaction. In particular, speaker-independent emotion recognition is an essential issue for the commercialization of speech emotion recognition. In general, speaker-independent emotion recognition systems show lower recognition rates than speaker-dependent systems, because emotion feature values vary with speaker and gender. This paper therefore presents a method for implementing a speaker-independent emotion recognition system consistently and accurately by rejecting recognition results using a confidence-level evaluation. The efficiency and feasibility of the proposed method are verified through comparison with a conventional method.
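
The rejection step the abstract describes might look like the following: keep a prediction only when the classifier's confidence for the top class clears a threshold. The threshold value and the predict_proba-style interface are assumptions.

```python
# Sketch of confidence-based rejection around any probabilistic classifier.
import numpy as np

def classify_with_rejection(model, x, threshold=0.7):
    """model: classifier exposing predict_proba and classes_ (sklearn-style)."""
    proba = model.predict_proba(x.reshape(1, -1))[0]
    top = int(np.argmax(proba))
    if proba[top] < threshold:
        return None                    # reject: confidence too low to trust
    return model.classes_[top]         # accept the top emotion label
```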

Hybrid-Feature Extraction for the Facial Emotion Recognition

  • Byun, Kwang-Sub;Park, Chang-Hyun;Sim, Kwee-Bo;Jeong, In-Cheol;Ham, Ho-Sang
    • 제어로봇시스템학회:학술대회논문집 / ICCAS 2004 / pp.1281-1285 / 2004
  • There are numerous emotions in the human world. Humans express and recognize their emotions through various channels, for example the eyes, nose, and mouth. In particular, emotion recognition from facial expressions can be very flexible and robust because it utilizes these various channels. The hybrid-feature extraction algorithm is based on this human process. It uses geometric feature extraction and a color-distributed histogram, and then classifies the input emotion through independent, parallel learning of neural networks. In addition, for natural classification of emotion, this paper introduces and uses an advancing two-dimensional emotion space, which performs flexible and smooth classification of emotion.
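
The abstract does not specify the "advancing two-dimensional emotion space"; the sketch below only illustrates the general idea of placing an input in a 2-D emotion plane and labeling it by the nearest emotion prototype. The coordinates are invented for illustration.

```python
# Illustrative only: nearest-prototype labeling in a 2-D emotion plane.
import numpy as np

PROTOTYPES = {                        # hypothetical (valence, arousal) anchors
    "joy":     np.array([0.8, 0.5]),
    "anger":   np.array([-0.6, 0.7]),
    "sadness": np.array([-0.7, -0.4]),
    "neutral": np.array([0.0, 0.0]),
}

def classify_in_emotion_plane(point):
    """point: (2,) array, e.g. a network's 2-D output for one face image."""
    return min(PROTOTYPES, key=lambda k: np.linalg.norm(point - PROTOTYPES[k]))
```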


이미지 시퀀스 얼굴표정 기반 감정인식을 위한 가중 소프트 투표 분류 방법 (Weighted Soft Voting Classification for Emotion Recognition from Facial Expressions on Image Sequences)

  • 김경태;최재영
    • 한국멀티미디어학회논문지 / Vol. 20, No. 8 / pp.1175-1186 / 2017
  • Human emotion recognition is one of the promising applications in the era of artificial superintelligence. Thus far, facial expression traits are considered the most widely used information cues for automated emotion recognition. This paper proposes a novel facial expression recognition (FER) method that works well for recognizing emotion from image sequences. To this end, we develop the so-called weighted soft voting classification (WSVC) algorithm. In the proposed WSVC, a number of classifiers are first constructed using different and multiple feature representations. Next, these classifiers generate a recognition result (a soft vote) for each face image within a face sequence, yielding multiple soft voting outputs. Finally, the soft voting outputs are combined through a weighted combination to decide the emotion class (e.g., anger) of a given face sequence. The combination weights are determined by measuring the quality of each face image, namely its "peak expression intensity" and "frontal-pose degree". To test the proposed WSVC, the CK+ FER database was used for extensive comparative experiments. The feasibility of our WSVC algorithm was successfully demonstrated by comparison with recently developed FER algorithms.
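
The WSVC combination step, as the abstract describes it, reduces to a quality-weighted average of per-frame soft votes. A minimal sketch, assuming the two quality scores are already computed per frame:

```python
# Sketch of weighted soft voting over a face-image sequence.
import numpy as np

def weighted_soft_voting(frame_probs, intensity, frontalness):
    """frame_probs: (T, C) soft votes for T frames and C emotion classes;
    intensity, frontalness: (T,) per-frame quality scores in [0, 1]."""
    w = np.asarray(intensity) * np.asarray(frontalness)
    w = w / (w.sum() + 1e-8)                       # normalize frame weights
    fused = (w[:, None] * np.asarray(frame_probs)).sum(axis=0)
    return int(np.argmax(fused))                   # winning emotion index
```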

음성의 피치 파라메터를 사용한 감정 인식 (Emotion Recognition using Pitch Parameters of Speech)

  • 이규현;김원구
    • 한국지능시스템학회논문지 / Vol. 25, No. 3 / pp.272-278 / 2015
  • In this paper, aiming to develop an emotion recognition system that uses the pitch information of speech signals, we studied various methods for extracting parameters from pitch information. To this end, pitch parameters were generated from the statistical properties of pitch and from numerical-analysis techniques, using a Korean speech database containing various emotions. The performance of each parameter was compared using a GMM (Gaussian Mixture Model)-based emotion recognition system. In addition, a sequential feature selection method was used to select the pitch parameters that yield the best emotion recognition performance. In experiments distinguishing four emotions, a combination of 15 of the 56 parameters achieved a recognition rate of 63.5%. In experiments detecting whether emotion is present, a combination of 14 parameters achieved a recognition rate of 80.3%.
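
Both components named in the abstract have standard counterparts that can be sketched briefly: one GMM per emotion class with maximum-log-likelihood classification, and sequential (forward) feature selection. The mixture size and the selector's scoring model below are assumptions.

```python
# Sketch: per-class GMM classification and sequential feature selection.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

def train_gmms(X, y, n_components=4):
    # One GMM per emotion class, fit on that class's feature vectors.
    return {c: GaussianMixture(n_components).fit(X[y == c])
            for c in np.unique(y)}

def gmm_predict(gmms, X):
    classes = list(gmms)
    scores = np.column_stack([gmms[c].score_samples(X) for c in classes])
    return np.array([classes[i] for i in scores.argmax(axis=1)])

def select_pitch_params(X, y, k=15):
    # Forward selection of k of the pitch parameters, scored with a k-NN proxy.
    sel = SequentialFeatureSelector(KNeighborsClassifier(),
                                    n_features_to_select=k).fit(X, y)
    return sel.get_support(indices=True)           # column indices kept
```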

안정적인 실시간 얼굴 특징점 추적과 감정인식 응용 (Robust Real-time Tracking of Facial Features with Application to Emotion Recognition)

  • 안병태;김응희;손진훈;권인소
    • 로봇학회논문지 / Vol. 8, No. 4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, applying ASM alone is not adequate for modeling a face in real applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm. Inaccurate facial feature positions degrade the performance of emotion recognition. In this paper, we propose a real-time facial feature extraction and tracking framework that combines ASM with Lucas-Kanade (LK) optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failures caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
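
The tracking half of the framework can be sketched directly with OpenCV's pyramidal Lucas-Kanade routine; the ASM fitting that initializes (and re-initializes) the landmarks is assumed to be available elsewhere.

```python
# Sketch: propagate ASM-initialized landmarks to the next frame with LK flow.
import numpy as np
import cv2

def track_landmarks(prev_gray, next_gray, points):
    """points: (N, 2) landmark positions in prev_gray (grayscale frames)."""
    p0 = points.reshape(-1, 1, 2).astype(np.float32)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, p0, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    # Points that fail to track (e.g. under partial occlusion) can be
    # re-initialized by another ASM fit, in the spirit of the paper.
    return p1.reshape(-1, 2), ok
```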

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • 대한인간공학회지 / Vol. 31, No. 2 / pp.271-279 / 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotion from physiological signals; emotion recognition is important for applying emotion detection to human-computer interaction systems. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to the participants, and physiological signals, i.e., EDA (electrodermal activity), SKT (skin temperature), PPG (photoplethysmogram), and ECG (electrocardiogram), were measured for 1 minute as a baseline and for 1 to 1.5 minutes during the emotional state. The obtained signals were analyzed over 30 seconds of the baseline and of the emotional state, and 27 features were extracted from them. Statistical analysis for emotion classification was done by discriminant function analysis (DFA, SPSS 15.0) using the difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from baseline, and the accuracy rate of emotion classification was 84.7%. Conclusion: Our study showed that emotions can be classified from various physiological signals. However, future work is needed to obtain additional signals from other modalities, such as facial expression, face temperature, or voice, to improve the classification rate, and to examine the stability and reliability of this result compared with the accuracy of emotion classification using other algorithms. Application: This work can help emotion recognition studies recognize various human emotions from physiological signals, can be applied to human-computer interaction systems for emotion recognition, and can be useful for developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis of emotion recognition systems in human-computer interaction.
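
The analysis pipeline is straightforward to mirror in code: baseline-subtracted features classified with a linear discriminant. The sketch below uses scikit-learn's LDA as a stand-in for the SPSS discriminant function analysis.

```python
# Sketch: classify emotions from baseline-subtracted physiological features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def classify_physio(emotional, baseline, labels):
    """emotional, baseline: (n_trials, 27) feature arrays; labels: emotion per trial."""
    X = np.asarray(emotional) - np.asarray(baseline)   # difference features
    acc = cross_val_score(LinearDiscriminantAnalysis(), X,
                          np.asarray(labels), cv=5).mean()
    return acc                                 # mean cross-validated accuracy
```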

Multimodal Parametric Fusion for Emotion Recognition

  • Kim, Jonghwa
    • International journal of advanced smart convergence / Vol. 9, No. 1 / pp.193-201 / 2020
  • The main objective of this study is to investigate the impact of additional modalities on the performance of emotion recognition using speech, facial expressions, and physiological measurements. To compare different approaches, we designed a feature-based recognition system as a benchmark, which carries out linear supervised classification followed by leave-one-out cross-validation. For the classification of four emotions, it turned out that bimodal fusion in our experiment improves the recognition accuracy of the unimodal approach, while the performance of trimodal fusion varies strongly depending on the individual. Furthermore, we observed extremely high disparity between single-class recognition rates, and no single modality performed best across our experiment. Based on these observations, we developed a novel fusion method, called parametric decision fusion (PDF), which builds emotion-specific classifiers and exploits the advantages of a parametrized decision process. Using the PDF scheme, we achieved a 16% improvement in the accuracy of subject-dependent recognition and 10% for subject-independent recognition compared to the best unimodal results.
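
The abstract gives PDF only at a high level; one plausible reading is sketched below: per-modality, emotion-specific decision scores blended with per-class weight parameters. The weighting scheme is our assumption, not the paper's published formulation.

```python
# Sketch of a parametric decision-fusion step across modalities.
import numpy as np

def parametric_decision_fusion(scores, weights):
    """scores: (M, C) emotion-specific decision scores from M modalities for
    C emotions; weights: (M, C) nonnegative parameters, e.g. tuned on
    validation data."""
    w = np.asarray(weights, dtype=float)
    w = w / (w.sum(axis=0, keepdims=True) + 1e-8)   # normalize per emotion
    fused = (w * np.asarray(scores)).sum(axis=0)    # (C,) fused decisions
    return int(np.argmax(fused))                    # index of winning emotion
```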