• Title/Abstract/Keyword: Emotion machine

Search results: 175

음성신호기반의 감정인식의 특징 벡터 비교 (A Comparison of Effective Feature Vectors for Speech Emotion Recognition)

  • 신보라;이석필
    • 전기학회논문지 / Vol. 67, No. 10 / pp.1364-1369 / 2018
  • Speech emotion recognition, which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Voice expressions are one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim of this paper is to discuss and compare the various approaches used for feature extraction and to propose a basis for extracting useful features in order to improve SER performance.
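
As a hedged illustration of the frame-level acoustic features this survey compares (not code from the paper itself), the sketch below extracts MFCCs, a pitch track, and short-time energy with the librosa library; the synthetic tone input and all parameter values are placeholders.

```python
# Illustrative extraction of common SER features with librosa (assumed setup,
# not taken from the paper); the tone input and parameters are placeholders.
import librosa
import numpy as np

sr = 16000
y = librosa.tone(220, sr=sr, duration=2.0)        # stand-in for a real utterance

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)              # spectral envelope
f0, voiced_flag, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)  # pitch track
rms = librosa.feature.rms(y=y)                                  # short-time energy

# Align frame counts and stack into one (frames x dims) feature matrix
n = min(mfcc.shape[1], len(f0), rms.shape[1])
f0 = np.nan_to_num(f0[:n])                        # unvoiced frames -> 0
features = np.vstack([mfcc[:, :n], f0[np.newaxis, :], rms[:, :n]]).T
print("frame-level feature matrix:", features.shape)
```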

모의 지능로봇에서의 음성 감정인식 (Speech Emotion Recognition on a Simulated Intelligent Robot)

  • 장광동;김남;권오욱
    • 대한음성학회지:말소리 / No. 56 / pp.173-183 / 2005
  • We propose a speech emotion recognition method for an affective human-robot interface. In the proposed method, emotion is classified into 6 classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We record speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 48% classification accuracy while human classifiers give 71% accuracy.
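
A minimal sketch of the kind of pipeline this abstract describes, utterance-level statistics of frame features fed to an RBF-kernel SVM; the synthetic data, feature dimensionality, and SVM hyperparameters are assumptions, not the authors' setup.

```python
# Sketch: utterance-level statistics + RBF-kernel SVM. Each utterance is assumed
# to be a (frames x dims) array of frame features; the data here is synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["angry", "bored", "happy", "neutral", "sad", "surprised"]

def utterance_vector(frames: np.ndarray) -> np.ndarray:
    """Summarize frame-level features with simple per-dimension statistics."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0),
                           frames.min(axis=0), frames.max(axis=0)])

rng = np.random.default_rng(0)
# Hypothetical corpus: 120 utterances, each ~100 frames of 15-dim features
X = np.array([utterance_vector(rng.normal(size=(100, 15))) for _ in range(120)])
y = rng.integers(0, len(EMOTIONS), size=120)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
print("predicted:", EMOTIONS[clf.predict(X[:1])[0]])
```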


모의 지능로봇에서 음성신호에 의한 감정인식 (Speech Emotion Recognition by Speech Signals on a Simulated Intelligent Robot)

  • 장광동;권오욱
    • 대한음성학회:학술대회논문집 / 대한음성학회 2005년도 추계 학술대회 발표논문집 / pp.163-166 / 2005
  • We propose a speech emotion recognition method for a natural human-robot interface. In the proposed method, emotion is classified into 6 classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We record speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 59% classification accuracy while human classifiers give about 50% accuracy, which confirms that the proposed method achieves performance comparable to that of a human.
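
Jitter and shimmer appear in both of the abstracts above; the snippet below gives one simple textbook-style definition (relative mean absolute cycle-to-cycle variation of pitch period and peak amplitude), offered only as an illustration, not as the authors' exact formulation.

```python
# Illustrative jitter/shimmer computation from per-cycle pitch periods and peak
# amplitudes (simplified textbook definitions, not the paper's implementation).
import numpy as np

def jitter(periods: np.ndarray) -> float:
    """Relative mean absolute difference between consecutive pitch periods."""
    return float(np.mean(np.abs(np.diff(periods))) / np.mean(periods))

def shimmer(amplitudes: np.ndarray) -> float:
    """Relative mean absolute difference between consecutive peak amplitudes."""
    return float(np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes))

# Synthetic glottal-cycle measurements (seconds and arbitrary amplitude units)
periods = np.array([0.0102, 0.0100, 0.0103, 0.0101, 0.0099])
amps = np.array([0.81, 0.78, 0.83, 0.80, 0.79])
print(f"jitter={jitter(periods):.4f}, shimmer={shimmer(amps):.4f}")
```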


얼굴영상과 음성을 이용한 멀티모달 감정인식 (Multimodal Emotion Recognition using Face Image and Speech)

  • 이현구;김동주
    • 디지털산업정보학회논문지 / Vol. 8, No. 1 / pp.29-40 / 2012
  • A challenging research issue of growing importance to those working in human-computer interaction is to endow a machine with emotional intelligence. Emotion recognition technology therefore plays an important role in human-computer interaction research, as it allows more natural and more human-like communication between humans and computers. In this paper, we propose a multimodal emotion recognition system using face and speech to improve recognition performance. For face-based emotion recognition, a distance measure is computed with 2D-PCA of the MCS-LBP image and a nearest-neighbor classifier; for speech-based emotion recognition, a likelihood measure is obtained with a Gaussian mixture model built on pitch and mel-frequency cepstral coefficient features. The individual matching scores obtained from face and speech are combined by a weighted summation, and the fused score is used to classify the human emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% compared with the unimodal approaches, confirming that the proposed approach achieves a significant performance improvement and is very effective.
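
The score-level fusion step described above (a weighted sum of face and speech matching scores) can be sketched as follows; the min-max normalization and the weight value are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch of weighted-sum score fusion for face and speech emotion scores.
# The min-max normalization and the weight are illustrative assumptions.
import numpy as np

def min_max(scores: np.ndarray) -> np.ndarray:
    """Scale per-class scores to [0, 1] so the two modalities are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-12)

def fuse(face_scores: np.ndarray, speech_scores: np.ndarray, w: float = 0.6) -> int:
    """Weighted sum of normalized per-class scores; returns the winning class."""
    fused = w * min_max(face_scores) + (1.0 - w) * min_max(speech_scores)
    return int(np.argmax(fused))

# Hypothetical per-class scores: face similarities and speech GMM log-likelihoods
face = np.array([0.12, 0.40, 0.25, 0.18])
speech = np.array([-310.0, -295.0, -305.0, -320.0])
print("predicted class index:", fuse(face, speech))
```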

Speech emotion recognition based on genetic algorithm-decision tree fusion of deep and acoustic features

  • Sun, Linhui;Li, Qiu;Fu, Sheng;Li, Pingan
    • ETRI Journal / Vol. 44, No. 3 / pp.462-475 / 2022
  • Although researchers have proposed numerous techniques for speech emotion recognition, its performance remains unsatisfactory in many application scenarios. In this study, we propose a speech emotion recognition model based on a genetic algorithm (GA)-decision tree (DT) fusion of deep and acoustic features. To express speech emotional information more comprehensively, frame-level deep and acoustic features are first extracted from the speech signal. Next, five kinds of statistics of these features are calculated to obtain utterance-level features. The Fisher feature selection criterion is employed to select high-performance features and remove redundant information. In the feature fusion stage, the GA is used to adaptively search for the best feature fusion weight. Finally, using the fused features, the proposed speech emotion recognition model based on a DT-support vector machine model is realized. Experimental results on the Berlin speech emotion database and the Chinese emotion speech database indicate that the proposed model outperforms an average-weight fusion method.
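
As a small illustration of the Fisher-criterion feature selection step mentioned in the abstract (the GA fusion-weight search is not reproduced), the function below ranks features by the classic Fisher score: between-class scatter of the feature mean divided by within-class scatter. This is the generic formulation, not necessarily the authors' exact variant.

```python
# Illustrative Fisher-score feature ranking (generic formulation; the GA-based
# fusion-weight search from the paper is not reproduced here).
import numpy as np

def fisher_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Per-feature Fisher score: between-class scatter / within-class scatter."""
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 4, size=200)
X[y == 2, 3] += 2.0                      # make feature 3 artificially informative
top = np.argsort(fisher_scores(X, y))[::-1][:5]
print("top-5 features by Fisher score:", top)
```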

Emotion Recognition Implementation with Multimodalities of Face, Voice and EEG

  • Udurume, Miracle;Caliwag, Angela;Lim, Wansu;Kim, Gwigon
    • Journal of information and communication convergence engineering / Vol. 20, No. 3 / pp.174-180 / 2022
  • Emotion recognition is an essential component of complete interaction between human and machine. The difficulty of emotion recognition stems from the different forms in which emotions are expressed, such as visual, sound, and physiological signals. Recent advancements in the field show that combined modalities, such as visual, voice, and electroencephalography signals, lead to better results than single modalities used separately. Previous studies have explored the use of multiple modalities for accurate prediction of emotion; however, the number of studies on real-time implementation is limited because of the difficulty of implementing multiple modalities of emotion recognition simultaneously. In this study, we propose an emotion recognition system for real-time implementation. Our model is built with a multithreading block that runs each modality in a separate thread for continuous synchronization. First, we achieved emotion recognition for each modality separately before enabling the multithreaded system. To verify the correctness of the results, we compared the accuracy of unimodal and multimodal emotion recognition in real time. The experimental results show that the proposed model recognizes user emotion in real time and that the multiple modalities are effective: the multimodal model obtained an accuracy of 80.1%, compared with unimodal accuracies of 70.9%, 54.3%, and 63.1%.
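
A minimal threading sketch of the design described above: one worker thread per modality feeds a shared queue that a fusion loop drains. The modality models are replaced by random stubs, and the majority-vote fusion and timing are assumptions, not the authors' implementation.

```python
# Sketch of per-modality worker threads plus a fusion loop. Random stubs stand
# in for the real face/voice/EEG models; timings and fusion rule are assumptions.
import queue
import random
import threading
import time

EMOTIONS = ["happy", "sad", "angry", "neutral"]
results = queue.Queue()
stop = threading.Event()

def modality_worker(name: str, period: float) -> None:
    """Continuously emit (modality, predicted_emotion) pairs into the queue."""
    while not stop.is_set():
        results.put((name, random.choice(EMOTIONS)))   # stand-in for a real model
        time.sleep(period)

for modality in ("face", "voice", "eeg"):
    threading.Thread(target=modality_worker, args=(modality, 0.2), daemon=True).start()

latest = {}
deadline = time.time() + 1.0
while time.time() < deadline:                          # fusion loop: majority vote
    name, emotion = results.get()
    latest[name] = emotion
    if len(latest) == 3:
        votes = list(latest.values())
        print("fused emotion:", max(set(votes), key=votes.count), latest)
stop.set()
```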

뇌파를 활용한 사용자의 감정 분류 알고리즘 (The Classification Algorithm of Users' Emotion Using Brain-Wave)

  • 이현주;신동일;신동규
    • 한국통신학회논문지 / Vol. 39C, No. 2 / pp.122-129 / 2014
  • In this study, we classified the emotions in EEG signals acquired from users and ran classification experiments with SVM (Support Vector Machine) and the K-means algorithm. Of the 32 recorded channels, we used the 15 channels in which emotion classification had appeared most clearly in previous work: CP6, Cz, FC2, T7, PO4, AF3, CP1, CP2, C3, F3, FC6, C4, Oz, T8, and F8. Emotions were induced by DVD viewing and by picture stimuli from the IAPS (International Affective Picture System), and the users' emotional states were assessed with the SAM (Self-Assessment Manikin) method. The acquired EEG signals were preprocessed with an FIR filter, and ICA (Independent Component Analysis) was used to remove eye-blink artifacts. Features were then extracted by frequency analysis of the preprocessed data via FFT. Finally, classification experiments showed that K-means achieved 70% accuracy while SVM achieved 71.85%, so SVM was more accurate; the results were also compared with previous studies that used SVM.
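
A compressed sketch of the processing chain described above (FIR band-pass filtering, FFT band-power features, then SVM versus K-means) on synthetic signals; the band definitions and sampling rate are assumptions, and the ICA eye-blink removal step is omitted.

```python
# Sketch: FIR band-pass filtering -> FFT band-power features -> SVM and K-means
# on EEG-like data. Signals, bands, and sampling rate are synthetic assumptions;
# the ICA eye-blink removal step is omitted.
import numpy as np
from scipy.signal import filtfilt, firwin
from sklearn.cluster import KMeans
from sklearn.svm import SVC

fs = 128                                             # assumed sampling rate (Hz)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(sig: np.ndarray) -> np.ndarray:
    """FIR band-pass each band, then take the mean FFT power inside the band."""
    feats = []
    for lo, hi in bands.values():
        taps = firwin(65, [lo, hi], pass_zero=False, fs=fs)
        filtered = filtfilt(taps, [1.0], sig)
        spec = np.abs(np.fft.rfft(filtered)) ** 2
        freqs = np.fft.rfftfreq(len(filtered), d=1 / fs)
        feats.append(spec[(freqs >= lo) & (freqs < hi)].mean())
    return np.array(feats)

rng = np.random.default_rng(2)
X = np.array([band_powers(rng.normal(size=fs * 4)) for _ in range(60)])
y = rng.integers(0, 2, size=60)                      # two synthetic emotion labels

print("SVM training accuracy:", SVC(kernel="rbf").fit(X, y).score(X, y))
print("K-means cluster labels:", KMeans(n_clusters=2, n_init=10).fit_predict(X)[:10])
```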

개인의 감성 분석 기반 향 추천 미러 설계 (Design of a Mirror for Fragrance Recommendation based on Personal Emotion Analysis)

  • 김현지;오유수
    • 한국산업정보학회논문지 / Vol. 28, No. 4 / pp.11-19 / 2023
  • In this paper, we propose a smart mirror system that recommends fragrances based on an analysis of the user's emotions. The system is built by combining text-embedding techniques from natural language processing (CountVectorizer and TF-IDF) with machine-learning classifiers (Decision Tree, SVM, Random Forest, and SGD Classifier), and the results are compared. Based on the experimental results, the best-performing combination, SVM with word embedding, is applied to the emotion classifier model as a pipeline. The proposed system implements a fragrance recommendation mirror based on personal emotion analysis, provided as a web service using the Flask web framework. The user's speech is recognized with the Google Cloud Speech API, and the text converted from speech via STT (Speech To Text) is used as input data. The proposed system also provides the user with information on weather, humidity, location, famous quotes, time, and schedule management.
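
The embedding-plus-classifier pipeline described above can be sketched with scikit-learn and Flask; the tiny training set, the /recommend route, and the fragrance mapping are invented placeholders, not the authors' data or service.

```python
# Sketch of a TF-IDF + SVM emotion classifier wrapped in a Flask endpoint.
# The training sentences, labels, route, and fragrance mapping are placeholders.
from flask import Flask, jsonify, request
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["I feel great today", "this is so frustrating",
         "what a calm evening", "I am worried about tomorrow"]
labels = ["joy", "anger", "calm", "anxiety"]
fragrance_for = {"joy": "citrus", "anger": "lavender",
                 "calm": "sandalwood", "anxiety": "chamomile"}

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(texts, labels)

app = Flask(__name__)

@app.route("/recommend", methods=["POST"])
def recommend():
    """Classify STT text from the mirror and return a fragrance suggestion."""
    text = request.get_json(force=True).get("text", "")
    emotion = model.predict([text])[0]
    return jsonify({"emotion": emotion, "fragrance": fragrance_for[emotion]})

if __name__ == "__main__":
    app.run(port=5000)
```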

Statistical Speech Feature Selection for Emotion Recognition

  • Kwon Oh-Wook;Chan Kwokleung;Lee Te-Won
    • The Journal of the Acoustical Society of Korea / Vol. 24, No. 4E / pp.144-151 / 2005
  • We evaluate the performance of emotion recognition via speech signals when a plain speaker talks to an entertainment robot. For each frame of a speech utterance, we extract the frame-based features: pitch, energy, formant, band energies, mel frequency cepstral coefficients (MFCCs), and velocity/acceleration of pitch and MFCCs. For discriminative classifiers, a fixed-length utterance-based feature vector is computed from the statistics of the frame-based features. Using a speaker-independent database, we evaluate the performance of two promising classifiers: support vector machine (SVM) and hidden Markov model (HMM). For angry/bored/happy/neutral/sad emotion classification, the SVM and HMM classifiers yield 42.3% and 40.8% accuracy, respectively. We show that the accuracy is significant compared to the performance by foreign human listeners.
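
For the HMM side of such a comparison, a common setup is to train one HMM per emotion on frame-level feature sequences and pick the class with the highest log-likelihood. The sketch below uses hmmlearn on synthetic sequences and is only a generic illustration, not the authors' configuration.

```python
# Sketch: one Gaussian HMM per emotion class, classification by maximum
# log-likelihood. Data is synthetic and model sizes are assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

EMOTIONS = ["angry", "bored", "happy", "neutral", "sad"]
rng = np.random.default_rng(3)

def fake_utterances(offset: float, n: int = 20):
    """Synthetic frame-level feature sequences (frames x dims) for one class."""
    return [rng.normal(loc=offset, size=(int(rng.integers(60, 120)), 12))
            for _ in range(n)]

models = {}
for i, emo in enumerate(EMOTIONS):
    seqs = fake_utterances(offset=float(i))
    models[emo] = GaussianHMM(n_components=3, covariance_type="diag",
                              n_iter=20).fit(np.concatenate(seqs),
                                             [len(s) for s in seqs])

test = rng.normal(loc=2.0, size=(80, 12))     # resembles the "happy" class above
predicted = max(EMOTIONS, key=lambda e: models[e].score(test))
print("predicted emotion:", predicted)
```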

생체신호 분석을 통한 인간감성의 측정 (Measurement of Human Sensibility by Bio-Signal Analysis)

  • 박준영;박장현;박지형;박동수
    • 대한기계학회:학술대회논문집 / 대한기계학회 2003년도 춘계학술대회 / pp.935-939 / 2003
  • Emotion recognition is one of the most significant interface technologies for enabling a high level of human-machine communication. The central nervous system, when stimulated by emotional stimuli, affects autonomic nervous system organs such as the heart, blood vessels, and endocrine organs. Therefore, bio-signals such as HRV, ECG, and EEG can reflect one's emotional state. This study investigates the correlation between emotional states and bio-signals to realize emotion recognition. It also covers the classification of human emotional states, the selection of effective bio-signals, and signal processing. The experimental results presented in this paper show the possibility of emotion recognition.
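
As a small illustration of the kind of autonomic-nervous-system measure referred to above, the snippet computes two standard time-domain HRV statistics (SDNN and RMSSD) from R-R intervals; the interval values are invented and the feature set is only a generic example, not the paper's.

```python
# Illustrative HRV statistics from R-R intervals (synthetic values). SDNN and
# RMSSD are standard time-domain measures, not the paper's specific feature set.
import numpy as np

rr_intervals_ms = np.array([812, 790, 845, 830, 805, 798, 860, 822])

sdnn = rr_intervals_ms.std(ddof=1)                        # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr_intervals_ms) ** 2))   # beat-to-beat variability

print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```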
