• Title/Abstract/Keyword: Emotion recognition system


강인한 음성 인식 시스템을 사용한 감정 인식 (Emotion Recognition using Robust Speech Recognition System)

  • 김원구
    • 한국지능시스템학회논문지 / Vol.18 No.5 / pp.586-591 / 2008
  • This paper studies an emotion recognition system combined with a speech recognition system that is robust to emotional variation, in order to improve the performance of human emotion recognition from speech. To this end, a speech database containing various emotions was first used to study the effect of emotional variation on speech recognition performance, and a speech recognition system that is less affected by emotional variation was implemented. Emotion recognition is then performed by comparing the emotion models for the input sentence, selected according to the speech recognition result, to determine the final emotion of the input speech. In the experiments, the robust speech recognition system was an HMM-based speaker-independent word recognizer using RASTA mel-cepstrum and delta cepstrum as speech parameters and CMS (cepstral mean subtraction) for signal-bias removal. Emotion recognition combined with this speech recognizer performed better than the emotion recognizer alone.
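
The robust front end combines RASTA mel-cepstrum, delta cepstrum, and CMS. Below is a minimal Python sketch of the CMS and delta-cepstrum steps over a precomputed cepstral feature matrix; the function names and the (frames × coefficients) layout are illustrative assumptions, not the paper's code.

```python
import numpy as np

def cepstral_mean_subtraction(features: np.ndarray) -> np.ndarray:
    """CMS: subtract the per-utterance mean of each cepstral coefficient.

    features: (num_frames, num_cepstra) cepstrum matrix. Removing the
    time average suppresses stationary channel bias, the role CMS
    plays in the paper's robust recognizer.
    """
    return features - features.mean(axis=0, keepdims=True)

def delta_cepstrum(features: np.ndarray, width: int = 2) -> np.ndarray:
    """Regression-based delta (time-derivative) cepstrum."""
    t = len(features)
    padded = np.pad(features, ((width, width), (0, 0)), mode="edge")
    num = sum(k * (padded[width + k:width + k + t] -
                   padded[width - k:width - k + t])
              for k in range(1, width + 1))
    return num / (2 * sum(k * k for k in range(1, width + 1)))
```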

특징 선택과 융합 방법을 이용한 음성 감정 인식 (Speech Emotion Recognition using Feature Selection and Fusion Method)

  • 김원구
    • 전기학회논문지 / Vol.66 No.8 / pp.1265-1271 / 2017
  • In this paper, a speech parameter fusion method is studied to improve the performance of a conventional emotion recognition system. For this purpose, the combination of parameters that performs best is selected by fusing the cepstrum parameters used in the conventional system with various pitch parameters. The pitch parameters were generated from the pitch of speech using numerical and statistical methods. Performance was evaluated on an emotion recognition system based on a Gaussian mixture model (GMM) to select the pitch parameters that perform best in combination with the cepstrum parameters, using sequential feature selection as the selection method. In an experiment distinguishing the four emotions of neutral, joy, sadness, and anger, fifteen of the 56 pitch parameters were selected and showed the best recognition performance when fused with the cepstrum and delta cepstrum coefficients. This is a 48.9% reduction in error compared with an emotion recognition system using only pitch parameters.
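
Below is a minimal sketch of the selection loop the abstract describes: sequential forward selection wrapped around per-emotion GMMs scored by classification accuracy, using scikit-learn's GaussianMixture. The train/test split, diagonal covariances, and component count are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_accuracy(X_tr, y_tr, X_te, y_te, cols, n_components=4):
    """Train one GMM per emotion on the chosen feature columns and
    classify test vectors by maximum log-likelihood."""
    labels = sorted(np.unique(y_tr))
    models = [GaussianMixture(n_components, covariance_type="diag",
                              random_state=0).fit(X_tr[y_tr == c][:, cols])
              for c in labels]
    scores = np.column_stack([m.score_samples(X_te[:, cols]) for m in models])
    pred = np.asarray(labels)[scores.argmax(axis=1)]
    return (pred == y_te).mean()

def sequential_forward_selection(X_tr, y_tr, X_te, y_te, max_feats=15):
    """Greedily add the pitch-parameter column that most improves
    GMM classification accuracy (sequential feature selection)."""
    selected, remaining = [], list(range(X_tr.shape[1]))
    while remaining and len(selected) < max_feats:
        best = max(remaining, key=lambda f: gmm_accuracy(
            X_tr, y_tr, X_te, y_te, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```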

2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템 (Emotion Recognition and Expression System of Robot Based on 2D Facial Image)

  • 이동훈;심귀보
    • 제어로봇시스템학회논문지 / Vol.13 No.4 / pp.371-376 / 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from 2D facial images, using the motion and position of many facial features. A tracking algorithm is applied so the mobile robot can follow a moving user, and a facial-region detection algorithm removes the skin color of the hands and the background outside the facial region from the captured user image. After normalization, in which the image is enlarged or reduced according to the distance to the detected facial region and rotated according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm is implemented to enable the robot to recognize the user's emotion. A multilayer perceptron, an artificial neural network (ANN), is used as the pattern recognizer, with the back-propagation (BP) algorithm for learning. The emotion recognized by the robot is expressed on a graphic LCD: two coordinates change according to the emotion output of the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with those coordinates. Through this system, complex human emotions are expressed by an avatar on the LCD.
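
Below is a minimal sketch of the pattern-recognition stage as described: a multilayer perceptron trained with back-propagation. scikit-learn's MLPClassifier with an SGD solver stands in for the paper's ANN; the feature dimensionality and class count are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical facial-feature vectors: positions/motions of facial
# features, one row per normalized face image (dimensions assumed).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 4, size=200)          # assumed 4 emotion classes

# Multilayer perceptron trained with back-propagation: solver="sgd"
# performs gradient descent on the back-propagated error.
mlp = MLPClassifier(hidden_layer_sizes=(16,), solver="sgd",
                    learning_rate_init=0.05, max_iter=2000,
                    random_state=0).fit(X, y)

emotion = mlp.predict(X[:1])[0]           # class index that would drive
print(emotion)                            # the LCD avatar's parameters
```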

Emotion Recognition Implementation with Multimodalities of Face, Voice and EEG

  • Udurume, Miracle;Caliwag, Angela;Lim, Wansu;Kim, Gwigon
    • Journal of Information and Communication Convergence Engineering / Vol.20 No.3 / pp.174-180 / 2022
  • Emotion recognition is an essential component of complete interaction between human and machine. The difficulty of emotion recognition stems from emotions being expressed in several forms, such as visual, sound, and physiological signals. Recent advancements in the field show that combined modalities, such as visual, voice, and electroencephalography signals, lead to better results than single modalities used separately. Previous studies have explored the use of multiple modalities for accurate prediction of emotion; however, the number of studies on real-time implementation is limited because of the difficulty of implementing multiple modalities of emotion recognition simultaneously. In this study, we propose an emotion recognition system for real-time implementation. Our model is built with a multithreading block that runs each modality in a separate thread for continuous synchronization. We first achieved emotion recognition for each modality separately before enabling the multithreaded system. To verify the correctness of the results, we compared the accuracy of unimodal and multimodal emotion recognition in real time. The experimental results demonstrate real-time user emotion recognition with the proposed model and show the effectiveness of the multimodalities: the multimodal model obtained an accuracy of 80.1%, compared with unimodal accuracies of 70.9%, 54.3%, and 63.1%.
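
Below is a minimal sketch of the multithreading block's idea: one worker thread per modality pushing timestamped predictions into a shared queue that a fusion step drains. The per-modality predictors, frame period, and majority-vote fusion are illustrative placeholders, not the paper's implementation.

```python
import queue
import threading
import time

results = queue.Queue()

def modality_worker(name, predict):
    """Run one modality's recognizer in its own thread, pushing
    (modality, prediction, timestamp) tuples for the fusion stage."""
    while True:
        results.put((name, predict(), time.time()))
        time.sleep(0.1)                      # assumed per-frame period

def fuse(window):
    """Toy late fusion: majority vote over recent predictions."""
    votes = [pred for _, pred, _ in window]
    return max(set(votes), key=votes.count)

# Placeholder per-modality predictors; real ones would wrap the
# face, voice, and EEG models.
for name, fn in [("face", lambda: "happy"),
                 ("voice", lambda: "happy"),
                 ("eeg", lambda: "neutral")]:
    threading.Thread(target=modality_worker, args=(name, fn),
                     daemon=True).start()

window = [results.get() for _ in range(3)]   # first three predictions
print(fuse(window))                          # typically "happy" (2 of 3)
```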

SYMMER: A Systematic Approach to Multiple Musical Emotion Recognition

  • Lee, Jae-Sung;Jo, Jin-Hyuk;Lee, Jae-Joon;Kim, Dae-Won
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol.11 No.2 / pp.124-128 / 2011
  • Music emotion recognition is currently one of the most attractive research areas in music information retrieval. To use emotion as a clue when searching for a particular piece of music, an emotion recognition system for music is fundamental, and recognition accuracy is very important for maximizing user satisfaction. In this paper, we develop a new music emotion recognition system that employs a multilabel feature selector and a multilabel classifier. The performance of the proposed system is demonstrated on novel musical emotion data.
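
Since a piece of music can evoke several emotions at once, the task is multilabel. Below is a minimal sketch of the multilabel-feature-selector-plus-classifier structure in scikit-learn, with per-label univariate selection standing in for the paper's selector; the data and model choices are illustrative assumptions, not the SYMMER method itself.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))            # assumed audio features
Y = rng.integers(0, 2, size=(100, 4))     # 4 binary emotion labels/track

# Per-label feature selection + per-label binary classifier: each
# emotion gets its own selected feature subset and classifier.
models = []
for j in range(Y.shape[1]):
    sel = SelectKBest(f_classif, k=10).fit(X, Y[:, j])
    clf = LogisticRegression(max_iter=1000).fit(sel.transform(X), Y[:, j])
    models.append((sel, clf))

# Multilabel prediction: stack the per-label decisions column-wise.
pred = np.column_stack([clf.predict(sel.transform(X))
                        for sel, clf in models])
```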

얼굴 감정을 이용한 시청자 감정 패턴 분석 및 흥미도 예측 연구 (A Study on Sentiment Pattern Analysis of Video Viewers and Predicting Interest in Video using Facial Emotion Recognition)

  • 조인구;공연우;전소이;조서영;이도훈
    • 한국멀티미디어학회논문지 / Vol.25 No.2 / pp.215-220 / 2022
  • Emotion recognition is one of the most important and challenging areas of computer vision. Many studies on emotion recognition have been conducted and model performance continues to improve, but more research is needed on emotion recognition and sentiment analysis of video viewers. In this paper, we propose an emotion analysis system that includes a sentiment analysis model and an interest prediction model. We analyzed the emotional patterns of people watching popular and unpopular videos and predicted their level of interest using the system. Experimental results showed that certain emotions were strongly related to the popularity of videos and that the interest prediction model predicted the level of interest with high accuracy.
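
One plausible reading of the pipeline is that per-frame facial emotion probabilities are summarized into a viewer-level pattern vector that the interest model consumes. The sketch below illustrates only that aggregation step; the emotion set, statistics, and downstream model are assumptions, not the paper's design.

```python
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise"]  # assumed set

def viewer_pattern(frame_probs: np.ndarray) -> np.ndarray:
    """Summarize per-frame emotion probabilities for one viewer
    (num_frames x num_emotions) into a session-level pattern vector:
    mean and standard deviation of each emotion over the session."""
    return np.concatenate([frame_probs.mean(axis=0),
                           frame_probs.std(axis=0)])

# An interest model would then be trained on such pattern vectors
# labeled with interest level / video popularity, e.g.
#   clf.fit(np.stack([viewer_pattern(p) for p in sessions]), labels)
```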

음성을 이용한 화자 및 문장독립 감정인식 (Speaker and Context Independent Emotion Recognition using Speech Signal)

  • 강면구;김원구
    • 대한전자공학회 Conference Proceedings / Proceedings of the 2002 Summer Annual Conference (4) / pp.377-380 / 2002
  • In this paper, speaker- and context-independent emotion recognition using the speech signal is studied. For this purpose, a corpus of emotional speech data was recorded and classified according to emotion by subjective evaluation, and used to build statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy, and to evaluate the performance of conventional pattern matching algorithms. A vector-quantization-based emotion recognition system is proposed for speaker- and context-independent emotion recognition. Experimental results showed that the vector-quantization-based emotion recognizer using MFCC parameters performed better than the one using pitch and energy parameters.
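
Below is a minimal sketch of the proposed vector-quantization approach: one codebook per emotion (k-means here stands in for the LBG-style codebook training commonly used) and classification by minimum average quantization distortion. Feature shapes and the codebook size are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_codebooks(features_by_emotion, codebook_size=16):
    """Fit one codebook per emotion from that emotion's feature frames.
    KMeans stands in for the LBG algorithm used in this line of work."""
    return {emo: KMeans(n_clusters=codebook_size, n_init=10,
                        random_state=0).fit(frames).cluster_centers_
            for emo, frames in features_by_emotion.items()}

def classify(frames, codebooks):
    """Pick the emotion whose codebook quantizes the utterance's
    frames with the smallest average distortion."""
    def distortion(cb):
        d = np.linalg.norm(frames[:, None, :] - cb[None, :, :], axis=2)
        return d.min(axis=1).mean()
    return min(codebooks, key=lambda emo: distortion(codebooks[emo]))
```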


감정 인식을 위한 음성의 특징 파라메터 비교 (The Comparison of Speech Feature Parameters for Emotion Recognition)

  • 김원구
    • 한국지능시스템학회 Conference Proceedings / 한국퍼지및지능시스템학회 Proceedings of the 2004 Spring Conference, Vol.14 No.1 / pp.470-473 / 2004
  • In this paper, speech feature parameters for emotion recognition from the speech signal are compared. For this purpose, a corpus of emotional speech data was recorded and classified according to emotion by subjective evaluation, and used to build statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy. MFCC parameters and their derivatives, with or without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern matching algorithms. Pitch and energy parameters were used as prosodic information and MFCC parameters as phonetic information. In the experiments, a vector-quantization-based emotion recognition system was used for speaker- and context-independent emotion recognition. Experimental results showed that the vector-quantization-based emotion recognizer using MFCC parameters performed better than the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.
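
Below is a minimal sketch of the prosodic statistics the abstract names (average, standard deviation, and maximum of pitch and energy), using librosa's pitch and RMS estimators. The sampling rate, pitch range, and feature layout are assumptions, not the paper's settings.

```python
import numpy as np
import librosa

def prosodic_features(path: str) -> np.ndarray:
    """Return [mean, std, max] of pitch and of energy for one utterance."""
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)   # frame-level pitch (Hz)
    rms = librosa.feature.rms(y=y)[0]               # frame-level energy
    f0 = f0[np.isfinite(f0)]                        # drop undefined frames
    return np.array([f0.mean(), f0.std(), f0.max(),
                     rms.mean(), rms.std(), rms.max()])
```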


음성신호를 이용한 감성인식에서의 패턴인식 방법 (The Pattern Recognition Methods for Emotion Recognition with Speech Signal)

  • 박창현;심귀보
    • 제어로봇시스템학회논문지 / Vol.12 No.3 / pp.284-288 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are needed, and the speech features for emotion recognition are determined in the database analysis step. Second, recognition algorithms are applied to these features: an artificial neural network, Bayesian learning, principal component analysis, and the LBG algorithm. The performance gap between these methods is presented in the experimental results section. Emotion recognition technology is not yet mature: the choice of emotion features and of a suitable classification method are still open questions, and we hope this paper serves as a reference in that discussion.
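
The abstract names four classifier families. Below is a minimal sketch of such a comparison harness using scikit-learn stand-ins for three of them (the ANN, a Bayesian classifier, and PCA-based recognition; an LBG-style vector quantizer is sketched under the 2002 entry above). The data and model settings are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))        # assumed speech feature vectors
y = rng.integers(0, 4, size=300)      # assumed 4 emotion classes

candidates = {
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                               random_state=0),
    "Bayesian (naive Bayes)": GaussianNB(),
    "PCA + nearest neighbor": make_pipeline(PCA(n_components=8),
                                            KNeighborsClassifier()),
}
# Same cross-validation protocol for every method, so the reported
# gap reflects the classifiers rather than the evaluation setup.
for name, model in candidates.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```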

음성신호를 이용한 감성인식에서의 패턴인식 방법 (The Pattern Recognition Methods for Emotion Recognition with Speech Signal)

  • 박창현;심귀보
    • 한국지능시스템학회 Conference Proceedings / 한국퍼지및지능시스템학회 Proceedings of the 2006 Spring Conference, Vol.16 No.1 / pp.347-350 / 2006
  • Abstract identical to the journal version of this paper listed above.
