• Title/Summary/Keyword: Emotion Classification


Implementation of the Speech Emotion Recognition System in the ARM Platform (ARM 플랫폼 기반의 음성 감성인식 시스템 구현)

  • Oh, Sang-Heon; Park, Kyu-Sik
    • Journal of Korea Multimedia Society, v.10 no.11, pp.1530-1537, 2007
  • In this paper, we implemented a speech emotion recognition system that can distinguish human emotional states from speech recorded with a single microphone and classify them into four categories: neutrality, happiness, sadness, and anger. In general, speech recorded with a microphone contains background noise due to the speaker's environment and the microphone characteristics, which can seriously degrade system performance. To minimize the effect of this noise and improve system performance, an MA (Moving Average) filter with a relatively simple structure and low computational complexity was adopted. An SFS (Sequential Forward Selection) feature optimization method was then implemented to further improve and stabilize performance. For speech emotion classification, an SVM pattern classifier was used. The experimental results indicate emotion classification performance of around 65% in computer simulation and 62% on the ARM platform.

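The pipeline described above (MA smoothing, SFS feature selection, SVM classification) can be approximated in a few lines of scikit-learn. The sketch below is a minimal illustration on synthetic feature vectors, not the authors' implementation; the window size, feature count, and labels are assumptions.

```python
# Minimal sketch of an MA-filter -> SFS -> SVM pipeline on synthetic data.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split

def moving_average(x, window=5):
    """Simple MA filter standing in for the paper's noise smoothing."""
    return np.convolve(x, np.ones(window) / window, mode="same")

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))              # hypothetical acoustic features
y = rng.integers(0, 4, size=200)            # 0=neutral, 1=happy, 2=sad, 3=angry

X = np.apply_along_axis(moving_average, 0, X)   # smooth each feature column
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf")
sfs = SequentialFeatureSelector(svm, n_features_to_select=8, direction="forward")
sfs.fit(X_tr, y_tr)                         # sequential forward selection

svm.fit(sfs.transform(X_tr), y_tr)
print("accuracy:", svm.score(sfs.transform(X_te), y_te))
```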

Development of Facial Emotion Recognition System Based on Optimization of HMM Structure by using Harmony Search Algorithm (Harmony Search 알고리즘 기반 HMM 구조 최적화에 의한 얼굴 정서 인식 시스템 개발)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.21 no.3, pp.395-400, 2011
  • In this paper, we propose a facial emotion recognition approach that considers the dynamic variation of emotional state across facial image sequences. The proposed system consists of two main steps: facial-image-based emotional feature extraction and emotional state classification/recognition. First, we propose a method for extracting and analyzing emotional feature regions using a combination of the Active Shape Model (ASM) and Facial Action Units (FAUs). We then propose an emotional state classification and recognition method based on a Hidden Markov Model (HMM), a type of dynamic Bayesian network. We also adopt a Harmony Search (HS) based heuristic optimization procedure for HMM parameter learning in order to classify emotional states more accurately. Using these methods, we construct an emotion recognition system based on variations in dynamic facial image sequences and attempt to improve its recognition performance.
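
As a rough illustration of how Harmony Search can tune an HMM's structure, the sketch below searches over the number of hidden states using held-out log-likelihood as the fitness. The features, rates (HMCR, PAR), and state range are assumptions, and hmmlearn stands in for whatever HMM implementation the authors used.

```python
# Hedged sketch: Harmony Search over the number of HMM states on synthetic
# frame-level features; not the paper's ASM/FAU pipeline or objective.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
train = rng.normal(size=(300, 6))   # hypothetical frame-level emotion features
valid = rng.normal(size=(100, 6))

def fitness(n_states):
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
    model.fit(train)
    return model.score(valid)        # higher held-out likelihood is better

# Harmony memory holds candidate structures (here just the state count 2..8).
memory = [int(rng.integers(2, 9)) for _ in range(5)]
scores = [fitness(n) for n in memory]
HMCR, PAR = 0.9, 0.3                 # memory-considering and pitch-adjust rates

for _ in range(20):
    if rng.random() < HMCR:
        cand = int(rng.choice(memory))
        if rng.random() < PAR:                   # pitch adjustment: +/- 1 state
            cand = int(np.clip(cand + rng.choice([-1, 1]), 2, 8))
    else:
        cand = int(rng.integers(2, 9))
    s = fitness(cand)
    worst = int(np.argmin(scores))
    if s > scores[worst]:                        # replace the worst harmony
        memory[worst], scores[worst] = cand, s

print("selected number of states:", memory[int(np.argmax(scores))])
```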

Prediction of Citizens' Emotions on Home Mortgage Rates Using Machine Learning Algorithms (기계학습 알고리즘을 이용한 주택 모기지 금리에 대한 시민들의 감정예측)

  • Kim, Yun-Ki
    • Journal of Cadastre & Land InformatiX, v.49 no.1, pp.65-84, 2019
  • This study attempted to predict citizens' emotions regarding mortgage rates using machine learning algorithms. To accomplish the research purpose, I reviewed the related literature and then set up two research questions. To answer them, I classified emotions according to Akman's classification and then predicted citizens' emotions on mortgage rates using six machine learning algorithms. The results showed that AdaBoost was the best classifier in all evaluation categories, whereas the performance of Naive Bayes was lower than that of the other classifiers. The study also conducted an ROC analysis to identify which classifier predicts each emotion category well. The results demonstrated that AdaBoost was the best predictor of residents' emotions on home mortgage rates in all emotion categories. However, in the sadness class, the performance levels of all six algorithms were much lower than in the other emotion categories.
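
A comparable evaluation can be sketched with scikit-learn: fit several classifiers and compare per-class ROC-AUC, as below. The data are synthetic placeholders for the mortgage-rate corpus, and only three of the six classifiers are shown.

```python
# Sketch: compare classifiers and run a per-emotion-class ROC analysis.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize

X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
y_bin = label_binarize(y_te, classes=[0, 1, 2, 3])   # for one-vs-rest AUC

models = {
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
}
for name, clf in models.items():
    proba = clf.fit(X_tr, y_tr).predict_proba(X_te)
    # One AUC per emotion class, mirroring a per-category ROC analysis.
    print(name, np.round(roc_auc_score(y_bin, proba, average=None), 3))
```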

A multidisciplinary analysis of the main actor's conflict emotions in Animation film's Turning Point (장편 애니메이션 극적전환점에서 주인공의 갈등 정서에 대한 다학제적 분석)

  • Lee, Tae Rin; Kim, Jong Dae; Liu, Guoxu; Ingabire, Jesse; Kim, Jae Ho
    • Korea Science and Art Forum, v.34, pp.275-290, 2018
  • The study began with the recognition that feature animation needs objective and reasonable methods for classifying conflicts visually, so that conflicts centered on the narrative can be analyzed and the protagonist's emotions in conflict can be studied. The purpose of the study is to analyze conflict intensity and emotion. The results and contents of the study are as follows. First, we identified turning points and proposed a conflict classification model (Conflict 6B Model). Second, based on the conflict classification model, a conflict-based shot DB was extracted. Third, intensity and emotion were identified in internal and super-personal conflicts. Fourth, experiments and tests of intensity and emotion were conducted for internal and super-personal conflicts. The results of this study are metadata extracted from emotion research on conflict, and they are expected to be applicable to video indexing of conflicts.

A research on the emotion classification and precision improvement of EEG(Electroencephalogram) data using machine learning algorithm (기계학습 알고리즘에 기반한 뇌파 데이터의 감정분류 및 정확도 향상에 관한 연구)

  • Lee, Hyunju; Shin, Dongil; Shin, Dongkyoo
    • Journal of Internet Computing and Services, v.20 no.5, pp.27-36, 2019
  • In this study, experiments on emotion classification, analysis, and accuracy improvement for EEG data were conducted using the DEAP (Database for Emotion Analysis using Physiological signals) dataset. The experiments used 32 channels of EEG data measured from 32 subjects. In the pre-processing step, the EEG data were sampled at 256 Hz, and the theta, slow-alpha, alpha, beta, and gamma frequency bands were extracted using a finite impulse response (FIR) filter. After the extracted data were classified through a time-frequency transform, they were cleaned using Independent Component Analysis (ICA) to remove artifacts. The cleaned data were converted into CSV format for the machine learning experiments, and the arousal-valence plane was used as the criterion for emotion classification. Emotions were categorized into three sections: 'positive', 'negative', and 'neutral', the last denoting a tranquil emotional state. Data for the 'neutral' condition were classified using the Cz (central zero) channel, configured as the reference channel. To improve accuracy, the experiment was performed with attributes selected by an Attribute Selected Classifier (ASC). In the arousal dimension, the accuracy of this study's experiments was 32.48% higher than Koelstra's results, and with ASC the valence accuracy was 8.13% higher than Liu's results. In the Random Forest experiment adopting ASC to improve accuracy, the accuracy was 2.68% higher than the overall mean of existing studies.
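
The preprocessing chain described in this abstract (FIR band-pass filtering into EEG bands, ICA cleaning, then a classifier over arousal-valence labels) might look roughly like the sketch below. The signals and labels are synthetic, the DEAP loading and time-frequency step are omitted, and the band edges and feature choice are assumptions.

```python
# Rough sketch: FIR band-pass features per EEG band, ICA cleaning, Random Forest.
import numpy as np
from scipy.signal import firwin, filtfilt
from sklearn.decomposition import FastICA
from sklearn.ensemble import RandomForestClassifier

fs = 256                                   # sampling rate used in the paper
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
rng = np.random.default_rng(0)

def band_log_power(sig, lo, hi):
    taps = firwin(129, [lo, hi], pass_zero=False, fs=fs)
    return np.log(np.var(filtfilt(taps, 1.0, sig)))

def trial_features(eeg):                   # eeg: (channels, samples)
    # ICA as a crude stand-in for artifact removal (real pipelines drop components).
    clean = FastICA(n_components=eeg.shape[0], random_state=0).fit_transform(eeg.T).T
    return [band_log_power(ch, lo, hi) for ch in clean for lo, hi in bands.values()]

# 40 fake trials of 32-channel, 5-second EEG; labels 0=negative, 1=neutral, 2=positive.
trials = rng.normal(size=(40, 32, fs * 5))
y = rng.integers(0, 3, size=40)
X = np.array([trial_features(t) for t in trials])

clf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```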

Arousal and Valence Classification Model Based on Long Short-Term Memory and DEAP Data for Mental Healthcare Management

  • Choi, Eun Jeong; Kim, Dong Keun
    • Healthcare Informatics Research, v.24 no.4, pp.309-316, 2018
  • Objectives: Both the valence and arousal components of affect are important considerations when managing mental healthcare because they are associated with affective and physiological responses. Research on arousal and valence analysis using images, texts, and physiological signals with deep learning is actively underway, and research investigating how to improve the recognition rate is needed. The goal of this research was to design a deep learning framework and model to classify arousal and valence, indicating positive and negative degrees of emotion, as high or low. Methods: The proposed arousal and valence classification model for analyzing the affective state was tested using data from 40 channels provided by a dataset for emotion analysis using electroencephalography (EEG), physiological, and video signals (the DEAP dataset). Experiments were based on 10 selected central and peripheral nervous system data points, using long short-term memory (LSTM) as the deep learning method. Results: Arousal and valence were classified and visualized on a two-dimensional coordinate plane. Profiles were designed depending on the number of hidden layers, nodes, and hyperparameters according to the error rate. The experimental results show arousal and valence classification accuracies of 74.65% and 78%, respectively. The proposed model performed better than previous models. Conclusions: The proposed model appears to be effective in analyzing arousal and valence; specifically, affective analysis using physiological signals based on LSTM is expected to be possible without manual feature extraction. In a future study, the classification model will be adopted in mental healthcare management systems.
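
For readers who want a concrete starting point, a minimal LSTM classifier of this kind can be written in a few lines of PyTorch, as below; the channel count, window length, hidden size, and labels are placeholders, not the paper's configuration.

```python
# Minimal LSTM sketch for high/low arousal or valence on synthetic signal windows.
import torch
import torch.nn as nn

class AVClassifier(nn.Module):
    def __init__(self, n_channels=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)        # high vs. low

    def forward(self, x):                       # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])

model = AVClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 128, 10)                    # fake batch of signal windows
y = torch.randint(0, 2, (32,))                  # fake high/low labels

for _ in range(5):                              # tiny illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```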

Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild (준 지도학습과 여러 개의 딥 뉴럴 네트워크를 사용한 멀티 모달 기반 감정 인식 알고리즘)

  • Kim, Dae Ha; Song, Byung Cheol
    • Journal of Broadcast Engineering, v.23 no.3, pp.351-360, 2018
  • Human emotion recognition is a research topic receiving continuous attention in the computer vision and artificial intelligence domains. This paper proposes a method for classifying human emotions through multiple neural networks based on multi-modal signals consisting of image, landmark, and audio data in a wild environment. The proposed method has the following features. First, the learning performance of the image-based network is greatly improved by employing both multi-task learning and semi-supervised learning using the spatio-temporal characteristics of videos. Second, a model for converting one-dimensional (1D) facial landmark information into two-dimensional (2D) images is newly proposed, and a CNN-LSTM network based on this model is proposed for better emotion recognition. Third, based on the observation that audio signals are often very effective for specific emotions, we propose an audio deep learning mechanism robust to those specific emotions. Finally, so-called emotion adaptive fusion is applied to enable synergy among the multiple networks. The proposed network improves emotion classification performance by appropriately integrating existing supervised learning and semi-supervised learning networks. In the fifth attempt on the given test set of the EmotiW2017 challenge, the proposed method achieved a classification accuracy of 57.12%.
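
The fusion step can be illustrated with a simple score-level combination in which each modality's softmax output gets a per-class weight, as sketched below; the class set, networks, and weights are hypothetical and only hint at the "emotion adaptive fusion" idea.

```python
# Sketch of score-level fusion across modality-specific networks with
# per-emotion weights; logits are random stand-ins for real network outputs.
import numpy as np

classes = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
image_p = softmax(rng.normal(size=(4, 7)))      # image CNN-LSTM branch
landmark_p = softmax(rng.normal(size=(4, 7)))   # landmark-image branch
audio_p = softmax(rng.normal(size=(4, 7)))      # audio branch

# Per-class weights: e.g. trust the audio branch more for "angry" and "sad".
w_image = np.full(7, 0.5)
w_landmark = np.full(7, 0.2)
w_audio = np.full(7, 0.3)
w_audio[[0, 4]] += 0.2
w_image[[0, 4]] -= 0.2

fused = w_image * image_p + w_landmark * landmark_p + w_audio * audio_p
print([classes[i] for i in fused.argmax(axis=1)])
```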

Real-time classification system of emotion image using physiological signal (생리신호에 의한 감성 이미지 실시간 분류 시스템 개발)

  • Lee, Jeong-Nyeon; Gwak, Dong-Min; Jeong, Bong-Cheon; Jeon, Gi-Hyeok; Hwang, Min-Cheol
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2009.11a, pp.232-235, 2009
  • This study implements a system that evaluates a user's emotional state as it changes in real time and stores gaze-information images classified as arousal or relaxation. To classify the user's emotion, the arousal and relaxation components of the two-dimensional emotion model defined by Larson and Diner are used. To classify the emotional state, a PPG sensor, which among autonomic nervous system measures is easy to wear and carry, is used, and the amount of amplitude and the number of peaks per second are used as the variables for analyzing the PPG. A head-mounted camera acquires the user's gaze information, and the client computer transmits the acquired gaze information to the server computer over UDP. Image data of 320 x 240 pixels at 32 bits are compressed to 1/30 of their size for transmission, and frames at the moments classified as arousal or relaxation are blocked and saved as JPEG images. The significance of this system is that, by identifying the user's emotional state as it changes in real time, transmitting the images, and storing them on the server computer, it can provide feedback on the emotions the user felt at the time.

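A toy version of the PPG decision rule described above (peaks per second and amplitude thresholded into arousal vs. relaxation) is sketched below; the sampling rate, thresholds, and waveform are all assumptions, and the gaze-capture and UDP parts are omitted.

```python
# Toy PPG arousal/relaxation rule on a synthetic pulse waveform.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                   # assumed PPG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

peaks, props = find_peaks(ppg, height=0.3)
peaks_per_sec = len(peaks) / t[-1]
mean_amplitude = props["peak_heights"].mean()

# Hypothetical rule: a fast, strong pulse is treated as "arousal".
state = "arousal" if peaks_per_sec > 1.1 and mean_amplitude > 0.5 else "relaxation"
print(peaks_per_sec, mean_amplitude, state)
```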

RECOGNIZING SIX EMOTIONAL STATES USING SPEECH SIGNALS

  • Kang, Bong-Seok; Han, Chul-Hee; Youn, Dae-Hee; Lee, Chungyong
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2000.04a, pp.366-369, 2000
  • This paper examines three algorithms for recognizing a speaker's emotion from speech signals. The target emotions are happiness, sadness, anger, fear, boredom, and the neutral state. MLB (Maximum-Likelihood Bayes), NN (Nearest Neighbor), and HMM (Hidden Markov Model) algorithms are used as the pattern matching techniques. In all cases, pitch and energy are used as the features. The feature vectors for MLB and NN are composed of pitch mean, pitch standard deviation, energy mean, energy standard deviation, etc. For HMM, vectors of delta pitch with delta-delta pitch and delta energy with delta-delta energy are used. We recorded a corpus of emotional speech data and performed a subjective evaluation of the data. The subjective recognition rate was 56% and was compared with the classifiers' recognition rates. The MLB, NN, and HMM classifiers achieved recognition rates of 68.9%, 69.3%, and 89.1%, respectively, for speaker-dependent, context-independent classification.

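The MLB baseline can be reproduced in spirit with one Gaussian per emotion fitted to pitch/energy statistics and classification by maximum log-likelihood, as in the sketch below; the data are synthetic and the NN and HMM variants are not shown.

```python
# Compact Gaussian maximum-likelihood classifier over pitch/energy statistics.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
emotions = ["happy", "sad", "angry", "fear", "bored", "neutral"]
# Features per utterance: [pitch mean, pitch std, energy mean, energy std].
X = rng.normal(size=(300, 4)) + np.repeat(np.arange(6), 50)[:, None] * 0.5
y = np.repeat(np.arange(6), 50)

# Fit one Gaussian per emotion class.
models = [multivariate_normal(X[y == k].mean(axis=0), np.cov(X[y == k].T))
          for k in range(6)]

def classify(x):
    """Pick the emotion whose Gaussian gives the highest log-likelihood."""
    return int(np.argmax([m.logpdf(x) for m in models]))

print("predicted:", emotions[classify(X[0])], "true:", emotions[y[0]])
```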

A Study on Classification of Four Emotions using EEG (뇌파를 이용한 4가지 감정 분류에 관한 연구)

  • 강동기; 김동준; 김흥환; 고한우
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2001.11a, pp.87-90, 2001
  • In this study, emotion classification experiments were performed using three EEG parameters in order to find the parameter best suited to an emotion evaluation system. The EEG parameters were linear predictor coefficients and the band-wise cross-correlation coefficients of the FFT spectrum and the AR spectrum, and the target emotions were relaxation, joy, sadness, and irritation. EEG data were collected from four university drama club students, using electrode positions Fp1, Fp2, F3, F4, T3, T4, P3, P4, O1, and O2. After pre-processing, feature parameters were extracted from the collected EEG data and fed into a neural network used as the pattern classifier to classify the emotions. The classification results showed that the linear predictor coefficients performed better than the other two parameters.

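A rough sketch of the best-performing setup above (linear predictor coefficients per channel fed to a neural network) is given below; the Yule-Walker LPC estimate, trial counts, and labels are assumptions rather than the paper's exact procedure.

```python
# Sketch: per-channel linear-prediction (AR) coefficients as features for an MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

def lpc_coeffs(x, order=6):
    """AR coefficients via the Yule-Walker equations."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:x.size + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

rng = np.random.default_rng(0)
# 80 fake trials x 10 channels x 512 samples of EEG; 4 emotion labels.
trials = rng.normal(size=(80, 10, 512))
y = rng.integers(0, 4, size=80)

X = np.array([np.concatenate([lpc_coeffs(ch) for ch in trial]) for trial in trials])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```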