• Title/Summary/Keyword: Emotion Classification

Classification of Negative Emotions based on Arousal Score and Physiological Signals using Neural Network (신경망을 이용한 다중 심리-생체 정보 기반의 부정 감성 분류)

  • Kim, Ahyoung;Jang, Eun-Hye;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.21 no.1 / pp.177-186 / 2018
  • The mechanism of emotion is complex and influenced by a variety of factors, so it is crucial to analyze emotion from broad and diverse perspectives. In this study, we classified a neutral state and negative emotions (sadness, fear, surprise) using the arousal score, one of the psychological evaluation scales, together with physiological signals. We not only examined the differences between the physiological signals coupled to the emotions, but also assessed how accurately these emotions can be classified by an emotion recognizer based on a neural network algorithm. A total of 146 participants (mean age 20.1 ± 4.0 years, 41% male) were emotionally stimulated while their electrocardiogram, blood volume pulse, and electrodermal activity were recorded. In addition, the participants rated their psychological states on an emotional rating scale in response to the stimuli. Heart rate (HR), the standard deviation of NN intervals (SDNN), blood volume pulse (BVP), pulse transit time (PTT), skin conductance level (SCL), and skin conductance response (SCR) were calculated before and after the emotional stimulation. As a result, differences in physiological responses across the emotions were verified, and the highest classification performance of 86.9% was obtained using the combined analysis of arousal and physiological features. This study suggests that negative emotions can be categorized through psychological and physiological evaluation combined with machine learning, which can contribute to the science and technology of detecting human emotion.
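
To make the pipeline this abstract describes more concrete, the following minimal Python sketch trains a small neural-network classifier on an arousal score plus the listed physiological features using scikit-learn's MLPClassifier. The feature layout, network size, and synthetic placeholder data are assumptions for illustration, not the authors' exact setup.

    # Hypothetical sketch: classify neutral vs. sadness/fear/surprise from arousal + physiological features.
    # Feature names, the (16,) hidden layer, and the random data are illustrative assumptions.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    FEATURES = ["arousal", "HR", "SDNN", "BVP", "PTT", "SCL", "SCR"]
    LABELS = ["neutral", "sadness", "fear", "surprise"]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(146, len(FEATURES)))       # placeholder per-participant feature matrix
    y = rng.integers(0, len(LABELS), size=146)      # placeholder emotion labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    clf = make_pipeline(
        StandardScaler(),                           # physiological features live on very different scales
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))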

Improvement of Facial Emotion Recognition Performance through Addition of Geometric Features (기하학적 특징 추가를 통한 얼굴 감정 인식 성능 개선)

  • Hoyoung Jung;Hee-Il Hahn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.155-161 / 2024
  • In this paper, we propose a new model that adds facial landmark information as a feature vector to an existing CNN-based facial emotion classification model. CNN-based facial emotion classification has been studied in various forms, but its recognition rate remains low. To improve on CNN-only models, we propose an algorithm that raises facial expression classification accuracy by combining the CNN with a fully connected network over landmarks obtained by the Active Shape Model (ASM). Including the landmarks improved the recognition rate by several percentage points, and experiments confirmed that further gains could be obtained by adding FACS-based action units to the landmarks.
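
As a rough illustration of the two-branch idea in this abstract, the PyTorch sketch below concatenates CNN features from the face image with a fully connected encoding of landmark coordinates before classification. The layer sizes, 48x48 grayscale input, 68-point landmark count, and 7 emotion classes are assumptions, not the paper's exact architecture.

    # Hypothetical sketch: CNN image branch + fully connected landmark branch, fused for emotion classification.
    import torch
    import torch.nn as nn

    class TwoBranchEmotionNet(nn.Module):
        def __init__(self, n_landmarks=68, n_classes=7):
            super().__init__()
            self.cnn = nn.Sequential(                        # image branch, 48x48 grayscale assumed
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),                                # -> 64 * 12 * 12 features
            )
            self.landmark_fc = nn.Sequential(                # landmark branch, (x, y) per point
                nn.Linear(n_landmarks * 2, 128), nn.ReLU(),
            )
            self.head = nn.Sequential(                       # classifier over the fused features
                nn.Linear(64 * 12 * 12 + 128, 256), nn.ReLU(),
                nn.Linear(256, n_classes),
            )

        def forward(self, image, landmarks):
            fused = torch.cat([self.cnn(image), self.landmark_fc(landmarks)], dim=1)
            return self.head(fused)

    model = TwoBranchEmotionNet()
    logits = model(torch.randn(4, 1, 48, 48), torch.randn(4, 68 * 2))
    print(logits.shape)                                      # torch.Size([4, 7])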

EEG Dimensional Reduction with Stack AutoEncoder for Emotional Recognition using LSTM/RNN (LSTM/RNN을 사용한 감정인식을 위한 스택 오토 인코더로 EEG 차원 감소)

  • Aliyu, Ibrahim;Lim, Chang-Gyoon
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.4 / pp.717-724 / 2020
  • Because emotion plays an important role in human interaction, affective computing seeks to understand and regulate emotion through human-aware artificial intelligence. A better understanding of emotion would also help in managing conditions associated with it, such as depression, autism, attention deficit hyperactivity disorder, and game addiction, and various emotion recognition studies have been conducted to this end. When applying machine learning to emotion recognition, effort is needed to reduce the complexity of the algorithm while improving accuracy. In this paper, we investigate EEG (electroencephalogram) feature reduction using a Stacked AutoEncoder (SAE) and classification using Long Short-Term Memory / Recurrent Neural Networks (LSTM/RNN). The proposed method reduced the complexity of the model and significantly enhanced the performance of the classifiers.
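
The two-stage design the abstract describes, dimensionality reduction with a stacked autoencoder followed by sequence classification with an LSTM, can be sketched in PyTorch as below. The 32-feature EEG input, 128-step windows, hidden sizes, and two output classes are assumptions rather than the paper's configuration, and the autoencoder would normally be pretrained on a reconstruction loss before the classifier is trained.

    # Hypothetical sketch: stacked autoencoder compresses per-timestep EEG features, an LSTM classifies the sequence.
    import torch
    import torch.nn as nn

    class StackedAE(nn.Module):
        def __init__(self, n_in=32, n_code=8):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_in, 16), nn.ReLU(), nn.Linear(16, n_code))
            self.decoder = nn.Sequential(nn.Linear(n_code, 16), nn.ReLU(), nn.Linear(16, n_in))

        def forward(self, x):
            code = self.encoder(x)
            return self.decoder(code), code

    class LSTMClassifier(nn.Module):
        def __init__(self, n_code=8, n_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(n_code, 32, batch_first=True)
            self.fc = nn.Linear(32, n_classes)

        def forward(self, seq):                      # seq: (batch, time, n_code)
            out, _ = self.lstm(seq)
            return self.fc(out[:, -1])               # classify from the last hidden state

    eeg = torch.randn(4, 128, 32)                    # placeholder (batch, time, features) EEG windows
    recon, codes = StackedAE()(eeg)                  # reconstruction loss would train the autoencoder
    logits = LSTMClassifier()(codes)
    print(codes.shape, logits.shape)                 # torch.Size([4, 128, 8]) torch.Size([4, 2])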

The Pattern Recognition Methods for Emotion Recognition with Speech Signal (음성신호를 이용한 감성인식에서의 패턴인식 방법)

  • Park Chang-Hyun;Sim Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.12 no.3 / pp.284-288 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are required, and the speech features used for emotion recognition are determined during analysis of the database. Second, the recognition algorithms are applied to these speech features. The algorithms we evaluate are an artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is presented in the experimental results section. Emotion recognition technology is not yet mature: the choice of emotion features and of a suitable classification method are both still open questions, and we hope this paper can serve as a reference in that discussion.
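
The kind of side-by-side comparison the abstract describes can be sketched with scikit-learn as below. The random placeholder features stand in for real speech features, and an MLP, Gaussian naive Bayes, and a PCA + nearest-centroid pipeline are used as stand-ins for the paper's neural network, Bayesian learning, PCA, and LBG methods, so this approximates the lineup rather than reproducing it.

    # Hypothetical sketch: compare several classifiers on the same emotional-speech feature set.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import NearestCentroid
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))                   # placeholder speech features (e.g. pitch/energy statistics)
    y = rng.integers(0, 4, size=200)                 # placeholder labels for four emotions

    candidates = {
        "neural network": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
        "naive Bayes": GaussianNB(),
        "PCA + nearest centroid": make_pipeline(StandardScaler(), PCA(n_components=5), NearestCentroid()),
    }
    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
        print(f"{name}: {scores.mean():.3f}")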

The Pattern Recognition Methods for Emotion Recognition with Speech Signal (음성신호를 이용한 감성인식에서의 패턴인식 방법)

  • Park Chang-Hyeon;Sim Gwi-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2006.05a / pp.347-350 / 2006
  • In this paper, we apply several pattern recognition algorithms to an emotion recognition system based on speech signals and compare the results. First, emotional speech databases are required, and the speech features used for emotion recognition are determined during analysis of the database. Second, the recognition algorithms are applied to these speech features. The algorithms we evaluate are an artificial neural network, Bayesian learning, Principal Component Analysis, and the LBG algorithm. The performance gap between these methods is presented in the experimental results section. Emotion recognition technology is not yet mature: the choice of emotion features and of a suitable classification method are both still open questions, and we hope this paper can serve as a reference in that discussion.

An Implementation of a Classification and Recommendation Method for a Music Player Using Customized Emotion (맞춤형 감성 뮤직 플레이어를 위한 음악 분류 및 추천 기법 구현)

  • Song, Yu-Jeong;Kang, Su-Yeon;Ihm, Sun-Young;Park, Young-Ho
    • KIPS Transactions on Software and Data Engineering / v.4 no.4 / pp.195-200 / 2015
  • Recently, most people use Android-based smartphones, and a music player can be found on any of them. However, it is hard to find a personalized music player that reflects the user's preferences. In this paper, we propose an emotion-based music player that analyzes and classifies music according to the user's emotion, recommends music, applies the user's preferences, and visualizes each piece of music with a color. With the proposed player, users can select music easily through an application optimized for them.
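
A toy Python sketch of the behavior outlined here, tagging tracks with an emotion, ordering matches by a play-count preference signal, and mapping the emotion to a display color, is given below. The emotion labels, color codes, and track records are invented for illustration and do not come from the paper.

    # Hypothetical sketch: emotion-based track filtering, preference-aware ordering, and color visualization.
    EMOTION_COLORS = {"happy": "#FFD400", "calm": "#4CAF50", "sad": "#3F51B5", "angry": "#E53935"}

    TRACKS = [
        {"title": "Track A", "emotion": "happy", "plays": 12},
        {"title": "Track B", "emotion": "sad", "plays": 3},
        {"title": "Track C", "emotion": "happy", "plays": 7},
    ]

    def recommend(emotion, tracks, limit=10):
        """Return tracks tagged with the requested emotion, most-played (user preference) first."""
        matching = [t for t in tracks if t["emotion"] == emotion]
        return sorted(matching, key=lambda t: t["plays"], reverse=True)[:limit]

    print(EMOTION_COLORS["happy"], [t["title"] for t in recommend("happy", TRACKS)])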

Emotion Recognition in Arabic Speech from Saudi Dialect Corpus Using Machine Learning and Deep Learning Algorithms

  • Hanaa Alamri;Hanan S. Alshanbari
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.9-16 / 2023
  • Speech can actively convey feelings and attitudes through words, so it is important for researchers to identify the emotional content carried in speech signals and the type of emotion expressed. In this study, we examined an emotion recognition system using an Arabic database in the Saudi dialect, collected from the YouTube channel Telfaz11. Four emotions were examined: anger, happiness, sadness, and neutral. In our experiments, we extracted features such as the Mel-Frequency Cepstral Coefficients (MFCC) and the Zero-Crossing Rate (ZCR) from the audio signals, and then classified emotions using machine learning algorithms (Support Vector Machine (SVM) and K-Nearest Neighbor (KNN)) and deep learning algorithms (Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM)). Our experiments showed that the combination of MFCC features and the CNN model obtained the best accuracy, 95%, demonstrating the effectiveness of this system for recognizing emotions in spoken Arabic.
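
A minimal Python sketch of the feature-extraction and CNN stage described here, MFCC plus zero-crossing rate computed with librosa and fed to a small 1-D convolutional classifier, follows. The synthetic signal, feature sizes, and network shape are assumptions rather than the authors' exact model.

    # Hypothetical sketch: MFCC + ZCR features from an audio clip, classified by a small 1-D CNN.
    import numpy as np
    import librosa
    import torch
    import torch.nn as nn

    sr = 16000
    signal = np.random.randn(sr * 3).astype(np.float32)        # placeholder 3-second utterance

    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)     # (13, frames)
    zcr = librosa.feature.zero_crossing_rate(signal)            # (1, frames)
    features = np.vstack([mfcc, zcr])                           # (14, frames)

    class SpeechEmotionCNN(nn.Module):
        def __init__(self, n_features=14, n_classes=4):         # anger, happiness, sadness, neutral
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_features, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(32, n_classes),
            )

        def forward(self, x):                                   # x: (batch, n_features, frames)
            return self.net(x)

    x = torch.from_numpy(features).unsqueeze(0).float()
    print(SpeechEmotionCNN()(x).shape)                          # torch.Size([1, 4])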

Action recognition, hand gesture recognition, and emotion recognition using text classification method (Text classification 방법을 사용한 행동 인식, 손동작 인식 및 감정 인식)

  • Kim, Gi-Duk
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.213-216 / 2021
  • In this paper, we propose action recognition, hand gesture recognition, and emotion recognition methods that apply a deep learning model used for text classification. First, features are extracted from video using a library, a formula is applied, and the resulting feature vectors are stored. These vectors are then used to train a model that combines Conv1D, a Transformer, and a GRU. In this way, a single deep learning model can be applied to several different domains. Using the proposed method, we obtained class classification accuracies of 99.66% on the SYSU 3D HOI dataset, 99.0% on the eNTERFACE'05 dataset, and 95.48% on the DHG-14 dataset.
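
A minimal PyTorch sketch of a sequence classifier in the spirit of the Conv1D + Transformer + GRU combination described above is shown below. The per-frame feature size, hidden dimensions, and 14-way output (matching DHG-14) are assumptions for illustration only.

    # Hypothetical sketch: Conv1D + Transformer encoder + GRU over per-frame feature vectors from video.
    import torch
    import torch.nn as nn

    class ConvTransformerGRU(nn.Module):
        def __init__(self, n_features=128, n_classes=14):
            super().__init__()
            self.conv = nn.Conv1d(n_features, 64, kernel_size=3, padding=1)
            enc_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
            self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
            self.gru = nn.GRU(64, 64, batch_first=True)
            self.fc = nn.Linear(64, n_classes)

        def forward(self, x):                                   # x: (batch, time, n_features)
            h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)   # Conv1D over time
            h = self.transformer(h)                             # self-attention across frames
            _, last = self.gru(h)                               # GRU summarizes the sequence
            return self.fc(last[-1])

    frames = torch.randn(2, 30, 128)                            # placeholder per-frame feature vectors
    print(ConvTransformerGRU()(frames).shape)                   # torch.Size([2, 14])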
