• Title/Summary/Keyword: classification of emotion

Action recognition, hand gesture recognition, and emotion recognition using text classification method (Text classification 방법을 사용한 행동 인식, 손동작 인식 및 감정 인식)

  • Kim, Gi-Duk
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.213-216 / 2021
  • In this paper, we propose action recognition, hand gesture recognition, and emotion recognition methods that apply a deep learning model used for text classification. First, features are extracted from video using a library, a formula is applied, and the resulting feature vectors are stored. These vectors are then used to train a model that combines Conv1D, Transformer, and GRU layers. In this way, a single deep learning model can be applied to a variety of domains. Using the proposed method, we obtained class classification accuracies of 99.66% on the SYSU 3D HOI dataset, 99.0% on the eNTERFACE'05 dataset, and 95.48% on the DHG-14 dataset.
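
A minimal sketch of the described model, assuming the abstract's Conv1D + Transformer + GRU combination applied to sequences of per-frame feature vectors; all layer sizes, the sequence length, and the class count are illustrative placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class ConvTransformerGRU(nn.Module):
    """Conv1D -> Transformer encoder -> GRU -> linear classifier."""
    def __init__(self, feat_dim=64, num_classes=14):
        super().__init__()
        self.conv = nn.Conv1d(feat_dim, 128, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.gru = nn.GRU(128, 64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):                    # x: (batch, time, feat_dim)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)  # conv over the time axis
        x = self.transformer(x)              # self-attention over the sequence
        _, h = self.gru(x)                   # final hidden state of the GRU
        return self.fc(h[-1])                # class logits

model = ConvTransformerGRU()
logits = model(torch.randn(8, 100, 64))     # 8 clips, 100 frames, 64-dim features
```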

Music classification system through emotion recognition based on regression model of music signal and electroencephalogram features (음악신호와 뇌파 특징의 회귀 모델 기반 감정 인식을 통한 음악 분류 시스템)

  • Lee, Ju-Hwan;Kim, Jin-Young;Jeong, Dong-Ki;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.41 no.2 / pp.115-121 / 2022
  • In this paper, we propose a music classification system that matches music to user emotions using electroencephalogram (EEG) features that appear while listening to music. In the proposed system, the relationship between emotional EEG features extracted from EEG signals and auditory features extracted from music signals is learned by a deep regression neural network. Based on this regression model, the system automatically generates EEG features mapped to the auditory characteristics of the input music and classifies the music by feeding these features to an attention-based deep neural network. The experimental results demonstrate the music classification accuracy of the proposed automatic music classification framework.
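
A hedged sketch of the two-stage pipeline the abstract describes: a small regression network maps auditory features to EEG-like features, and an attention-based classifier pools those features over time. All dimensions, depths, and the class count are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class AudioToEEG(nn.Module):
    """Regression stage: auditory features -> emotional EEG features."""
    def __init__(self, audio_dim=40, eeg_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU(),
                                 nn.Linear(64, eeg_dim))

    def forward(self, a):                        # a: (batch, time, audio_dim)
        return self.net(a)                       # (batch, time, eeg_dim)

class AttentionClassifier(nn.Module):
    """Classification stage: attention pooling over time, then a linear head."""
    def __init__(self, eeg_dim=16, num_classes=4):
        super().__init__()
        self.score = nn.Linear(eeg_dim, 1)
        self.fc = nn.Linear(eeg_dim, num_classes)

    def forward(self, e):                        # e: (batch, time, eeg_dim)
        w = torch.softmax(self.score(e), dim=1)  # per-frame attention weights
        return self.fc((w * e).sum(dim=1))       # weighted pooling -> logits

regressor, classifier = AudioToEEG(), AttentionClassifier()
logits = classifier(regressor(torch.randn(2, 50, 40)))  # 2 clips, 50 frames
```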

An Implementation of a Classification and Recommendation Method for a Music Player Using Customized Emotion (맞춤형 감성 뮤직 플레이어를 위한 음악 분류 및 추천 기법 구현)

  • Song, Yu-Jeong;Kang, Su-Yeon;Ihm, Sun-Young;Park, Young-Ho
    • KIPS Transactions on Software and Data Engineering / v.4 no.4 / pp.195-200 / 2015
  • Recently, most people use Android-based smartphones, and a music player can be found on any smartphone. However, it is hard to find a personalized music player that reflects the user's preferences. In this paper, we propose an emotion-based music player that analyzes and classifies music by the user's emotion, recommends music, applies the user's preferences, and visualizes the music with color. Through the proposed music player, users can select music easily and use an optimized application.
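
As a toy illustration of the described behavior (emotion-based selection, preference-aware ranking, and color visualization), consider the following sketch; the emotion labels, hex colors, and scoring rule are invented for illustration, not taken from the paper.

```python
# Hypothetical emotion-to-color mapping; the paper visualizes music by
# color but does not specify this palette.
EMOTION_COLORS = {"happy": "#FFD400", "calm": "#7EC8E3", "sad": "#4B6C9E"}

def recommend(tracks, user_emotion, preferences, top_k=5):
    """Pick tracks matching the user's emotion, ranked by stored preference."""
    matches = [t for t in tracks if t["emotion"] == user_emotion]
    ranked = sorted(matches, key=lambda t: preferences.get(t["id"], 0.0),
                    reverse=True)
    return [(t["title"], EMOTION_COLORS[t["emotion"]]) for t in ranked[:top_k]]

tracks = [{"id": 1, "title": "Song A", "emotion": "happy"},
          {"id": 2, "title": "Song B", "emotion": "happy"}]
print(recommend(tracks, "happy", preferences={2: 0.9, 1: 0.3}))
```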

A Study on the Image Scale through the Classification of Emotion in Web Site (웹사이트 사용자 감성유형 분류를 통한 감성척도 연구)

  • Hong, Soo-Youn;Lee, Hyun-Ju;Jin, Ki-Nam
    • Science of Emotion and Sensibility / v.12 no.1 / pp.1-10 / 2009
  • The purpose of this study is to identify the relationship between design factors and sensibility in web sites. The classification of sensibility types was built from a literature review, a survey, a review by language specialists, and factor analysis, and the image scale was then derived from the resulting sensibility types. The major findings are summarized as follows. Web page sensibility types were classified into seven types: 'refreshment', 'calm', 'refinement', 'strongness', 'youth', 'uniqueness', and 'futurity'. Analyzing the similarity between the adjectives along multiple dimensions showed that the web site image scale space consists of a 'heavy-light' axis and a 'soft-hard' axis. Examining the relationship between web site design factors and emotion showed that color and layout strongly influence the 'soft-hard' axis, while lightness and color strongly influence the 'heavy-light' axis.
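
A sketch of how such an image scale can be derived computationally: multidimensional scaling projects pairwise adjective dissimilarities onto a 2-D space whose axes can then be interpreted (the paper interprets them as 'heavy-light' and 'soft-hard'). The dissimilarity matrix below is synthetic; only the seven type labels come from the abstract.

```python
import numpy as np
from sklearn.manifold import MDS

types = ["refreshment", "calm", "refinement", "strongness",
         "youth", "uniqueness", "futurity"]

# Synthetic symmetric dissimilarity matrix standing in for survey data.
rng = np.random.default_rng(0)
d = rng.random((7, 7))
d = (d + d.T) / 2
np.fill_diagonal(d, 0)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(d)
for name, (x, y) in zip(types, coords):
    print(f"{name:12s} axis1={x:+.2f} axis2={y:+.2f}")
```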

Development of Personalized Media Contents Curation System based on Emotional Information (감성 정보 기반 맞춤형 미디어콘텐츠 큐레이션 시스템 개발)

  • Im, Ji-Hui;Chang, Du-Seong;Choe, Ho-Seop;Ock, Cheol-Young
    • The Journal of the Korea Contents Association / v.16 no.12 / pp.181-191 / 2016
  • We analyzed the search terms for media content in an IPTV service and found that, alongside general metadata, content information (material, plot, etc.) and emotion information are important factors in customers' media content selection. Therefore, in this research, in order to efficiently deliver the diverse media contents of IPTV to users, we designed an emotion classification scheme for exploiting the emotion information of media content. We then proposed a 'personalized media contents curation system based on emotion information' that organizes media contents through several processing steps. Finally, to demonstrate the effectiveness of the system, we conducted a user satisfaction survey (72.0 points). In addition, comparing popularity-based results with the results of the proposed system showed that the proportion of recommendations leading to actual viewing behavior was 10 times higher.
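
An illustrative fragment of the curation step: rank media contents by the overlap between their emotion tags and the user's emotion profile instead of by popularity alone. The tag vocabulary and scoring rule are assumptions, not the paper's system.

```python
def curate(contents, user_emotions, top_k=3):
    """Rank contents by how many emotion tags they share with the user."""
    def score(item):
        return len(set(item["emotions"]) & set(user_emotions))
    return sorted(contents, key=score, reverse=True)[:top_k]

contents = [
    {"title": "Drama A", "emotions": ["warm", "sad"], "popularity": 95},
    {"title": "Comedy B", "emotions": ["funny", "light"], "popularity": 80},
]
print(curate(contents, user_emotions=["sad", "warm"]))  # Drama A ranks first
```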

A Comparison of Effective Feature Vectors for Speech Emotion Recognition (음성신호기반의 감정인식의 특징 벡터 비교)

  • Shin, Bo-Ra;Lee, Soek-Pil
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.10 / pp.1364-1369 / 2018
  • Speech emotion recognition, which aims to classify a speaker's emotional state from speech signals, is one of the essential tasks for making human-machine interaction (HMI) more natural and realistic. Vocal expression is one of the main information channels in interpersonal communication. However, existing speech emotion recognition technology has not achieved satisfactory performance, probably because of the lack of effective emotion-related features. This paper surveys the various features used for speech emotion recognition and discusses which features, or which combinations of features, are valuable and meaningful for emotion classification. The main aim of this paper is to discuss and compare the various approaches used for feature extraction and to propose a basis for extracting useful features to improve SER performance.
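
As a concrete example of the kinds of features such comparisons cover, the sketch below extracts MFCC, pitch, and energy statistics with librosa and pools them into one utterance-level vector; the exact feature set and statistics are an illustrative choice, not the paper's recommendation.

```python
import numpy as np
import librosa

def utterance_features(path):
    """Pool frame-level MFCC, pitch, and energy tracks into one vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # (13, frames)
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)  # pitch, NaN if unvoiced
    energy = librosa.feature.rms(y=y)                     # frame energies
    parts = [mfcc.mean(axis=1), mfcc.std(axis=1),
             [np.nanmean(f0), np.nanstd(f0)],
             [energy.mean(), energy.std()]]
    return np.concatenate([np.atleast_1d(p) for p in parts])  # 30-dim vector
```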

Extracting and Clustering of Story Events from a Story Corpus

  • Yu, Hye-Yeon;Cheong, Yun-Gyung;Bae, Byung-Chull
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.10 / pp.3498-3512 / 2021
  • This article describes how the events that make up text stories can be represented and extracted. We also report the results of a simple experiment on extracting and clustering events by emotion, under the assumption that different emotional events can be associated with the resulting clusters. Each emotion cluster is based on Plutchik's eight basic emotions, and the attributes of NLTK-VADER are used as the classification criterion. While comparisons with human raters show low accuracy for certain emotion types, emotion types such as joy and sadness show relatively high accuracy. Evaluation against the NRC Word Emotion Association Lexicon (a.k.a. EmoLex) shows high accuracy (more than 90% for anger, disgust, fear, and surprise), though precision and recall are relatively low.
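
A minimal sketch of the VADER scoring step the abstract relies on: score event sentences with NLTK's SentimentIntensityAnalyzer and bucket them by polarity. Mapping those buckets onto Plutchik's eight emotions would require an additional lexicon step (e.g., EmoLex), which is omitted here.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

events = ["She won the championship.", "He lost his oldest friend."]
for sentence in events:
    c = sia.polarity_scores(sentence)["compound"]  # -1 (neg) .. +1 (pos)
    bucket = "positive" if c > 0.05 else "negative" if c < -0.05 else "neutral"
    print(f"{bucket:8s} {c:+.2f}  {sentence}")
```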

Speech Emotion Recognition by Speech Signals on a Simulated Intelligent Robot (모의 지능로봇에서 음성신호에 의한 감정인식)

  • Jang, Kwang-Dong;Kwon, Oh-Wook
    • Proceedings of the KSPS conference / 2005.11a / pp.163-166 / 2005
  • We propose a speech emotion recognition method for a natural human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 59% classification accuracy while human classifiers give about 50% accuracy, which confirms that the proposed method achieves performance comparable to a human.
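
The final classification stage can be sketched with an RBF-kernel SVM (the "Gaussian support vector machines" of the abstract) applied to utterance-level statistic vectors; the data below is a random stand-in for the recorded corpus, and the feature dimensionality is a placeholder.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))    # 120 utterances, 30 statistics each
y = rng.integers(0, 6, size=120)  # 6 emotion classes

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # RBF = Gaussian kernel
clf.fit(X, y)
print(clf.predict(X[:5]))
```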

The study of Emotion Traits in Sasang Constitution by Several Mood scale (정서 관련 척도를 이용한 사상체질의 감정 특성 요인 연구)

  • Kim, Woo-Chul;Kim, Kyeong-Su;Kim, Kyeong-Ok
    • Journal of Oriental Neuropsychiatry / v.22 no.4 / pp.63-75 / 2011
  • Objectives: One's mind is swayed by environment and personal relationships. In Oriental medicine this emotion is called Chiljung, and Sasang constitutional medicine sorts each emotion by Nature & Emotion (性情). This study therefore aimed to examine the relationship between Sasang constitution and the emotional traits of Oriental medicine students, using the EEQ and the CISS (collectively termed the mood scales). Methods: 199 students of Dongshin University's college of Oriental medicine were tested with the Questionnaire for Sasang Constitution Classification II (QSCC II) and the mood scales. Data from 156 students were used, excluding 43 students. The 156 students were classified into four groups by the QSCC II, and the degree of emotion was determined by the mood scales. The data were analyzed by frequency analysis, t-test, ANOVA, multiple comparison, correlation, and regression with SPSS for Windows 15.0. Results: 1. Soeumin scored higher on the EEQ than Soyangin. 2. Sasang constitution made no difference on the CISS, except for emotion-oriented coping in the unclassified group. 3. Task-oriented coping, the EEQ, and the CISS influence emotional expression by Sasang constitution. Conclusions: Sasang constitution makes a significant difference in emotional expression.
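
The group comparison can be illustrated with a one-way ANOVA of questionnaire scores across the four Sasang constitution groups, mirroring the SPSS analysis; the group means and scores below are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {name: rng.normal(loc=mu, scale=5, size=40)
          for name, mu in [("Taeyangin", 50), ("Taeeumin", 52),
                           ("Soyangin", 48), ("Soeumin", 55)]}

f, p = stats.f_oneway(*groups.values())  # does any group mean differ?
print(f"F={f:.2f}, p={p:.4f}")
```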

Speech Emotion Recognition on a Simulated Intelligent Robot (모의 지능로봇에서의 음성 감정인식)

  • Jang, Kwang-Dong;Kim, Nam;Kwon, Oh-Wook
    • MALSORI / no.56 / pp.173-183 / 2005
  • We propose a speech emotion recognition method for an affective human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from microphones in 5 different directions. Experimental results show that the proposed method yields 48% classification accuracy while human classifiers give 71% accuracy.
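
Complementing the classifier sketch under the companion conference paper above, the feature side can be illustrated by pooling frame-level prosodic tracks (pitch, log energy, and a jitter-like measure) into the fixed-length utterance statistics the classifier consumes; the feature list and statistics are illustrative, not the paper's exact set.

```python
import numpy as np

def utterance_stats(pitch, log_energy):
    """Pool frame-level tracks into one fixed-length statistics vector."""
    # Jitter-like measure: relative cycle-to-cycle pitch variation.
    jitter = np.abs(np.diff(pitch)) / (pitch[:-1] + 1e-8)
    stats = []
    for track in (pitch, log_energy, jitter):
        stats += [track.mean(), track.std(), track.min(), track.max()]
    return np.array(stats)  # 12-dimensional vector per utterance

rng = np.random.default_rng(0)
vec = utterance_stats(rng.uniform(80, 300, 200), rng.normal(-5, 1, 200))
print(vec.shape)  # (12,)
```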
