• Title/Summary/Keyword: Emotion Classifier

Development of Emotion Recognition System Using Facial Image (얼굴 영상을 이용한 감정 인식 시스템 개발)

  • Kim, M.H.;Joo, Y.H.;Park, J.B.;Lee, J.;Cho, Y.J.
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.2
    • /
    • pp.191-196
    • /
    • 2005
  • Although emotion recognition technology is in demand in various fields, it remains an unsolved problem. In particular, there is growing demand for emotion recognition technology based on facial images. A facial-image-based emotion recognition system is a complex system comprising various technologies; techniques such as facial image analysis, feature vector extraction, and pattern recognition are therefore needed to develop it. In this paper, we propose a new emotion recognition system based on a previously studied facial image analysis technique. The proposed system recognizes emotion using a fuzzy classifier. A facial image database is built, and the performance of the proposed system is verified using the built database.
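
The abstract does not detail the fuzzy classifier itself, so the following is only a minimal sketch of the general idea: an emotion is assigned by maximum fuzzy membership over a facial feature vector, using Gaussian membership functions. The emotion set, prototype vectors, and spreads are invented placeholders, not the paper's parameters.

```python
# Sketch of a fuzzy emotion classifier over facial feature vectors.
# Prototypes and spreads are illustrative; in practice they would be
# estimated from the facial image database.
import numpy as np

EMOTIONS = ["neutral", "happy", "surprised", "angry"]  # assumed label set

rng = np.random.default_rng(0)
prototypes = {e: rng.random(4) for e in EMOTIONS}  # per-class feature centers
spreads = {e: np.full(4, 0.2) for e in EMOTIONS}   # per-class fuzziness

def membership(x, proto, spread):
    """Gaussian fuzzy membership of feature vector x in one emotion class."""
    return np.exp(-0.5 * np.sum(((x - proto) / spread) ** 2))

def classify(x):
    """Assign the emotion with the highest fuzzy membership."""
    return max(EMOTIONS, key=lambda e: membership(x, prototypes[e], spreads[e]))

print(classify(rng.random(4)))
```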

Recognition of Facial Expressions using Geometrical Features (기하학적인 특징 추출을 이용한 얼굴 표정인식)

  • 신영숙;이일병
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 1997.11a
    • /
    • pp.205-208
    • /
    • 1997
  • This study presents a method for recognizing facial expressions in facial images based on geometric feature extraction. Facial expressions are limited to three groups (neutral, joy, and surprise). To extract the basic features related to expressions, eye height, eye width, mouth height, and mouth width are extracted from facial expression images and the data are analyzed. The analysis identified eye height, mouth width, and mouth height as the principal features for discriminating expressions. The mean and standard deviation of eye height, mouth width, and mouth height were computed for each expression to build a standard template per expression. A nearest neighbor classifier was used for expression recognition: by computing the Euclidean distance between a new facial expression image and the standard templates, a recognition rate of 83% was obtained for new expressions.
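
The template-matching procedure above is concrete enough to sketch: the per-expression means of the three discriminating features act as standard templates, and a new sample takes the label of the nearest template under Euclidean distance. The feature values below are placeholders, not the paper's measurements.

```python
# Nearest-neighbor expression recognition against standard templates.
import numpy as np

# Mean (eye height, mouth width, mouth height) per expression; placeholder
# values standing in for the per-expression means reported in the paper.
templates = {
    "neutral":  np.array([0.30, 0.40, 0.10]),
    "joy":      np.array([0.28, 0.55, 0.20]),
    "surprise": np.array([0.45, 0.42, 0.35]),
}

def classify(features):
    """Pick the expression whose template is closest in Euclidean distance."""
    return min(templates, key=lambda e: np.linalg.norm(features - templates[e]))

print(classify(np.array([0.44, 0.41, 0.33])))  # -> "surprise"
```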

Emotion Recognition using Bio-signal Measurements & K-Means Classifier (생체신호 분석과 K-Means 분류 알고리즘을 이용한 감정 인식)

  • Cha, Sang-hun;Kim, Sung-Jae;Kim, Da-young;Kim, Kwang-baek;Yun, Sang-Seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.386-388
    • /
    • 2018
  • With the aim of recognizing the emotional states of children with autism spectrum disorder, who show severe mood swings due to a lack of social interaction and emotional instability due to stress, this paper proposes a method that analyzes bio-signals for four emotional stimuli and recognizes emotional states from the acquired information using the K-Means algorithm. In the experimental setup, bio-signals were measured with pulse-wave (PPG) and skin-conductance sensors while participants watched the given emotion-stimulus videos; heart-rate information in the form of the LF/HF ratio, which indicates autonomic nervous balance, and skin-response information were quantitatively analyzed, and the K-Means algorithm was then applied to the extracted information to classify emotional states. Experiments were conducted with a total of three typical (non-ASD) participants, and the results for the four emotional stimuli confirmed that the emotion recognition method based on bio-signal measurement can adequately classify the presented emotional stimuli.
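
As a minimal sketch of the clustering step, the code below groups two bio-signal features per sample (LF/HF ratio and skin-conductance response) into four clusters with K-Means, one per emotion condition. The data is synthetic; the feature pair and cluster count follow the abstract's setup, not the paper's actual recordings.

```python
# K-Means clustering of (LF/HF, skin conductance) feature pairs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic measurements around four assumed emotion-condition centers.
centers = np.array([[1.0, 2.0], [3.0, 2.5], [1.5, 5.0], [3.5, 5.5]])
X = np.vstack([rng.normal(c, 0.3, size=(25, 2)) for c in centers])

X_scaled = StandardScaler().fit_transform(X)        # put features on one scale
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels[:10])
```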

Functional Connectivity with Regions Related to Emotional Regulation is Altered in Emotional Laborers

  • Seokyeong Min;Tae Hun Cho;Soo Hyun Park;Sanghoon Han
    • Science of Emotion and Sensibility
    • /
    • v.25 no.4
    • /
    • pp.63-76
    • /
    • 2022
  • Emotional labor, characterized by a dysfunctional type of emotional regulation called surface acting, has detrimental psychological consequences for employees, including depression and social anxiety. Because such disorders exhibit psychological characteristics manifested through brain activation, previous studies have succeeded in distinguishing individuals with depression and social anxiety from healthy controls using their functional connectivity characteristics. However, it has not been established whether the functional connectivity characteristics associated with emotional labor are similarly distinguishable. We therefore obtained resting-state fMRI data from participants in an emotional labor (EL) group and a control (CTRL) group and fed their whole-brain functional connectivity matrices to a linear support vector machine classifier. Our analysis revealed that the EL and CTRL groups could be successfully distinguished on the basis of individuals' connectivity patterns, and classification confidence was correlated with scores on the depression and social anxiety scales. These results are expected to provide insight into the neurobiological characteristics of emotional labor and to enable screening of employees undergoing adverse emotional labor using neurobiological observations.
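
A minimal sketch of the classification analysis, assuming each participant contributes one ROI-by-ROI connectivity matrix: the unique upper-triangle entries are vectorized and fed to a linear SVM, and the signed distance to the hyperplane serves as a confidence proxy. The subject and ROI counts and the data itself are invented for illustration.

```python
# Linear SVM on vectorized functional connectivity matrices.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_subjects, n_rois = 40, 90                    # assumed cohort and atlas size
conn = rng.normal(size=(n_subjects, n_rois, n_rois))

iu = np.triu_indices(n_rois, k=1)              # unique off-diagonal connections
X = np.array([c[iu] for c in conn])            # one feature vector per subject
y = np.array([0] * 20 + [1] * 20)              # 0 = CTRL, 1 = EL

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
confidence = clf.decision_function(X)          # signed margin per subject
print(confidence[:5])
```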

The Design of Feature Selection Classifier based on Physiological Signal for Emotion Detection (감성판별을 위한 생체신호기반 특징선택 분류기 설계)

  • Lee, JeeEun;Yoo, Sun K.
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.11
    • /
    • pp.206-216
    • /
    • 2013
  • Emotion plays a critical role in human daily life, including learning, action, decision making, and communication. In this paper, an emotion discrimination classifier is designed to reduce system complexity by selecting a reduced set of dominant features from biosignals. Photoplethysmography (PPG), skin temperature, skin conductance, and frontal and parietal electroencephalography (EEG) signals were measured while subjects watched four types of movies chosen to induce neutral, sad, fear, and joy emotions. A genetic algorithm with a support vector machine (SVM) based fitness function was designed to determine the dominant features among 24 parameters extracted from the measured biosignals. It achieves a maximum classification accuracy of 96.4%, which is 17% higher than that of the SVM alone. The minimum-error features selected are the mean and NN50 of heart rate variability from the PPG signal, the mean of the PPG-derived pulse transit time, the mean of skin resistance, and the δ and β frequency band powers of the parietal EEG. The combination of parietal EEG, PPG, and skin resistance is recommended for high-accuracy instrumentation, while the combined use of PPG and skin conductance (79% accuracy) is affordable for simplified instrumentation.
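
The feature-selection scheme lends itself to a compact sketch: individuals are binary masks over the 24 candidate biosignal parameters, and fitness is cross-validated SVM accuracy on the masked features. The data is synthetic, and the GA settings (population size, generations, mutation-only variation) are illustrative simplifications of a full GA.

```python
# GA-based feature selection with an SVM fitness function.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 24))                 # 24 biosignal parameters
y = rng.integers(0, 4, size=120)               # neutral/sad/fear/joy labels

def fitness(mask):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, 24)).astype(bool)   # initial population
for _ in range(10):                                    # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the fittest half
    children = parents.copy()
    children ^= rng.random(children.shape) < 0.05      # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```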

A Music Recommendation Method Using Emotional States by Contextual Information

  • Kim, Dong-Joo;Lim, Kwon-Mook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.10
    • /
    • pp.69-76
    • /
    • 2015
  • A user's selection of music is largely influenced by personal taste as well as emotional state, and it is an unconscious projection of the user's emotions. We therefore treat the selected music as a proxy for the user's emotional state. In this paper, we try to grasp users' emotional states from the music they select in a specific context, and we analyze the correlation between that context and the user's emotional state. To derive emotional states from music, the proposed method extracts emotional words representative of a piece of music from its lyrics through morphological analysis, and learns the weights of a linear classifier over the emotional features of the extracted words. The regularities learned by the classifier are used to calculate predictive weights for unseen music from the weights of music chosen by other users in contexts similar to the active user's context. Finally, we propose a method to recommend pieces of music matched to the user's context and emotional state. Experimental results show that the proposed method is more accurate than the traditional collaborative filtering method.
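
As a rough sketch of the lyrics step, the snippet below substitutes pre-tokenized English words for the morphological analysis, builds count features over emotion words, and fits a linear classifier whose per-word weights play the role described above. The songs, vocabulary, and labels are all invented.

```python
# Linear classifier over emotion words extracted from lyrics.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

lyrics = ["lonely tears rain goodbye", "dance joy sunshine love",
          "tears sorrow night alone", "party happy bright smile"]
states = ["sad", "joyful", "sad", "joyful"]    # emotional state per song

vec = CountVectorizer()
X = vec.fit_transform(lyrics)
clf = LogisticRegression().fit(X, states)      # learns per-word weights

for word, w in zip(vec.get_feature_names_out(), clf.coef_[0]):
    print(f"{word}: {w:+.2f}")

print(clf.predict(vec.transform(["rain tears alone"])))  # -> ['sad']
```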

Implementation of the Speech Emotion Recognition System in the ARM Platform (ARM 플랫폼 기반의 음성 감성인식 시스템 구현)

  • Oh, Sang-Heon;Park, Kyu-Sik
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.11
    • /
    • pp.1530-1537
    • /
    • 2007
  • In this paper, we implemented a speech emotion recognition system that distinguishes human emotional states from speech captured by a single microphone and classifies them into four categories: neutrality, happiness, sadness, and anger. In general, speech recorded with a microphone contains background noise due to the speaker's environment and the microphone's characteristics, which can seriously degrade system performance. To minimize the effect of this noise and improve performance, an MA (moving average) filter with a relatively simple structure and low computational complexity was adopted. An SFS (sequential forward selection) feature optimization method was then implemented to further improve and stabilize performance. For speech emotion classification, an SVM pattern classifier was used. The experimental results indicate emotion classification performance of around 65% in computer simulation and 62% on the ARM platform.
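
The processing chain maps to a short sketch: a moving-average filter smooths the waveform, sequential forward selection (scikit-learn's SequentialFeatureSelector standing in for the paper's SFS) picks a feature subset, and an SVM classifies the four emotions. The waveform and feature matrix are synthetic stand-ins.

```python
# MA filtering + sequential forward selection + SVM classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector

def moving_average(x, window=5):
    """Simple MA filter to suppress background noise in the waveform."""
    return np.convolve(x, np.ones(window) / window, mode="same")

rng = np.random.default_rng(0)
speech = moving_average(rng.normal(size=16000))   # 1 s at 16 kHz, smoothed

# Stand-in utterance features (e.g., pitch/energy/MFCC statistics).
X = rng.normal(size=(200, 30))
y = rng.integers(0, 4, size=200)                  # neutral/happy/sad/angry

sfs = SequentialFeatureSelector(SVC(), n_features_to_select=10,
                                direction="forward", cv=3)
X_sel = sfs.fit_transform(X, y)
clf = SVC().fit(X_sel, y)
print("selected:", np.flatnonzero(sfs.get_support()))
```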

Development of Emotion Recognition Model Using Audio-video Feature Extraction Multimodal Model (음성-영상 특징 추출 멀티모달 모델을 이용한 감정 인식 모델 개발)

  • Jong-Gu Kim;Jang-Woo Kwon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.24 no.4
    • /
    • pp.221-228
    • /
    • 2023
  • Physical and mental changes caused by emotions can affect various behaviors, such as driving or learning. Recognizing these emotions is therefore an important task with uses in various industries, such as detecting and managing dangerous emotional states while driving. In this paper, we address the emotion recognition task by implementing a multimodal model that recognizes emotions using both audio and video data, which come from different domains. After extracting the audio track from the RAVDESS video data, audio features are extracted by a 2D-CNN model, while video features are extracted by a SlowFast feature extractor. The information contained in the audio and video data is then combined into a single feature vector that carries both modalities, and emotion recognition is performed on the combined features. Finally, we compare conventional approaches, which combine or vote on the outputs of separate models, against the proposed approach of unifying the domains through feature extraction, combining the features, and performing classification with a single classifier.
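
A minimal sketch of the feature-level fusion, with random tensors standing in for the 2D-CNN audio features and SlowFast video features; the dimensions and the fusion head are assumptions, not the paper's architecture.

```python
# Feature-level (early) fusion of audio and video features in PyTorch.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, audio_dim=128, video_dim=256, n_emotions=8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(audio_dim + video_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_emotions),
        )

    def forward(self, audio_feat, video_feat):
        fused = torch.cat([audio_feat, video_feat], dim=1)  # one joint vector
        return self.head(fused)

model = FusionClassifier()
audio_feat = torch.randn(4, 128)   # stand-in for 2D-CNN audio features
video_feat = torch.randn(4, 256)   # stand-in for SlowFast video features
print(model(audio_feat, video_feat).shape)  # [4, 8]; RAVDESS has 8 emotions
```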

Pattern Classification of Four Emotions using EEG (뇌파를 이용한 감정의 패턴 분류 기술)

  • Kim, Dong-Jun;Kim, Young-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.3 no.4
    • /
    • pp.23-27
    • /
    • 2010
  • This paper performs emotion classification experiments to find the best parameters of the electroencephalogram (EEG) signal. Linear predictor coefficients, band cross-correlation coefficients of the fast Fourier transform (FFT), and autoregressive model spectra are used as parameters of the 10-channel EEG signal. A multi-layer neural network is used as the pattern classifier. Four emotions (relaxation, joy, sadness, and irritation) were induced in four university students from an acting club. The electrode positions are Fp1, Fp2, F3, F4, T3, T4, P3, P4, O1, and O2. As a result, the linear predictor coefficients showed the best performance.
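
Since linear predictor coefficients were the winning parameter, the sketch below estimates LPC features per EEG channel via the autocorrelation (Yule-Walker) method and classifies them with a multi-layer network. The EEG data is synthetic, and the model size is an assumption.

```python
# LPC features per EEG channel + multi-layer neural network classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples, order = 80, 10, 512, 8

def lpc(x, order):
    """LPC coefficients via the autocorrelation (Yule-Walker) equations."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

eeg = rng.normal(size=(n_trials, n_channels, n_samples))
X = np.array([np.concatenate([lpc(ch, order) for ch in t]) for t in eeg])
y = rng.integers(0, 4, size=n_trials)   # relaxation/joy/sadness/irritation

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.score(X, y))
```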

Development of Emotion-Based Human Interaction Method for Intelligent Robot (지능형 로봇을 위한 감성 기반 휴먼 인터액션 기법 개발)

  • Joo, Young-Hoon;So, Jea-Yun;Sim, Kee-Bo;Song, Min-Kook;Park, Jin-Bae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.5
    • /
    • pp.587-593
    • /
    • 2006
  • This paper presents gesture analysis for human-robot interaction. Understanding human emotions through gesture is one of the skills computers need in order to interact intelligently with their human counterparts. Gesture analysis consists of several processes, such as hand detection, feature extraction, and emotion recognition. For efficient operation, gestures are recognized with a hidden Markov model (HMM). We constructed a large gesture database, with which we verified our method. As a result, our method was successfully integrated and operated in a mobile system.
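
A minimal sketch of HMM-based gesture recognition, using the hmmlearn library (an assumption; the paper does not name its implementation): one GaussianHMM is trained per gesture class, and a new sequence takes the label of the model with the highest log-likelihood. The 2D feature sequences are synthetic.

```python
# One HMM per gesture; classify by maximum log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def make_sequences(offset, n_seq=10, length=30):
    """Synthetic 2D feature sequences (e.g., hand-position deltas)."""
    return [rng.normal(offset, 0.5, size=(length, 2)) for _ in range(n_seq)]

train = {"wave": make_sequences(0.0), "point": make_sequences(2.0)}
models = {}
for gesture, seqs in train.items():
    X = np.vstack(seqs)                     # stacked observations
    lengths = [len(s) for s in seqs]        # per-sequence lengths
    models[gesture] = GaussianHMM(n_components=3, n_iter=50,
                                  random_state=0).fit(X, lengths)

test = rng.normal(2.0, 0.5, size=(30, 2))
print(max(models, key=lambda g: models[g].score(test)))  # -> "point"
```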