• Title/Summary/Keyword: Emotional recognition


Analysis of Electroencephalogram Electrode Position and Spectral Feature for Emotion Recognition (정서 인지를 위한 뇌파 전극 위치 및 주파수 특징 분석)

  • Chung, Seong-Youb;Yoon, Hyun-Joong
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.35 no.2
    • /
    • pp.64-70
    • /
    • 2012
  • This paper presents a statistical analysis method for selecting electroencephalogram (EEG) electrode positions and spectral features to recognize emotion, where emotional valence and arousal are classified into three and two levels, respectively. Ten experiments per subject were performed under three categories of IAPS (International Affective Picture System) pictures, i.e., high valence and high arousal, medium valence and low arousal, and low valence and high arousal. The EEG was recorded from 12 sites according to the international 10-20 system, referenced to Cz. A statistical analysis approach using ANOVA with Tukey's HSD is employed to identify statistically significant EEG electrode positions and spectral features for emotion recognition.
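The ANOVA-based feature screen described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the electrode names, band powers, and condition effect below are simulated stand-ins, and only the one-way ANOVA stage is shown (the Tukey HSD post-hoc step would follow on the electrodes that pass).

```python
# Hedged sketch: one-way ANOVA screen over per-electrode band-power features,
# in the spirit of the paper's ANOVA + Tukey HSD selection. All data simulated.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
electrodes = ["F3", "F4", "C3", "C4", "P3", "P4"]   # hypothetical subset of the 12 sites

def band_power(electrode, shift):
    """Simulated alpha-band log power for 10 trials of one condition.
    Only "F4" is given a real condition effect, so the screen has something to find."""
    base = rng.normal(0.0, 1.0, 10)
    return base + (shift if electrode == "F4" else 0.0)

significant = []
for el in electrodes:
    hv_ha = band_power(el, 1.5)    # high valence / high arousal
    mv_la = band_power(el, 0.0)    # medium valence / low arousal
    lv_ha = band_power(el, -1.5)   # low valence / high arousal
    f_stat, p = f_oneway(hv_ha, mv_la, lv_ha)   # one-way ANOVA across conditions
    if p < 0.05:
        significant.append(el)

print(significant)   # "F4" should be flagged; others only by chance
```

Electrodes surviving this screen would then go to a pairwise Tukey HSD test to see which condition pairs actually differ.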

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.6
    • /
    • pp.284-290
    • /
    • 2024
  • Speech emotion recognition (SER) is a technique used to analyze the speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing research has attained impressive results by utilizing acted-out speech from skilled actors recorded in controlled environments for various scenarios. In particular, there is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to conduct emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using the VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions, i.e., joy, love, anger, fear, sadness, surprise, and neutral. Using the 2-dimensional time-frequency spectrogram, we achieved average accuracies of 83.5% for adults and 73.0% for young people. In conclusion, our findings demonstrate that the suggested framework outperformed current state-of-the-art techniques for spontaneous speech and showed promising performance despite the difficulty of quantifying spontaneous emotional expression.
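The core preprocessing step above, turning a 1-D audio signal into a 2-D spectrogram image for a CNN, can be sketched with a plain NumPy STFT. The frame size, hop, and test tone are illustrative assumptions, not the paper's actual settings (which would typically come from an audio library's STFT or mel-spectrogram routine).

```python
# Minimal sketch of the 1-D signal -> 2-D log-spectrogram step that feeds a
# CNN such as VGG; pure NumPy, no specific audio library assumed.
import numpy as np

def log_spectrogram(signal, n_fft=256, hop=128):
    """Frame the signal, apply a Hann window, and take log magnitude of the FFT."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))   # (frames, n_fft//2 + 1)
    return np.log1p(mag).T                      # (freq bins, time frames)

# One second of a fake 440 Hz "utterance" at 16 kHz stands in for real speech.
sr = 16000
t = np.arange(sr) / sr
spec = log_spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)   # (129, 124): a 2-D image the CNN can consume
```

The resulting array is what would be resized or tiled to the CNN's expected input resolution before training.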

Analysis on the Preference for each Emotional Component in Elementary School Space (초등학교 공간의 감성화 구성요소별 선호도 분석)

  • Sim, Hwa-Jeung;Lee, Yong-Hwan
    • Journal of the Architectural Institute of Korea Planning & Design
    • /
    • v.34 no.3
    • /
    • pp.3-10
    • /
    • 2018
  • The purpose of this study is to suggest directions for applying emotional components to elementary school space in light of the characteristics of child development. To this end, types of emotional components and characteristics of child development are derived from the literature and prior research on child development and behavior, elementary school space, and the concepts of 'children' and 'emotion'. In addition, teachers' and students' recognition of school space planning by type of emotional component, and students' preferences for the emotional components of elementary school space, are investigated. The spatial environment has a great influence in childhood, a period of major physical, cognitive, emotional, and social change. Providing a spatial environment built on emotional components such as 'affordance', 'diversity', 'territoriality', and 'relationships', with the characteristics of child development in mind, is most important of all. In particular, when designing indoor spaces in elementary schools, where students at various developmental stages spend their time, environments friendly to children's emotions are expected to put students first in the decision-making process and to guarantee student participation.

Speaker and Context Independent Emotion Recognition using Speech Signal (음성을 이용한 화자 및 문장독립 감정인식)

  • 강면구;김원구
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.377-380
    • /
    • 2002
  • In this paper, speaker and context independent emotion recognition using the speech signal is studied. For this purpose, a corpus of emotional speech data, recorded and classified according to emotion using subjective evaluation, was used to compute statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy, and to evaluate the performance of conventional pattern matching algorithms. A vector quantization based emotion recognition system is proposed for speaker and context independent emotion recognition. Experimental results showed that the vector quantization based emotion recognizer using MFCC parameters performed better than the one using pitch and energy parameters.
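The vector-quantization classifier described above can be sketched as one codebook per emotion, with an utterance labeled by whichever codebook quantizes its frames with the least distortion. The emotion set, feature dimensionality, and Gaussian stand-in features below are assumptions for illustration; the paper uses real MFCC vectors.

```python
# Hedged sketch of a VQ emotion recognizer: train one k-means codebook per
# emotion, then label an utterance by minimum average quantization distortion.
import numpy as np
from scipy.cluster.vq import kmeans, vq

rng = np.random.default_rng(1)
DIM, K = 12, 8   # 12-dim "MFCC" stand-in vectors, 8 codewords per emotion

# Fake training frames: each emotion occupies its own region of feature space.
train = {
    "neutral": rng.normal(0.0, 1.0, (200, DIM)),
    "anger":   rng.normal(3.0, 1.0, (200, DIM)),
    "sadness": rng.normal(-3.0, 1.0, (200, DIM)),
}
codebooks = {emo: kmeans(feats, K)[0] for emo, feats in train.items()}

def classify(frames):
    # Average per-frame quantization error against each emotion's codebook.
    distortions = {emo: vq(frames, cb)[1].mean() for emo, cb in codebooks.items()}
    return min(distortions, key=distortions.get)

test_utt = rng.normal(3.0, 1.0, (50, DIM))   # frames drawn near the "anger" cluster
print(classify(test_utt))                    # expected: anger
```

Because the classifier only compares distortions, it needs no utterance alignment, which is what makes it a natural fit for speaker- and context-independent recognition.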


3D Emotional Avatar Creation and Animation using Facial Expression Recognition (표정 인식을 이용한 3D 감정 아바타 생성 및 애니메이션)

  • Cho, Taehoon;Jeong, Joong-Pill;Choi, Soo-Mi
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.9
    • /
    • pp.1076-1083
    • /
    • 2014
  • We propose an emotional facial avatar that portrays the user's facial expressions with an emotional emphasis, while achieving visual and behavioral realism. This is achieved by unifying automatic analysis of facial expressions and animation of realistic 3D faces with details such as facial hair and hairstyles. To augment facial appearance according to the user's emotions, we use emotional templates representing typical emotions in an artistic way, which can be easily combined with the skin texture of the 3D face at runtime. Hence, our interface gives the user vision-based control over facial animation of the emotional avatar, easily changing its moods.

Emotional Influence and it's Implications of Childhood Housing Environment (유년기 주거환경의 정서적 영향과 그 의미에 관한 연구)

  • 정준현
    • Journal of the Korean housing association
    • /
    • v.11 no.4
    • /
    • pp.43-51
    • /
    • 2000
  • The study analyzed the emotional influence and implications of the childhood housing environment using the environmental autobiography method. 222 essays based on childhood memories, written by students at T University, were collected to identify the physical environment, its emotional meaning, and the value of places to the writers. Findings indicate that the childhood housing environment is recognized as an influencing factor in the formation of individual personality and worldview. The study also found that emotional recognition of the housing environment is weighted toward indoor places and tends toward the most affirmative adaptation attitude. The emotional influence of the housing environment differs significantly depending on residential area and house type, and each place in the housing environment carries a variety of emotional characters.


Ecological Variables on Children's Emotional Intelligence (아동의 정서지능에 관련된 생태학적 변인 연구)

  • Jang, Mi-Seon;Moon, Hyuk-Jun
    • Journal of the Korean Home Economics Association
    • /
    • v.44 no.4 s.218
    • /
    • pp.11-21
    • /
    • 2006
  • The purpose of this study was to identify the ecological variables related to children's emotional intelligence, examine how these variables affect emotional intelligence, and classify them into the categories of child (gender, grade, self-efficacy), home environment (maternal employment, parent-child communication, type of family composition, number of siblings), and peer group environment (peer group). The study subjects were 680 elementary school students. Data were analyzed via t-test, F-test, correlation, and multiple regression. The results were as follows. First, emotional intelligence showed significant differences and relationships across the child, home environment, and peer group environment variables. Second, children's emotional intelligence was explained by the above three sets of variables, and the most influential variable was children's self-efficacy.

The Effect of Emotional Expression Change, Delay, and Background at Retrieval on Face Recognition (얼굴자극의 검사단계 표정변화와 검사 지연시간, 자극배경이 얼굴재인에 미치는 효과)

  • Youngshin Park
    • Korean Journal of Culture and Social Issue
    • /
    • v.20 no.4
    • /
    • pp.347-364
    • /
    • 2014
  • The present study was conducted to investigate how emotional expression change, test delay, and background influence face recognition. In experiment 1, participants were presented with negative faces at the study phase and given a standard old-new recognition test whose targets showed negative and neutral expressions of the same faces. In experiment 2, participants studied negative faces and were tested with an old-new recognition test whose targets showed negative and positive expressions. In experiment 3, participants were presented with neutral faces at the study phase and had to identify the same faces at test regardless of negative or neutral expression. In all three experiments, participants were assigned to either an immediate or a delayed test, and target faces were presented on both white and black backgrounds. Results of experiments 1 and 2 indicated higher recognition rates for negative faces than for neutral or positive faces, and facial expression consistency enhanced recognition memory. In experiment 3, the advantage of facial expression consistency was demonstrated by higher recognition rates for neutral faces at test. Whenever facial expressions were consistent across encoding and retrieval, face recognition performance was enhanced. The effect of expression change also differed across background conditions. The findings suggest that changes in facial expression make face identification harder, and that delay and background likewise affect face recognition.


A Study on Development Evaluation Modeling Internal Landscape in Tunnel Considering Human Sensitivity Engineering (감성공학을 고려한 터널 내부경관 평가 모형개발에 관한 연구)

  • Wang, Yi-Wau;Kum, Ki-Jung;Son, Seung-Neo;Yu, Jai-Sang
    • International Journal of Highway Engineering
    • /
    • v.12 no.1
    • /
    • pp.9-20
    • /
    • 2010
  • This study was intended to identify, among the various characteristics of tunnels, the relationship between the design factors underlying the driver's psychological stability and easiness and the driver's sensibility, and to suggest a mechanism for evaluating the tunnel view. To that end, the study evaluated the relations between the physical elements comprising the tunnel shape and the variation in the driver's emotional recognition, thereby proposing measures to create a scenic environment. As a result of LISREL modeling to identify the characteristics of emotional recognition of the tunnel view, the elements affecting the tunnel view appeared to be the emotional image created by the combination of elements comprising the view. This emotional image can be explained by design elements and individual characteristics, and the effect of the design elements appeared to be greater than that of individual characteristics. The relation between individual characteristics and design elements appeared to be positive (+), and the relation between "safety" and "variability" was significant. "Safety" had a greater effect on view recognition than "variability", indicating that drivers give more importance to safety while also requiring variability.

On the Implementation of a Facial Animation Using the Emotional Expression Techniques (FAES : 감성 표현 기법을 이용한 얼굴 애니메이션 구현)

  • Kim Sang-Kil;Min Yong-Sik
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.2
    • /
    • pp.147-155
    • /
    • 2005
  • In this paper, we present FAES (a Facial Animation with Emotion and Speech), a system for speech-driven face animation with emotions. We animate cartoon faces not only from the input speech but also based on emotions derived from the speech signal, and the system ensures smooth transitions and exact representation in the animation. To do this, after collecting training data, we built a database and used an SVM (Support Vector Machine) to recognize four categories of emotion: neutral, dislike, fear, and surprise, enabling speech-driven animation with emotions. The system was trained on young Korean speakers and focused on Korean emotional facial expressions. Experimental results demonstrate that the emotional areas expressed are expanded, and that the accuracies of emotion recognition and continuous speech recognition increased by 7% and 5%, respectively, compared with the previous method.
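The SVM classification stage described above can be sketched as a four-class classifier over utterance-level feature vectors. The feature dimensionality and the Gaussian stand-in features below are assumptions for illustration only; the paper's system extracts real acoustic features from the training speech.

```python
# Minimal sketch of the SVM step: classify utterance-level feature vectors
# into the paper's four categories (neutral, dislike, fear, surprise).
# All features here are simulated stand-ins for real acoustic features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
labels = ["neutral", "dislike", "fear", "surprise"]

# 40 fake training vectors per class, each class in its own region.
X = np.vstack([rng.normal(loc=3.0 * i, scale=1.0, size=(40, 10))
               for i in range(len(labels))])
y = np.repeat(labels, 40)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
probe = rng.normal(loc=6.0, scale=1.0, size=(1, 10))   # near the "fear" cluster
print(clf.predict(probe)[0])                           # expected: fear
```

In the full system, the predicted emotion label would then select the facial expression parameters driving the animation.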
