• Title/Summary/Keyword: Human emotion

1,210 search results

The Korean Tradition of Taegyo for Supporting Prenatal Development: Focusing on Emotion in Taegyo-Singi (정서발달의 관점에서 본 우리나라의 전통태교: 태교신기를 중심으로)

  • Chung, Soon Hwa
    • Human Ecology Research
    • /
    • v.52 no.1
    • /
    • pp.1-10
    • /
    • 2014
  • The purpose of this study was to analyze the principles and methods of Taegyo-Singi with regard to emotion and to review basic information for Taegyo programs that support prenatal development. Taegyo-Singi was analyzed as follows. First, its contents were classified into principles and methods of Taegyo. Second, the domains of emotion were categorized into emotional perception, emotional expression, emotional understanding, and emotional regulation, based on the classifications of Mayer and Salovey, and of Moon. Third, the contents of Taegyo-Singi were classified into these four domains of emotion. Finally, the reliability and validity of the classification were established through inter-rater agreement and content-validity analysis. The results indicated, first, that the principles of Taegyo presuppose parental influence on temperament formation, and that the mother's emotional state in the prenatal and pre-pregnancy periods is the most influential variable in a child's temperament formation. Second, the methods of Taegyo presuppose that the human mind interacts with behavior; accordingly, supported by the emotional support of family members, 'jon-sim (the serene mind)' and 'chung-sim (the mind from rectitude)' are the key methods of Taegyo. This means that the Korean tradition of Taegyo focused on the emotional domain of development, especially emotional regulation, which coincides with emotion-focused temperament theory: individual differences in temperament reflect individual differences in emotion.

Weighted Soft Voting Classification for Emotion Recognition from Facial Expressions on Image Sequences (이미지 시퀀스 얼굴표정 기반 감정인식을 위한 가중 소프트 투표 분류 방법)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1175-1186
    • /
    • 2017
  • Human emotion recognition is one of the promising applications in the era of artificial super intelligence. Thus far, facial expression traits are considered the most widely used information cues for automated emotion recognition. This paper proposes a novel facial expression recognition (FER) method that works well for recognizing emotion from image sequences. To this end, we develop the so-called weighted soft voting classification (WSVC) algorithm. In the proposed WSVC, a number of classifiers are first constructed using different and multiple feature representations. Next, these classifiers generate a recognition result (namely, a soft vote) for each face image within a face sequence, yielding multiple soft voting outputs. Finally, the soft voting outputs are combined through a weighted combination to decide the emotion class (e.g., anger) of a given face sequence. The combination weights are determined by measuring the quality of each face image, namely its "peak expression intensity" and "frontal-pose degree". To test the proposed WSVC, extensive comparative experiments were performed on the CK+ FER database. The feasibility of the WSVC algorithm was successfully demonstrated through comparison with recently developed FER algorithms.
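The weighted combination step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the quality weights here are arbitrary stand-ins for the "peak expression intensity" and "frontal-pose degree" measures, which the abstract does not define in detail.

```python
import numpy as np

def weighted_soft_voting(frame_probs, frame_weights):
    """Fuse per-frame soft-vote probability vectors with quality weights.

    frame_probs: (n_frames, n_classes) soft voting outputs, one row per
    face image in the sequence.
    frame_weights: (n_frames,) image-quality weights; higher means the
    frame is considered more reliable.
    """
    frame_probs = np.asarray(frame_probs, dtype=float)
    w = np.asarray(frame_weights, dtype=float)
    w = w / w.sum()                         # normalize weights to sum to 1
    combined = (w[:, None] * frame_probs).sum(axis=0)
    return combined.argmax(), combined      # predicted class, fused scores
```

For example, a low-quality frame voting for class 0 is outweighed by a high-quality frame voting for class 1:

```python
cls, probs = weighted_soft_voting([[0.6, 0.4], [0.2, 0.8]], [1.0, 3.0])
# cls is 1: the higher-quality second frame dominates the fused score
```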

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.4
    • /
    • pp.266-272
    • /
    • 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, applying ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and these inaccurate positions decrease emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework that combines ASM with Lucas-Kanade (LK) optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".

Effects of Mother's Emotional Expressiveness and Reaction to Child Negative Emotions on Child Emotional Intelligence (어머니의 정서표현성과 부정적 정서표현에 대한 반응이 아동의 정서지능에 미치는 영향)

  • Kang, Hyun Jee;Lim, Jungha
    • Human Ecology Research
    • /
    • v.53 no.3
    • /
    • pp.265-277
    • /
    • 2015
  • This study examines child emotional intelligence in relation to mothers' emotional expressiveness and reactions to child negative emotions. A sample of 352 children and mothers from 4 elementary schools in Seoul and Gyeonggi participated in the study. Child emotional intelligence and mothers' reactions to child negative emotions were evaluated by child report, and mothers' emotional expressiveness was assessed by mother report. Data were analyzed using descriptive statistics, two-way analysis of variance, Pearson's correlations, and multiple regression analyses. The findings were as follows. First, mothers of boys showed more oversensitive reactions to child negative emotions than mothers of girls, and mothers of 6th graders showed more emotion-minimizing reactions than mothers of 5th graders. Second, girls showed a higher level of overall emotional intelligence than boys, including higher levels of emotion expression and emotion regulation. The 5th graders showed a higher level of emotion expression than 6th graders; however, 6th graders showed a higher level of emotion perception than 5th graders. Third, more emotion-coaching reactions and fewer oversensitive reactions by mothers predicted better emotional intelligence in children. A mother's appropriate emotional socialization behaviors associated with child emotional intelligence are discussed.

Interactive Feature selection Algorithm for Emotion recognition (감정 인식을 위한 Interactive Feature Selection(IFS) 알고리즘)

  • Yang, Hyun-Chang;Kim, Ho-Duck;Park, Chang-Hyun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.6
    • /
    • pp.647-652
    • /
    • 2006
  • This paper presents a novel feature selection method for emotion recognition, a task that may involve a large number of original features; in this paper, emotion recognition is performed on emotional speech signals. Feature selection benefits pattern recognition performance and mitigates the 'curse of dimensionality'. We implemented a simulator called 'IFS' and applied its results to an emotion recognition system (ERS), which was also implemented for this research. The proposed feature selection method is based on reinforcement learning and, because it requires responses from a human user, is called 'Interactive Feature Selection'. Running the IFS yielded the 3 best features, which were applied to the ERS; these 3 features performed better than a randomly selected feature set.
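The abstract gives only the outline of IFS, so the following is a loose bandit-style sketch under stated assumptions: the human response is replaced by a stand-in reward function, and the epsilon-greedy update rule, learning rate, and round count are all illustrative choices, not the paper's.

```python
import random

def interactive_feature_selection(n_features, reward_fn, n_rounds=200,
                                  eps=0.2, lr=0.1, seed=0):
    """Reinforcement-style feature selection sketch: each round one feature
    is proposed, and a reward (standing in for the human user's response in
    the paper) nudges that feature's estimated value."""
    values = [0.0] * n_features
    rng = random.Random(seed)
    for _ in range(n_rounds):
        if rng.random() < eps:                  # explore a random feature
            f = rng.randrange(n_features)
        else:                                   # exploit the best so far
            f = max(range(n_features), key=lambda i: values[i])
        values[f] += lr * (reward_fn(f) - values[f])
    # return the 3 highest-valued features, mirroring the paper's experiment
    return sorted(range(n_features), key=lambda i: -values[i])[:3]
```

With a reward function that favors a few features, those features accumulate value and surface in the top 3.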

Statistical Speech Feature Selection for Emotion Recognition

  • Kwon Oh-Wook;Chan Kwokleung;Lee Te-Won
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.4E
    • /
    • pp.144-151
    • /
    • 2005
  • We evaluate the performance of emotion recognition via speech signals when an ordinary speaker talks to an entertainment robot. For each frame of a speech utterance, we extract the frame-based features: pitch, energy, formants, band energies, mel-frequency cepstral coefficients (MFCCs), and the velocity/acceleration of pitch and MFCCs. For discriminative classifiers, a fixed-length utterance-based feature vector is computed from the statistics of the frame-based features. Using a speaker-independent database, we evaluate the performance of two promising classifiers: the support vector machine (SVM) and the hidden Markov model (HMM). For angry/bored/happy/neutral/sad emotion classification, the SVM and HMM classifiers yield 42.3% and 40.8% accuracy, respectively. We show that this accuracy is significant compared with the performance of foreign human listeners.
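Turning variable-length frame features into the fixed-length utterance vector mentioned above can be sketched as below. The particular statistics (mean, standard deviation, min, max) are an assumption for illustration; the abstract says only that statistics of the frame-based features are used.

```python
import numpy as np

def utterance_feature_vector(frames):
    """Build a fixed-length utterance-level vector from frame-based
    features (pitch, energy, MFCCs, ...) by taking per-dimension
    statistics, suitable as input to a discriminative classifier
    such as an SVM.

    frames: (n_frames, n_dims) array of frame-based features.
    Returns a (4 * n_dims,) vector: mean, std, min, max per dimension.
    """
    X = np.asarray(frames, dtype=float)
    stats = [X.mean(axis=0), X.std(axis=0), X.min(axis=0), X.max(axis=0)]
    return np.concatenate(stats)
```

The output length is fixed regardless of how many frames the utterance contains, which is exactly what a fixed-input classifier requires.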

A research on Bayesian inference model of human emotion (베이지안 이론을 이용한 감성 추론 모델에 관한 연구)

  • Kim, Ji-Hye;Hwang, Min-Cheol;Kim, Jong-Hwa;U, Jin-Cheol;Kim, Chi-Jung;Kim, Yong-U
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2009.11a
    • /
    • pp.95-98
    • /
    • 2009
  • The purpose of this study is to propose an inference model based on Bayesian learning that classifies patterns of physiological data according to subjective emotion and, by identifying the pattern of arbitrary physiological data, infers emotion on the arousal-relaxation and pleasure-displeasure dimensions. The proposed model consists of a learning stage, in which training data are classified to derive prior probabilities, and an inference stage, in which the pattern of arbitrary physiological data is classified by posterior probability to infer emotion. To classify the patterns of each autonomic nervous system variable (PPG, GSR, SKT), the data were normalized to a 1-7 scale and linear relations were derived to obtain the prior probability of each classified pattern. Bayesian theory was then applied to compute the posterior distribution from an arbitrary prior distribution. Through this study, we propose a model that can infer emotion from multiple physiological variables without requiring subjective ratings.
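The posterior computation at the core of such a Bayesian inference model is compact enough to sketch directly. The emotion labels and probability values below are illustrative, not the study's measured priors or likelihoods.

```python
def bayes_emotion_posterior(priors, likelihoods):
    """Posterior over emotion states given the likelihood of an observed
    physiological pattern (e.g. from PPG, GSR, SKT) under each state.

    priors: dict emotion -> prior probability (from the learning stage)
    likelihoods: dict emotion -> P(observed pattern | emotion)
    """
    # Bayes' rule: P(e | x) = P(x | e) P(e) / sum_e' P(x | e') P(e')
    evidence = sum(priors[e] * likelihoods[e] for e in priors)
    return {e: priors[e] * likelihoods[e] / evidence for e in priors}
```

With equal priors, the posterior simply follows the likelihood ratio of the observed physiological pattern.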


Emotion-Based Dynamic Crowd Simulation (인간의 감정에 기반한 동적 군중 시뮬레이션)

  • Moon Chan-Il;Han Sang-Hoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.3
    • /
    • pp.87-93
    • /
    • 2004
  • In this paper we present a hybrid model that enables dynamic regrouping based on emotion when determining the behavioral patterns of crowds, in order to enhance the realism of crowd simulation in virtual environments such as games. Emotion determination rules are defined and used for dynamic regrouping of humans so that the movement of characters through crowds is simulated realistically. Our experiments show that this approach yields a more natural simulation of crowd behaviors.
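The abstract does not give the paper's emotion determination rules, so the following is only a hypothetical illustration of the regrouping idea: agents whose scalar emotion values are close to a group's mean join that group, otherwise they start a new one. The threshold rule and the single emotion score are both invented for the sketch.

```python
def regroup_by_emotion(agents, threshold=0.5):
    """Dynamic regrouping sketch: agents are (name, emotion_score) pairs;
    an agent joins the first group whose mean emotion is within
    `threshold`, otherwise it forms a new group."""
    groups = []
    for name, emo in agents:
        for g in groups:
            mean = sum(e for _, e in g) / len(g)
            if abs(emo - mean) <= threshold:
                g.append((name, emo))
                break
        else:
            groups.append([(name, emo)])
    return groups
```

Re-running this each simulation tick as emotion scores change is what makes the grouping dynamic.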


Emotion Recognition using Prosodic Feature Vector and Gaussian Mixture Model (운율 특성 벡터와 가우시안 혼합 모델을 이용한 감정인식)

  • Kwak, Hyun-Suk;Kim, Soo-Hyun;Kwak, Yoon-Keun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2002.11a
    • /
    • pp.375.2-375
    • /
    • 2002
  • This paper describes an emotion recognition algorithm using the HMM (Hidden Markov Model) method. The relation between mechanical systems and humans has so far been unilateral, which is why people are reluctant to become familiar with multi-service robots. If the capability of emotion recognition is granted to a robot system, the concept of the mechanical part will change considerably. (omitted)


Recognition of Emotion and Emotional Speech Based on Prosodic Processing

  • Kim, Sung-Ill
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3E
    • /
    • pp.85-90
    • /
    • 2004
  • This paper presents two kinds of new approaches, one of which is concerned with recognition of emotional speech such as anger, happiness, normal, sadness, or surprise. The other is concerned with emotion recognition in speech. For the proposed speech recognition system handling human speech with emotional states, total nine kinds of prosodic features were first extracted and then given to prosodic identifier. In evaluation, the recognition results on emotional speech showed that the rates using proposed method increased more greatly than the existing speech recognizer. For recognition of emotion, on the other hands, four kinds of prosodic parameters such as pitch, energy, and their derivatives were proposed, that were then trained by discrete duration continuous hidden Markov models(DDCHMM) for recognition. In this approach, the emotional models were adapted by specific speaker's speech, using maximum a posteriori(MAP) estimation. In evaluation, the recognition results on emotional states showed that the rates on the vocal emotions gradually increased with an increase of adaptation sample number.