• Title/Summary/Keyword: Emotional recognition


The Relationship between Children's Gender, Age, Temperament, Mothers' Emotionality, and Emotional Development (유아의 성, 연령, 기질 및 어머니의 정서성과 유아의 정서 발달의 관계)

  • An, Ra-Ri;Kim, Hee-Jin
    • Journal of the Korean Home Economics Association
    • /
    • v.45 no.2
    • /
    • pp.133-145
    • /
    • 2007
  • The purpose of this research was to identify the importance of emotional development in early childhood, in children ages three to five, by examining the relationship between child variables such as gender, age, and temperament, as well as the mothers' emotionality, and emotional development. The participants included a total of 72 children between three and five years of age. The major findings are as follows: First, there were significant differences in emotional expression and emotional recognition between the boys and the girls. Additionally, the emotional recognition of the children increased with age, and more positive strategies for emotional regulation were used as the children grew older. Temperament characteristics were not related to emotional expression or emotional recognition, whereas the strategies for emotional regulation were related to temperament characteristics. Second, the emotional expressivity of the mother was related to the emotional expression and recognition of the child, but was not associated with strategies for emotional regulation. The emotional reactivity of the mother was related to a child's strategies for emotional regulation, but not to emotional expression or recognition. Third, the emotional development of the children was influenced by the individual child variables and the emotionality of the mother.

Recognition of Emotion and Emotional Speech Based on Prosodic Processing

  • Kim, Sung-Ill
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3E
    • /
    • pp.85-90
    • /
    • 2004
  • This paper presents two new approaches, one concerned with the recognition of emotional speech such as anger, happiness, normal, sadness, or surprise, and the other concerned with emotion recognition in speech. For the proposed speech recognition system handling human speech with emotional states, a total of nine prosodic features were first extracted and then given to a prosodic identifier. In evaluation, the recognition rates on emotional speech increased more than those of the existing speech recognizer. For the recognition of emotion, on the other hand, four prosodic parameters, pitch, energy, and their derivatives, were proposed and then trained by discrete duration continuous hidden Markov models (DDCHMM) for recognition. In this approach, the emotional models were adapted to a specific speaker's speech using maximum a posteriori (MAP) estimation. In evaluation, the recognition rates on the vocal emotions gradually increased with an increase in the number of adaptation samples.
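As a rough, hedged sketch of how an HMM-based vocal emotion classifier of this general kind can be organized (a simplified stand-in, not the paper's DDCHMM with MAP speaker adaptation), one Gaussian HMM can be trained per emotion on prosodic feature sequences and the most likely model selected at test time. The feature layout, model settings, and hmmlearn usage below are illustrative assumptions.

```python
# Illustrative sketch only: one Gaussian HMM per emotion over prosodic features
# (pitch, energy, and their deltas), classification by maximum log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

EMOTIONS = ["anger", "happiness", "normal", "sadness", "surprise"]

def prosodic_features(pitch, energy):
    """Stack pitch, energy, and their first differences into a (T, 4) sequence."""
    pitch, energy = np.asarray(pitch, float), np.asarray(energy, float)
    return np.column_stack([pitch, energy, np.gradient(pitch), np.gradient(energy)])

def train_emotion_models(data_by_emotion, n_states=3):
    """data_by_emotion: {emotion: [feature sequence of shape (T_i, 4), ...]}."""
    models = {}
    for emo, seqs in data_by_emotion.items():
        X = np.vstack(seqs)                      # concatenated sequences
        lengths = [len(s) for s in seqs]         # per-sequence lengths for hmmlearn
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[emo] = m
    return models

def classify(models, seq):
    """Return the emotion whose HMM gives the highest log-likelihood for seq."""
    return max(models, key=lambda emo: models[emo].score(seq))
```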

The Relationship between Neurocognitive Functioning and Emotional Recognition in Chronic Schizophrenic Patients (만성 정신분열병 환자들의 인지 기능과 정서 인식 능력의 관련성)

  • Hwang, Hye-Li;Hwang, Tae-Yeon;Lee, Woo-Kyung;Han, Eun-Sun
    • Korean Journal of Biological Psychiatry
    • /
    • v.11 no.2
    • /
    • pp.155-164
    • /
    • 2004
  • Objective: The present study examined the association between basic neurocognitive functions and emotional recognition in chronic schizophrenia, and further investigated which cognitive variables are related to emotion recognition in schizophrenia. Methods: Forty-eight patients from the Yongin Psychiatric Rehabilitation Center were evaluated with measures of neurocognitive function and an Emotional Recognition Test comprising four subscales: finding emotional cues, discriminating emotions, understanding emotional context, and emotional capacity. The measures of neurocognitive functioning were selected based on hypothesized relationships to the perception of emotion. These measures included: 1) the Letter Number Sequencing Test, a measure of working memory; 2) Word Fluency and Block Design, measures of executive function; 3) the Hopkins Verbal Learning Test-Korean version, a measure of verbal memory; 4) Digit Span, a measure of immediate memory; 5) the Span of Apprehension Task, a measure of early visual processing and visual scanning; and 6) the Continuous Performance Test, a measure of sustained attention. Correlation analyses between specific neurocognitive measures and the Emotional Recognition Test were conducted. To examine the degree to which neurocognitive performance predicted emotional recognition, hierarchical regression analyses were also conducted. Results: Working memory and verbal memory were closely related to emotional discrimination. Working memory, Span of Apprehension, and Digit Span were closely related to contextual recognition. Among the cognitive measures, Span of Apprehension, working memory, and Digit Span were the most important variables in predicting emotional capacity. Conclusion: These results are relevant considering that emotional information processing depends, in part, on the abilities to scan the context and to use immediate working memory. They indicate that a multifaceted cognitive training program combined with an Emotional Recognition Task (Cognitive Behavioral Rehabilitation Therapy combined with an Emotional Management Program) is promising.
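As a hedged illustration of the hierarchical regression step described above, predictors can be entered in blocks and the change in explained variance compared. The column names, file name, and block order below are assumptions made for the sketch, not the study's actual data.

```python
# Illustrative sketch: hierarchical (blockwise) OLS regression predicting an
# emotion-recognition score from neurocognitive measures, comparing R^2 per block.
import pandas as pd
import statsmodels.api as sm

def hierarchical_r2(df, outcome, blocks):
    """blocks: list of predictor-name lists added cumulatively; returns R^2 per step."""
    results, predictors = [], []
    for block in blocks:
        predictors = predictors + block
        X = sm.add_constant(df[predictors])
        fit = sm.OLS(df[outcome], X).fit()
        results.append((tuple(predictors), fit.rsquared))
    return results

# Hypothetical usage, showing only the call pattern:
# df = pd.read_csv("cognition_emotion.csv")
# steps = hierarchical_r2(
#     df, outcome="emotional_capacity",
#     blocks=[["digit_span"], ["letter_number_seq"], ["span_of_apprehension"]],
# )
# for predictors, r2 in steps:
#     print(predictors, round(r2, 3))
```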


Emotional Speaker Recognition using Emotional Adaptation (감정 적응을 이용한 감정 화자 인식)

  • Kim, Weon-Goo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.7
    • /
    • pp.1105-1110
    • /
    • 2017
  • Speech with various emotions degrades the performance of a speaker recognition system. In this paper, a speaker recognition method using emotional adaptation is proposed to improve the performance of speaker recognition on affective speech. For emotional adaptation, an emotional speaker model was generated from an emotion-free speaker model using a small number of affective training utterances and a speaker adaptation method. Since it is not easy to obtain sufficient affective speech for training from a speaker, using only a small number of affective utterances is very practical in real situations. The proposed method was evaluated using a Korean database containing four emotions. Experimental results show that the proposed method outperforms conventional methods in speaker verification and speaker recognition.
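A common way to adapt a neutral speaker model with only a few emotional utterances is maximum a posteriori (MAP) adaptation of a Gaussian mixture model's means. The sketch below shows that general idea under assumed settings; it is not the paper's specific adaptation scheme, which the abstract does not detail.

```python
# Illustrative sketch: MAP-style mean adaptation of a fitted GMM speaker model
# using a small amount of emotional speech. Not the paper's exact method.
import numpy as np
from sklearn.mixture import GaussianMixture

def map_adapt_means(neutral_gmm, emotional_feats, relevance=16.0):
    """neutral_gmm: fitted sklearn GaussianMixture; emotional_feats: (N, D) array.
    Shifts each Gaussian mean toward the adaptation data it is responsible for."""
    resp = neutral_gmm.predict_proba(emotional_feats)        # (N, K) responsibilities
    n_k = resp.sum(axis=0)                                    # soft counts per component
    data_means = (resp.T @ emotional_feats) / np.maximum(n_k[:, None], 1e-8)
    alpha = (n_k / (n_k + relevance))[:, None]                # adaptation weight per component

    adapted = GaussianMixture(n_components=neutral_gmm.n_components,
                              covariance_type=neutral_gmm.covariance_type)
    adapted.weights_ = neutral_gmm.weights_                   # keep weights and covariances
    adapted.covariances_ = neutral_gmm.covariances_
    adapted.precisions_cholesky_ = neutral_gmm.precisions_cholesky_
    adapted.means_ = alpha * data_means + (1.0 - alpha) * neutral_gmm.means_
    return adapted
```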

Alexithymia and the Recognition of Facial Emotion in Schizophrenic Patients (정신분열병 환자에서의 감정표현불능증과 얼굴정서인식결핍)

  • Noh, Jin-Chan;Park, Sung-Hyouk;Kim, Kyung-Hee;Kim, So-Yul;Shin, Sung-Woong;Lee, Koun-Seok
    • Korean Journal of Biological Psychiatry
    • /
    • v.18 no.4
    • /
    • pp.239-244
    • /
    • 2011
  • Objectives: Schizophrenic patients have been shown to be impaired in both emotional self-awareness and the recognition of others' facial emotions. Alexithymia refers to deficits in emotional self-awareness. The relationship between alexithymia and the recognition of others' facial emotions needs to be explored to better understand the characteristics of emotional deficits in schizophrenic patients. Methods: Thirty control subjects and 31 schizophrenic patients completed the Toronto Alexithymia Scale-20-Korean version (TAS-20K) and a facial emotion recognition task. The stimuli in the facial emotion recognition task consisted of six emotions (happiness, sadness, anger, fear, disgust, and neutral). Recognition accuracy was calculated within each emotion category, and correlations between TAS-20K scores and recognition accuracy were analyzed. Results: The schizophrenic patients showed higher TAS-20K scores and lower recognition accuracy compared with the control subjects. Unlike the control subjects, the schizophrenic patients did not demonstrate any significant correlations between TAS-20K scores and recognition accuracy. Conclusions: The data suggest that, although schizophrenia may impair both emotional self-awareness and the recognition of others' facial emotions, the degree of deficit can differ between the two. This indicates that the emotional deficits in schizophrenia may assume more complex features.
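As a hedged sketch of the analysis pattern described (per-emotion recognition accuracy correlated with a questionnaire score), assuming simple tabular data with hypothetical column names rather than the study's actual dataset:

```python
# Illustrative sketch: per-emotion recognition accuracy and its correlation with
# TAS-20K scores. Column names and data layout are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

EMOTION_CATEGORIES = ["happiness", "sadness", "anger", "fear", "disgust", "neutral"]

def per_emotion_accuracy(trials):
    """trials: one row per trial with subject_id, true_emotion, response columns."""
    trials = trials.assign(correct=trials["true_emotion"] == trials["response"])
    return (trials.groupby(["subject_id", "true_emotion"])["correct"]
                  .mean().unstack("true_emotion"))

def correlate_with_tas(accuracy, tas_scores):
    """Pearson r (and p-value) between TAS-20K total score and per-emotion accuracy.
    tas_scores: a named pandas Series indexed by subject_id."""
    merged = accuracy.join(tas_scores)
    return {emo: pearsonr(merged[emo], merged[tas_scores.name])
            for emo in EMOTION_CATEGORIES if emo in merged.columns}
```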

An Emotional Communication System Using Emotion Recognition of Users (사용자의 감성인식을 통한 감성통신 시스템)

  • Cho, Myeon-gyun
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.6 no.4
    • /
    • pp.201-207
    • /
    • 2011
  • This paper introduces a novel concept of 'emotional communication' for future smart phones. While traditional information-based communication technologies focus on how to precisely transmit the content of a message, emotional communication is intended to support and augment social relationships among people and to comfort the user. In this paper, we propose future communication services and core technologies that can estimate the emotional desire of users and respond to that desire with connectedness and consolation from other people. First, we introduce emotion recognition techniques for estimating the emotional desire of users. Second, the emotional responding services are categorized into four parts and their details are presented. Lastly, we propose a process for implementing the emotional communication system and the main techniques needed to fulfill the system requirements for future smart-phone services.

Development of Bio-sensor-Based Feature Extraction and Emotion Recognition Model (바이오센서 기반 특징 추출 기법 및 감정 인식 모델 개발)

  • Cho, Ye Ri;Pae, Dong Sung;Lee, Yun Kyu;Ahn, Woo Jin;Lim, Myo Taeg;Kang, Tae Koo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.11
    • /
    • pp.1496-1505
    • /
    • 2018
  • The technology of emotion recognition is necessary for human-computer interaction and communication. There are many cases where one cannot communicate without considering the other's emotion; as such, emotion recognition technology is an essential element in the field of communication and is highly utilized in various fields. Various bio-sensors can be used to measure and recognize human emotions. This paper proposes a system for recognizing human emotions using two physiological sensors. For emotional classification, the two-dimensional Russell emotion model was used, and a classification method reflecting individual characteristics was proposed by extracting sensor-specific features. In addition, the emotional model was divided into four emotions using a Support Vector Machine classification algorithm. Finally, the proposed emotion recognition system was evaluated through a practical experiment.
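A hedged sketch of the classification step described above: windowed statistics from physiological signals are mapped to the four quadrants of Russell's valence-arousal model with an SVM. The feature set, window length, and labels below are assumptions made for illustration, not the paper's actual features.

```python
# Illustrative sketch: SVM classification of biosensor features into the four
# quadrants of Russell's valence-arousal model. Features and labels are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

QUADRANTS = ["high-arousal/positive", "high-arousal/negative",
             "low-arousal/negative", "low-arousal/positive"]

def window_features(signal, fs, win_sec=5.0):
    """Simple per-window statistics (mean, std, min, max) from one sensor channel."""
    win = int(fs * win_sec)
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]
    return np.array([[f.mean(), f.std(), f.min(), f.max()] for f in frames])

def train_quadrant_classifier(X, y):
    """X: (N, D) feature matrix; y: quadrant labels. Returns a fitted pipeline."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                              random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return clf
```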

Emotional Head Robot System Using 3D Character (3D 캐릭터를 이용한 감정 기반 헤드 로봇 시스템)

  • Ahn, Ho-Seok;Choi, Jung-Hwan;Baek, Young-Min;Shamyl, Shamyl;Na, Jin-Hee;Kang, Woo-Sung;Choi, Jin-Young
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.328-330
    • /
    • 2007
  • Emotion is becoming one of the important elements of intelligent service robots. Emotional communication can make the relationship between humans and robots more comfortable. We developed an emotional head robot system using a 3D character. We designed an emotional engine for generating the robot's emotions; the results of face recognition and hand recognition are used as the input data of the emotional engine. The 3D character expresses nine emotions and speaks about its own emotional status. The head robot also keeps a memory of a degree of attraction, which can be changed by the input data. We tested the head robot and confirmed its functions.


A Survey on Image Emotion Recognition

  • Zhao, Guangzhe;Yang, Hanting;Tu, Bing;Zhang, Lei
    • Journal of Information Processing Systems
    • /
    • v.17 no.6
    • /
    • pp.1138-1156
    • /
    • 2021
  • Emotional semantics are the highest level of semantics that can be extracted from an image. Constructing a system that can automatically recognize the emotional semantics of images will be significant for marketing, smart healthcare, and deep human-computer interaction. To understand the direction of image emotion recognition as well as the general research methods, we summarize the current development trends and shed light on potential future research. The primary contributions of this paper are as follows. We investigate the color, texture, shape, and contour features used for emotional semantics extraction. We establish two models that map images into emotional space and introduce in detail the various processes in the image emotional semantic recognition framework. We also discuss important datasets and useful applications in the field, such as garment images and image retrieval. We conclude with a brief discussion of future research trends.
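As a minimal hedged sketch of the kind of low-level color feature the survey describes as input for emotional semantics extraction (not any specific method it reviews), global HSV statistics can be computed per image and later fed to a model that maps images into an emotional space:

```python
# Illustrative sketch: simple global color statistics (HSV means and standard
# deviations) of the kind surveyed as low-level features for image emotion
# recognition. A minimal example only.
import numpy as np
from PIL import Image

def hsv_color_features(path):
    """Return per-channel mean and std of hue, saturation, and value (6-D vector)."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=np.float32) / 255.0
    pixels = hsv.reshape(-1, 3)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

# These features could then feed a classifier or regressor mapping images into an
# emotional space (e.g., valence-arousal), as discussed in the survey.
```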

Robust Speech Recognition using Vocal Tract Normalization for Emotional Variation (성도 정규화를 이용한 감정 변화에 강인한 음성 인식)

  • Kim, Weon-Goo;Bang, Hyun-Jin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.6
    • /
    • pp.773-778
    • /
    • 2009
  • This paper studied training methods that are less affected by emotional variation, for the development of a robust speech recognition system. For this purpose, the effect of emotional variation on the speech signal was studied using a speech database containing various emotions. The performance of a speech recognition system trained on emotion-free speech deteriorates when the test speech contains emotions, because of the emotional mismatch between the test and training data. In this study, it is observed that the vocal tract length of the speaker is affected by emotional variation, and that this effect is one of the reasons the performance of the speech recognition system degrades. In this paper, a vocal tract normalization method is used to develop a speech recognition system robust to emotional variation. Experimental results from isolated word recognition using HMMs showed that the vocal tract normalization method reduced the error rate of the conventional recognition system by 41.9% when emotional test data was used.
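As a hedged sketch of the vocal tract length normalization idea (warping the frequency axis during feature extraction so that differences in effective vocal tract length are reduced), the example below applies a piecewise-linear warp to the mel filterbank center frequencies before computing MFCC-like features. The breakpoint choice, parameter values, and librosa usage are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: piecewise-linear vocal tract length normalization (VTLN).
# The frequency axis is warped before the filterbank is built, so features computed
# for different effective vocal tract lengths become more comparable.
import numpy as np
import librosa

def vtln_warp(freqs, alpha, f_max):
    """Piecewise-linear warp: scale by alpha up to a breakpoint, then map the
    remainder linearly so that f_max still maps to f_max."""
    f0 = 0.875 * f_max                       # breakpoint (a common choice)
    return np.where(freqs <= f0, alpha * freqs,
                    alpha * f0 + (f_max - alpha * f0) * (freqs - f0) / (f_max - f0))

def triangular_filterbank(center_hz, sr, n_fft):
    """Triangular filters at the given center frequencies (first/last act as edges)."""
    fft_hz = np.linspace(0.0, sr / 2, n_fft // 2 + 1)
    fb = np.zeros((len(center_hz) - 2, len(fft_hz)))
    for i in range(1, len(center_hz) - 1):
        left, center, right = center_hz[i - 1], center_hz[i], center_hz[i + 1]
        fb[i - 1] = np.clip(np.minimum((fft_hz - left) / (center - left + 1e-9),
                                       (right - fft_hz) / (right - center + 1e-9)),
                            0.0, None)
    return fb

def vtln_mfcc(y, sr, alpha=1.0, n_mels=26, n_mfcc=13, n_fft=512):
    """MFCC-like features whose mel filterbank center frequencies are VTLN-warped."""
    power = np.abs(librosa.stft(y, n_fft=n_fft)) ** 2
    mel_edges = librosa.mel_frequencies(n_mels=n_mels + 2, fmin=0.0, fmax=sr / 2)
    warped_edges = vtln_warp(mel_edges, alpha, f_max=sr / 2)
    mel_spec = triangular_filterbank(warped_edges, sr, n_fft) @ power
    return librosa.feature.mfcc(S=librosa.power_to_db(mel_spec), n_mfcc=n_mfcc)
```

In practice the warp factor alpha is typically chosen per speaker or per utterance (for example by picking the value that maximizes recognition likelihood); that search is omitted here.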