• Title/Summary/Keyword: anger and sadness

Basic Emotions Elicited by Korean Affective Picture System Can be Differentiated by Autonomic Responses

  • Sohn, Jin-Hun;Estate Sokhadze;Lee, Kyung-Hwa;Imgap Yi
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.370-379 / 2000
  • Autonomic responses were analyzed in 323 college students exposed to visual stimulation with the Korean Affective Picture System (KAPS). Cardiac, vascular, and electrodermal variables were recorded during 30 s of viewing affective pictures. The same slides, intended to elicit basic emotions (fear, anger, surprise, disgust, sadness, happiness), were presented to subjects in two trials with different experimental contexts: the first time the slides were shown without any instructions (passive viewing), while the second time subjects were instructed to make an effort to magnify the emotion induced by the pictures (active viewing). The aim of the study was to differentiate the autonomic manifestations of emotions elicited by KAPS stimulation and to identify the role of instructed emotional engagement in physiological response profiles. The results demonstrated reproducibility of responses across both trial contexts. Pairwise comparison of physiological responses across emotion conditions revealed the most pronounced differentiation for the "fear-anger" and "fear-sadness" pairs (in electrodermal and HR variability parameters); the "fear-surprise" pair was also well differentiated. The typical response profile for all emotions included HR acceleration (except for happiness and surprise), an increase in electrodermal activity, and a decrease in pulse volume. The higher cardiovascular and electrodermal reactivity to fear observed in this study, e.g., compared with data using the IAPS as stimuli, can be explained by the cultural relevance and higher effectiveness of KAPS in producing certain emotions such as fear in Koreans.
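
A minimal analysis sketch in Python, not the authors' code: baseline-corrected autonomic reactivity per emotion condition, followed by pairwise within-subject comparison across emotions. The 5-s baseline, the paired t-test, and all names are illustrative assumptions.

```python
import numpy as np
from itertools import combinations
from scipy.stats import ttest_rel

EMOTIONS = ["fear", "anger", "surprise", "disgust", "sadness", "happiness"]

def reactivity(signal, fs, baseline_s=5.0, view_s=30.0):
    """Mean change from a pre-stimulus baseline during the 30-s viewing window."""
    base = signal[: int(baseline_s * fs)].mean()
    view = signal[int(baseline_s * fs): int((baseline_s + view_s) * fs)].mean()
    return view - base

def pairwise_differentiation(scores):
    """scores[emotion] -> (n_subjects,) array of reactivity values."""
    for a, b in combinations(EMOTIONS, 2):
        t, p = ttest_rel(scores[a], scores[b])  # within-subject comparison
        print(f"{a}-{b}: t = {t:.2f}, p = {p:.4f}")
```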

Color and Blinking Control to Support Facial Expression of Robot for Emotional Intensity (로봇 감정의 강도를 표현하기 위한 LED 의 색과 깜빡임 제어)

  • Kim, Min-Gyu;Lee, Hui-Sung;Park, Jeong-Woo;Jo, Su-Hun;Chung, Myung-Jin
    • Proceedings of the Korean HCI Society Conference / 2008.02a / pp.547-552 / 2008
  • Humans and robots will have closer relationships in the future, and we can expect the interaction between them to become more intense. To take advantage of people's innate communication abilities, researchers have so far concentrated on facial expression. But for a robot to express emotional intensity, other modalities such as gesture, movement, sound, and color are also needed. This paper suggests that the intensity of emotion can be expressed with color and blinking, so that the result can be applied to LEDs. Color and emotion are clearly related; however, previous results are difficult to implement due to a lack of quantitative data. In this paper, we determined the color and blinking period to express the six basic emotions (anger, sadness, disgust, surprise, happiness, fear). The mapping was implemented on an avatar, and the intensities of the emotions were evaluated through a survey. We found that color and blinking helped express the intensity of emotion for sadness, disgust, and anger. For fear, happiness, and surprise, color and blinking did not play an important role; however, they may be improved by adjusting the color or blinking period.
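
A hedged illustration of the abstract's idea: map each basic emotion to an LED color and blinking period, scaled by intensity. The RGB values and period range below are placeholders, not the values determined in the paper.

```python
# Hypothetical emotion-to-color mapping; the paper determined its own
# values, which are not given in this abstract.
BASE_COLOR = {
    "anger":     (255, 0, 0),
    "sadness":   (0, 0, 255),
    "disgust":   (0, 128, 0),
    "surprise":  (255, 255, 0),
    "happiness": (255, 165, 0),
    "fear":      (128, 0, 128),
}

def led_state(emotion: str, intensity: float):
    """intensity in [0, 1]; stronger emotion -> brighter color, faster blinking."""
    r, g, b = BASE_COLOR[emotion]
    color = tuple(int(c * intensity) for c in (r, g, b))
    blink_period_s = 2.0 - 1.5 * intensity  # assumed range: 2.0 s down to 0.5 s
    return color, blink_period_s
```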

Emotion Recognition of Korean and Japanese using Facial Images (얼굴영상을 이용한 한국인과 일본인의 감정 인식 비교)

  • Lee, Dae-Jong;Ahn, Ui-Sook;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.2 / pp.197-203 / 2005
  • In this paper, we propose an emotion recognition method using facial images to effectively design human interfaces. The facial database consists of six basic human emotions (happiness, sadness, anger, surprise, fear, and dislike), which are known to be common regardless of nation and culture. Emotion recognition from the facial images is performed after applying the discrete wavelet transform; the feature vectors are then extracted using PCA and LDA. Experimental results show that happiness, sadness, and anger are recognized better than surprise, fear, and dislike. In particular, Japanese subjects show lower performance for the dislike emotion. In general, the recognition rates for Korean subjects are higher than for Japanese subjects.
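
A sketch of the pipeline this abstract outlines, under stated assumptions: the Haar wavelet, the 2-level decomposition, and the 50-dimensional PCA are illustrative choices the abstract does not specify.

```python
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def wavelet_features(img):
    """Keep the low-frequency subband of a 2-level DWT as the feature vector."""
    coeffs = pywt.wavedec2(img, "haar", level=2)
    return coeffs[0].ravel()  # approximation coefficients

def build_model(n_pca=50):
    # PCA reduces dimensionality; LDA then projects to a class-separating
    # space and classifies directly.
    return make_pipeline(PCA(n_components=n_pca), LinearDiscriminantAnalysis())

# usage, with X a stack of wavelet feature vectors and y the emotion labels:
# model = build_model().fit(X_train, y_train); model.score(X_test, y_test)
```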

Non-verbal Emotional Expressions for Social Presence of Chatbot Interface (챗봇의 사회적 현존감을 위한 비언어적 감정 표현 방식)

  • Kang, Minjeong
    • The Journal of the Korea Contents Association / v.21 no.1 / pp.1-11 / 2021
  • The users of a chatbot messenger can be better engaged in the conversation if they feel intimacy with the chatbot. This can be achieved by the chatbot's effective expression of human emotions to its users. Thus motivated, this study aims to identify the appropriate emotional expressions of a chatbot that make people feel its social presence. In the background research, we found that facial expression is the most effective way of conveying emotions and that movement is important for relational immersion. In a survey, we prepared moving text, moving gestures, and still emoticons representing five emotions: happiness, sadness, surprise, fear, and anger. We then asked which presentation best made respondents feel social presence with a chatbot for each emotion. We found that for an aroused and pleasant emotion such as 'happiness', people most prefer moving gestures and text, while for unpleasant emotions such as 'sadness' and 'anger', people prefer emoticons. Lastly, for neutral emotions such as 'surprise' and 'fear', people tend to select moving text that delivers a clear meaning. We expect these results to be useful for developing emotional chatbots that enable more effective conversations with users.
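
A minimal sketch of how the survey's preferences could drive a chatbot's expression choice; the style names and the fallback are illustrative, not from the paper.

```python
PREFERRED_STYLE = {
    "happiness": "moving_gesture_and_text",  # aroused/pleasant: motion preferred
    "sadness":   "emoticon",                 # unpleasant: still emoticon
    "anger":     "emoticon",
    "surprise":  "moving_text",              # neutral: clear-meaning moving text
    "fear":      "moving_text",
}

def render_emotion(emotion: str) -> str:
    return PREFERRED_STYLE.get(emotion, "plain_text")
```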

The Influence of Children's Emotional Expression and Sociability, and Their Mothers' Communication Pattern on Their Prosocial Behavior (아동의 정서 표현성과 사교성, 어머니의 의사소통 유형이 아동의 친사회적 행동에 미치는 영향)

  • Song, Ha-Na;Choi, Kyoung-Sook
    • Journal of the Korean Home Economics Association / v.47 no.6 / pp.1-10 / 2009
  • This study investigated the influence of children's emotional expression and sociability, and their mothers' communication pattern, on the children's prosocial behavior. The participants were 65 preschool children aged between 5 and 6 and their mothers. Each child-mother dyad was observed for 30 minutes in a lab setting designed to evaluate the child's socioemotional competence and the mother's socialization behavior. Videotaped data were analyzed by two coders for sharing behavior; the children's expression of happiness, sadness, anger, and anxiety; sociability; and the mothers' communication strategies. Results showed that children's expressions of anger and anxiety were the most significant predictors of their prosocial behavior. Mothers' punitive communication pattern negatively affected children's prosocial behavior; however, compared with the children's emotional expression, its explanatory power was not significant. The influence of negative emotions and their adverse role in interpersonal interactions are discussed.

RECOGNIZING SIX EMOTIONAL STATES USING SPEECH SIGNALS

  • Kang, Bong-Seok;Han, Chul-Hee;Youn, Dae-Hee;Lee, Chungyong
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.04a / pp.366-369 / 2000
  • This paper examines three algorithms for recognizing a speaker's emotion from speech signals. The target emotions are happiness, sadness, anger, fear, boredom, and the neutral state. MLB (Maximum-Likelihood Bayes), NN (Nearest Neighbor), and HMM (Hidden Markov Model) algorithms are used as the pattern-matching techniques. In all cases, pitch and energy are used as the features. The feature vectors for MLB and NN are composed of pitch mean, pitch standard deviation, energy mean, energy standard deviation, etc. For the HMM, vectors of delta pitch with delta-delta pitch and delta energy with delta-delta energy are used. We recorded a corpus of emotional speech data and performed a subjective evaluation of the data. The subjective recognition rate was 56% and was compared with the classifiers' recognition rates. The MLB, NN, and HMM classifiers achieved recognition rates of 68.9%, 69.3%, and 89.1%, respectively, for speaker-dependent, context-independent classification.
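
A sketch of the pitch/energy statistics described for the MLB and NN feature vectors, assuming librosa for pitch tracking; the pitch search range and the exact set of statistics are assumptions.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def utterance_features(y, sr):
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)  # frame-wise pitch (Hz)
    rms = librosa.feature.rms(y=y)[0]              # frame-wise energy
    return np.array([f0.mean(), f0.std(), rms.mean(), rms.std()])

# The paper's NN method corresponds to a 1-nearest-neighbor classifier:
# clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
```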

The Strategy for Improvement of the Relationship between Parent and Child with Chronic Illness and Developmental Disability (만성질환과 발달장애 아동의 부모-자녀관계와 관계증진을 위한 전략)

  • Cho, Kyoul-Ja
    • Journal of East-West Nursing Research / v.7 no.1 / pp.94-104 / 2002
  • The purposes of this study were to identify the relationship between parents and children with chronic illness and developmental disability, and to review strategies for improving that relationship. The effects of chronic illness and developmental disability are delayed growth and development in the child, and sadness, guilt, anxiety, grief, disappointment, low self-esteem, anger, and resentment in the parent. Chronic illness and developmental disability also have a negative effect on the parents' marital relationship. The reactions of parent and child vary with the age of onset, developmental transition periods, crises, and the parent-child relationship. Based on this study, I propose that the parent-child relationship could be improved by touch, communication between parent and child, and education for parents.

Children's Motives and Strategies for Emotional Regulation in Angry and Sad Situations (분노 및 슬픔 상황에서 아동의 정서조절 동기와 정서조절 전략)

  • Lee, Ji Sun;Yoo, An Jin
    • Korean Journal of Child Studies / v.20 no.3 / pp.123-137 / 1999
  • This study investigated the influence of audience type (mother or close friend), age, and gender on children's goals and strategies for emotional regulation in angry and in sad situations. A hypothetical-vignette methodology was used with 314 children in grades 5 and 7. In angry situations, boys and 5th-grade children regulated anger more with instrumental motives, while 7th-grade girls showed more prosocial motives. Children showed more prosocial and rule-oriented motives with peers and more relational motives with mothers. In angry situations, children used aggression-regulation strategies more toward peers and activity-regulation strategies more toward mothers. Children's age and sex explained sadness-regulation motives better than audience type did with peers, but children used more activity-regulation strategies with mothers in sad situations. When sad, 5th graders used more verbal and facial expression strategies than 7th graders, while boys used more activity-regulation strategies than girls.

Discrimination of Emotional States in Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.21 no.2E / pp.98-104 / 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, or surprise. For this, we extracted emotional features from voice signals and facial expressions, and then trained models to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were modeled by the HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained by the NN for recognition. The recognition rates for the combined parameters obtained from voice and facial expressions were better than for either of the two isolated parameter sets. The simulation results were also compared with human questionnaire results.
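
A hedged sketch of score-level fusion of the two modality recognizers; the abstract states only that the combination outperformed either modality alone, so the weighted-sum rule below is an assumption.

```python
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

def fuse(voice_loglik, face_posterior, w=0.5):
    """Normalize each modality's scores, then combine with a weighted sum."""
    v = np.exp(np.asarray(voice_loglik) - np.max(voice_loglik))
    v /= v.sum()                             # HMM log-likelihoods -> pseudo-posterior
    f = np.asarray(face_posterior, dtype=float)
    f = f / f.sum()                          # NN outputs -> normalized posterior
    return EMOTIONS[int(np.argmax(w * v + (1 - w) * f))]
```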

A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.1 / pp.1-6 / 2005
  • In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. Facial expression recognition is performed using multi-resolution analysis based on the discrete wavelet transform, with feature vectors obtained through ICA (Independent Component Analysis). Emotion recognition from the speech signal, on the other hand, performs the recognition algorithm independently for each wavelet subband, and the final recognition is obtained from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous approaches.
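
A minimal sketch of the speech-side scheme this abstract outlines: one recognizer per wavelet subband, with the per-subband decisions merged. One pre-trained model per subband and majority voting are assumptions; the paper's actual multi-decision rule is not given in the abstract.

```python
import pywt
from collections import Counter

def recognize(signal, subband_models, wavelet="db4", level=3):
    # wavedec yields [cA_n, cD_n, ..., cD_1]: one coefficient vector per subband
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    votes = [m.predict([c])[0] for m, c in zip(subband_models, coeffs)]
    return Counter(votes).most_common(1)[0][0]  # merged final decision
```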