• Title/Abstract/Keyword: anger and sadness

Search results: 147 (processing time: 0.026 s)

Basic Emotions Elicited by Korean Affective Picture System Can be Differentiated by Autonomic Responses

  • Sohn, Jin-Hun;Estate Sokhadze;Lee, Kyug-Hwa;Imgap Yi
    • 한국감성과학회:학술대회논문집 / Proceeding of the 2000 Spring Conference of KOSES and International Sensibility Ergonomics Symposium / pp.370-379 / 2000
  • Autonomic responses were analyzed in 323 college students exposed to visual stimulation with the Korean Affective Picture System (KAPS). Cardiac, vascular, and electrodermal variables were recorded during 30 s of viewing affective pictures. The same slides, intended to elicit basic emotions (fear, anger, surprise, disgust, sadness, happiness), were presented to subjects in two trials with different experimental contexts: the first time the slides were shown without any instructions (passive viewing), while the second time subjects were instructed to make an effort to magnify the emotion induced by the pictures (active viewing). The aim of the study was to differentiate autonomic manifestations of emotions elicited by KAPS stimulation and to identify the role of instructed emotional engagement in physiological response profiles. The results demonstrated reproducibility of responses across both trial contexts. Pairwise comparison of physiological responses across emotion conditions revealed the most pronounced differentiation for the "fear-anger" and "fear-sadness" pairs (in electrodermal and HR-variability parameters); the "fear-surprise" pair was also well differentiable. The typical response profile for all emotions included HR acceleration (except for happiness and surprise), an increase of electrodermal activity, and a decrease of pulse volume. The higher cardiovascular and electrodermal reactivity to fear observed in this study, e.g., as compared with data obtained with the IAPS as stimuli, can be explained by cultural relevance and the higher effectiveness of KAPS in producing certain emotions, such as fear, in Koreans.

로봇 감정의 강도를 표현하기 위한 LED 의 색과 깜빡임 제어 (Color and Blinking Control to Support Facial Expression of Robot for Emotional Intensity)

  • 김민규;이희승;박정우;조수훈;정명진
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2008년도 학술대회 1부 / pp.547-552 / 2008
  • Robots will become ever closer to humans, and human-robot interaction will grow accordingly. Because an intuitive means of communication is then essential, enabling robots to express emotions through facial expressions has been studied extensively. Facial expression alone, however, is not enough to convey the intensity of an emotion the way humans do, so other channels are needed: arm gestures, movement, sound, and color can all be used. This paper studies color and blinking so that LEDs can serve this purpose. Although the relationship between color and emotion has been studied widely, quantitative data for implementation on an actual robot are scarce. We determined colors and blinking periods that effectively represent the six basic emotions (anger, sadness, disgust, surprise, happiness, fear) and surveyed perceived emotional intensity using an avatar. As a result, for sadness, disgust, and anger, color and blinking could raise the perceived intensity of the emotion. For fear, happiness, and surprise, color and blinking had little effect on emotion recognition, which could be improved by revising the colors or blinking assigned to those emotions.
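The mapping described above can be sketched as a simple emotion-to-LED profile table plus a blink function. The specific RGB values and blink periods below are placeholder assumptions for illustration only; the paper determines its own values experimentally and does not publish them in this abstract.

```python
import math

# Illustrative profiles only -- colors and periods are assumed, not the paper's.
LED_PROFILES = {
    "anger":     {"rgb": (255, 0, 0),   "blink_period_s": 0.4},
    "sadness":   {"rgb": (0, 0, 255),   "blink_period_s": 2.0},
    "disgust":   {"rgb": (0, 128, 0),   "blink_period_s": 1.5},
    "surprise":  {"rgb": (255, 255, 0), "blink_period_s": 0.3},
    "happiness": {"rgb": (255, 160, 0), "blink_period_s": 1.0},
    "fear":      {"rgb": (128, 0, 128), "blink_period_s": 0.6},
}

def led_brightness(emotion, t, intensity=1.0):
    """Brightness in [0, 1] at time t: a sinusoidal blink whose period comes
    from the emotion profile and whose depth scales with emotional intensity."""
    period = LED_PROFILES[emotion]["blink_period_s"]
    blink = 0.5 * (1 + math.sin(2 * math.pi * t / period))
    return (1 - intensity) + intensity * blink
```

At full intensity the LED swings between fully off and fully on; at lower intensity the blink depth shrinks, which is one plausible way to encode the "strength" dimension the paper surveys.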

얼굴영상을 이용한 한국인과 일본인의 감정 인식 비교 (Emotion Recognition of Korean and Japanese using Facial Images)

  • 이대종;안의숙;박장환;전명근
    • 한국지능시스템학회논문지 / Vol. 15, No. 2 / pp.197-203 / 2005
  • This paper studies emotion recognition of Koreans and Japanese using facial images. The experiments are based on the six basic emotions that, according to the psychologists Ekman and Friesen, are recognized in common across cultures: happiness, sadness, anger, surprise, fear, and disgust. For emotion recognition, the input images were first compressed by multi-resolution analysis based on the discrete wavelet transform, and facial emotion features were then extracted from each image by principal component analysis (PCA) and linear discriminant analysis (LDA). The experimental results show that for both Koreans and Japanese, "happiness", "sadness", and "anger" were recognized comparatively well, whereas "surprise", "fear", and "disgust" showed low recognition rates. In particular, "disgust" gave the lowest result for the Japanese subjects, and overall the recognition results for the Japanese were lower than those for the Koreans.
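The front end of this kind of pipeline (wavelet-based compression followed by subspace feature extraction) can be sketched in a few lines. This is a minimal illustration under assumptions, not the authors' exact configuration: the multi-resolution step is approximated by Haar-style 2x2 averaging of the low-frequency band, the image sizes and component counts are arbitrary, and the LDA stage that would follow for class discrimination is omitted.

```python
import numpy as np

def haar_downsample(img, levels=2):
    """Keep the low-frequency (approximation) band by 2x2 averaging,
    repeated `levels` times -- a crude stand-in for wavelet MRA."""
    for _ in range(levels):
        h, w = img.shape
        img = (img[0:h:2, 0:w:2] + img[1:h:2, 0:w:2] +
               img[0:h:2, 1:w:2] + img[1:h:2, 1:w:2]) / 4.0
    return img

def pca_fit(X, n_components):
    """PCA via SVD on mean-centered rows of X (one flattened image per row)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components].T          # (mean, projection basis)

rng = np.random.default_rng(0)
faces = rng.random((12, 32, 32))              # stand-in for face images
X = np.stack([haar_downsample(f).ravel() for f in faces])   # 12 x 64
mean, basis = pca_fit(X, n_components=5)
features = (X - mean) @ basis                 # low-dimensional emotion features
print(features.shape)                         # (12, 5)
```

Compressing first and projecting second, as the abstract describes, keeps the eigen-decomposition small: PCA here runs on 64-dimensional vectors instead of 1024-dimensional raw images.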

챗봇의 사회적 현존감을 위한 비언어적 감정 표현 방식 (Non-verbal Emotional Expressions for Social Presence of Chatbot Interface)

  • 강민정
    • 한국콘텐츠학회논문지 / Vol. 21, No. 1 / pp.1-11 / 2021
  • To make conversations with chatbots feel intimate and immersive, research on artificial intelligence that accurately recognizes human emotions and expresses suitable emotional responses is actively under way. This study therefore investigates which non-verbal expression styles raise social presence, the feeling that the chatbot expressing an emotion is person-like. A background study first established that facial expression is the non-verbal channel that reveals emotion best and that movement is important for relational engagement. On this basis, stimuli for five basic emotions (happiness, sadness, surprise, fear, anger) were prepared as dynamic text, dynamic gestures, and static facial-expression emoticons, and a survey asked respondents to choose, for each emotion, the expression style with the strongest social presence. For positive, high-arousal emotions such as happiness, dynamic expressions were chosen most; for negative emotions such as sadness and anger, static facial-expression emoticons; and for the more neutral emotions of surprise and fear, dynamic text, whose meaning can be read unambiguously. These results are expected to serve as an important reference for deciding how to express emotions when developing chatbots.

아동의 정서 표현성과 사교성, 어머니의 의사소통 유형이 아동의 친사회적 행동에 미치는 영향 (The Influence of Children's Emotional Expression and Sociability, and Their Mothers' Communication Pattern on Their Prosocial Behavior)

  • 송하나;최경숙
    • 대한가정학회지 / Vol. 47, No. 6 / pp.1-10 / 2009
  • This study investigated the influence of children's emotional expression and sociability, and of their mothers' communication patterns, on the children's prosocial behavior. The participants were 65 preschool children aged between 5 and 6 and their mothers. Each child-mother dyad was observed for 30 minutes in a lab setting designed to evaluate the child's socioemotional competence and the mother's socialization behavior. Videotaped data were analyzed by two coders for sharing behavior; the children's expression of happiness, sadness, anger, and anxiety, and their sociability; and the mothers' communication strategies. Results showed that children's expressions of anger and anxiety were the most significant predictors of their prosocial behavior. Mothers' punitive communication pattern negatively affected children's prosocial behavior; however, compared with the children's emotional expression, its explanatory power was not significant. The influence of negative emotions and their adverse role in interpersonal interactions are discussed.

RECOGNIZING SIX EMOTIONAL STATES USING SPEECH SIGNALS

  • Kang, Bong-Seok;Han, Chul-Hee;Youn, Dae-Hee;Lee, Chungyong
    • 한국감성과학회:학술대회논문집 / Proceeding of the 2000 Spring Conference of KOSES and International Sensibility Ergonomics Symposium / pp.366-369 / 2000
  • This paper examines three algorithms for recognizing a speaker's emotion from speech signals. The target emotions are happiness, sadness, anger, fear, boredom, and the neutral state. MLB (Maximum-Likelihood Bayes), NN (Nearest Neighbor), and HMM (Hidden Markov Model) algorithms are used as the pattern-matching techniques. In all cases, pitch and energy are used as the features. The feature vectors for MLB and NN are composed of pitch mean, pitch standard deviation, energy mean, energy standard deviation, etc. For the HMM, vectors of delta pitch with delta-delta pitch and delta energy with delta-delta energy are used. We recorded a corpus of emotional speech data and performed a subjective evaluation of the data. The subjective recognition result was 56% and was compared with the classifiers' recognition rates. The MLB, NN, and HMM classifiers achieved recognition rates of 68.9%, 69.3%, and 89.1%, respectively, for speaker-dependent, context-independent classification.
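The NN branch of the setup above is simple enough to sketch end to end: build an utterance-level vector of pitch and energy statistics, then classify by nearest neighbor. The toy pitch/energy tracks, class separations, and frame counts below are illustrative assumptions; the paper's corpus and exact feature list are not reproduced here.

```python
import numpy as np

def prosodic_features(pitch_track, energy_track):
    """Utterance-level statistics of the kind fed to the MLB and NN
    classifiers: pitch mean/std and energy mean/std."""
    return np.array([pitch_track.mean(), pitch_track.std(),
                     energy_track.mean(), energy_track.std()])

def nearest_neighbor(train_X, train_y, x):
    """1-NN classification by Euclidean distance in feature space."""
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(dists)]

rng = np.random.default_rng(1)
# Toy training data: "anger" with high pitch/energy, "sadness" with low.
angry = [prosodic_features(200 + 30 * rng.random(100), 0.8 + 0.1 * rng.random(100))
         for _ in range(5)]
sad = [prosodic_features(110 + 10 * rng.random(100), 0.2 + 0.1 * rng.random(100))
       for _ in range(5)]
train_X = np.stack(angry + sad)
train_y = np.array(["anger"] * 5 + ["sadness"] * 5)

query = prosodic_features(205 + 25 * rng.random(100), 0.85 + 0.1 * rng.random(100))
print(nearest_neighbor(train_X, train_y, query))
```

The HMM branch differs in that it models the delta/delta-delta trajectories frame by frame rather than collapsing the utterance into one statistics vector, which is consistent with its higher reported accuracy.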

만성질환과 발달장애 아동의 부모-자녀관계와 관계증진을 위한 전략 (The Strategy for Improvement of the Relationship between Parent and Child with Chronic Illness and Developmental Disability)

  • 조결자
    • 동서간호학연구지 / Vol. 7, No. 1 / pp.94-104 / 2002
  • The purposes of this study were to identify the relationship between parents and children with chronic illness and developmental disability, and to review strategies for improving that relationship. The effects of chronic illness and developmental disability are that the child experiences delayed growth and development, while the parents experience sadness, guilt, anxiety, grief, disappointment, low self-esteem, anger, and resentment. Chronic illness and developmental disability also have a negative effect on the parents' marital relationship. The reactions of parent and child vary with the age of onset, developmental transition periods, crises, and the parent-child relationship. Based on this study, I propose that the parent-child relationship can be improved through touch, communication between parent and child, and education for parents.

분노 및 슬픔 상황에서 아동의 정서조절 동기와 정서조절 전략 (Children's Motives and Strategies for Emotional Regulation in Angry and Sad Situations)

  • 이지선;유안진
    • 아동학회지 / Vol. 20, No. 3 / pp.123-137 / 1999
  • This study investigated the influence of audience type (mother or close friend), age, and gender on children's goals and strategies for emotional regulation in angry and in sad situations. A hypothetical-vignette methodology was used with 314 children in grades 5 and 7. In angry situations, all boys and all 5th-grade children regulated anger more with instrumental motives, while 7th-grade girls showed more prosocial motives. Children showed more prosocial and rule-oriented motives with peers and more relational motives with mothers. In angry situations, children used aggression-regulation strategies more toward peers and activity-regulation strategies more toward mothers. Children's age and sex explained sadness-regulation motives better than audience type did with peers, but children used more activity-regulation strategies with mothers in sad situations. When sad, 5th graders used more verbal and facial expression strategies than 7th graders, while boys used more activity-regulation strategies than girls.

Discrimination of Emotional States in Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / Vol. 21, No. 2E / pp.98-104 / 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, or surprise. For this, we extracted emotional features from voice signals and facial expressions, and then trained models to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were trained by the HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained by the NN for recognition. The recognition rates for the combined parameters obtained from voice and facial expressions showed better performance than either of the two isolated sets of parameters. The simulation results were also compared with human questionnaire results.

A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 5, No. 1 / pp.1-6 / 2005
  • In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions, namely happiness, sadness, anger, surprise, fear, and dislike, are investigated. Facial expression recognition is performed using multi-resolution analysis based on the discrete wavelet transform, with feature vectors obtained through ICA (Independent Component Analysis). The emotion recognition method for the speech signal, on the other hand, performs the recognition algorithm independently for each wavelet subband, and the final recognition is obtained from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous methods.
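The merging step in abstracts like this one can be illustrated with a simple weighted late-fusion scheme over per-class scores from the two modalities. The scores, weights, and decision rule below are illustrative assumptions for exposition, not the paper's actual multi-decision-making scheme.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "dislike"]

def fuse(face_scores, speech_scores, w_face=0.5):
    """Weighted late fusion: normalize each modality's class scores to sum
    to 1, combine them, and pick the highest-scoring emotion."""
    f = np.asarray(face_scores, dtype=float)
    s = np.asarray(speech_scores, dtype=float)
    combined = w_face * f / f.sum() + (1 - w_face) * s / s.sum()
    return EMOTIONS[int(np.argmax(combined))], combined

# Face leans toward "anger"; speech is ambiguous between "anger" and "fear".
face = [0.05, 0.05, 0.60, 0.05, 0.20, 0.05]
speech = [0.05, 0.05, 0.40, 0.05, 0.40, 0.05]
label, scores = fuse(face, speech)
print(label)   # "anger"
```

Fusing after each recognizer has produced scores (rather than concatenating raw features) lets the two modalities be trained and tuned independently, which is the usual motivation for multimodal score-level combination.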