• Title/Abstract/Keyword: emotion of fear and anger

Search results: 86

유아의 상상놀이에서 부정적 정서 표현에 대한 연구 (The Expression of Negative Emotions During Children's Pretend Play)

  • 신유림
    • 아동학회지, Vol. 21, No. 3, pp. 133-142, 2000
  • This study investigated the extent to which negative emotions were portrayed during pretend play, the ways in which children communicated about negative emotions, and to whom negative emotions were attributed; the themes in which negative emotions were embedded were also examined. Thirty 4- and 5-year-olds, each paired with a self-chosen peer, were observed and videotaped during a 20-minute play session. The observations yielded the following conclusions: anger and fear were the most frequently occurring negative emotions; children communicated about negative feelings through emotion action labels and gestures; and children attributed a large proportion of their emotional portrayals to themselves and to play objects. The affective themes embedded in pretend play included anger, fear, sadness, and pain.


얼굴영상을 이용한 한국인과 일본인의 감정 인식 비교 (Emotion Recognition of Korean and Japanese using Facial Images)

  • 이대종;안의숙;박장환;전명근
    • 한국지능시스템학회논문지, Vol. 15, No. 2, pp. 197-203, 2005
  • In this paper, we studied emotion recognition of Koreans and Japanese from facial images. The experiments were based on the six basic emotions that, according to the work of the psychologists Ekman and Friesen, are recognized universally regardless of culture: happiness, sadness, anger, surprise, fear, and disgust. For emotion recognition, the input images were first compressed using a multiresolution analysis technique based on the discrete wavelet transform, and the facial emotion features were then extracted from each image by principal component analysis and linear discriminant analysis. The experimental results showed that for both Koreans and Japanese the emotions of happiness, sadness, and anger were recognized at comparatively high rates, whereas surprise, fear, and disgust were recognized poorly. In particular, disgust showed the lowest recognition rate for the Japanese subjects, and overall the emotion recognition results for the Japanese were lower than those for the Koreans.
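
The pipeline this abstract describes (wavelet-based multiresolution compression followed by PCA and LDA feature extraction) can be sketched roughly as follows. The image size, wavelet choice, decomposition level, and component count are assumptions for illustration, not values reported in the paper.

```python
# A minimal sketch of the wavelet + PCA/LDA pipeline described above.
# Image size, wavelet, level, and component counts are illustrative
# assumptions, not values from the paper.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(face, wavelet="haar", level=2):
    """Compress a grayscale face by keeping only the low-frequency
    approximation subband of a 2-D discrete wavelet decomposition."""
    coeffs = pywt.wavedec2(face, wavelet, level=level)
    return coeffs[0].ravel()  # approximation coefficients only

# Hypothetical data: 100 64x64 grayscale faces, 6 emotion labels (0..5).
rng = np.random.default_rng(0)
faces = rng.random((100, 64, 64))
labels = rng.integers(0, 6, size=100)

X = np.stack([wavelet_features(f) for f in faces])
X = PCA(n_components=40).fit_transform(X)          # dimensionality reduction
lda = LinearDiscriminantAnalysis().fit(X, labels)  # discriminant features
print(lda.predict(X[:5]))
```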

한국인 표준 얼굴 표정 이미지의 감성 인식 정확률 (The Accuracy of Recognizing Emotion From Korean Standard Facial Expression)

  • 이우리;황민철
    • 한국콘텐츠학회논문지, Vol. 14, No. 9, pp. 476-483, 2014
  • The purpose of this paper was to produce facial expression images suitable for facial expression research in Korea. To this end, the KSFI (Korean Standard Facial Image) AU set was produced by combining FACS Action Units with the standard facial morphology of Koreans born in the 1980s. To secure the objectivity of the KSFI, images of the six basic emotions (sadness, happiness, disgust, fear, anger, and surprise) were produced, and the recognition accuracy for each emotion and the contribution of each facial feature to emotion recognition were evaluated. The results showed that for the images recognized with high accuracy, namely happiness, surprise, sadness, and anger, participants judged the emotion mainly from the eyes and mouth. Based on these results, this study proposes KSFI content in which the AUs of the expression images can be modified. The KSFI is expected to serve as training content that can contribute to improving emotion recognition rates.

Use of Emotion Words by Korean English Learners

  • Lee, Jin-Kyong
    • 영어어문교육, Vol. 17, No. 4, pp. 193-206, 2011
  • The purpose of this study is to examine the use of emotion vocabulary by Korean English learners. Three basic emotion fields (pleasure, anger, and fear) were selected to elicit the participants' responses, and data from L1 English speakers were collected for comparison. The major results are as follows. First, the English learners responded with various inappropriate verb forms such as "I feel ~" and "I am ~", while the majority of native English-speaking teachers responded with subjunctive forms such as "I would feel ~"; in addition, the L2 English learners used mostly simple and coordinated sentences. Second, lexical richness, measured through the type/token ratio, was higher in the English L1 data than in the English L2 data; the proportion of emotion lemmas reflects the lexical richness, or diversity, of the emotion words. Lastly, the L2 English learners' responses centered on a few typical adjectives such as happy, angry, and scared. The structural and semantic distinctiveness of Korean English learners' emotion words is discussed from a pedagogical perspective.
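
The type/token ratio used above as the lexical-richness measure is straightforward to compute; a minimal sketch follows, with an invented sample response (the study's actual data is not reproduced).

```python
# Type/token ratio (TTR): distinct word forms divided by total word count.
# The sample response is invented for illustration.
import re

def type_token_ratio(text: str) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

response = "I would feel really happy, happier than I have ever felt"
print(round(type_token_ratio(response), 3))  # distinct types / total tokens
```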


Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of Information and Communication Convergence Engineering, Vol. 19, No. 3, pp. 148-154, 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among them, Speech Emotion Recognition (SER) recognizes a speaker's emotions from speech information. SER succeeds by selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), after tuning model parameters, a two-dimensional convolutional neural network (2D-CNN) with MFCC features showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) spoken by men and women. In addition, examination of the distribution of recognition accuracies across the neural network models indicates that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
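
The feature/classifier pairing the abstract describes, MFCCs fed to a 2-D CNN, might look roughly like the sketch below. The MFCC count, fixed frame length, and layer sizes are illustrative assumptions, not the architecture reported in the paper.

```python
# Sketch: MFCC features into a small 2-D CNN classifier.
# n_mfcc=40, the 128-frame length, and the layer sizes are assumptions.
import librosa
import torch
import torch.nn as nn

def mfcc_image(path, n_mfcc=40, frames=128):
    y, sr = librosa.load(path, sr=None)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=frames, axis=1)  # pad/trim time axis
    return torch.tensor(m, dtype=torch.float32)[None, None]  # (1,1,40,128)

class SmallCNN(nn.Module):
    def __init__(self, n_classes=5):  # anger, happiness, calm, fear, sadness
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 10 * 32, n_classes),  # (40,128) -> (10,32) maps
        )

    def forward(self, x):
        return self.net(x)

# Usage (assumes a local file "clip.wav"):
# model = SmallCNN(); logits = model(mfcc_image("clip.wav"))
```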

A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 5, No. 1, pp. 1-6, 2005
  • In this paper, we propose an emotion recognition method that uses facial images and speech signals. Six basic emotions, including happiness, sadness, anger, surprise, fear, and dislike, are investigated. Facial expression recognition is performed using multiresolution analysis based on the discrete wavelet transform, with feature vectors obtained through ICA (Independent Component Analysis). The emotion recognition method for the speech signal, on the other hand, runs the recognition algorithm independently for each wavelet subband and obtains the final recognition from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous methods.
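
The decision-level merging of the two modalities mentioned at the end of the abstract can be illustrated with a simple weighted combination of per-modality class posteriors; the weights and probability vectors below are invented, and the paper's actual fusion rule may differ.

```python
# Sketch: decision-level fusion of face and speech emotion classifiers via a
# weighted sum of class posteriors. Weights and probabilities are invented.
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "dislike"]

def fuse(p_face, p_speech, w_face=0.6):
    p = w_face * np.asarray(p_face) + (1 - w_face) * np.asarray(p_speech)
    return EMOTIONS[int(np.argmax(p))], p

p_face = [0.10, 0.05, 0.55, 0.10, 0.15, 0.05]    # hypothetical face posteriors
p_speech = [0.05, 0.10, 0.40, 0.05, 0.30, 0.10]  # hypothetical speech posteriors
label, fused = fuse(p_face, p_speech)
print(label, fused.round(3))  # "anger" wins in this made-up example
```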

Extracting and Clustering of Story Events from a Story Corpus

  • Yu, Hye-Yeon;Cheong, Yun-Gyung;Bae, Byung-Chull
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 15, No. 10, pp. 3498-3512, 2021
  • This article describes how the events that make up text stories can be represented and extracted. We also report the results of a simple experiment on extracting and clustering events in terms of emotions, under the assumption that different emotional events can be associated with the classified clusters. Each emotion cluster is based on Plutchik's eight-basic-emotion model, and the attributes of NLTK-VADER are used as the classification criterion. While comparison of the results with human raters shows lower accuracy for certain emotion types, emotion types such as joy and sadness show relatively high accuracy. Evaluation against the NRC Word-Emotion Association Lexicon (aka EmoLex) shows high accuracy (more than 90% for anger, disgust, fear, and surprise), though precision and recall values are relatively low.
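
Scoring event sentences with NLTK-VADER, the tool named above, might look like the following sketch; the crude mapping from VADER's compound polarity to emotion-like buckets is a stand-in assumption, since the paper's exact clustering criterion is not given here.

```python
# Sketch: score event sentences with NLTK-VADER and bucket them crudely by
# polarity. The bucket mapping is an assumption for illustration only.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

events = [
    "The knight rescued the child from the fire.",
    "The village was destroyed and everyone fled in terror.",
]
for sentence in events:
    scores = sia.polarity_scores(sentence)  # neg / neu / pos / compound
    bucket = "joy-like" if scores["compound"] >= 0 else "fear/sadness-like"
    print(bucket, scores)
```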

인간과 감정적 상호작용을 위한 '감정 엔진' (Engine of computational Emotion model for emotional interaction with human)

  • 이연곤
    • 감성과학, Vol. 15, No. 4, pp. 503-516, 2012
  • In robots and software agents to date, the emotion model has been embedded as a dependent part of the system, so it is not easy to separate the emotion model and reuse it in a new system. We therefore introduce an Engine of computational Emotion model (hereafter EE) that can interoperate with any robot or agent. So that it is not tied to any particular input and can connect to the internals of any robot or agent, the EE handles emotion independently: the input stage (recognition) and the output stage (expression) are excluded, and only the intermediate stage, emotion generation, which is purely responsible for creating and processing emotion, is separated out, so that the EE exists as 'software independent of the input and output ends, i.e., an engine'. The EE can interact with any input or output end, uses the other party's emotions as well as its own, and computes an overall emotion by drawing on personality. The EE can reside inside a robot or agent as a library, or exist as a separate system that communicates with it. The basic emotions Joy, Surprise, Disgust, Fear, Sadness, and Anger are used; the EE receives information consisting of string-coefficient pairs as input signals through its input interface and emits them as output signals through its output interface. Internally, the EE keeps a linked list of emotional experiences for each emotion and uses the information composed of these and their coefficient pairs as the emotion-state list for generating and processing emotion. This emotional-experience list consists of emotion-expression vocabulary intended to 'promote understanding of the various emotions humans experience in everyday life'. The EE could be used in interactive products that detect a person's emotion and produce an appropriate response. Since the aim of this work is to build a system that leads people to feel that a product 'empathizes with them', we expect it to help HRI (human-robot interaction) and HCI (human-computer interaction) products provide effective emotional empathy services.
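
The interface described above (string-coefficient pairs in and out, with an internal per-emotion state list) could be sketched as below; the personality weighting and decay rule are invented placeholders, since the abstract does not specify the EE's update equations.

```python
# Sketch of the EE's interface: string/coefficient pairs in and out, with an
# internal emotion-state list. The update and decay rules are invented
# placeholders; the paper's exact equations are not published here.
from dataclasses import dataclass, field

BASIC_EMOTIONS = ["Joy", "Surprise", "Disgust", "Fear", "Sadness", "Anger"]

@dataclass
class EmotionEngine:
    personality: dict = field(default_factory=lambda: {"Anger": 0.5})
    state: dict = field(default_factory=lambda: {e: 0.0 for e in BASIC_EMOTIONS})

    def input_signal(self, pairs):
        """Consume (emotion-word, coefficient) pairs from any front end."""
        for emotion, coeff in pairs:
            weight = self.personality.get(emotion, 1.0)  # personality bias
            self.state[emotion] = min(1.0, self.state[emotion] + weight * coeff)

    def output_signal(self):
        """Emit the current state as (emotion, coefficient) pairs."""
        for e in self.state:  # simple decay toward neutrality
            self.state[e] *= 0.9
        return sorted(self.state.items(), key=lambda kv: -kv[1])

ee = EmotionEngine()
ee.input_signal([("Fear", 0.7), ("Anger", 0.4)])
print(ee.output_signal())  # e.g. [('Fear', 0.63), ('Anger', 0.18), ...]
```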


Autonomic and Frontal Electrocortical Responses That Differentiate Emotions elicited by the Affective Visual Stimulation

  • Sohn, Jin-Hun;Lee, Kyung-Hwa;Park, Mi-Kyung;Eunhey Jang;Estate Sokhadze
    • 한국감성과학회:학술대회논문집, Proceedings of the 2000 Spring Conference of KOSES and International Sensibility Ergonomics Symposium, pp. 15-25, 2000
  • Cardiac, respiratory, electrodermal, and frontal (F3, F4) EEG responses of 42 students were analyzed and compared during exposure to slides from the International Affective Picture System (IAPS). Physiological responses during 20 s of exposure to slides intended to elicit happiness (nurturant and erotic), sadness, disgust, surprise, fear, or anger were quite similar: heart rate (HR) deceleration, decreased HR variability (HRV), specific SCRs, increased non-specific SCR frequency (N-SCR), and EEG changes in the form of theta increase, alpha blocking, increased beta activity, and frontal asymmetry. However, some emotions showed variations in response magnitude, making it possible to differentiate certain pairs of emotions by several physiological parameters. The profiles showed higher magnitudes of HRV and EEG responses for excitement (i.e., erotic happiness) and higher cardiac and respiratory responses for surprise. The most distinguishable pairs were excitement-surprise (by HR, HRV, theta, and alpha asymmetry), excitement-sadness (by theta, alpha, and alpha asymmetry), and excitement-fear (by HRV, theta, F3 alpha, and alpha asymmetry). Nurturant happiness yielded the least differentiation. Differences were also found within the negative emotions: e.g., anger and sadness were differentiated by HRV and theta asymmetry, while disgust and fear were differentiated by N-SCR and beta asymmetry. The results suggest that the magnitudes of physiological response profiles differentiate emotions evoked by affective pictures, even though most responses were qualitatively similar in the given passive-viewing context.
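
The HR and HRV magnitudes compared in this study are standard quantities; as a reference point, a minimal sketch of computing mean heart rate and two common time-domain HRV indices (SDNN and RMSSD) from RR intervals is shown below, using made-up interval data.

```python
# Sketch: mean heart rate and two common HRV indices from RR intervals.
# The RR values (in seconds) are invented sample data.
import numpy as np

rr = np.array([0.84, 0.81, 0.87, 0.79, 0.83, 0.86, 0.80])  # RR intervals (s)

hr = 60.0 / rr.mean()                               # mean heart rate, beats/min
sdnn = rr.std(ddof=1) * 1000                        # SDNN in ms
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000   # RMSSD in ms

print(f"HR {hr:.1f} bpm, SDNN {sdnn:.1f} ms, RMSSD {rmssd:.1f} ms")
```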


Emotional effect of the Covid-19 pandemic on oral surgery procedures: a social media analysis

  • Altan, Ahmet
    • Journal of Dental Anesthesia and Pain Medicine, Vol. 21, No. 3, pp. 237-244, 2021
  • Background: This study aimed to analyze Twitter users' emotional tendencies regarding oral surgery procedures before and after the coronavirus disease 2019 (COVID-19) pandemic worldwide. Methods: Tweets posted in English before and after the COVID-19 pandemic were included in the study. Popular tweets in 2019 were searched using the keywords "tooth removal", "tooth extraction", "dental pain", "wisdom tooth", "wisdom teeth", "oral surgery", "oral surgeon", and "OMFS". In 2020, another search was conducted by adding the words "COVID" and "corona" to the abovementioned keywords. The emotions underlying the tweets were analyzed using CrystalFeel - Multidimensional Emotion Analysis, focusing on four emotions: fear, anger, sadness, and joy. Results: A total of 1240 tweets posted before and after the COVID-19 pandemic were analyzed. There was a statistically significant difference between the distributions of emotions before and after the pandemic (p < 0.001): while the sense of joy decreased after the pandemic, anger and fear increased. There was also a statistically significant difference between the distributions of emotional valence before and after the pandemic (p < 0.001). Negative emotional intensity was noted in 52.9% of the messages before the pandemic and in 74.3% after it, while positive emotional intensity was observed in 29.8% of the messages before the pandemic but in only 10.7% after it. Conclusion: Infectious diseases such as COVID-19 may lead to mental, emotional, and behavioral changes in people. Unpredictability, uncertainty, disease severity, misinformation, and social isolation may further increase dental anxiety and fear among people.
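
The significance test on the before/after emotion distributions reported here is typically a chi-square test on a contingency table; the sketch below shows the shape of such a test with invented counts (the study's actual data is not reproduced).

```python
# Sketch: chi-square test comparing emotion distributions before and after
# the pandemic. The counts below are invented, not the study's data.
from scipy.stats import chi2_contingency

#               fear  anger  sadness  joy
before_after = [[110,  95,    130,   185],   # 2019 tweets (hypothetical)
                [210, 160,    140,    55]]   # 2020 tweets (hypothetical)

chi2, p, dof, expected = chi2_contingency(before_after)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")
```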