• Title/Summary/Keyword: emotion of fear and anger

Search results: 86

The Expression of Negative Emotions During Children's Pretend Play (유아의 상상놀이에서 부정적 정서 표현에 대한 연구)

  • Shin, Yoolim
    • Korean Journal of Child Studies
    • /
    • v.21 no.3
    • /
    • pp.133-142
    • /
    • 2000
  • This study investigated the extent to which negative emotions were portrayed during pretend play, the ways in which children communicated about negative emotions, and to whom negative emotions were attributed. The themes in which negative emotions were embedded were also examined. Thirty 4- and 5-year-olds, each paired with a self-chosen peer, were observed and videotaped during a 20-minute play session. The observations yielded the following conclusions: anger and fear were the most frequently occurring negative emotions; children communicated about negative feelings through emotion action labels and gestures; and children attributed a large proportion of their emotional portrayals to themselves and to play objects. The affective themes embedded in pretend play included anger, fear, sadness, and pain.

Emotion Recognition of Korean and Japanese using Facial Images (얼굴영상을 이용한 한국인과 일본인의 감정 인식 비교)

  • Lee, Dae-Jong;Ahn, Ui-Sook;Park, Jang-Hwan;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.2
    • /
    • pp.197-203
    • /
    • 2005
  • In this paper, we propose an emotion recognition method using facial images for the effective design of human interfaces. The facial database consists of the six basic human emotions, happiness, sadness, anger, surprise, fear, and dislike, which are known to be common across nations and cultures. Emotion recognition is performed on the facial images after applying the discrete wavelet transform, and feature vectors are then extracted using PCA and LDA. Experimental results show that happiness, sadness, and anger are recognized more accurately than surprise, fear, and dislike. In particular, Japanese subjects show lower performance for the dislike emotion, and recognition rates for Korean subjects are generally higher than those for Japanese subjects.
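
As a rough illustration of the pipeline this abstract describes (discrete wavelet transform, then PCA and LDA), here is a minimal Python sketch; pywt and scikit-learn stand in for the authors' implementation, and all parameters (wavelet, decomposition level, component count) are assumptions.

```python
# Hypothetical sketch of the DWT -> PCA -> LDA pipeline; not the authors' code.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def dwt_features(image, wavelet="haar", level=2):
    """Keep the low-frequency approximation subband as the feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    return coeffs[0].ravel()  # approximation coefficients, flattened

def fit_emotion_classifier(images, labels, n_components=50):
    X = np.stack([dwt_features(img) for img in images])
    pca = PCA(n_components=n_components).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
    return pca, lda

def predict_emotion(pca, lda, image):
    return lda.predict(pca.transform(dwt_features(image)[None, :]))[0]
```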

The Accuracy of Recognizing Emotion From Korean Standard Facial Expression (한국인 표준 얼굴 표정 이미지의 감성 인식 정확률)

  • Lee, Woo-Ri;Whang, Min-Cheol
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.9
    • /
    • pp.476-483
    • /
    • 2014
  • The purpose of this study was to produce suitable images of Korean emotional expressions. KSFI (Korean Standard Facial Image)-AUs were produced from the Korean standard appearance and FACS (Facial Action Coding System) AUs. To establish the objectivity of the KSFI, a survey examined the emotion recognition rate and the contribution of facial elements to emotion recognition for the six basic emotional expression images (sadness, happiness, disgust, fear, anger, and surprise). The images of happiness, surprise, sadness, and anger showed higher accuracy, and the recognition rate was determined mainly by the eyes and mouth. Based on these results, KSFI content combining AU images was proposed. In the future, the KSFI should be useful content for improving emotion recognition rates.
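
For illustration, the per-emotion recognition rate measured in such a survey reduces to a simple proportion of respondents who labeled an image with its intended emotion; the response data in the sketch below are hypothetical, not the study's.

```python
# Per-emotion recognition rate from survey responses (hypothetical data).
from collections import Counter

def recognition_rate(responses: list[str], intended: str) -> float:
    """Share of respondents who chose the intended emotion label."""
    return Counter(responses)[intended] / len(responses)

print(recognition_rate(["happiness"] * 46 + ["surprise"] * 4, "happiness"))  # 0.92
```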

Use of Emotion Words by Korean English Learners

  • Lee, Jin-Kyong
    • English Language & Literature Teaching
    • /
    • v.17 no.4
    • /
    • pp.193-206
    • /
    • 2011
  • The purpose of the study is to examine the use of emotion vocabulary by Korean English learners. Three basic emotion fields, pleasure, anger, and fear, were selected to elicit the participants' responses, and data from L1 English speakers was collected for comparison. The major results are as follows. First, the English learners responded with various inappropriate verb forms such as "I feel~" and "I am~", while the majority of native English-speaking teachers responded with subjunctive forms such as "I would feel~"; the L2 learners also used mostly simple and coordinated sentences. Second, lexical richness, measured by the type/token ratio, was higher in the English L1 data than in the L2 data; the proportion of emotion lemmas reflects the lexical richness, or diversity, of the emotion words. Lastly, the L2 learners' responses centered on a few typical adjectives such as happy, angry, and scared. This structural and semantic distinctiveness of Korean English learners' emotion words is discussed from a pedagogical perspective.
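
The type/token ratio used here as the lexical-richness measure is easy to make concrete; the sketch below uses naive whitespace tokenization, which is an assumption, not the study's procedure.

```python
# Type/token ratio (TTR): unique words divided by total words.
def type_token_ratio(text: str) -> float:
    tokens = text.lower().split()  # naive tokenization, for illustration only
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# A more varied response yields a higher ratio:
print(type_token_ratio("I would feel absolutely furious and betrayed"))  # 1.0
print(type_token_ratio("I am angry I am very angry"))                    # ~0.57
```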

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.3
    • /
    • pp.148-154
    • /
    • 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among them, Speech Emotion Recognition (SER) recognizes a speaker's emotions from speech information, and succeeds by selecting distinctive features and classifying them appropriately. In this paper, the performance of SER using neural network models (a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% for five emotions (anger, happiness, calm, fear, and sadness) across male and female speakers. The distribution of recognition accuracies further suggests that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
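
A minimal sketch of the MFCC-plus-2D-CNN approach, assuming fixed-length audio clips; librosa and Keras stand in for the authors' code, and the layer sizes are illustrative rather than the tuned configuration reported above.

```python
# Sketch: MFCCs as a 2D "image" fed to a small 2D-CNN (illustrative sizes).
import librosa
import numpy as np
from tensorflow.keras import layers, models

def mfcc_image(path, sr=22050, n_mfcc=40, frames=128):
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=frames, axis=1)  # pad/trim time axis
    return m[..., np.newaxis]  # shape (n_mfcc, frames, 1)

def build_2d_cnn(n_classes=5, input_shape=(40, 128, 1)):
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),  # e.g., anger, happiness, calm, fear, sadness
    ])

model = build_2d_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```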

A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.1
    • /
    • pp.1-6
    • /
    • 2005
  • In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions, happiness, sadness, anger, surprise, fear, and dislike, are investigated. Facial expression recognition is performed using multi-resolution analysis based on the discrete wavelet transform, with feature vectors obtained through ICA (Independent Component Analysis). Emotion recognition from the speech signal, in turn, runs the recognition algorithm independently for each wavelet subband, and the final result is obtained by a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous methods.
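
One common decision-level fusion scheme can be sketched as a weighted combination of per-emotion scores from the two modalities; the weights and scores below are assumptions for illustration, not the paper's exact multi-decision scheme.

```python
# Hedged sketch of decision-level fusion of facial and speech classifiers.
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "dislike"]

def fuse(face_scores: np.ndarray, speech_scores: np.ndarray,
         w_face: float = 0.6, w_speech: float = 0.4) -> str:
    """Combine normalized per-emotion scores and pick the top label."""
    combined = w_face * face_scores + w_speech * speech_scores
    return EMOTIONS[int(np.argmax(combined))]

# Example: face leans 'anger', speech leans 'fear'; fusion resolves it.
face = np.array([0.05, 0.10, 0.50, 0.05, 0.25, 0.05])
speech = np.array([0.05, 0.05, 0.30, 0.05, 0.50, 0.05])
print(fuse(face, speech))  # 'anger' (0.42 vs 0.35 for fear)
```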

Extracting and Clustering of Story Events from a Story Corpus

  • Yu, Hye-Yeon;Cheong, Yun-Gyung;Bae, Byung-Chull
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.10
    • /
    • pp.3498-3512
    • /
    • 2021
  • This article describes how the events that make up text stories can be represented and extracted. We also report the results of a simple experiment on extracting and clustering events by emotion, under the assumption that different emotional events can be associated with the resulting clusters. Each emotion cluster is based on Plutchik's eight-basic-emotion model, and the attributes of NLTK-VADER are used as the classification criterion. Comparisons with human raters show lower accuracy for certain emotion types, but emotions such as joy and sadness show relatively high accuracy. Evaluation against the NRC Word Emotion Association Lexicon (EmoLex) shows high accuracy (more than 90% for anger, disgust, fear, and surprise), though precision and recall are relatively low.
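
Scoring an event sentence with NLTK's VADER, as the abstract describes, looks roughly like this; mapping the polarity scores onto Plutchik's eight emotion clusters is the paper's own step and is not reproduced here.

```python
# Score event sentences with NLTK-VADER (sentences are made-up examples).
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

for event in ["The hero rescued the child from the fire.",
              "The villagers fled in terror from the beast."]:
    print(event, "->", sia.polarity_scores(event))
# polarity_scores returns {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
```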

Engine of computational Emotion model for emotional interaction with human (인간과 감정적 상호작용을 위한 '감정 엔진')

  • Lee, Yeon Gon
    • Science of Emotion and Sensibility
    • /
    • v.15 no.4
    • /
    • pp.503-516
    • /
    • 2012
  • In research on robots and software agents to date, computational emotion models have been system-dependent, so it is hard to separate an emotion model from its existing system and reuse it in a new one. I therefore introduce the Engine of computational Emotion model (hereafter EE), which can be integrated with any robot or agent. The EE is software whose form is independent of inputs and outputs: it handles only the generation and processing of emotions, without the input (perception) and output (expression) phases. It can be interfaced with any inputs and outputs and produces emotions not only from emotion itself but also from personality and a person's emotions. In addition, the EE can live inside any robot or agent as a software library, or be used as a separate communicating system. In the EE, the emotions are the primary emotions: joy, surprise, disgust, fear, sadness, and anger. Each emotion is a vector consisting of a string and a coefficient; the EE receives these vectors from the input interface and sends them to the output interface. Each emotion is connected to a list of emotional experiences, and these lists, consisting of a string and a coefficient for each experience, are used to generate and process emotional states. The emotional experiences consist of emotion vocabulary covering the variety of human emotional experience. The EE can be used to build interactive products that respond appropriately to human emotions; the significance of the study lies in developing a system that makes people feel that a product sympathizes with them. The EE can therefore help provide emotionally sympathetic services in HRI and HCI products.
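
The emotion vector the abstract describes (a string plus a coefficient, linked to lists of emotional experiences) suggests a data structure along these lines; the names, weights, and experience list below are hypothetical, not the EE's actual interface.

```python
# Illustrative sketch of the EE's (label, coefficient) emotion vectors.
from dataclasses import dataclass

PRIMARY_EMOTIONS = ("joy", "surprise", "disgust", "fear", "sadness", "anger")

@dataclass
class EmotionVector:
    label: str          # one of the primary emotions
    coefficient: float  # intensity, assumed here to lie in [0, 1]

# Hypothetical emotional-experience list mapping vocabulary to emotions.
EXPERIENCES = {
    "reunion": EmotionVector("joy", 0.8),
    "loss":    EmotionVector("sadness", 0.9),
    "threat":  EmotionVector("fear", 0.7),
}

def generate(experience: str) -> EmotionVector:
    """Input interface -> emotion generation -> output interface."""
    return EXPERIENCES.get(experience, EmotionVector("surprise", 0.1))

print(generate("threat"))  # EmotionVector(label='fear', coefficient=0.7)
```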

Autonomic and Frontal Electrocortical Responses That Differentiate Emotions elicited by the Affective Visual Stimulation

  • Sohn, Jin-Hun;Lee, Kyung-Hwa;Park, Mi-Kyung;Eunhey Jang;Estate Sokhadze
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2000.04a
    • /
    • pp.15-25
    • /
    • 2000
  • Cardiac, respiratory, electrodermal, and frontal (F3, F4) EEG responses of 42 students were analyzed and compared during presentation of slides from the International Affective Picture System (IAPS). Physiological responses during 20 s of exposure to slides intended to elicit happiness (nurturant and erotic), sadness, disgust, surprise, fear, or anger were quite similar: heart rate (HR) deceleration, decreased HR variability (HRV), specific SCRs, increased non-specific SCR frequency (N-SCR), and EEG changes consisting of theta increase, alpha blocking, increased beta activity, and frontal asymmetry. However, some emotions showed variations in response magnitude, making it possible to differentiate certain pairs of emotions by several physiological parameters. The profiles showed higher HRV and EEG response magnitudes for excitement (i.e., erotic happiness) and higher cardiac and respiratory responses for surprise. The most clearly differentiated pairs were excitement-surprise (by HR, HRV, theta, and alpha asymmetry), excitement-sadness (by theta, alpha, and alpha asymmetry), and excitement-fear (by HRV, theta, F3 alpha, and alpha asymmetry); nurturant happiness yielded the least differentiation. Differences were also found among negative emotions: anger and sadness were differentiated by HRV and theta asymmetry, and disgust and fear by N-SCR and beta asymmetry. The results suggest that the magnitudes of physiological response profiles differentiate emotions evoked by affective pictures, even though most response patterns were qualitatively similar in this passive-viewing context.
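
Two of the parameters named above, HRV and frontal alpha asymmetry, have standard formulations that can be sketched as follows; the study's exact recording and analysis settings are not given, so the RMSSD and log-ratio forms here are assumptions.

```python
# Standard-form sketches of HRV (RMSSD) and frontal alpha asymmetry.
import numpy as np

def rmssd(rr_ms: np.ndarray) -> float:
    """Root mean square of successive RR-interval differences (ms)."""
    return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

def frontal_alpha_asymmetry(alpha_f3: float, alpha_f4: float) -> float:
    """ln(F4 alpha power) - ln(F3 alpha power); positive values suggest
    relatively greater left-frontal activity (alpha is inversely
    related to cortical activity)."""
    return float(np.log(alpha_f4) - np.log(alpha_f3))

rr = np.array([820, 810, 845, 830, 870], dtype=float)  # illustrative RR series
print(rmssd(rr), frontal_alpha_asymmetry(4.2, 5.1))
```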

Emotional effect of the Covid-19 pandemic on oral surgery procedures: a social media analysis

  • Altan, Ahmet
    • Journal of Dental Anesthesia and Pain Medicine
    • /
    • v.21 no.3
    • /
    • pp.237-244
    • /
    • 2021
  • Background: This study aimed to analyze Twitter users' emotional tendencies regarding oral surgery procedures before and after the worldwide coronavirus disease 2019 (COVID-19) pandemic. Methods: Tweets posted in English before and after the pandemic were included. Popular tweets from 2019 were searched using the keywords "tooth removal", "tooth extraction", "dental pain", "wisdom tooth", "wisdom teeth", "oral surgery", "oral surgeon", and "OMFS"; for 2020, another search added the words "COVID" and "corona" to these keywords. The emotions underlying the tweets were analyzed using CrystalFeel - Multidimensional Emotion Analysis, focusing on four emotions: fear, anger, sadness, and joy. Results: A total of 1240 tweets posted before and after the pandemic were analyzed. There was a statistically significant difference in the distribution of emotions before and after the pandemic (p < 0.001): joy decreased after the pandemic, while anger and fear increased. The distribution of emotional valence also differed significantly (p < 0.001): negative emotional intensity was noted in 52.9% of messages before the pandemic versus 74.3% after, and positive emotional intensity in 29.8% before versus 10.7% after. Conclusion: Infectious diseases such as COVID-19 may lead to mental, emotional, and behavioral changes; unpredictability, uncertainty, disease severity, misinformation, and social isolation may further increase dental anxiety and fear.
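
The reported p < 0.001 comparisons are consistent with a chi-square test on emotion counts before versus after the pandemic; the sketch below uses made-up counts, not the study's data.

```python
# Chi-square test of emotion distributions before vs. after (placeholder counts).
from scipy.stats import chi2_contingency

#            fear  anger  sadness  joy
before = [    120,   90,     110,  180]
after  = [    210,  150,     120,   60]

chi2, p, dof, expected = chi2_contingency([before, after])
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")
```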