• Title/Summary/Keyword: anger and sadness

Search Results: 147

Correlation between Stories and Emotional Responses for American Movies (영화 스토리와 관객 감성반응과의 상관성에 대한 연구)

  • Woo, Jeong-Gueon
    • The Journal of the Korea Contents Association / v.21 no.7 / pp.13-19 / 2021
  • While watching a movie, the audience shows various emotional reactions. Emotional reactions such as sadness, anger, and joy appear depending on the storyline of the film, and this can be observed through the audience's brain-wave responses. This study examines the relationship between a movie's story development and the audience's emotional reactions to the situations and events in the film, measured through brain waves. Four American films, each representing a genre and well known to many people, were selected for the study: one each from the adventure, animation, action, and drama genres. To measure the emotional response to these movies, four cases were set up centered on EEG and PPG measurements and analyzed as time-series graph patterns. The emotional response in the graphs shows a consistent relationship with the story development. This study is expected to help in selecting a genre when making a movie, especially when deciding how to compose and develop a story, and in inducing the intended emotions in the audience.
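Since the analysis above rests on reading time-series graph patterns, a minimal sketch (not the authors' pipeline) of that kind of comparison follows: smoothing a physiological arousal signal and correlating it with annotated story segments. The sampling rate, signal, and story-beat times are hypothetical placeholders.

```python
# Minimal sketch: correlate a smoothed physiological signal with story beats.
# All values below are synthetic stand-ins, not data from the study.
import numpy as np
import pandas as pd

t = np.arange(0, 7200)  # a two-hour film, one sample per second (assumption)
arousal = np.random.default_rng(0).normal(size=t.size)  # stand-in for EEG/PPG-derived arousal

# Hypothetical high-tension story segments (in seconds)
beats = pd.Series(0.0, index=t)
for start, end in [(600, 900), (3000, 3600), (5400, 6000)]:
    beats.iloc[start:end] = 1.0

# Smooth the physiological signal over a one-minute window, then correlate
smoothed = pd.Series(arousal, index=t).rolling(60, center=True).mean()
print("story/response correlation:", round(smoothed.corr(beats), 3))
```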

Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation (대화 영상 생성을 위한 한국어 감정음성 및 얼굴 표정 데이터베이스)

  • Baek, Ji-Young;Kim, Sera;Lee, Seok-Pil
    • Journal of Internet Computing and Services / v.23 no.2 / pp.71-77 / 2022
  • In this paper, a database is collected for extending a speech synthesis model into one that synthesizes speech according to emotion and generates matching facial expressions. The database is divided into male and female data and consists of emotional speech and facial expressions. Two professional actors of different genders speak sentences in Korean. The sentences are divided into four emotions: happiness, sadness, anger, and neutrality, and each actor performs about 3,300 sentences per emotion. The 26,468 sentences collected by filming do not overlap, and the recorded expressions match the corresponding emotions. Since building a high-quality database is important for the performance of future research, the database is assessed for emotional category, intensity, and genuineness. To determine accuracy according to data modality, the database is divided into audio-video, audio-only, and video-only data.

The Role of Media Use and Emotions in Risk Perception and Preventive Behaviors Related to COVID-19 in South Korea

  • Kim, Sungjoong;Cho, Sung Kyum;LoCascio, Sarah Prusoff
    • Asian Journal for Public Opinion Research / v.8 no.3 / pp.297-323 / 2020
  • The relationship between compliance with behaviors recommended to prevent the spread of COVID-19 and media exposure, negative emotions, and risk perception was examined using regression analyses of data from KAMOS, a nationally representative survey of South Korean adults. The strongest predictor of preventive behaviors in general was negative emotions, which had the largest β (.22) among the independent variables considered. The eight negative emotions, identified using factor analysis of a series of 11 emotions, were anger, annoyance, fear, sadness, anxiety, insomnia, helplessness, and stress. Negative emotions themselves were influenced most strongly by the respondent's anxiety over social safety (β=.286), followed by prediction of COVID-19 spread (β=.121, p<.001) and perceived risk of COVID-19 infection (β=.070, p=.023). Females (β=-.134) and those who felt less healthy (β=-.097) experienced more negative emotions. Both media exposure and increased media exposure had significant relationships with negative emotions, and both a direct and an indirect impact on the adoption of preventive measures. Women, older people, and healthier people perceived greater risks and engaged in more preventive behaviors than their counterparts.
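For readers unfamiliar with standardized regression coefficients (β) like those reported above, a minimal sketch follows, assuming statsmodels: z-scoring all variables before an OLS fit makes the slopes directly comparable as betas. The variable names mirror the study's, but the data and effect sizes are synthetic, not the KAMOS data.

```python
# Minimal sketch: standardized (beta) coefficients via OLS on z-scored data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "negative_emotions": rng.normal(size=500),
    "media_exposure": rng.normal(size=500),
    "risk_perception": rng.normal(size=500),
})
# Synthetic outcome with made-up effects, for illustration only
df["preventive_behaviors"] = (
    0.22 * df["negative_emotions"] + 0.10 * df["media_exposure"]
    + rng.normal(scale=0.9, size=500)
)

z = (df - df.mean()) / df.std()  # z-score so slopes are standardized betas
X = sm.add_constant(z[["negative_emotions", "media_exposure", "risk_perception"]])
fit = sm.OLS(z["preventive_behaviors"], X).fit()
print(fit.params.round(3))   # standardized coefficients
print(fit.pvalues.round(3))
```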

Sound-based Emotion Estimation and Growing HRI System for an Edutainment Robot (에듀테인먼트 로봇을 위한 소리기반 사용자 감성추정과 성장형 감성 HRI시스템)

  • Kim, Jong-Cheol;Park, Kui-Hong
    • The Journal of Korea Robotics Society / v.5 no.1 / pp.7-13 / 2010
  • This paper presents a sound-based emotion estimation method and a growing HRI (human-robot interaction) system for the Mon-E robot. The emotion estimation method uses musical elements based on the laws of harmony and counterpoint: emotion is estimated from sound using information on musical elements including chord, tempo, volume, harmonics, and compass. The estimated emotions cover the standard 12 emotions, namely Ekman's six emotions (anger, disgust, fear, happiness, sadness, surprise) and their six opposites (calmness, love, confidence, unhappiness, gladness, comfortableness). The growing HRI system analyzes sensing information, the estimated emotion, and the service log of the edutainment robot, and commands the robot's behavior accordingly. The system consists of an emotion client and an emotion server. The emotion client estimates the emotion from sound, transmits the estimated emotion and sensing information to the emotion server, and delivers the server's response to the robot's main program. The emotion server updates the HRI rule table using the information transmitted from the emotion client and transmits the HRI response back to the client. The proposed system was applied to a Mon-E robot and can supply friendly HRI service to users.
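As a rough illustration of the client/server split described above (hypothetical, not the Mon-E implementation), the sketch below has an emotion client send an estimated emotion label over a socket and an emotion server answer with a behavior from a toy rule table.

```python
# Minimal sketch: emotion client/server exchange with a toy rule table.
import socket
import threading
import time

RULES = {"happiness": "dance", "sadness": "comfort", "anger": "retreat"}  # toy HRI rules

def emotion_server(host="127.0.0.1", port=9999):
    with socket.socket() as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            emotion = conn.recv(64).decode()                   # emotion label from client
            conn.sendall(RULES.get(emotion, "idle").encode())  # behavior decision

threading.Thread(target=emotion_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

with socket.socket() as cli:
    cli.connect(("127.0.0.1", 9999))
    cli.sendall(b"sadness")  # emotion estimated from sound
    print("robot behavior:", cli.recv(64).decode())
```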

Face Recognition using Emotional Face Images and Fuzzy Fisherface (감정이 있는 얼굴영상과 퍼지 Fisherface를 이용한 얼굴인식)

  • Koh, Hyun-Joo;Chun, Myung-Geun;Paliwal, K.K.
    • Journal of Institute of Control, Robotics and Systems / v.15 no.1 / pp.94-98 / 2009
  • In this paper, we deal with a face recognition method for emotional face images. Since face recognition is one of the most natural and straightforward biometric methods, there has been a wide range of research, but most of it focuses on expressionless face images and degrades considerably when facial expressions are involved. In real situations, however, emotional face images must be considered. Here, three basic human emotions, happiness, sadness, and anger, are investigated for face recognition. Since this setting requires a robust algorithm, we use a fuzzy Fisher's Linear Discriminant (FLD) algorithm with the wavelet transform. The fuzzy Fisherface is a statistical method that maximizes the ratio of the between-class scatter matrix to the within-class scatter matrix while also handling fuzzy class membership information. Experimental results on the CBNU face databases reveal that the approach presented in this paper yields better recognition performance than other recognition methods.
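The Fisherface idea the paper builds on, PCA for dimensionality reduction followed by Fisher's Linear Discriminant, can be sketched with scikit-learn as below. The fuzzy membership weighting and wavelet preprocessing that are the paper's contribution are not reproduced, and the "face" data here is synthetic.

```python
# Minimal sketch: baseline Fisherface-style pipeline (PCA -> LDA -> 1-NN).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 32 * 32))  # 120 synthetic 32x32 "face" vectors
y = np.repeat(np.arange(12), 10)     # 12 identities, 10 images each

model = make_pipeline(
    PCA(n_components=40),                 # reduce dimensionality first
    LinearDiscriminantAnalysis(),         # maximize between/within-class scatter ratio
    KNeighborsClassifier(n_neighbors=1),  # nearest-neighbor matching in FLD space
)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
model.fit(Xtr, ytr)
print("accuracy:", model.score(Xte, yte))  # near chance on random data, by design
```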

Emotion Recognition in Arabic Speech from Saudi Dialect Corpus Using Machine Learning and Deep Learning Algorithms

  • Hanaa Alamri;Hanan S. Alshanbari
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.9-16 / 2023
  • Speech can actively convey feelings and attitudes through words, so it is important for researchers to identify both the emotional content contained in speech signals and the type of emotion the speech elicits. In this study, we examined an emotion recognition system using an Arabic database, specifically in the Saudi dialect, drawn from a YouTube channel called Telfaz11. The four emotions examined were anger, happiness, sadness, and neutral. In our experiments, we extracted features from the audio signals, such as the Mel Frequency Cepstral Coefficient (MFCC) and Zero-Crossing Rate (ZCR), then classified emotions using machine learning algorithms (Support Vector Machine (SVM) and K-Nearest Neighbor (KNN)) and deep learning algorithms (Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM)). Our experiments showed that the MFCC features combined with the CNN model obtained the best accuracy, 95%, demonstrating the effectiveness of this classification system in recognizing spoken Arabic emotions.
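A minimal sketch of the classical-ML side of this pipeline, assuming librosa and scikit-learn: MFCC and zero-crossing-rate features summarized per clip and fed to an SVM. The clips and labels below are synthetic placeholders; real use would load audio files instead.

```python
# Minimal sketch: MFCC + ZCR features per clip, classified with an SVM.
import numpy as np
import librosa
from sklearn.svm import SVC

def extract_features(y, sr):
    """Per-clip feature vector: mean MFCCs plus mean zero-crossing rate."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, frames)
    zcr = librosa.feature.zero_crossing_rate(y)         # (1, frames)
    return np.concatenate([mfcc.mean(axis=1), zcr.mean(axis=1)])

# Synthetic stand-ins for labeled clips; real use: y, sr = librosa.load(path)
sr = 16000
rng = np.random.default_rng(0)
clips = [rng.normal(size=sr) for _ in range(8)]  # eight 1-second "clips"
labels = ["anger", "happiness", "sadness", "neutral"] * 2

X = np.stack([extract_features(y, sr) for y in clips])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:2]))
```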

A Research of Optimized Metadata Extraction and Classification in Audio (미디어에서의 오디오 메타데이터 최적화 추출 및 분류 방안에 대한 연구)

  • Yoon, Min-hee;Park, Hyo-gyeong;Moon, Il-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.147-149 / 2021
  • Recently, the media market has grown rapidly and user expectations have been rising. In this research, tags are extracted from media-derived audio and classified into specific categories using artificial intelligence. The categories are emotion types including joy, anger, sadness, love, hatred, and desire. We conduct the study in Jupyter Notebook, analyze the voice data using the librosa library, and build a neural network using Keras layer models.
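A minimal sketch of that setup, assuming librosa-style features and TensorFlow/Keras: a small dense network mapping per-clip feature vectors to the emotion categories. Feature values and labels are random placeholders, not the study's data.

```python
# Minimal sketch: Keras dense network over audio-derived feature vectors.
import numpy as np
from tensorflow import keras

num_features, num_classes = 40, 6  # e.g. joy, anger, sadness, love, hatred, desire
X = np.random.rand(200, num_features).astype("float32")  # stand-in for librosa features
y = np.random.randint(0, num_classes, size=200)

model = keras.Sequential([
    keras.layers.Input(shape=(num_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.predict(X[:1]).argmax())  # predicted emotion category index
```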

An EEG Study of Emotion Using the International Affective Picture System (국제정서사진체계(IAPS)를 사용하여 유발된 정서의 뇌파 연구)

  • 이임갑;김지은;이경화;손진훈
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1997.11a / pp.224-227 / 1997
  • The International Affective Picture System (IAPS), developed by Lang and colleagues [1], is a widely adopted tool in studies relating a variety of physiological indices to subjective emotions induced by standardized pictures whose subjective ratings are well established in the three dimensions of pleasure, arousal, and dominance. In the present study we investigated whether distinctive EEG characteristics for six discrete emotions can be discerned using 12 IAPS pictures that scored the highest subjective ratings for one of the six categorical emotions, i.e., happiness, sadness, fear, anger, disgust, and surprise (two slides per emotion). These pictures were presented as visual stimuli in random order to 38 right-handed college students (20-26 years old), with 30 s of exposure time and a 30 s inter-stimulus interval per picture, while EEG signals were recorded from F3, F4, O1, and O2 referenced to linked ears. The FFT technique was used to analyze the acquired EEG data. There were significant differences in relative power (RP) changes of the EEG bands, most prominent in theta, between positive and negative emotions, and partially also among negative emotions. This result is in agreement with previous studies [2, 3]. However, further study is required to decide whether the IAPS can be a useful tool for categorical approaches to emotion in addition to its traditional dimensional use.
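A minimal sketch, assuming NumPy/SciPy, of the FFT-based relative-power (RP) computation the study describes, here for the theta band on a synthetic one-channel signal. The sampling rate and band edges are assumptions, not values from the paper.

```python
# Minimal sketch: relative power (RP) of the theta band from one EEG channel.
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

fs = 256                                             # sampling rate in Hz (assumption)
eeg = np.random.default_rng(0).normal(size=30 * fs)  # 30 s of synthetic EEG

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)       # FFT-based power spectral density

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return trapezoid(psd[mask], freqs[mask])

theta_rp = band_power(4, 8) / band_power(0.5, 45)    # theta power / total power
print(f"theta RP: {theta_rp:.3f}")
```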

Design of Model to Recognize Emotional States in a Speech

  • Kim Yi-Gon;Bae Young-Chul
    • International Journal of Fuzzy Logic and Intelligent Systems / v.6 no.1 / pp.27-32 / 2006
  • Verbal communication is the most commonly used means of communication, and a spoken word carries a lot of information about speakers and their emotional states. In this paper we designed a model to recognize emotional states in speech, the first of two phases in developing a toy machine that recognizes emotional states in speech. We conducted an experiment to extract and analyze the emotional state of a speaker from speech. To analyze the signal output we used three characteristics of sound as vector inputs: frequency, intensity, and period of tones. We also made use of eight basic emotional parameters, surprise, anger, sadness, expectancy, acceptance, joy, hate, and fear, portrayed by five selected students. To facilitate the differentiation of the spectral features, we used wavelet transform analysis, and we applied ANFIS (Adaptive Neuro-Fuzzy Inference System) in designing the emotion recognition model. In our findings, the inference error was about 10%. The results of our experiment reveal that the model applied is about 85% effective and reliable.
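A minimal sketch, assuming PyWavelets, of the wavelet-transform feature step used before classification; the ANFIS stage itself is not shown, since no standard library implements it, and the signal is synthetic.

```python
# Minimal sketch: multi-level wavelet decomposition as a feature extractor.
import numpy as np
import pywt

signal = np.random.default_rng(0).normal(size=4096)  # stand-in for a speech frame

# Discrete wavelet decomposition into one approximation + four detail sub-bands
coeffs = pywt.wavedec(signal, wavelet="db4", level=4)

# Use the energy of each sub-band as a compact feature vector
features = np.array([np.sum(c ** 2) for c in coeffs])
print(features.round(2))
```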

Development of user activity type and recognition technology using LSTM (LSTM을 이용한 사용자 활동유형 및 인식기술 개발)

  • Kim, Young-kyun;Kim, Won-jong;Lee, Seok-won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.360-363 / 2018
  • Human activity is influenced by various factors, from individual physical features such as vertebral flexion and pelvic distortion to emotions such as joy, anger, and sadness. The nature of these behaviors changes over time, but behavioral characteristics do not change much in the short term; a person's activity data thus has time-series characteristics that change over time with a certain regularity for each action. In this study, we applied LSTM, a kind of recurrent neural network suited to time-series characteristics, to the task of recognizing activity type, and improved the recognition rate through measurement-time and parameter optimization of the components of the LSTM model.
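A minimal sketch, assuming TensorFlow/Keras, of an LSTM classifier over windowed time-series sensor data as described above; the window size, channel count, and activity labels are hypothetical placeholders.

```python
# Minimal sketch: LSTM over (windows, timesteps, channels) activity data.
import numpy as np
from tensorflow import keras

timesteps, channels, num_activities = 100, 3, 5
X = np.random.rand(64, timesteps, channels).astype("float32")  # synthetic sensor windows
y = np.random.randint(0, num_activities, size=64)

model = keras.Sequential([
    keras.layers.Input(shape=(timesteps, channels)),
    keras.layers.LSTM(32),  # summarizes each time-series window
    keras.layers.Dense(num_activities, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, verbose=0)
print(model.predict(X[:1]).argmax())  # predicted activity type index
```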
