• Title/Summary/Keyword: Sadness Emotion

Study on Heart Rate Variability and PSD Analysis of PPG Data for Emotion Recognition (감정 인식을 위한 PPG 데이터의 심박변이도 및 PSD 분석)

  • Choi, Jin-young;Kim, Hyung-shin
    • Journal of Digital Contents Society
    • /
    • v.19 no.1
    • /
    • pp.103-112
    • /
    • 2018
  • In this paper, we propose a method of recognizing emotions using a PPG sensor, which measures blood flow changes that accompany emotion. From the PPG signal, positive and negative emotions are distinguished in the frequency domain through PSD (Power Spectral Density) analysis. Based on James A. Russell's two-dimensional circumplex model, emotions are classified as joy, sadness, irritability, and calmness, and their association with the magnitude of energy in the frequency domain is examined. It is significant that this study measured the four emotions in the frequency domain, through video-viewing experiments, with the same PPG sensor used in wearable devices. A questionnaire collected recognition accuracy, each individual's level of immersion, emotional changes, and biofeedback for the videos. The proposed method is expected to enable various developments, such as commercial application services using PPG and mobile emotion-prediction services that merge PPG data with the context information already available on smartphones.
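The PSD step the abstract describes can be sketched briefly. The following is a minimal illustration, not the authors' implementation, of estimating band-limited energy from a PPG-like signal with Welch's method; the band edges, sampling rate, and synthetic signal are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import welch

def band_energy(ppg, fs, band, nperseg=1024):
    """Integrate PSD energy of a signal within a frequency band (Hz)."""
    freqs, psd = welch(ppg, fs=fs, nperseg=min(len(ppg), nperseg))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))

# Synthetic PPG-like signal: a 1.2 Hz pulse wave plus noise, sampled at 50 Hz
fs = 50
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(len(t))

lf = band_energy(ppg, fs, band=(0.04, 0.15))  # low-frequency energy
hf = band_energy(ppg, fs, band=(0.15, 0.40))  # high-frequency energy
print(lf, hf)
```

In a real pipeline the band energies (or their ratio) would be fed to a classifier that maps them onto the quadrants of the two-dimensional emotion model.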

Exploration of deep learning facial motions recognition technology in college students' mental health (딥러닝의 얼굴 정서 식별 기술 활용-대학생의 심리 건강을 중심으로)

  • Li, Bo;Cho, Kyung-Duk
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.3
    • /
    • pp.333-340
    • /
    • 2022
  • COVID-19 has made everyone anxious, and people need to keep their distance. It is necessary to conduct collective assessment and screening of college students' mental health at the start of every academic year. This study trains a multi-layer perceptron neural network model to identify facial emotions through deep learning. After training, real pictures and videos were input for face detection. Once the positions of faces in the samples were detected, emotions were classified, and the predicted emotional results were overlaid on the pictures. The results show an accuracy of 93.2% on the test set and 95.57% in practice. The recognition rates were 95% for anger, 97% for disgust, 96% for happiness, 96% for fear, 97% for sadness, 95% for surprise, and 93% for neutral; such efficient emotion recognition can provide objective data support for capturing negative emotions. A deep learning emotion recognition system can complement traditional psychological assessment, providing additional dimensions of psychological indicators for mental health.
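The classification stage, a multi-layer perceptron over detected-face features, can be illustrated generically. This is a sketch on synthetic stand-in features, not the paper's trained model; the feature dimensionality, hidden-layer size, and data are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Seven emotion classes, as in the abstract
EMOTIONS = ["anger", "disgust", "happiness", "fear", "sadness", "surprise", "neutral"]

rng = np.random.default_rng(0)
# Synthetic stand-in for face-region features (a real system would use pixels
# or landmarks from a face detector); 64 dimensions keeps the sketch fast
n, dim = 700, 64
y = rng.integers(0, len(EMOTIONS), size=n)
class_means = rng.normal(size=(len(EMOTIONS), dim))
X = class_means[y] + 0.3 * rng.normal(size=(n, dim))

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
clf.fit(X[:600], y[:600])
acc = clf.score(X[600:], y[600:])
print(f"held-out accuracy: {acc:.2f}")
```

The per-class recognition rates the abstract reports would correspond to evaluating `clf` separately on each class's held-out samples.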

Korean Emotional Speech and Facial Expression Database for Emotional Audio-Visual Speech Generation (대화 영상 생성을 위한 한국어 감정음성 및 얼굴 표정 데이터베이스)

  • Baek, Ji-Young;Kim, Sera;Lee, Seok-Pil
    • Journal of Internet Computing and Services
    • /
    • v.23 no.2
    • /
    • pp.71-77
    • /
    • 2022
  • In this paper, a database is collected for extending a speech synthesis model to one that synthesizes speech according to emotion and generates facial expressions. The database is divided into male and female data and consists of emotional speech and facial expressions. Two professional actors of different genders speak sentences in Korean. Sentences are divided into four emotions: happiness, sadness, anger, and neutrality. Each actor performs about 3,300 sentences per emotion. The 26,468 sentences collected by filming do not overlap and contain expressions matching the corresponding emotion. Since building a high-quality database is important for the performance of future research, the database is assessed on emotional category, intensity, and genuineness. To determine accuracy according to data modality, the database is divided into audio-video data, audio-only data, and video-only data.

Relationship between emotions and emoticons in adolescents in digital communication environment (디지털 커뮤니케이션 환경에서 청소년들의 감정과 이모티콘의 관계)

  • Kim, Yoon-Ji;Kang, Dongmug;Kim, Ju-Young;Kim, Jong-Eun
    • Health Communication
    • /
    • v.12 no.1
    • /
    • pp.51-72
    • /
    • 2017
  • Purpose: Adolescents use emoticons to express their emotions in online environments; hence, medical experts can understand adolescents' emotions through emoticons. The goal of this study was to investigate the relationship between various emotions and emoticons among Korean adolescents. Methods: A questionnaire survey was conducted between September 1 and 30, 2014, involving 3,272 students in elementary, middle, and high schools affiliated with the Department of Education of the metropolitan city of Busan. A total of 1,717 students responded. The participants consisted of 806 males (46.9%) and 911 females (53.1%): 557 elementary school students (32.4%), 617 middle school students (35.9%), and 543 high school students (31.6%). A social network analysis was conducted using NodeXL. Results: The frequency of emoticon use among adolescents runs, in order, joy, sadness, fear, surprise, anger, disgust, and depression. Elementary school females mainly use emoticons to express joy; middle school females use emoticons to express sadness, surprise, anger, disgust, and depression; and high school females use emoticons to express fear. Age- and gender-specific emoticon networks were visualized using the Harel-Koren fast multiscale layout algorithm. Emoticons commonly used by age and gender were expressed in the networks, and the visualizations show similar centrality results for the seven emotions. Conclusion: In the digital communication environment, emoticons could be used to gauge the emotions of adolescents in Korea.
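The centrality analysis the study ran in NodeXL can be approximated with any graph library. Below is a toy sketch in networkx on a hypothetical emotion-emoticon co-occurrence network; the edges are invented for illustration and are not the study's survey data.

```python
import networkx as nx

# Hypothetical emotion-emoticon co-occurrence edges (illustrative only,
# not the study's survey data)
edges = [
    ("joy", ":)"), ("joy", ":D"), ("joy", "^^"),
    ("sadness", ":("), ("sadness", "T_T"),
    ("fear", ":("), ("surprise", ":O"), ("anger", ">:("),
]

G = nx.Graph()
G.add_edges_from(edges)

# Degree centrality: which emotions/emoticons sit at the center of the network
centrality = nx.degree_centrality(G)
top = max(centrality, key=centrality.get)
print(top, round(centrality[top], 3))
```

The study's age- and gender-specific networks would correspond to building one such graph per subgroup and comparing the centrality rankings across them.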

Influencing Factors on the Emotional Expression in Weibo Hot News - Focusing on 'Restaurant Collapse in Linfen City, Shanxi Province' - (웨이보 인기뉴스에 관한 감정표현에 영향을 미치는 요인 - '중국 산시성 린펀시 반점 붕괴 사건'을 중심으로 -)

  • Lu, Zhiqin;Nam, Inyong
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.5
    • /
    • pp.105-117
    • /
    • 2021
  • This study examined the factors that influence emotional expression in comments on the hot news about the 'Restaurant Collapse in Linfen City, Shanxi Province' published on Sina Weibo. First, there were differences in emotional expression according to gender: women expressed stronger anger, disappointment, sadness, and condemnation than men. Second, the intensity of emotional expression of users in the eastern region was significantly higher than that of users in the central and western regions. Third, the greater a user's number of Weibo posts (the total number of posts in which the user commented and expressed emotion), the stronger the emotional expression. Fourth, unauthenticated users showed stronger expressions of disappointment and sadness than authenticated users. These results present implications for the factors influencing emotional expression on hot news. This study is meaningful in that the factors influencing emotional expression during online public opinion formation in China can be compared with those on Western social networks such as Twitter and Facebook, and in that a big data analysis method was used for online news analysis.

A fMRI Meta-analysis on Neuroimaging Studies of Basic Emotions (기본정서 뇌 영상 연구의 fMRI 메타분석)

  • Kim, Gwang-Su;Han, Mi-Ra;Bak, Byung-Gee
    • Science of Emotion and Sensibility
    • /
    • v.20 no.4
    • /
    • pp.15-30
    • /
    • 2017
  • The purpose of this study was to verify basic emotion theory based on emotion-related research using functional brain imaging technology. For this purpose, a meta-analysis of functional magnetic resonance imaging (fMRI) studies was performed. Six individual emotions (joy, happiness, fear, anger, disgust, and sadness) were selected. To collect fMRI data on individual emotions, we searched electronic databases such as Medline, PsycINFO, and PubMed for the past 10 years. fMRI experimental data on healthy subjects for the six emotions were collected, and only studies reported in the Talairach or MNI standard coordinate system were included. To eliminate the difference between the Talairach and MNI coordinate systems, we analyzed the fMRI data in the Talairach coordinate system. A meta-analysis was performed using the GingerALE 2.3 program, which adopts the activation likelihood estimation (ALE) technique. We confirmed that individual emotions are associated with consistent and distinguishable regional brain responses within the framework of basic emotion theory. The brain areas associated with each individual emotional reaction were substantially consistent with the results of existing review articles. Finally, the limitations of this study and some suggestions for future research are presented.

Analysis of Voice Color Similarity for the development of HMM Based Emotional Text to Speech Synthesis (HMM 기반 감정 음성 합성기 개발을 위한 감정 음성 데이터의 음색 유사도 분석)

  • Min, So-Yeon;Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.9
    • /
    • pp.5763-5768
    • /
    • 2014
  • Maintaining a consistent voice color is important when a single synthesizer must produce both a normal voice and various emotional voices. When a synthesizer is developed using recordings with too strongly expressed emotions, the voice color cannot be maintained, and each synthetic utterance can sound like the voice of a different speaker. In this paper, speech data were recorded and changes in voice color were analyzed to develop an emotional HMM-based speech synthesizer. To realize the synthesizer, voices were recorded and a database was built. The recording process is particularly important when realizing an emotional speech synthesizer: monitoring is needed because it is quite difficult to define an emotion and maintain it at a particular level. The realized synthesizer uses a normal voice and three emotional voices (happiness, sadness, anger), each recorded at two levels, high and low. To analyze the voice color of the normal and emotional voices, the average spectrum, measured as the accumulated spectrum of vowels, was used, and the F1 (first formant) calculated from the average spectrum was compared. The voice similarity of low-level emotional data was higher than that of high-level emotional data, and the proposed method allows recording to be monitored through changes in voice similarity.
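The average-spectrum comparison can be sketched as follows. This toy example is my own, not the authors' code: it accumulates vowel-frame spectra and uses cosine similarity as a stand-in for the paper's voice-color similarity measure, and the synthetic "vowel" signals and FFT parameters are assumptions.

```python
import numpy as np

def average_spectrum(frames, n_fft=512):
    """Accumulate and average magnitude spectra over a set of vowel frames."""
    window = np.hanning(len(frames[0]))
    return np.mean([np.abs(np.fft.rfft(f * window, n_fft)) for f in frames], axis=0)

def spectral_similarity(spec_a, spec_b):
    """Cosine similarity between averaged spectra (1.0 = identical shape)."""
    return float(spec_a @ spec_b / (np.linalg.norm(spec_a) * np.linalg.norm(spec_b)))

# Synthetic vowel-like frames: same fundamental (150 Hz), shifted formant peak
fs, n = 16000, 400
t = np.arange(n) / fs
normal = [np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 700 * t)
          for _ in range(20)]
emotional = [np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 900 * t)
             for _ in range(20)]

sim = spectral_similarity(average_spectrum(normal), average_spectrum(emotional))
print(f"voice color similarity: {sim:.3f}")
```

Monitoring a recording session would then amount to watching this similarity drop as the emotional voice drifts away from the speaker's normal voice color.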

Funology Body : Classified Application System Based on Funology and Philosophy of the Human Body (Funology Body : Funology와 '몸의 철학' 이론을 바탕으로 한 어플리케이션 분류 검색 체계 연구)

  • Kihl, Tae-Suk;Jang, Ju-No;Ju, Hyun-Sun;Kwon, Ji-Eun
    • Science of Emotion and Sensibility
    • /
    • v.13 no.4
    • /
    • pp.635-646
    • /
    • 2010
  • This article focuses on Funology and a new classified application system based on the idea that language and thought are formed by bodily experience; we call this system the Funology Body. The Funology Body is a classification and search system consisting of the body, the world (environment), and device tools. The body is sectioned into Brain, Eyes, Ears, Nose, Mouth, Hand, Torso, Feet, and Heart according to the parts of the human body. This allows intuitive, experience-based searching, since the classification connects each application to the concept of a body part. The Brain is sub-classified into Book, Account, Business, Memory, Education, Search, and Aphorism, implying applications related to thought. The Eyes take Video, Photography, and Broadcast for vision. The Ears are categorized as Music, Instrument, Audio, and Radio for hearing. The Nose gets Perfume and Smell for the olfactory sense. The Mouth is sectioned into Food, SNS, Chatting, Email, and Blog for eating and communication. The Hand sorts into Games, Kits, and Editing, for handling, creating, and playing. The Torso is grouped into Health, Medical, Dance, Sport, Fashion, and Test Yourself, related to protecting the core of the body. The Feet are classified into Travel, Transportation, Map, and Outdoor, for moving and the concept of expanding terrain. The Heart consists of Fear, Anger, Joy, Sadness, Acceptance, Disgust, Expectation, and Surprise, for human feelings. Beyond the body, the World takes News, Time, Weather, Map, Fortune, and Shop, and Device tools get Interface and Utilities. The Funology Body has the unique characteristic of giving intuitive, sensuous pleasure while flexibly reflecting users' attitudes and tastes in how applications are classified.


The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung;Ahn, Hyunchul;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.73-85
    • /
    • 2013
  • In today's information society, the importance of knowledge services that use information to create value grows day by day. With the development of IT, it is easy to collect and use information, and many companies in a variety of industries actively use customer information for marketing. Since the start of the 21st century, companies have actively used culture and the arts to manage their corporate image, closely linking marketing to their commercial interests. However, it is difficult for companies to attract or maintain consumers' interest through technology alone, so performing cultural activities has become a common tool of differentiation among firms. Many firms have turned customer experience into a new marketing strategy in order to respond effectively to competitive markets. Accordingly, the need for personalized services that provide new experiences, based on personal profile information containing individual characteristics, is rapidly emerging. Personalized service using a customer's individual profile information, such as language, symbols, behavior, and emotions, is very important today: through it, we can judge the interaction between people and content and maximize customer experience and satisfaction. Various related works provide customer-centered services, and emotion recognition research in particular has emerged recently. Existing studies performed emotion recognition mostly using bio-signals, and most are voice and face studies, where emotional changes are large. However, limitations of equipment and service environments make it difficult to predict people's emotions this way, so in this paper we develop an emotion prediction model based on a vision-based interface to overcome these limitations. Emotion recognition based on people's gestures and postures has been studied by several researchers.
This paper developed a model that recognizes people's emotional states through body gesture and posture using the difference image method, and found an optimized, validated model for predicting four kinds of emotions. The proposed model aims to automatically determine and predict four human emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in KOCCA's lobby, where suitable stimulus movies were shown to collect participants' body gestures and postures as their emotions changed. Body movements were then extracted using the difference image method, and the data were processed to build the proposed model with a neural network. Three time-frame sets were used (20 frames, 30 frames, and 40 frames), and the model with the best performance was adopted. Before building the three models, the entire set of 97 samples was divided into learning, test, and validation sets. The model was constructed as an artificial neural network: we used the back-propagation algorithm as the learning method, set the learning rate to 10% and the momentum rate to 10%, and used the sigmoid function as the transfer function. We designed a three-layer perceptron with one hidden layer and four output nodes. Based on the test set, learning was stopped at 50,000 iterations, after the minimum error had been reached, in order to explore the stopping point of learning. We finally computed each model's accuracy and found the best model for predicting each emotion. The results showed prediction accuracies of 100% for sadness and 96% for joy with the 20-frame model, and 88% for surprise and 98% for disgust with the 30-frame model. The findings are expected to provide an effective algorithm for personalized services in various industries such as advertising, exhibitions, and performances.
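The network described above (a three-layer perceptron with one hidden layer, four output nodes, sigmoid transfer function, and back-propagation with a 10% learning rate and 10% momentum) can be sketched in NumPy. The toy inputs and hidden-layer size are assumptions; the paper's actual features were difference-image movement measures.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, Y, hidden=8, lr=0.1, momentum=0.1, epochs=10000, seed=0):
    """Three-layer perceptron (one hidden layer) trained by back-propagation
    with a sigmoid transfer function, 10% learning rate, and 10% momentum."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])          # input + bias
    W1 = rng.normal(scale=0.5, size=(Xb.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden + 1, Y.shape[1]))
    vW1, vW2 = np.zeros_like(W1), np.zeros_like(W2)
    for _ in range(epochs):
        H = sigmoid(Xb @ W1)
        Hb = np.hstack([H, np.ones((H.shape[0], 1))])      # hidden + bias
        O = sigmoid(Hb @ W2)                               # 4 output nodes
        dO = (O - Y) * O * (1 - O)                         # output delta
        dH = (dO @ W2[:-1].T) * H * (1 - H)                # hidden delta
        vW2 = momentum * vW2 - lr * (Hb.T @ dO)            # momentum update
        vW1 = momentum * vW1 - lr * (Xb.T @ dH)
        W2 += vW2
        W1 += vW1
    return W1, W2

def predict(X, W1, W2):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    H = sigmoid(Xb @ W1)
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])
    return np.argmax(sigmoid(Hb @ W2), axis=1)

# Toy 2-D inputs standing in for difference-image movement features;
# one-hot targets for the four emotions (sadness, surprise, joy, disgust)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.eye(4)
W1, W2 = train_mlp(X, Y)
print(predict(X, W1, W2))
```

The paper's early-stopping scheme would correspond to monitoring the error on a held-out test set inside the training loop instead of running for a fixed number of epochs.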

Differentiation of children's five emotions with cardiovascular reactivity parameters (심혈관계 생리반응을 이용한 아동정서 구분)

  • Jang, Eun-Hye;Lee, Kyung-Hwa;Sohn, Sun-Ju;Park, Ji-Eun;Sohn, Jin-Hun
    • Science of Emotion and Sensibility
    • /
    • v.12 no.3
    • /
    • pp.317-324
    • /
    • 2009
  • The aim of this study was to determine whether cardiovascular reactivity parameters serve as good indicators for identifying differential emotions in children. The study focused on five emotions (happiness, sadness, anger, stress, and boredom); participants were presented with a combination of music, color, stories, and dolls to induce these emotions. During the experiment, cardiovascular reactivity in response to the conditioned stimuli was recorded with physiological parameters including HR, RSA, HRV, HF HRV, LF HRV, and FPV. After the cardiovascular responses were measured, participants rated the types and intensity of the emotions they had experienced during exposure to the stimuli. The psychological responses show that the four emotions other than stress were appropriately and effectively induced by the stimuli. The physiological responses suggest that all of the physiological indicators except RSA show significant differences among the five emotions. This indicates that children's emotions can be measured and differentiated by cardiovascular reactivity; in other words, emotion-specific responses can distinguish different emotions in children.
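Simple HRV features of the kind used to differentiate such emotions can be computed as below. This sketch uses time-domain measures (mean HR, SDNN, RMSSD) as illustrative stand-ins rather than the study's exact parameter set (RSA, HF/LF HRV, FPV), and the RR-interval series are synthetic.

```python
import numpy as np

def hrv_features(rr_ms):
    """Basic time-domain HRV features from RR intervals (milliseconds)."""
    rr = np.asarray(rr_ms, dtype=float)
    return {
        "HR": 60000.0 / rr.mean(),                           # mean heart rate (bpm)
        "SDNN": rr.std(ddof=1),                              # overall variability
        "RMSSD": float(np.sqrt(np.mean(np.diff(rr) ** 2))),  # beat-to-beat variability
    }

# Hypothetical RR series: a calm state (slow, regular) vs an aroused state
# (faster, more variable); both are synthetic, not the study's recordings
rng = np.random.default_rng(1)
calm = 850 + rng.normal(0, 20, size=120)
aroused = 650 + rng.normal(0, 45, size=120)

print(hrv_features(calm))
print(hrv_features(aroused))
```

Emotion differentiation would then test whether such feature vectors differ significantly across the induced emotional states.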
