• Title/Summary/Keyword: 슬픔감정 (sadness emotion)

The User Inclination Analysis Using Facebook Newsfeed (Facebook 뉴스피드를 이용한 사용자 성향 분석)

  • Jeong, Yoon-Sang;Kim, Kyung-rog;Moon, Nammee
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.1476-1478 / 2013
  • Recently, the number of users of SNS (Social Network Services) such as Facebook and Twitter has increased rapidly. As SNS have developed, people easily share their location and current emotions online anytime, anywhere. Accordingly, about 100 words expressing human emotions were classified into seven emotions (joy, interest, sadness, anger, surprise, boredom, pain) [1], and an emotion-expression analyzer module was designed to analyze them. Using the designed module, users' Facebook newsfeeds were analyzed to determine their inclinations.
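
A minimal sketch of the kind of dictionary-based tagging such a module could perform, assuming a keyword lexicon per category; the word lists, function names, and sample posts below are illustrative placeholders, not the paper's module:

```python
# Score newsfeed posts against a seven-category emotion word lexicon
# (joy, interest, sadness, anger, surprise, boredom, pain) and report
# the dominant category as the user's inclination. Toy lexicon only.
from collections import Counter

EMOTION_LEXICON = {
    "joy": ["기쁘다", "행복하다", "즐겁다"],
    "interest": ["궁금하다", "흥미롭다"],
    "sadness": ["슬프다", "우울하다"],
    "anger": ["화나다", "분하다"],
    "surprise": ["놀랍다"],
    "boredom": ["지루하다"],
    "pain": ["아프다", "괴롭다"],
}

def score_post(text: str) -> Counter:
    """Count how many lexicon words from each emotion category appear."""
    counts = Counter()
    for emotion, words in EMOTION_LEXICON.items():
        counts[emotion] += sum(text.count(w) for w in words)
    return counts

def user_inclination(posts: list[str]) -> str:
    """Aggregate per-post scores and return the dominant emotion category."""
    total = Counter()
    for post in posts:
        total += score_post(post)
    return total.most_common(1)[0][0] if total else "neutral"

print(user_inclination(["오늘 정말 행복하다", "영화가 너무 지루하다"]))
```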

A Study on the Development of Emotional Content through Natural Language Processing Deep Learning Model Emotion Analysis (자연어 처리 딥러닝 모델 감정분석을 통한 감성 콘텐츠 개발 연구)

  • Hyun-Soo Lee;Min-Ha Kim;Ji-won Seo;Jung-Yi Kim
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.687-692 / 2023
  • We analyze the accuracy of emotion analysis by a natural language processing deep learning model and propose its use for emotional content development. After outlining the GPT-3 model, about 6,000 pieces of dialogue data provided by Aihub were labeled with nine emotion categories: 'joy', 'sadness', 'fear', 'anger', 'disgust', 'surprise', 'interest', 'boredom', and 'pain'. Performance was evaluated using accuracy, precision, recall, and F1-score, the standard evaluation indices for natural language processing models. The emotion analysis reached an accuracy of over 91%; precision was low for 'fear' and 'pain'. Recall was low for negative emotions, and 'disgust' in particular produced errors due to a lack of data. Previous studies mainly used emotion analysis only for polarity analysis (positive, negative, neutral), which by its nature limited it to the feedback stage. We expand emotion analysis to nine categories and suggest using it in the development of emotional content from the planning stage onward. More accurate results are expected if follow-up research collects a wider variety of daily conversations.
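
The evaluation step described here (accuracy, precision, recall, F1 over nine labels) maps directly onto standard tooling; a hedged sketch with scikit-learn, using fabricated toy labels rather than the paper's Aihub data:

```python
# Per-class precision, recall, and F1 over nine emotion labels, plus
# overall accuracy. The true/predicted labels are toy placeholders.
from sklearn.metrics import accuracy_score, classification_report

LABELS = ["joy", "sadness", "fear", "anger", "disgust",
          "surprise", "interest", "boredom", "pain"]

y_true = ["joy", "sadness", "fear", "pain", "disgust", "anger"]
y_pred = ["joy", "sadness", "pain", "pain", "anger", "anger"]

print("accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, labels=LABELS, zero_division=0))
```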

Facial Expression Recognition using ICA-Factorial Representation Method (ICA-factorial 표현법을 이용한 얼굴감정인식)

  • Han, Su-Jeong;Kwak, Keun-Chang;Go, Hyoun-Joo;Kim, Sung-Suk;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.13 no.3 / pp.371-376 / 2003
  • In this paper, we propose a method for recognizing facial expressions using the ICA (Independent Component Analysis)-factorial representation method. Facial expression recognition consists of two stages. First, feature extraction transforms the high-dimensional face space into a low-dimensional feature space using PCA (Principal Component Analysis), and the feature vectors are then extracted with the ICA-factorial representation method. The second, recognition stage uses a KNN (K-Nearest Neighbor) algorithm based on the Euclidean distance measure. We constructed a facial expression database for six basic expressions (happiness, sadness, anger, surprise, fear, dislike) and obtained better performance than previous works.
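
The two-stage pipeline (PCA reduction, ICA-factorial features, Euclidean-distance KNN) can be sketched with scikit-learn; the component counts and the random stand-in data below are assumptions for illustration, not the paper's settings:

```python
# PCA -> FastICA -> KNN pipeline in the spirit of the abstract.
# Random data stands in for the face-image database.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64 * 64))   # 120 fake face images, 64x64 pixels
y = rng.integers(0, 6, size=120)      # six expression labels

model = make_pipeline(
    PCA(n_components=40),                       # face space -> low-dim space
    FastICA(n_components=20, random_state=0),   # ICA-factorial features
    KNeighborsClassifier(n_neighbors=3, metric="euclidean"),
)
model.fit(X[:100], y[:100])
print("toy accuracy:", model.score(X[100:], y[100:]))
```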

Analysis of children's Reaction in Facial Expression of Emotion (얼굴표정에서 나타나는 감정표현에 대한 어린이의 반응분석)

  • Yoo, Dong-Kwan
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.70-80 / 2013
  • The purpose of this study is to serve as basic material for research on facial expressions, by analyzing children's visual recognition of facial expressions of emotion and surveying boys' and girls' verbal reactions to each expression. The subjects were 108 children aged 6-8 (55 boys, 53 girls) who could understand the research tool; data were collected in two rounds of responses through individual interviews and self-administered questionnaires. The research tool classified expressions into six types (joy, sadness, anger, surprise, disgust, fear) designed to elicit specific and accurate responses. In visual recognition, both boys and girls responded frequently to the facial expressions of joy, sadness, anger, and surprise, and infrequently to fear and disgust. In verbal reactions, heuristic responses, which explore or respond to impressive parts of the facial appearance, were frequent for all six expressions, while imaginative responses that created new stories from the expression appeared for surprise, disgust, and fear.

Tonal Characteristics Based on Intonation Pattern of the Korean Emotion Words (감정단어 발화 시 억양 패턴을 반영한 멜로디 특성)

  • Yi, Soo Yon;Oh, Jeahyuk;Chong, Hyun Ju
    • Journal of Music and Human Behavior / v.13 no.2 / pp.67-83 / 2016
  • This study investigated the tonal characteristics of Korean emotion words by analyzing the pitch patterns transformed from word utterances. Participants were 30 women, ages 19-23. Each participant was instructed to talk about her emotional experiences using 4-syllable target words. A total of 180 utterances were analyzed in terms of the frequency of each syllable using Praat. The data were transformed into meantones based on the semitone scale. When emotion words were used in the middle of a sentence, the pitch pattern was transformed to A3-A3-G3-G3 for '즐거워서(joyful)', C4-D4-B3-A3 for '행복해서(happy)', G3-A3-G3-G3 for '억울해서(resentful)', A3-A3-G3-A3 for '불안해서(anxious)', and C4-C4-A3-G3 for '침울해서(frustrated)'. When the emotion words were used at the end of a sentence, the pitch pattern was transformed to G4-G4-F4-F4 for '즐거워요(joyful)', D4-D4-A3-G3 for '행복해요(happy)', G3-G3-G3-A3 and F3-G3-E3-D3 for '억울해요(resentful)', A3-G3-F3-F3 for '불안해요(anxious)', and A3-A3-F3-F3 for '침울해요(frustrated)'. These results indicate that pitch patterns differ depending on the conveyed emotion and the position of the word in a sentence. This study presents baseline data on the tonal characteristics of emotion words, suggesting how pitch patterns could be utilized when creating a melody during songwriting for emotional expression.
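
The underlying frequency-to-semitone mapping (equal temperament, A4 = 440 Hz) is standard; a small sketch, with example per-syllable pitches rather than the study's measurements:

```python
# Map a syllable's mean pitch in Hz to its nearest equal-tempered note
# name, twelve semitones per octave. Input frequencies are examples.
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def hz_to_note(freq: float) -> str:
    """Return the nearest semitone note name for a frequency, e.g. 'A3'."""
    semitones_from_a4 = round(12 * math.log2(freq / 440.0))
    midi = 69 + semitones_from_a4          # MIDI note number, A4 = 69
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

# Example: a four-syllable utterance with per-syllable mean pitches.
for f in [220.0, 220.0, 196.0, 196.0]:     # maps to A3 A3 G3 G3
    print(hz_to_note(f), end=" ")
```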

An acoustic study of feeling information extracting method (음성을 이용한 감정 정보 추출 방법)

  • Lee, Yeon-Soo;Park, Young-B.
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.1 / pp.51-55 / 2010
  • Telemarketing services are provided through voice media in places such as modern call centers. Call centers try to measure their service quality, and one measuring method is to extract the speaker's feeling information from the voice. In this study, we propose analyzing the speaker's voice to extract this feeling information. For this purpose, a person's feeling is categorized into four states, joy, sorrow, excitement, and normality, by analyzing several signal parameters of the voice. Relative to the normal condition, an excited or angry state can be a major factor in service quality. In this paper, we propose selecting conversations with problems by extracting the speaker's feeling information based on the pitches and amplitudes of the voice.
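
A hedged sketch of the proposed feature-extraction step, pulling pitch and amplitude contours from a recording; the file name, pitch range, and flagging thresholds are placeholders, not the authors' values:

```python
# Extract a pitch contour and an RMS amplitude contour from a call
# recording, then apply a toy rule flagging possibly excited/angry calls.
import librosa
import numpy as np

y, sr = librosa.load("call_sample.wav", sr=16000)  # hypothetical recording

f0, voiced, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)  # pitch (Hz)
rms = librosa.feature.rms(y=y)[0]                          # amplitude

mean_pitch = np.nanmean(f0)    # unvoiced frames are NaN in pyin output
mean_rms = rms.mean()
# Toy flag: unusually high pitch AND energy may indicate excited speech.
if mean_pitch > 250 and mean_rms > 0.1:
    print("flag conversation for review")
```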

A Study on Emotion Classification using 4-Channel EEG Signals (4채널 뇌파 신호를 이용한 감정 분류에 관한 연구)

  • Kim, Dong-Jun;Lee, Hyun-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.2 / pp.23-28 / 2009
  • This study describes an emotion classification method using two different feature parameters from four-channel EEG signals. One parameter is the linear prediction coefficients based on AR modeling; the other is the cross-correlation coefficients over the θ, α, and β frequency bands of the FFT spectra. Using the linear prediction coefficients and the cross-correlation coefficients, an emotion classification test for four emotions (anger, sadness, joy, relaxation) is performed with an artificial neural network. The results showed that the linear prediction coefficients produced better emotion classification than the cross-correlation coefficients of the FFT spectra.
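
Both feature types can be sketched on a synthetic single-channel signal; the sampling rate, AR order, and band edges below are common defaults assumed for illustration, not the paper's settings:

```python
# Two feature types per the abstract: AR-model linear prediction
# coefficients, and correlations between theta/alpha/beta band spectra.
import numpy as np
import librosa

fs = 256                                   # assumed EEG sampling rate (Hz)
t = np.arange(fs * 4) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

lpc = librosa.lpc(x, order=8)              # AR-model LPC features
print("LPC coefficients:", np.round(lpc, 3))

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, 1 / fs)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
power = {b: spec[(freqs >= lo) & (freqs < hi)] for b, (lo, hi) in bands.items()}

# Correlation between band spectra, truncated to a common length.
n = min(len(v) for v in power.values())
for a, b in [("theta", "alpha"), ("alpha", "beta"), ("theta", "beta")]:
    r = np.corrcoef(power[a][:n], power[b][:n])[0, 1]
    print(f"{a}-{b} correlation: {r:.3f}")
```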

Emotion Coding of Sijo "The Light of the Sun in June" by Lee Jin-moon (이진문의 시조 「유월 쬐는 볕」의 감정 코딩)

  • Park, Inkwa
    • The Journal of the Convergence on Culture Technology / v.5 no.3 / pp.203-207 / 2019
  • The researcher is working on extracting emotion codes from Sijo poems. The purpose of this study is thus to explore the potential of literary therapy through the emotion coding of Sijo. Here we study Lee Jin-moon's Sijo "The Light of the Sun in June." In this Sijo, a code of joy is generated in the first line, a code of joy in the second line, and a code of sadness in the last line. These emotion codes can be combined in different ways in the Emotion Codon. This combination implies that the human body can be treated through literary emotion. Continuing this research is expected to suggest a better way of life.

How to Express Emotion: Role of Prosody and Voice Quality Parameters (감정 표현 방법: 운율과 음질의 역할)

  • Lee, Sang-Min;Lee, Ho-Joon
    • Journal of the Korea Society of Computer and Information / v.19 no.11 / pp.159-166 / 2014
  • In this paper, we examine the role of emotional acoustic cues, including both prosody and voice-quality parameters, in modifying a word's sense. For the extraction of prosody and voice-quality parameters, we used 60 pieces of speech data spoken by six speakers in five different emotional states. We analyzed eight emotional acoustic cues and used a discriminant analysis technique to find the dominant sequence of acoustic cues. We found that anger is closely related to intensity level and the bandwidth of the second formant; joy is related to the positions of the second and third formant values and intensity level; sadness is strongly related only to prosody cues such as intensity level and pitch level; and fear is related to pitch level and the second formant value with its bandwidth. These findings can serve as a guideline for fine-tuning an emotional spoken-language generation system, because the distinct sequences of acoustic cues reveal the subtle characteristics of each emotional state.
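
The discriminant-analysis step maps onto standard LDA; a sketch on random stand-in data (60 utterances by 8 cues, five states), since the paper's cue matrix is not reproduced here:

```python
# Fit LDA on an utterance-by-cue matrix and inspect which acoustic cues
# dominate each discriminant axis. The data are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))          # 60 utterances x 8 acoustic cues
y = rng.integers(0, 5, size=60)       # five emotional states

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
# Larger coefficient magnitudes suggest more discriminative cues per state.
print(np.round(lda.coef_, 2))
```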

Smart Emotion Management System based on multi-biosignal Analysis using Artificial Intelligence (인공지능을 활용한 다중 생체신호 분석 기반 스마트 감정 관리 시스템)

  • Noh, Ayoung;Kim, Youngjoon;Kim, Hyeong-Su;Kim, Won-Tae
    • Journal of IKEEE / v.21 no.4 / pp.397-403 / 2017
  • In modern society, stress gives rise to psychological illness and impulsive crime. To reduce stress, existing treatments have relied on repeated counseling visits to assess the psychological state and prescribe medication or psychotherapy. Although face-to-face counseling is effective, it takes considerable time to assess the patient's state, and continuous management is difficult depending on the individual's situation, which limits treatment efficiency. In this paper, we propose an artificial-intelligence emotion management system that monitors the user's emotions in real time and guides them toward a stable state. The system measures multiple bio-signals with PPG and GSR sensors, preprocesses the data into appropriate forms, and classifies four typical emotional states (pleasure, relaxation, sadness, horror) with an SVM algorithm. Experiments verify that when the classification result indicates a negative state such as sadness or horror, the real-time emotion management service guides the user's emotion to a stable state.
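
A hedged sketch of the classification stage this implies, with synthetic stand-in features; the feature set, kernel, and negative-state trigger below are assumptions for illustration, not the system's actual configuration:

```python
# SVM over windowed PPG/GSR features, mapping to four emotion states and
# triggering the "emotion management" path on negative predictions.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

STATES = ["pleasure", "relaxation", "sadness", "horror"]
NEGATIVE = {"sadness", "horror"}

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))   # e.g. heart rate, HRV, GSR level/slope ...
y = rng.integers(0, 4, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

state = STATES[int(clf.predict(rng.normal(size=(1, 6)))[0])]
if state in NEGATIVE:           # negative state -> start the intervention
    print(f"detected {state}: start stabilizing intervention")
```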