• Title/Summary/Keyword: Sadness Emotion (슬픔감정)

Emotion-based Gesture Stylization For Animated SMS (모바일 SMS용 캐릭터 애니메이션을 위한 감정 기반 제스처 스타일화)

  • Byun, Hae-Won; Lee, Jung-Suk
    • Journal of Korea Multimedia Society / v.13 no.5 / pp.802-816 / 2010
  • Generating gestures from new text input is an important problem in computer games and virtual reality. Recently, there has been increasing interest in gesture stylization that imitates the gestures of celebrities such as announcers. However, no attempt has so far been made to stylize gestures using emotions such as happiness and sadness, and previous research has not focused on real-time algorithms. In this paper, we present a system that automatically generates gesture animation from SMS text and stylizes the gestures according to emotion. A key feature of this system is a real-time algorithm for combining gestures with emotion. Because the system's platform is a mobile phone, we distribute the workload between the server and the client, so the system guarantees real-time performance of 15 or more frames per second. First, we extract words that express feelings, together with their corresponding gestures, from Disney video and model the gestures statistically. We then apply the theory of Laban Movement Analysis to combine gesture and emotion. To evaluate the system, we analyze user survey responses.
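
To make the described pipeline concrete, here is a minimal Python sketch of an emotion-keyword-to-Laban-effort mapping in the spirit of the abstract; the keyword lists, effort values, and gesture parameters are illustrative assumptions, not the authors' statistical model.

```python
# Hypothetical sketch: map emotion words found in an SMS message to
# Laban-style effort parameters that modulate a base gesture.
# Keyword lists and parameter values are illustrative assumptions.

EMOTION_KEYWORDS = {
    "happiness": {"great", "love", "yay"},
    "sadness": {"sorry", "miss", "cry"},
}

# Laban Movement Analysis effort qualities, scaled to [0, 1].
EFFORT_PROFILES = {
    "happiness": {"weight": 0.8, "time": 0.9, "space": 0.7},  # strong, quick, indirect
    "sadness":   {"weight": 0.2, "time": 0.2, "space": 0.4},  # light, sustained, direct
    "neutral":   {"weight": 0.5, "time": 0.5, "space": 0.5},
}

def detect_emotion(sms_text: str) -> str:
    """Return the first emotion whose keyword appears in the message."""
    words = set(sms_text.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def stylize_gesture(base_speed: float, sms_text: str) -> dict:
    """Scale a base gesture's playback parameters by the detected emotion."""
    effort = EFFORT_PROFILES[detect_emotion(sms_text)]
    return {
        "speed": base_speed * (0.5 + effort["time"]),  # quicker for happy
        "amplitude": effort["weight"],                 # larger for strong effort
        "path_curvature": effort["space"],             # more curved if indirect
    }

print(stylize_gesture(1.0, "I miss you, sorry about today"))
```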

Analysis of Gait Characteristics of Walking in Various Emotion Status (다양한 감정 상태에서의 보행 특징 분석)

  • Dang, Van Chien; Tran, Trung Tin; Kim, Jong-Wook
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.5 / pp.477-481 / 2014
  • Humans have various types of emotions that affect thinking, judgment, activity, and the like at any given moment. Walking in particular is affected by emotion: a person's emotional state can easily be inferred from his or her walking style. Current research on biped walking with humanoid robots focuses mainly on stable walking regardless of ground conditions. For effective human-robot interaction, however, the walking pattern should change depending on the robot's emotional state. This paper analyzes and compares gait data, acquired with a gait analysis system, for men and women in four representative emotional states, i.e., joy, sorrow, ease, and anger. The data and analysis results provided in this paper can serve as a reference for emotional biped walking of a humanoid robot.
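
As a rough illustration of the kind of comparison reported here, the following sketch groups hypothetical gait trials by emotion and summarizes two common gait features; the feature names and values are invented for the example, not the authors' dataset.

```python
# Hypothetical sketch: group gait measurements by emotion and compare
# summary statistics. Sample values are made up for illustration.
import statistics
from collections import defaultdict

# (emotion, stride_length_m, cadence_steps_per_min)
trials = [
    ("joy", 0.78, 118), ("joy", 0.81, 122),
    ("sorrow", 0.62, 96), ("sorrow", 0.59, 92),
    ("ease", 0.72, 108), ("ease", 0.70, 110),
    ("anger", 0.84, 126), ("anger", 0.86, 130),
]

by_emotion = defaultdict(list)
for emotion, stride, cadence in trials:
    by_emotion[emotion].append((stride, cadence))

for emotion, rows in by_emotion.items():
    strides = [r[0] for r in rows]
    cadences = [r[1] for r in rows]
    print(f"{emotion:>6}: mean stride {statistics.mean(strides):.2f} m, "
          f"mean cadence {statistics.mean(cadences):.0f} steps/min")
```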

Face Emotion Recognition by Fusion Model based on Static and Dynamic Image (정지영상과 동영상의 융합모델에 의한 얼굴 감정인식)

  • Lee Dae-Jong; Lee Kyong-Ah; Go Hyoun-Joo; Chun Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.573-580 / 2005
  • In this paper, we propose an emotion recognition method that uses static and dynamic facial images to support the effective design of human interfaces. The proposed method is built on HMMs (Hidden Markov Models), PCA (Principal Component Analysis), and the wavelet transform. The facial database covers six basic human emotions: happiness, sadness, anger, surprise, fear, and dislike, which are known to be common across nations and cultures. Emotion recognition in static images is performed with the discrete wavelet transform, with feature vectors extracted by PCA. Emotion recognition in dynamic images likewise uses the wavelet transform and PCA, and the resulting features are modeled with HMMs. Finally, we obtain better performance by fusing the recognition results for the static and the dynamic images.
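
The fusion step can be illustrated with a small sketch: two normalized per-emotion score vectors, one from a static-image recognizer and one from a dynamic recognizer, combined by a weighted sum. The scores and the equal weighting are assumptions for the example, not the paper's trained models.

```python
# Minimal late-fusion sketch: combine per-emotion scores from a
# static-image recognizer and a dynamic (video) recognizer.
import numpy as np

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "dislike"]

static_scores = np.array([0.10, 0.55, 0.10, 0.10, 0.10, 0.05])   # e.g. wavelet+PCA
dynamic_scores = np.array([0.15, 0.40, 0.05, 0.25, 0.10, 0.05])  # e.g. HMM likelihoods

def fuse(p_static, p_dynamic, w=0.5):
    """Weighted-sum fusion of two normalized score vectors."""
    combined = w * p_static + (1 - w) * p_dynamic
    return combined / combined.sum()

fused = fuse(static_scores, dynamic_scores)
print("fused decision:", EMOTIONS[int(np.argmax(fused))])  # -> sadness
```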

Measurement of Emotional Transition Using Physiological Signals of Audiences (관객의 생체신호 분석을 통한 감성 변화)

  • Kim, Wan-Suk; Ham, Jun-Seok; Sohn, Choong-Yeon; Yun, Jae-Sun; Lim, Chan; Ko, Il-Ju
    • The Journal of the Korea Contents Association / v.10 no.8 / pp.168-176 / 2010
  • Audiences who watch visual media attentively experience many emotional transitions according to the characteristics of the media. The variety of audience emotional states, such as enjoyment, sadness, and surprise, is often organized using James Russell's 'circumplex model of affect' from psychology. Among these emotions, the 'uncanny' described by Sigmund Freud sits sharply in the cracks between clearly defined emotional concepts. The uncanny phenomenon here is an emotional state in which an audience watching visual media generally regarded as immoral shifts from unpleasant to pleasant; because this is a positive response to a social taboo, it needs to be examined with a clear scientific analysis. This study therefore reviews Russell's circumplex model of affect and the uncanny phenomenon, establishes a hypothesis about the uncanny state of audiences watching visual media, and analyzes the results of a physiological-signal experiment based on ECG (electrocardiogram) and GSR (galvanic skin response) signals in terms of distribution, distance, and movement over time within the circumplex model of affect.
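
A minimal sketch of the distance/trajectory analysis in the circumplex model, assuming ECG/GSR features have already been reduced to valence and arousal values in [-1, 1]; that reduction and the sample values are placeholders, not the study's procedure.

```python
# Illustrative sketch: track a viewer's position in Russell's circumplex
# (valence-arousal) plane over time. Sample values are placeholders.
import math

# (time_s, valence in [-1, 1], arousal in [-1, 1]) derived from ECG/GSR
samples = [(0, -0.6, 0.2), (30, -0.4, 0.5), (60, 0.1, 0.6), (90, 0.5, 0.4)]

# Distance traveled in the plane: the size of the emotional transition.
path_length = sum(
    math.hypot(v2 - v1, a2 - a1)
    for (_, v1, a1), (_, v2, a2) in zip(samples, samples[1:])
)
start, end = samples[0], samples[-1]
print(f"path length: {path_length:.2f}")
print(f"net shift: valence {end[1] - start[1]:+.1f}, arousal {end[2] - start[2]:+.1f}")
# A net shift from negative to positive valence is the unpleasant-to-
# pleasant ('uncanny'-style) transition the study looks for.
```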

Analyzing facial expression of a learner in e-Learning system (e-Learning에서 나타날 수 있는 학습자의 얼굴 표정 분석)

  • Park, Jung-Hyun; Jeong, Sang-Mok; Lee, Wan-Bok; Song, Ki-Sang
    • Proceedings of the Korea Contents Association Conference / 2006.05a / pp.160-163 / 2006
  • If an instruction system understood a learner's interest and engagement in real time, it could offer engaging material when the learner grows tired of studying, and it could work as an adaptive tutoring system that helps the learner with content that is difficult to understand. Current facial expression recognition mainly deals with adults' expressions of anger, hatred, fear, sadness, surprise, and gladness. These everyday expressions, however, may not match the expressions a learner actually shows in e-Learning. To recognize a learner's feelings, researchers first need to study the facial expressions of learners in e-Learning, collecting as many expression pictures as possible and studying the meaning of each expression. As a preliminary study, this work analyzes the feelings of learners in e-Learning and the facial expressions associated with those feelings in order to establish a facial expression database.

A Study of Emotional Dimension for Mixed Feelings (복합적 감정(mixed feelings)에 대한 감정차원 연구)

  • Han, Eui-Hwan; Cha, Hyung-Tai
    • Science of Emotion and Sensibility / v.16 no.4 / pp.469-480 / 2013
  • In this paper, we propose a new method for reducing variance and expressing mixed feelings in Russell's emotional dimensions (the circumplex model). The circumplex model places the mean and variance of emotions (joy, sadness, happiness, enjoyment, etc.) in the PAD (Pleasure, Arousal, Dominance) dimensions using a self-report instrument (SAM: Self-Assessment Manikin). Other researchers, however, have consistently pointed out two problems with Russell's model. First, the data (emotional words) gathered by Russell's method have too large a variance, which makes it difficult to isolate valid values. Second, Russell's model cannot properly represent mixed feelings because of a structural limitation: it has a single Pleasure dimension. To address these problems, we change the survey method so as to reduce the variance, and we then conduct a survey designed to induce mixed feelings in order to probe the positive/negative (Pleasure) component of emotion and confirm that Russell's model can be used to express mixed feelings. With this method we obtain data of high reliability and accuracy, and Russell's model can be applied in many other fields such as bio-signals, mixed feelings, and realistic broadcasting.
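
A toy sketch of the variance problem the paper addresses: SAM pleasure ratings (1-9) summarized per emotion word. The ratings are invented; the point is only that a word evoking mixed feelings produces a split, high-variance spread that a single Pleasure value represents poorly.

```python
# Hypothetical SAM pleasure ratings (1-9 scale) per emotion word.
import statistics

sam_pleasure = {
    "joy": [8, 9, 7, 8, 9, 8],
    "sad": [2, 1, 3, 2, 2, 1],
    # A word evoking mixed feelings tends to produce a wide, split spread:
    "bittersweet": [2, 8, 3, 7, 2, 8],
}

for word, ratings in sam_pleasure.items():
    print(f"{word:>11}: mean {statistics.mean(ratings):.1f}, "
          f"stdev {statistics.stdev(ratings):.1f}")
# The large stdev for 'bittersweet' illustrates why a single Pleasure
# dimension struggles to represent mixed feelings.
```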

Analysis of Voice Color Similarity for the development of HMM Based Emotional Text to Speech Synthesis (HMM 기반 감정 음성 합성기 개발을 위한 감정 음성 데이터의 음색 유사도 분석)

  • Min, So-Yeon; Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.9 / pp.5763-5768 / 2014
  • A single synthesizer expresses emotion by combining a normal voice with various emotional voices, so maintaining a consistent voice color across them is important. When a synthesizer is developed from recordings with overly strong emotional expression, the voice color cannot be maintained, and each synthetic utterance can sound like a different speaker. In this paper, speech data were recorded and the change in voice color was analyzed in order to develop an emotional HMM-based speech synthesizer. Building a synthesizer requires recording a voice and constructing a database, and the recording process is especially important for an emotional speech synthesizer: monitoring is needed because it is quite difficult to define an emotion and keep its expression at a particular level. The synthesizer we built uses a normal voice and three emotional voices (happiness, sadness, anger), with each emotional voice recorded at two levels, high and low. To analyze the voice color of the normal and emotional voices, we use the average spectrum, i.e., the accumulated spectrum measured over vowels, and compare the F1 (first formant) calculated from it. The voice similarity of the low-level emotional data was higher than that of the high-level emotional data, and the proposed method makes it possible to monitor recordings through changes in voice similarity.
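
A rough sketch of an F1 comparison between a neutral and an emotional recording. It uses LPC-based formant estimation as a common stand-in for the paper's averaged-spectrum measurement; the file names are placeholders.

```python
# Rough F1 comparison sketch using LPC root-finding, a standard formant
# estimation technique. File names are placeholders.
import numpy as np
import librosa

def estimate_f1(path, sr=16000, lpc_order=12):
    """Estimate the first formant of a vowel recording via LPC roots."""
    y, sr = librosa.load(path, sr=sr)
    a = librosa.lpc(y, order=lpc_order)               # LPC polynomial coefficients
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    freqs = [f for f in freqs if f > 90]              # drop near-DC artifacts
    return freqs[0]                                   # lowest resonance ~ F1

f1_neutral = estimate_f1("neutral_vowel.wav")         # placeholder paths
f1_sad_low = estimate_f1("sad_low_vowel.wav")
print(f"F1 shift vs. neutral: {abs(f1_sad_low - f1_neutral):.0f} Hz")
# Smaller F1 shifts suggest the emotional voice keeps the speaker's
# voice color, which is what the paper monitors.
```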

Sociality of Emotions through Communition of Drama (드라마의 소통을 통해 본 감정의 사회성)

  • Paik, Hoon-Kie
    • Journal of Korea Entertainment Industry Association / v.13 no.7 / pp.25-38 / 2019
  • For many years, emotions were regarded as dangerous and as obstacles to reason and rationality. Even in the history of dramatic genres based on human behavior, emotions are often portrayed as errors or flaws that lead to a character's collapse, even though they are presented as powerful motives for the character's actions. Developments in cognitive science have since revealed how important emotion is for human thinking, judgment, and action, and how closely reason and emotion are intertwined. This study examines the emotions dealt with in dramatic genres on the basis of the sociality of emotions, which also play a crucial role in the audience's change and judgment. Taking a movie as its text, the study examines the sociality of emotions and the social aspect of individual emotions. Just as horror movies attempt to communicate by using and expanding feelings of fear, the movie uses the emotions of sadness and loss and tries to expand and share them. Finally, the study looks into the mechanism by which the emotions of characters in a drama are transmitted to the audience in the form of empathy.

Quantifying and Analyzing Vocal Emotion of COVID-19 News Speech Across Broadcasters in South Korea and the United States Based on CNN (한국과 미국 방송사의 코로나19 뉴스에 대해 CNN 기반 정량적 음성 감정 양상 비교 분석)

  • Nam, Youngja; Chae, SunGeu
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.2 / pp.306-312 / 2022
  • During the unprecedented COVID-19 outbreak, the public's information needs created an environment in which people overwhelmingly consumed information about the disease. Given that news media affect the public's emotional well-being, the pandemic highlights the importance of paying particular attention to how news stories frame their coverage. In this study, the vocal emotion of COVID-19 news speech from mainstream broadcasters in South Korea and the United States (US) was analyzed using convolutional neural networks. The results showed that neutrality was detected across broadcasters; however, emotions such as sadness and anger were also detected in the Korean broadcasters, whereas they were not detected in the US broadcasters. This is the first quantitative vocal emotion analysis of COVID-19 news speech. Overall, our findings provide new insight into news emotion analysis and have broad implications for a better understanding of the COVID-19 pandemic.
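
The per-broadcaster comparison step can be sketched as a tally of CNN-predicted emotion labels; the predictions below are invented placeholders, not the study's data.

```python
# Illustrative sketch: tally predicted emotion labels per broadcaster and
# compare the resulting distributions. Labels are invented placeholders.
from collections import Counter

predictions = {
    "broadcaster_KR": ["neutral", "sadness", "neutral", "anger", "sadness"],
    "broadcaster_US": ["neutral", "neutral", "neutral", "neutral", "happiness"],
}

for station, labels in predictions.items():
    counts = Counter(labels)
    total = len(labels)
    dist = {emo: f"{n / total:.0%}" for emo, n in counts.most_common()}
    print(station, dist)
```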

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son; Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique for analyzing a speaker's voice patterns, including vibration, intensity, and tone, to determine the speaker's emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing research has attained impressive results by using acted speech from skilled actors recorded in controlled environments for various scenarios. There is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expression, and for this reason spontaneous speech emotion recognition remains a challenging task. This paper aims to perform emotion recognition on spontaneous speech data and improve its performance. To this end, we implement deep learning-based speech emotion recognition with the VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluation is performed on the Korean spontaneous emotional speech database from AI-Hub, which consists of 7 emotions, i.e., joy, love, anger, fear, sadness, surprise, and neutral. Using the time-frequency 2-dimensional spectrogram, we achieved average accuracies of 83.5% for adults and 73.0% for young people. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying spontaneous emotional expression in speech.
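
A minimal sketch of the described pipeline, assuming a log-mel spectrogram rendered as a 3-channel image and a VGG16 with a 7-class head; the mel settings and preprocessing are assumptions for the sketch, and the random waveform stands in for real speech.

```python
# Sketch: 1-D waveform -> 2-D log-mel spectrogram 'image' -> VGG classifier.
import numpy as np
import librosa
import torch
import torchvision

EMOTIONS = ["joy", "love", "anger", "fear", "sadness", "surprise", "neutral"]

def waveform_to_input(y, sr):
    """Convert 1-D audio into a 3-channel 224x224 tensor for VGG."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    logmel = librosa.power_to_db(mel)
    logmel = (logmel - logmel.min()) / (logmel.max() - logmel.min() + 1e-8)
    img = torch.tensor(logmel, dtype=torch.float32)
    img = img.unsqueeze(0).repeat(3, 1, 1)            # replicate as RGB channels
    return torch.nn.functional.interpolate(
        img.unsqueeze(0), size=(224, 224), mode="bilinear"
    )

model = torchvision.models.vgg16(weights=None)
model.classifier[6] = torch.nn.Linear(4096, len(EMOTIONS))  # 7-way head

y = np.random.randn(16000 * 3)                        # stand-in for real speech
logits = model(waveform_to_input(y, sr=16000))
print("predicted:", EMOTIONS[int(logits.argmax())])
```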