• Title/Summary/Keyword: emotion prediction


An Applicable Verb Prediction in Augmentative Communication System for Korean Language Disorders (언어장애인용 문장발생장치에 적용 가능한 동사예측)

  • 이은실;홍승홍;민홍기
    • Science of Emotion and Sensibility
    • /
    • v.3 no.1
    • /
    • pp.25-32
    • /
    • 2000
  • This paper proposes a method of applying verb prediction with a neural network to a sentence-generation device for people with language disorders, as a way to improve the device's communication rate. Each word is represented as an information vector based on syntactic and semantic features. Unlike traditional language processing built around a dictionary, the words are classified into regions of a state space, so conceptually similar words are recognized through their positions in that space. When the user presses a symbol, the corresponding word is located in the state space, and a verb is predicted through neural-network training; as a result, a communication-rate improvement of about 20% was obtained within the limited space.

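The abstract describes the core mechanism: each symbol maps to a syntactic/semantic feature vector, and a neural network predicts a plausible verb from that position in the state space. Below is a minimal Python sketch of the idea; the feature dimensions, vocabulary, and verb labels are invented for illustration and are not from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# toy "state space": [is_food, is_drink, is_person, is_place]
symbol_vectors = {
    "apple":  [1, 0, 0, 0],
    "water":  [0, 1, 0, 0],
    "mom":    [0, 0, 1, 0],
    "school": [0, 0, 0, 1],
}
verbs = ["eat", "drink", "meet", "go"]

X = np.array(list(symbol_vectors.values()))
y = np.arange(len(verbs))                    # index into `verbs`

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# when the user presses the "water" symbol, the device proposes a verb
pred = clf.predict(np.array([symbol_vectors["water"]]))[0]
print("predicted verb:", verbs[pred])        # expected: "drink"
```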

A Study on Emotion Classification using 4-Channel EEG Signals (4채널 뇌파 신호를 이용한 감정 분류에 관한 연구)

  • Kim, Dong-Jun;Lee, Hyun-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.2
    • /
    • pp.23-28
    • /
    • 2009
  • This study describes an emotion classification method using two different feature parameters extracted from four-channel EEG signals. One parameter set is the linear prediction coefficients obtained from AR modelling; the other is the cross-correlation coefficients over the θ, α, and β frequency bands of the FFT spectra. Using these two parameter sets, an emotion classification test for four emotions (anger, sadness, joy, and relaxation) is performed with an artificial neural network. The results show that the linear prediction coefficients produce better emotion classification than the cross-correlation coefficients of the FFT spectra.

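The first feature set, linear prediction coefficients from AR modelling of each channel, can be sketched as below; the autocorrelation-method LPC and the random stand-in "EEG" data are assumptions for illustration, and the concatenated coefficients are the kind of vector the paper feeds to its neural network.

```python
import numpy as np

def lpc(signal, order):
    """Linear prediction coefficients via the autocorrelation method."""
    x = signal - signal.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])

# stand-in for one 4-channel EEG epoch (real data: segmented recordings)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 512))

features = np.concatenate([lpc(ch, order=8) for ch in eeg])
print(features.shape)   # (32,) -> one input vector for the ANN classifier
```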

A Comparative Study of Color Emotion and Preference of Koreans and Chinese for Two-Color Combination by Naturally Dyed Fabrics with Persimmon and Indigo (감과 쪽의 천연염색 배색직물의 색채감성과 색채선호도에 대한 한국인과 중국인의 비교 연구)

  • Yi, Eunjou;Lee, Sang Hee;Choi, Jongmyoung
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.46 no.1
    • /
    • pp.33-48
    • /
    • 2022
  • This study was performed to compare the color emotion and preference of Koreans and Chinese for a two-color combination by dyeing cotton fabric with persimmon and indigo and to establish prediction models of color preference. Nine specimens prepared by combining two different colored fabrics (persimmon and indigo) were evaluated for color emotion and preference by Korean and Chinese groups of female college students. Koreans described most specimens as natural and traditional, whereas the Chinese described them as more pleasant and elegant as well as warmer and lighter than Koreans did. The contrast tone was the most preferred combination by both groups, whereas it was perceived as more modern and less warm by Koreans. Relationships between physical color variables and color emotions were quantified; these relationships were applied to establish a prediction model of color preference with tone combination types for each group. These results could help in making the design of fashion textiles more preference- and emotion-oriented for Korean and Chinese consumers.
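
The preference models mentioned here relate physical color variables and tone-combination type to rated preference; a minimal regression sketch of that kind follows, with entirely invented numbers standing in for the measured colorimetric variables and ratings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: lightness difference (dL*), chroma difference (dC*),
# contrast-tone dummy (1 = contrast combination); all values invented
X = np.array([
    [42.0, 25.0, 1],
    [38.5, 20.0, 1],
    [30.0, 18.0, 1],
    [15.5, 10.5, 0],
    [12.0,  8.0, 0],
    [ 9.0,  6.5, 0],
])
y = np.array([5.8, 5.5, 5.2, 4.1, 3.9, 3.6])   # mean preference ratings (1-7)

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)            # one preference equation per group
print(model.predict([[35.0, 22.0, 1]]))         # predicted preference for a new pair
```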

Effect of Colorimetric Characteristics and Tone Combination on Color Emotion Factors of Naturally Dyed Color Combination Fabrics -Focus on Yellowish and Reddish Fabrics- (천연염색 배색직물의 색채 특성과 톤 조합이 색채감성요인에 미치는 영향 -황색과 적색계열을 중심으로-)

  • Lee, An Rye;Sarmandakh, Badmaanyambuu;Kang, Eun Young;Yi, Eunjou
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.36 no.10
    • /
    • pp.1028-1039
    • /
    • 2012
  • This study identified the color emotion factors of naturally dyed two-color combination fabrics, focusing on yellowish and reddish shades, examined the relationship between those factors and physical colorimetric variables (as well as tone combination groups), and provides prediction models for the color emotion factors of such fabrics. Eight stimuli were prepared, each by pairing two pieces of silk fabric dyed red and yellow with natural dyes; their color emotion descriptors were then evaluated by human subjects using semantic differential scales. 'Joyful', 'Natural', 'Classical', and 'Soft' were extracted as the color emotion factors of the naturally dyed yellowish-reddish combination fabrics, and they were significantly affected by physical colorimetric variables such as CIE C* and L* and by tone combination groups. Finally, prediction models for all color emotion factors were established from the physical colorimetric variables and tone combination groups, leading to the conclusion that they could be applied to designing color combinations for naturally dyed fashion fabrics.
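
The factor-extraction step (reducing semantic differential ratings to a few emotion factors such as 'Joyful' or 'Soft') might look like the sketch below; the rating matrix is random stand-in data, and the numbers of scales and factors are assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
ratings = rng.uniform(1, 7, size=(80, 12))   # 80 evaluations x 12 adjective scales

fa = FactorAnalysis(n_components=4, random_state=1)   # e.g. Joyful/Natural/Classical/Soft
scores = fa.fit_transform(ratings)

print(fa.components_.shape)   # (4, 12) loadings: which adjectives form each factor
print(scores.shape)           # (80, 4) factor scores used in the prediction models
```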

Sentiment Prediction using Emotion and Context Information in Unstructured Documents (비정형 문서에서 감정과 상황 정보를 이용한 감성 예측)

  • Kim, Jin-Su
    • Journal of Convergence for Information Technology
    • /
    • v.10 no.10
    • /
    • pp.40-46
    • /
    • 2020
  • With the development of the Internet, users share their experiences and opinions online. Because related keywords are used without considering information such as the overall emotion or genre of an unstructured document such as a movie review, sentiment accuracy for the relevant emotional situation suffers. We therefore propose a system that predicts emotions based on information such as the genre of the user-created unstructured document and its overall emotion. First, representative keywords for emotion sets such as Joy, Anger, Fear, and Sadness are extracted from the unstructured document, and the normalized weights of the emotional feature words, together with document information, are used as a training set for a system that combines a CNN and an LSTM. Finally, testing on refined words extracted from movie information through a morpheme analyzer and n-grams, along with emoticons and emojis, showed that both the accuracy of emotion prediction and the F-measure improved. The proposed system can predict sentiment appropriately for the situation, avoiding the error of judging a document negative merely because sad words appear in sad movies or scary words in horror movies.
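
A minimal Keras sketch of the CNN+LSTM combination described above follows; the vocabulary size, sequence length, embedding width, and four-emotion softmax head are assumptions, not the paper's actual configuration.

```python
from tensorflow.keras import layers, models

vocab_size, seq_len, n_emotions = 5000, 100, 4   # Joy, Anger, Fear, Sadness

model = models.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, 64),            # word-index embeddings
    layers.Conv1D(64, 5, activation="relu"),     # local, n-gram-like features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                             # longer-range context
    layers.Dense(n_emotions, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# training would use padded word-index sequences and emotion labels, e.g.:
# model.fit(X_train, y_train, epochs=5, validation_split=0.1)
```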

Emotion Detecting Method Based on Various Attributes of Human Voice

  • MIYAJI Yutaka;TOMIYAMA Ken
    • Science of Emotion and Sensibility
    • /
    • v.8 no.1
    • /
    • pp.1-7
    • /
    • 2005
  • This paper reports several emotion detection methods based on various attributes of the human voice, developed at our Engineering Systems Laboratory. In all of the proposed methods, only the prosodic information in the voice is used for emotion recognition; semantic information is not used. Different types of neural networks (NNs) are used for detection depending on the type of voice parameters. Earlier approaches used linear prediction coefficients (LPCs) and pitch time-series data separately; later studies combined them. The proposed methods are explained first, and then evaluation experiments of the individual methods and their emotion detection performance are presented and compared.

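Since the methods rely only on prosody (pitch contours and LPCs), a small pitch-tracking sketch is shown below; the frame-wise autocorrelation estimator and the synthetic 150 Hz "utterance" are illustrative assumptions, and the resulting contour is the kind of time series the NNs would consume.

```python
import numpy as np

def pitch_contour(x, sr, frame=1024, hop=512, fmin=80, fmax=400):
    """Frame-wise pitch estimate from the autocorrelation peak."""
    pitches = []
    for start in range(0, len(x) - frame, hop):
        f = x[start : start + frame] * np.hanning(frame)
        ac = np.correlate(f, f, mode="full")[frame - 1 :]
        lo, hi = int(sr / fmax), int(sr / fmin)
        lag = lo + np.argmax(ac[lo:hi])
        pitches.append(sr / lag)
    return np.array(pitches)

sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 150 * t)       # stand-in for a voiced utterance
contour = pitch_contour(voice, sr)
print(contour.round(1)[:5])               # ~149.5 Hz per frame -> NN input
```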

An Emotion Recognition Technique using Speech Signals (음성신호를 이용한 감정인식)

  • Jung, Byung-Wook;Cheun, Seung-Pyo;Kim, Youn-Tae;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.4
    • /
    • pp.494-500
    • /
    • 2008
  • In the development of human interface technology, the interactions between humans and machines are important, and research on emotion recognition supports these interactions. This paper presents an algorithm for emotion recognition based on personalized speech signals. The proposed approach extracts characteristics of the speech signal for emotion recognition using perceptual linear prediction (PLP) analysis. PLP analysis was originally designed to suppress speaker-dependent components in features used for automatic speech recognition, but later experiments demonstrated its effectiveness for speaker recognition tasks. This paper therefore proposes an algorithm that can easily evaluate personal emotion from speech signals in real time, using personalized emotion patterns built by PLP analysis. The experimental results show a maximum recognition rate above 90% for the speaker-dependent system, with an average recognition rate of 75%. The proposed system has a simple structure yet is efficient enough for real-time use.
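
A heavily simplified PLP-style sketch follows; real PLP adds Bark-scale warping and equal-loudness pre-emphasis, so treat this as an illustration of the compress-the-spectrum-then-fit-an-all-pole-model idea rather than the paper's exact front end.

```python
import numpy as np

def plp_like(frame, order=12, n_bands=20):
    """Coarse critical-band integration + cube-root loudness + LP model."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    bands = np.array([b.mean() for b in np.array_split(spec, n_bands)])
    loud = bands ** (1 / 3)                       # intensity-loudness power law
    r = np.fft.irfft(loud)[: order + 1]           # autocorrelation of compressed spectrum
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])   # all-pole (LP) coefficients

rng = np.random.default_rng(2)
frame = rng.standard_normal(400)                  # 25 ms frame at 16 kHz, stand-in
print(plp_like(frame).shape)                      # (12,) per-frame feature vector
```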

Speech Emotion Recognition Based on GMM Using FFT and MFB Spectral Entropy (FFT와 MFB Spectral Entropy를 이용한 GMM 기반의 감정인식)

  • Lee, Woo-Seok;Roh, Yong-Wan;Hong, Hwang-Seok
    • Proceedings of the KIEE Conference
    • /
    • 2008.04a
    • /
    • pp.99-100
    • /
    • 2008
  • This paper proposes a Gaussian Mixture Model (GMM)-based speech emotion recognition method using four feature parameters: 1) Fast Fourier Transform (FFT) spectral entropy, 2) delta FFT spectral entropy, 3) Mel-frequency Filter Bank (MFB) spectral entropy, and 4) delta MFB spectral entropy. We use a speech database covering four emotions (anger, sadness, happiness, and neutrality) and perform recognition experiments for each pre-defined emotion and gender. The experimental results show that the proposed recognition using FFT spectral entropy and MFB spectral entropy performs better than existing GMM-based emotion recognition using energy, Zero Crossing Rate (ZCR), Linear Prediction Coefficient (LPC), and pitch parameters. A maximum recognition rate of 75.1% was attained using MFB spectral entropy and delta MFB spectral entropy.

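The GMM scheme (one mixture per emotion, classification by maximum likelihood) can be sketched as below; the entropy feature matches the description above, but the training features are random stand-ins with invented class separation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def spectral_entropy(frame):
    """Shannon entropy of the normalized FFT power spectrum of one frame."""
    p = np.abs(np.fft.rfft(frame)) ** 2
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

rng = np.random.default_rng(3)
print(spectral_entropy(rng.standard_normal(512)))   # one per-frame feature value

# a real pipeline would build frames x [entropy, delta-entropy] per utterance;
# random features with invented class separation stand in for that here
emotions = ["anger", "sadness", "happiness", "neutrality"]
models = {}
for i, emo in enumerate(emotions):
    feats = rng.normal(loc=2.0 + i, scale=0.5, size=(200, 2))
    models[emo] = GaussianMixture(n_components=4, random_state=0).fit(feats)

test = rng.normal(loc=4.0, scale=0.5, size=(50, 2))     # unknown utterance
scores = {e: m.score(test) for e, m in models.items()}  # mean log-likelihood
print(max(scores, key=scores.get))                      # expected: "happiness"
```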

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents
    • /
    • v.18 no.3
    • /
    • pp.11-20
    • /
    • 2022
  • Human emotion recognition is an exciting topic that has attracted researchers for a long time, and recent years have seen increasing interest in exploiting contextual information for it. Explorations in psychology show that emotional perception is affected not only by facial expressions but also by contextual information from the scene, such as human activities, interactions, and body poses. These findings initiated a trend in computer vision of treating contexts as modalities for inferring the predicted emotion alongside facial expressions. However, contextual information has not been fully exploited: the scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, simple additive fusion in multimodal training is not ideal, because the modalities do not contribute equally to the final prediction. This paper contributes to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes the emotions, emotional feelings, and actions or events in the input image that directly trigger emotional reactions. We also present an attention-based fusion network that combines multimodal features according to their impact on the target emotional state, and we demonstrate the method's effectiveness through a significant improvement on the EMOTIC dataset.
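
An attention-based fusion layer of the general kind described (learned weights per modality instead of plain addition) can be sketched in Keras as below; the 512-dimensional branch features and the 26-way sigmoid head (EMOTIC's discrete categories) are shape assumptions, not the paper's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# two modality branches, assumed already encoded to fixed-size vectors
face = layers.Input(shape=(512,), name="face_features")
scene = layers.Input(shape=(512,), name="scene_context_features")

stacked = layers.Lambda(lambda t: tf.stack(t, axis=1))([face, scene])  # (B, 2, 512)
attn = layers.Dense(1)(stacked)                   # one score per modality: (B, 2, 1)
attn = layers.Softmax(axis=1)(attn)               # normalized fusion weights
fused = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([stacked, attn])

out = layers.Dense(26, activation="sigmoid")(fused)   # multi-label emotion output
model = models.Model(inputs=[face, scene], outputs=out)
model.summary()
```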

Emotion Prediction System using Movie Script and Cinematography (영화 시나리오와 영화촬영기법을 이용한 감정 예측 시스템)

  • Kim, Jinsu
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.12
    • /
    • pp.33-38
    • /
    • 2018
  • Recently, there have been attempts to predict emotion from various kinds of information and to convey the emotional information that the director wants to communicate to the audience. Audiences, in turn, try to understand the flow of emotion through the various non-dialogue elements, such as cinematography, scene background, and background sound. In this paper, we propose to extract emotions by combining the context of the script with cinematographic information such as color, background sound, composition, and arrangement. In other words, we propose an emotion prediction system that learns and distinguishes the various emotional expression techniques in the dialogue and non-dialogue regions, contributes to the completeness of the movie, and adapts quickly to new changes. Compared with the modified n-gram and morphological analyses, the precision of the proposed system improves by about 5.1% and 0.4%, and the recall by about 4.3% and 1.6%, respectively.
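
The mix of dialogue and non-dialogue signals can be sketched as below: n-gram text features for script lines are concatenated with simple cinematography features before classification. The example sentences, the invented [brightness, tempo] pair, and the classifier choice are all illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

lines = ["I never want to see you again", "What a beautiful morning",
         "Something is moving in the dark", "We lost everything that night"]
labels = ["anger", "joy", "fear", "sadness"]
cine = np.array([[0.4, 0.7], [0.9, 0.5], [0.1, 0.3], [0.2, 0.2]])  # invented

vec = TfidfVectorizer(ngram_range=(1, 2))
X = hstack([vec.fit_transform(lines), csr_matrix(cine)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# a dark, low-tempo scene plus ominous text should lean toward "fear"
test = hstack([vec.transform(["the dark room was silent"]),
               csr_matrix(np.array([[0.1, 0.2]]))])
print(clf.predict(test))
```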