• Title/Summary/Keyword: Learning Emotion

Design of a Mirror for Fragrance Recommendation based on Personal Emotion Analysis (개인의 감성 분석 기반 향 추천 미러 설계)

  • Hyeonji Kim;Yoosoo Oh
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.28 no.4
    • /
    • pp.11-19
    • /
    • 2023
  • The paper proposes a smart mirror system that recommends fragrances based on analysis of the user's emotions. It combines natural language processing embedding techniques (CountVectorizer and TF-IDF) with machine learning classification models (DecisionTree, SVM, RandomForest, SGD Classifier) and compares the resulting classifiers. Based on this comparison, the paper builds the personal emotion-based fragrance recommendation mirror around the best-performing model, a pipeline of word embeddings and an SVM. The proposed system implements a personalized, emotion-aware fragrance recommendation mirror and provides web services using the Flask web framework. It uses the Google Cloud Speech API to recognize users' voices and speech-to-text (STT) conversion to obtain text data from the spoken input. The system also presents information about weather, humidity, location, quotes, time, and schedule management.
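
As a concrete illustration of the pipeline this abstract describes, here is a minimal sketch of a TF-IDF-plus-SVM emotion classifier in scikit-learn, feeding an emotion-to-fragrance lookup for the recommendation step. The utterances, emotion labels, and fragrance mapping are hypothetical placeholders, not data or code from the paper.

```python
# Hypothetical TF-IDF + SVM emotion classifier feeding a fragrance lookup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

texts = ["I feel great today", "this weather makes me gloomy",
         "what a wonderful morning", "I am so tired and sad"]  # placeholder STT output
labels = ["joy", "sadness", "joy", "sadness"]                  # placeholder emotion labels

clf = Pipeline([
    ("tfidf", TfidfVectorizer()),  # word-level TF-IDF features
    ("svm", LinearSVC()),          # SVM, reported best in the paper's comparison
])
clf.fit(texts, labels)

fragrance_by_emotion = {"joy": "citrus", "sadness": "lavender"}  # hypothetical mapping
emotion = clf.predict(["I had a wonderful day"])[0]
print(emotion, "->", fragrance_by_emotion[emotion])
```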

Image Recognition based on Adaptive Deep Learning (적응적 딥러닝 학습 기반 영상 인식)

  • Kim, Jin-Woo;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.18 no.1
    • /
    • pp.113-117
    • /
    • 2018
  • Human emotions are revealed by many factors: words, actions, facial expressions, attire, and so on. However, people know how to hide their feelings, so emotion cannot be reliably inferred from any single factor. We therefore focus on behavior and facial expression, which cannot easily be concealed without constant effort and training. In this paper, we propose an algorithm that estimates human emotion by combining two results, obtained by gradually learning human behavior and facial expression from small amounts of data with deep learning. Through this algorithm, human emotions can be grasped more comprehensively.
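
The abstract combines two classifier outputs into a single estimate. Below is a minimal sketch of one plausible combination rule, weighted late fusion of the two models' class probabilities; the label set and fusion weight are assumptions, since the abstract does not spell them out.

```python
# Hypothetical weighted late fusion of facial-expression and behavior models.
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # assumed label set

def fuse(p_face, p_behavior, w_face=0.6):
    """Average the two class-probability vectors and pick the argmax."""
    p = w_face * np.asarray(p_face) + (1.0 - w_face) * np.asarray(p_behavior)
    return EMOTIONS[int(np.argmax(p))]

# Example softmax outputs from the two (hypothetical) deep models.
p_face = [0.55, 0.15, 0.20, 0.10]
p_behavior = [0.30, 0.10, 0.45, 0.15]
print(fuse(p_face, p_behavior))  # "happy": fused 0.45 vs. 0.30 for "angry"
```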

Ranking Tag Pairs for Music Recommendation Using Acoustic Similarity

  • Lee, Jaesung;Kim, Dae-Won
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.15 no.3
    • /
    • pp.159-165
    • /
    • 2015
  • The need for the recognition of music emotion has become apparent in many music information retrieval applications. In addition to the large pool of techniques that have already been developed in machine learning and data mining, various emerging applications have led to a wealth of newly proposed techniques. In the music information retrieval community, many studies and applications have concentrated on tag-based music recommendation. The limitation of music emotion tags is the ambiguity caused by a single music tag covering too many subcategories. To overcome this, multiple tags can be used simultaneously to specify music clips more precisely. In this paper, we propose a novel technique to rank the proper tag combinations based on the acoustic similarity of music clips.
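
As a rough sketch of the idea of ranking tag combinations by acoustic similarity, the snippet below scores a tag pair by the mean pairwise similarity of the clips carrying both tags. This is one plausible reading of the abstract, not the paper's actual ranking criterion, and the feature vectors and tags are synthetic.

```python
# Hypothetical scoring of tag pairs by the acoustic coherence of their clips.
from itertools import combinations
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# clip -> (acoustic feature vector, emotion tags); toy synthetic data
clips = {
    "c1": (np.array([0.9, 0.1]), {"calm", "sad"}),
    "c2": (np.array([0.8, 0.2]), {"calm", "sad"}),
    "c3": (np.array([0.1, 0.9]), {"calm", "happy"}),
}

def pair_score(tag_a, tag_b):
    """Mean pairwise similarity of all clips labeled with both tags."""
    members = [feat for feat, tags in clips.values()
               if tag_a in tags and tag_b in tags]
    if len(members) < 2:
        return 0.0
    sims = [cosine(x, y) for x, y in combinations(members, 2)]
    return sum(sims) / len(sims)

pairs = [("calm", "sad"), ("calm", "happy")]
print(sorted(pairs, key=lambda p: pair_score(*p), reverse=True))
```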

A Survey on Image Emotion Recognition

  • Zhao, Guangzhe;Yang, Hanting;Tu, Bing;Zhang, Lei
    • Journal of Information Processing Systems
    • /
    • v.17 no.6
    • /
    • pp.1138-1156
    • /
    • 2021
  • Emotional semantics are the highest level of semantics that can be extracted from an image. Constructing a system that can automatically recognize the emotional semantics of images would be significant for marketing, smart healthcare, and deep human-computer interaction. To clarify the direction of image emotion recognition as well as its general research methods, we summarize current development trends and shed light on potential future research. The primary contributions of this paper are as follows. We investigate the color, texture, shape, and contour features used to extract emotional semantics. We establish two models that map images into emotional space and introduce in detail the various stages of the image emotional semantic recognition framework. We also discuss important datasets and useful applications in the field, such as garment images and image retrieval. We conclude with a brief discussion of future research trends.

LSTM Hyperparameter Optimization for an EEG-Based Efficient Emotion Classification in BCI (BCI에서 EEG 기반 효율적인 감정 분류를 위한 LSTM 하이퍼파라미터 최적화)

  • Aliyu, Ibrahim;Mahmood, Raja Majid;Lim, Chang-Gyoon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.6
    • /
    • pp.1171-1180
    • /
    • 2019
  • Emotion is a psycho-physiological process that plays an important role in human interactions. Affective computing is centered on the development of human-aware artificial intelligence that can understand and regulate emotions. The field is also clinically important, as mental conditions such as depression, autism, attention deficit hyperactivity disorder, and game addiction are associated with emotion. Despite progress in emotion recognition and detection, extracting emotions from nonstationary EEG signals requires sophisticated learning algorithms, because a high level of abstraction is needed. In this paper, we investigate LSTM hyperparameters for optimal EEG-based emotion classification. Results of several experiments are presented, from which an optimal LSTM hyperparameter configuration was identified.
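
A minimal sketch of the kind of LSTM hyperparameter search the abstract describes, written in Keras over synthetic stand-in EEG data; the search space, window shape, and class count are assumptions, not the paper's settings.

```python
# Hypothetical grid search over LSTM hyperparameters on stand-in EEG data.
import numpy as np
import tensorflow as tf

X = np.random.randn(64, 128, 32).astype("float32")  # (trials, timesteps, channels)
y = np.random.randint(0, 4, size=64)                # 4 assumed emotion classes

def build(units, lr, dropout):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(128, 32)),
        tf.keras.layers.LSTM(units, dropout=dropout),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

best = None
for units in (32, 64):               # hidden-state size
    for lr in (1e-3, 1e-4):          # learning rate
        for dropout in (0.2, 0.5):   # input dropout inside the LSTM
            hist = build(units, lr, dropout).fit(
                X, y, epochs=2, validation_split=0.25, verbose=0)
            acc = hist.history["val_accuracy"][-1]
            if best is None or acc > best[0]:
                best = (acc, units, lr, dropout)
print("best (val_acc, units, lr, dropout):", best)
```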

ME-based Emotion Recognition Model (ME 기반 감성 인식 모델)

  • Park, So-Young;Kim, Dong-Geun;Whang, Min-Cheol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.985-987
    • /
    • 2010
  • In this paper, we propose a maximum entropy-based emotion recognition model that uses individual average differences. In order to recognize a user's emotion accurately, the proposed model utilizes the difference between the average of the given input physiological signals and the average of each emotion state's signals, rather than the input signal alone. To alleviate data sparseness, the model substitutes the simple symbols + (positive number) and - (negative number) for every average-difference value, and it averages the physiological signals over one-second intervals rather than over the longer total emotion response time. To keep the model easy to construct, it relies on this simple average-difference calculation and on a maximum entropy model, a well-known machine learning technique.
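
A minimal sketch of the abstract's core idea: replace each average-difference value with its sign (+/-) and feed the resulting symbols to a maximum entropy model, realized here as scikit-learn's logistic regression, the standard formulation of MaxEnt classification. All signal values and emotion states below are synthetic.

```python
# Hypothetical sign-of-average-difference features fed to a MaxEnt model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
emotion_means = {                      # per-emotion mean signals (synthetic)
    "calm":   np.array([60.0, 8.0]),   # e.g. heart rate, skin conductance
    "stress": np.array([85.0, 14.0]),
}

def sign_features(x):
    """+1/-1 symbols of the difference from each emotion state's mean."""
    return np.concatenate([np.sign(x - m) for m in emotion_means.values()])

# Synthetic one-second signal averages with labels.
X_raw = np.vstack([rng.normal(emotion_means["calm"], 2.0, (20, 2)),
                   rng.normal(emotion_means["stress"], 2.0, (20, 2))])
y = np.array(["calm"] * 20 + ["stress"] * 20)

X = np.array([sign_features(x) for x in X_raw])
maxent = LogisticRegression().fit(X, y)  # logistic regression = MaxEnt
print(maxent.predict([sign_features(np.array([83.0, 13.5]))]))
```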

A Study on Visual Emotion Classification using Balanced Data Augmentation (균형 잡힌 데이터 증강 기반 영상 감정 분류에 관한 연구)

  • Jeong, Chi Yoon;Kim, Mooseop
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.7
    • /
    • pp.880-889
    • /
    • 2021
  • Recognizing people's emotions from images is essential in everyday life and is a popular research domain in computer vision. Visual emotion data suffer from severe class imbalance, with most samples concentrated in a few categories. Existing methods do not consider this imbalance and use accuracy as the performance metric, which is not suitable for evaluating an imbalanced dataset. We therefore propose a method for recognizing visual emotion that uses balanced data augmentation to address the class imbalance. The proposed method generates a balanced dataset through random over-sampling and image transformations. It also uses the focal loss as its loss function, which mitigates class imbalance by down-weighting well-classified samples. EfficientNet, a state-of-the-art image classification model, is used to recognize visual emotion. We compare the performance of the proposed method with that of conventional methods on a public dataset. The experimental results show that the proposed method increases the F1 score by 40% compared with the same method without data augmentation, mitigating class imbalance without loss of classification accuracy.
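
The two ingredients the abstract names, random over-sampling and the focal loss, can be sketched compactly. The snippet below is an illustrative implementation, with gamma = 2.0 as the common default rather than necessarily the paper's setting; it assumes one-hot labels and softmax outputs. The loss could then be passed as loss=focal_loss() to model.compile() on an EfficientNet backbone such as tf.keras.applications.EfficientNetB0.

```python
# Hypothetical focal loss and random over-sampling for imbalanced emotions.
import numpy as np
import tensorflow as tf

def focal_loss(gamma=2.0):
    """FL(p_t) = -(1 - p_t)^gamma * log(p_t); one-hot y_true, softmax y_pred."""
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0)
        p_t = tf.reduce_sum(y_true * y_pred, axis=-1)  # prob. of the true class
        return -tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t)
    return loss

def oversample(X, y):
    """Duplicate samples at random until every class matches the largest one."""
    classes, counts = np.unique(y, return_counts=True)
    idx = np.concatenate([np.random.choice(np.where(y == c)[0], counts.max())
                          for c in classes])
    return X[idx], y[idx]
```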

Attention-based CNN-BiGRU for Bengali Music Emotion Classification

  • Subhasish Ghosh;Omar Faruk Riad
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.9
    • /
    • pp.47-54
    • /
    • 2023
  • For Bengali music emotion classification, deep learning models, particularly CNNs and RNNs, are frequently used, but previous research suffered from low accuracy and overfitting. In this research, an attention-based Conv1D and BiGRU model is designed for music emotion classification, and comparative experiments show that the proposed model classifies emotions more accurately. Wav preprocessing makes use of MFCCs. Contextual features are extracted by two Conv1D layers, which also reduce the dimensionality of the feature space, and dropout is used to address overfitting. Two bidirectional GRU networks update the past and future emotion representations of the Conv1D output, and an attention mechanism connected to the two BiGRU layers gives greater weight to the more informative MFCC feature vectors, which increases the accuracy of the classification model. The resulting vector is finally classified into four emotion classes, Angry, Happy, Relax, and Sad, by a dense, fully connected layer with softmax activation. The proposed Conv1D+BiGRU+Attention model classifies emotions in our Bengali music dataset more effectively than baseline methods, achieving a performance of 95%.
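
A structural sketch of the Conv1D + BiGRU + attention architecture the abstract outlines, in Keras: two Conv1D layers, dropout, two BiGRU layers, a simple attention over time, and a softmax over the four classes. Filter counts, units, rates, and the MFCC shape are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical Conv1D + BiGRU + attention classifier over MFCC sequences.
import tensorflow as tf
from tensorflow.keras import layers

n_frames, n_mfcc, n_classes = 128, 40, 4  # assumed MFCC shape; Angry/Happy/Relax/Sad

inp = layers.Input(shape=(n_frames, n_mfcc))
x = layers.Conv1D(64, 5, padding="same", activation="relu")(inp)  # contextual features
x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)
x = layers.Dropout(0.3)(x)                                        # counters overfitting
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)

# Simple attention over time: score each frame, softmax, weighted sum.
scores = layers.Dense(1, activation="tanh")(x)  # (batch, frames, 1)
weights = layers.Softmax(axis=1)(scores)
context = layers.Dot(axes=1)([weights, x])      # weighted sum: (batch, 1, features)
context = layers.Flatten()(context)

out = layers.Dense(n_classes, activation="softmax")(context)
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```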

Automated Emotional Tagging of Lifelog Data with Wearable Sensors (웨어러블 센서를 이용한 라이프로그 데이터 자동 감정 태깅)

  • Park, Kyung-Wha;Kim, Byoung-Hee;Kim, Eun-Sol;Jo, Hwi-Yeol;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.6
    • /
    • pp.386-391
    • /
    • 2017
  • In this paper, we propose a system that automatically assigns emotion tags, grounded in the user's experience, to wearable sensor data collected in daily life. Four types of emotion tags are defined, covering the user's own emotions and the information the user sees and hears. From the data collected by multiple wearable sensors, we train a machine learning-based tagging system that combines known auxiliary tools from existing affective computing research to assign the emotion tags. To show the usefulness of this multi-modality-based emotion tagging system, a quantitative and qualitative comparison with an existing single-modality-based emotion recognition approach is performed.
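
A minimal sketch of multi-modality tagging as the abstract frames it: features from several wearable sensors are concatenated (early fusion) and fed to a single tagger. The sensor set, feature sizes, and tag names are assumptions for illustration only.

```python
# Hypothetical early fusion of wearable-sensor features for emotion tagging.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 200
acc_feats = rng.normal(size=(n, 8))     # e.g. accelerometer statistics
ppg_feats = rng.normal(size=(n, 4))     # e.g. heart-rate features
audio_feats = rng.normal(size=(n, 13))  # e.g. ambient-audio MFCC means

X = np.hstack([acc_feats, ppg_feats, audio_feats])  # concatenate modalities
tags = rng.choice(["self-positive", "self-negative",
                   "scene-positive", "scene-negative"], size=n)  # assumed tag set

tagger = RandomForestClassifier(n_estimators=100).fit(X, tags)
print(tagger.predict(X[:3]))
```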

Affective Computing in Education: Platform Analysis and Academic Emotion Classification

  • So, Hyo-Jeong;Lee, Ji-Hyang;Park, Hyun-Jin
    • International journal of advanced smart convergence
    • /
    • v.8 no.2
    • /
    • pp.8-17
    • /
    • 2019
  • The main purpose of this study is to explore the potential of affective computing (AC) platforms in education through two phases of research: Phase I, platform analysis, and Phase II, classification of academic emotions. In Phase I, the results indicate that existing affective analysis platforms can be largely classified into four types according to their emotion detection methods: (a) facial expression-based platforms, (b) biometric-based platforms, (c) text/verbal tone-based platforms, and (d) mixed-methods platforms. In Phase II, we conducted an in-depth analysis of the emotional experiences a learner encounters in online video-based learning in order to establish the basis for a new classification system of online learners' emotions. Overall, positive emotions appeared more frequently and lasted longer than negative emotions. We categorized positive emotions into three groups based on the facial expression data: (a) confidence; (b) excitement, enjoyment, and pleasure; and (c) aspiration, enthusiasm, and expectation. The same method was used to categorize negative emotions into four groups: (a) fear and anxiety, (b) embarrassment and shame, (c) frustration and alienation, and (d) boredom. Drawing on these results, we propose a new classification scheme that can be used to measure and analyze how learners in online learning environments experience various positive and negative emotions, with facial expressions as the indicators.