• Title/Summary/Keyword: Emotion Computing

Search results: 104

Analytical Research on Knowledge Production, Knowledge Structure, and Networking in Affective Computing (Affective Computing 분야의 지식생산, 지식구조와 네트워킹에 관한 분석 연구)

  • Oh, Jee-Sun;Back, Dan-Bee;Lee, Duk-Hee
    • Science of Emotion and Sensibility / v.23 no.4 / pp.61-72 / 2020
  • Social problems, such as economic instability, an aging population, heightened competition, and changes in personal values, might become more serious in the near future. Affective computing has received much attention in the scholarly community as a possible solution to such potential social problems. Accordingly, we examined the domestic and global knowledge structure, major keywords, current research status, international research collaboration, and the network around each major keyword, focusing on keywords related to affective computing. We searched for articles in a specialized academic database (Scopus) using major keywords and carried out bibliometric and network analyses. We found that China and the United States (U.S.) have been active in producing knowledge on affective computing, whereas South Korea lags well behind at around 10%. Major keywords surrounding affective computing include computing, processing, affective analysis, research, user modeling, categorizing recognitions, and psychological analysis. In the international research collaboration structure, China and the U.S. form the largest cluster, while countries such as the United Kingdom, Germany, Switzerland, Spain, and Canada are strong collaborators as well. In contrast, South Korea's research has been neither diverse nor very successful in producing research outcomes. To advance affective computing research in South Korea, the present study suggests strengthening international collaboration with major countries, including the U.S. and China, and diversifying its research partners.
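
As a rough illustration of the kind of analysis described above, the sketch below builds a country-level collaboration network from bibliographic records and reports the strongest ties. The record format and field names are hypothetical stand-ins for exported Scopus data, not the authors' actual pipeline.

```python
from itertools import combinations
from collections import Counter

# Hypothetical bibliographic records: each paper lists the countries
# of its author affiliations (stand-in for exported Scopus data).
records = [
    {"title": "Paper A", "countries": ["China", "United States"]},
    {"title": "Paper B", "countries": ["China", "United States", "United Kingdom"]},
    {"title": "Paper C", "countries": ["South Korea", "United States"]},
    {"title": "Paper D", "countries": ["Germany", "Switzerland"]},
]

# Count how often each pair of countries co-occurs on a paper.
edge_weights = Counter()
for rec in records:
    for a, b in combinations(sorted(set(rec["countries"])), 2):
        edge_weights[(a, b)] += 1

# The heaviest edges correspond to the strongest collaboration clusters.
for (a, b), w in edge_weights.most_common(5):
    print(f"{a} -- {b}: {w} joint papers")
```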

A Design and Implementation of Music & Image Retrieval Recommendation System based on Emotion (감성기반 음악.이미지 검색 추천 시스템 설계 및 구현)

  • Kim, Tae-Yeun;Song, Byoung-Ho;Bae, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.73-79 / 2010
  • Emotion intelligence computing can process human emotion through learning and adaptation, and it enables more efficient interaction between humans and computers. Like sight and hearing, music and images are taken in within a short time yet linger, and understanding and interpreting the human emotion they evoke can lead to successful marketing. In this paper, we design a retrieval system that matches music and images to a user's emotion keyword (irritability, gloom, calmness, joy). The suggested system defines four emotional situations and uses music, images, and an emotion ontology to retrieve normalized music and images. Sampling image feature information and measuring similarity produce the desired results, and paired correspondence analysis and factor analysis map the image emotion recognition information onto a single space for classification. In experiments, the suggested system showed an 82.4% matching rate across the four emotion conditions.
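
A minimal sketch of keyword-driven retrieval in the spirit of this paper: each indexed item carries an emotion label drawn from a small ontology, and items in the class matching the user's emotion keyword are ranked by feature similarity. The item names, labels, and feature vectors below are invented placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical indexed items: (name, emotion label, feature vector).
# The four emotion keywords mirror the paper's four situations.
items = [
    ("song1.mp3", "joy",      np.array([0.9, 0.1, 0.3])),
    ("img1.jpg",  "joy",      np.array([0.8, 0.2, 0.4])),
    ("song2.mp3", "gloom",    np.array([0.1, 0.9, 0.5])),
    ("img2.jpg",  "calmness", np.array([0.2, 0.3, 0.9])),
]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(keyword, query_vec, k=3):
    """Rank items whose emotion label matches the keyword by similarity."""
    candidates = [(n, cosine(query_vec, v)) for n, e, v in items if e == keyword]
    return sorted(candidates, key=lambda t: t[1], reverse=True)[:k]

print(retrieve("joy", np.array([0.85, 0.15, 0.35])))
```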

Social Network Based Music Recommendation System (소셜네트워크 기반 음악 추천시스템)

  • Park, Taesoo;Jeong, Ok-Ran
    • Journal of Internet Computing and Services / v.16 no.6 / pp.133-141 / 2015
  • Massive multimedia contents are shared through various social media services, including social network services. Because a social network reveals a user's current situation and interests, applying such features to a recommendation system enables highly satisfactory personalized recommendation. In addition, classifying music by emotion and analyzing the user's social network for information about his or her recent emotion or current situation is useful when recommending music. In this paper, we propose a music recommendation method that builds an emotion model, classifies music according to that model, and extracts the user's current emotional state as represented on the social network to recommend music; we evaluate the validity of our method through experiments.
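
A toy sketch of the pipeline this abstract outlines: infer the user's current emotion from recent posts, here with a tiny word lexicon standing in for a real text-emotion classifier, then recommend music from the matching pre-classified emotion class. All names and labels are illustrative assumptions.

```python
# Toy emotion lexicon standing in for a trained text-emotion classifier.
LEXICON = {"happy": "joy", "great": "joy", "sad": "sadness", "tired": "sadness"}

# Music pre-classified by an emotion model (labels are illustrative).
MUSIC_BY_EMOTION = {
    "joy": ["Upbeat Track 1", "Upbeat Track 2"],
    "sadness": ["Ballad 1", "Ballad 2"],
}

def infer_emotion(posts):
    """Majority vote over lexicon hits in the user's recent posts."""
    votes = {}
    for post in posts:
        for word, emotion in LEXICON.items():
            if word in post.lower():
                votes[emotion] = votes.get(emotion, 0) + 1
    return max(votes, key=votes.get) if votes else "joy"  # default class

def recommend(posts, n=2):
    return MUSIC_BY_EMOTION[infer_emotion(posts)][:n]

print(recommend(["Feeling great today!", "What a happy morning"]))
```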

Textile image retrieval integrating contents, emotion and metadata (내용, 감성, 메타데이터의 결합을 이용한 텍스타일 영상 검색)

  • Lee, Kyoung-Mi;Park, U-Chang;Lee, Eun-Ok;Kwon, Hye-Young;Cha, Eun-Mi
    • Journal of Internet Computing and Services / v.9 no.5 / pp.99-108 / 2008
  • This paper proposes an image retrieval system that integrates metadata, contents, and emotions in textile images. First, the proposed system searches images using metadata. Among the retrieved images, the system then finds similar images based on color histogram, color sketch, and emotion histogram. To extract emotion features, this paper uses the emotion colors defined over 160 emotion words by H. Nagumo. To enhance the user's convenience, the proposed textile image retrieval system provides additional functions such as enlarging an image, viewing the color histogram, viewing the color sketch, and viewing repeated patterns.
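
A minimal sketch of the histogram-based similarity step described above: compute per-channel color histograms and compare them with histogram intersection. This uses numpy only and random images as placeholders; the actual system would add the emotion-histogram and color-sketch channels on top.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel histogram of an HxWx3 uint8 image, L1-normalized."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
img_b = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
print(histogram_intersection(color_histogram(img_a), color_histogram(img_b)))
```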


Adaptive Speech Emotion Recognition Framework Using Prompted Labeling Technique (프롬프트 레이블링을 이용한 적응형 음성기반 감정인식 프레임워크)

  • Bang, Jae Hun;Lee, Sungyoung
    • KIISE Transactions on Computing Practices / v.21 no.2 / pp.160-165 / 2015
  • Traditional speech emotion recognition techniques recognize emotions using a general training model built from the voices of many people. Such techniques cannot accurately account for personalized speech characteristics, so the recognition results differ widely from person to person. This paper proposes an adaptive speech emotion recognition framework that builds a personal adaptive recognition model from the user's immediate feedback, collected through a prompted labeling technique, and applies it to each user in a mobile device environment. The proposed framework recognizes emotions with this personalized recognition model and, in three comparative experiments, was evaluated as better than traditional techniques. It can be applied to healthcare, emotion monitoring, and personalized services.
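
A minimal sketch of the adaptation loop the abstract describes: start from a general model and fold the user's prompted labels in incrementally. The feature vectors are synthetic placeholders for real acoustic features; scikit-learn's SGDClassifier is used here simply because it supports incremental updates via partial_fit, not because it is the paper's model.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

EMOTIONS = np.array([0, 1, 2])  # e.g., neutral / happy / angry (illustrative)
rng = np.random.default_rng(42)

# General model pre-trained on many speakers (synthetic stand-in data).
general_X = rng.normal(size=(300, 20))
general_y = rng.integers(0, 3, size=300)
model = SGDClassifier(loss="log_loss")
model.partial_fit(general_X, general_y, classes=EMOTIONS)

def on_prompted_label(features, user_label):
    """Prompted labeling: after a prediction, the device asks the user to
    confirm or correct it, and the personal model is updated in place."""
    model.partial_fit(features.reshape(1, -1), np.array([user_label]))

# Simulated feedback loop: each confirmed label nudges the model
# toward this user's personal speech characteristics.
sample = rng.normal(size=20)
print("before:", model.predict(sample.reshape(1, -1)))
on_prompted_label(sample, user_label=1)
print("after: ", model.predict(sample.reshape(1, -1)))
```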

Music Similarity Search Based on Music Emotion Classification

  • Kim, Hyoung-Gook;Kim, Jang-Heon
    • The Journal of the Acoustical Society of Korea / v.26 no.3E / pp.69-73 / 2007
  • This paper presents an efficient algorithm to retrieve similar music files from a large archive of digital music. Users can navigate the archive and discover new music files that sound similar to a given query file. Because most methods for finding similar music files in a large database require computing the distance between the query file and every file in the database, they are very time-consuming. By measuring the acoustic distance only between music files pre-classified with the same type of emotion, the proposed method significantly speeds up the search process and increases precision in comparison with the brute-force method.
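
A sketch of the speed-up idea: partition the archive by pre-classified emotion, then run the brute-force distance computation only within the query's emotion class instead of against the whole archive. Track names, the emotion labels, and the 16-dimensional feature vectors are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

# Archive partitioned by pre-classified emotion; each track carries an
# acoustic feature vector (placeholder random values).
rng = np.random.default_rng(1)
archive = defaultdict(list)
for i in range(1000):
    emotion = ["calm", "energetic", "sad"][i % 3]
    archive[emotion].append((f"track_{i}", rng.normal(size=16)))

def search(query_vec, query_emotion, k=5):
    """Brute force only within the matching emotion partition, avoiding
    distance computations against the rest of the archive."""
    pool = archive[query_emotion]
    dists = [(name, float(np.linalg.norm(vec - query_vec))) for name, vec in pool]
    return sorted(dists, key=lambda t: t[1])[:k]

print(search(rng.normal(size=16), "calm"))
```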

Stylized Image Generation based on Music-image Synesthesia Emotional Style Transfer using CNN Network

  • Xing, Baixi;Dou, Jian;Huang, Qing;Si, Huahao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1464-1485 / 2021
  • The emotional style of a multimedia artwork is abstract content information. This study explores an emotional style transfer method and seeks a way of matching music with appropriate images with respect to emotional style. Deep Convolutional Neural Networks (DCNNs) can capture style and provide an iterative style transfer solution for affective image generation. Here, we learn image emotion features via DCNNs and map the affective style onto other images. We set the image emotion feature as the style target in this style transfer problem and conducted experiments on affective image generation for eight emotion categories: dignified, dreaming, sad, vigorous, soothing, exciting, joyous, and graceful. A user study tested the synesthesia emotional image style transfer results against ground truth user perception triggered by the music-image pair stimuli; according to the user study, the transferred affective images proved effective for music-image emotional synesthesia perception.
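
The DCNN approach above belongs to the iterative style transfer family; below is a minimal PyTorch sketch of its core ingredient, a Gram-matrix style loss optimized over the generated image. The feature maps are random placeholders standing in for DCNN activations, not the authors' trained emotion model.

```python
import torch

def gram_matrix(features):
    """Channel-wise correlations of a 1xCxHxW feature map; the standard
    style representation used in iterative style transfer."""
    _, c, h, w = features.shape
    f = features.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

# Placeholder feature maps standing in for DCNN activations of the
# style-target (emotion) image and the image being generated.
style_feats = torch.randn(1, 64, 32, 32)
generated = torch.randn(1, 64, 32, 32, requires_grad=True)

optimizer = torch.optim.Adam([generated], lr=0.05)
target_gram = gram_matrix(style_feats).detach()
for step in range(100):
    optimizer.zero_grad()
    # Pull the generated image's feature statistics toward the style target.
    loss = torch.nn.functional.mse_loss(gram_matrix(generated), target_gram)
    loss.backward()
    optimizer.step()
print(f"final style loss: {loss.item():.6f}")
```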

Speech Emotion Recognition with SVM, KNN and DSVM

Hadhami Aouani;Yassine Ben Ayed
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.40-48 / 2023
  • Speech emotion recognition has become an active research theme in speech processing and in applications based on human-machine interaction. Our system is a two-stage approach comprising feature extraction and a classification engine. First, two feature sets are investigated: the first extracts only 13 Mel-frequency cepstral coefficients (MFCC) from emotional speech samples, while the second fuses the MFCC features with three others: zero crossing rate (ZCR), Teager energy operator (TEO), and harmonic-to-noise rate (HNR). Second, we use two classification techniques, support vector machines (SVM) and k-nearest neighbors (k-NN), to compare their performance, and we additionally investigate recent advances in machine learning, including deep kernel learning. A large set of experiments is conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset for seven emotions. Our experiments achieved good accuracy compared with previous studies.
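
A minimal sketch of the two-stage pipeline: 13 MFCCs averaged over time as the utterance descriptor, then an SVM classifier, shown with librosa and scikit-learn. Synthetic tones stand in for the SAVEE recordings so the sketch runs end to end; the real study would load the dataset's audio files and emotion labels instead.

```python
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def mfcc_features(y, sr, n_mfcc=13):
    """Mean of 13 MFCCs over time: a fixed-length utterance descriptor."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Synthetic stand-in for SAVEE utterances: tones with different timbres
# per "emotion" class, just to make the pipeline runnable end to end.
rng = np.random.default_rng(0)
sr = 16000
X, labels = [], []
for label, freq in enumerate([220.0, 440.0, 880.0]):
    for _ in range(20):
        t = np.linspace(0, 1.0, sr, endpoint=False)
        y = np.sin(2 * np.pi * freq * t) + 0.1 * rng.normal(size=sr)
        X.append(mfcc_features(y, sr))
        labels.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(labels), test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```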

Speech Emotion Recognition Based on Deep Networks: A Review (딥네트워크 기반 음성 감정인식 기술 동향)

  • Mustaqeem, Mustaqeem;Kwon, Soonil
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.331-334 / 2021
  • In recent years, a significant amount of research and development has addressed the use of deep learning (DL) for speech emotion recognition (SER), particularly based on convolutional neural networks (CNNs). These techniques usually focus on applying CNNs to applications associated with emotion recognition, and numerous deep-learning-based mechanisms have been proposed that are important for SER-based human-computer interaction (HCI) applications. Compared with other methods, DL-based approaches have produced quite promising results in many fields, including automatic speech recognition, and have therefore attracted many studies and investigations. This article reviews and evaluates the improvements made in the SER domain while also discussing the existing SER studies based on DL and CNN methods.
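
As a concrete reference point for the CNN-based SER systems such reviews survey, here is a minimal PyTorch CNN that maps a log-mel-spectrogram-shaped input to emotion logits. The layer sizes and the seven-class output are illustrative assumptions, not a specific architecture from the reviewed papers.

```python
import torch
import torch.nn as nn

class MiniSERNet(nn.Module):
    """Minimal CNN over spectrogram 'images' (1 x mel bins x frames)."""
    def __init__(self, n_emotions=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # global pooling -> fixed size
        )
        self.fc = nn.Linear(32, n_emotions)  # emotion logits

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

# One fake batch: 8 utterances, 64 mel bins, 128 frames.
logits = MiniSERNet()(torch.randn(8, 1, 64, 128))
print(logits.shape)  # torch.Size([8, 7])
```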

Impact Analysis of nonverbal multimodals for recognition of emotion expressed virtual humans (가상 인간의 감정 표현 인식을 위한 비언어적 다중모달 영향 분석)

  • Kim, Jin Ok
    • Journal of Internet Computing and Services / v.13 no.5 / pp.9-19 / 2012
  • A virtual human used for HCI in digital contents expresses various emotions across modalities such as facial expression and body posture. However, few studies have considered combinations of such nonverbal multimodal cues in emotion perception. To implement an emotional virtual human, computational engine models must consider how a combination of nonverbal modalities, such as facial expression and body posture, will be perceived by users. This paper analyzes the impact of nonverbal multimodal cues in the design of an emotion-expressing virtual human. First, the relative impacts of different modalities are analyzed by exploring emotion recognition across the virtual human's modalities. An experiment then evaluates the contribution of congruent facial and postural expressions to the recognition of basic emotion categories, as well as the valence and activation dimensions. The impact of incongruent multimodal expressions on the recognition of superposed emotions, which are known to be frequent in everyday life, is also measured. Experimental results show that congruence between the virtual human's facial and postural expressions facilitates the perception of emotion categories; categorical recognition is influenced by the facial expression modality, whereas the postural modality is preferred when judging the level of the activation dimension. These results will be used in the implementation of an animation engine system and behavior synchronization for the emotion-expressing virtual human.
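
A toy sketch of how an animation engine might combine per-modality emotion evidence, reflecting the finding above that facial expression dominates category judgments. The scores and fusion weights are illustrative assumptions, not the paper's fitted values.

```python
# Per-modality emotion scores for a virtual human (illustrative values).
facial =  {"joy": 0.7, "anger": 0.2, "sadness": 0.1}
posture = {"joy": 0.4, "anger": 0.4, "sadness": 0.2}

# Face weighted higher for the *category* decision, consistent with the
# finding that categorical recognition follows the facial modality.
W_FACE, W_POSE = 0.7, 0.3

def fused_category(facial_scores, posture_scores):
    """Late fusion: weighted sum of modality scores, then argmax."""
    fused = {e: W_FACE * facial_scores[e] + W_POSE * posture_scores[e]
             for e in facial_scores}
    return max(fused, key=fused.get), fused

category, scores = fused_category(facial, posture)
print(category, scores)
```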