• Title/Summary/Keyword: Music Emotion Recognition (음악 감정 인식)


Music player using emotion classification of facial expressions (얼굴표정을 통한 감정 분류 및 음악재생 프로그램)

  • Yoon, Kyung-Seob;Lee, SangWon
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.243-246 / 2019
  • This paper proposes a facial-expression-based music player that, under the themes of emotion, healing, and machine learning, recognizes the user's facial expression with deep learning and plays music based on that expression. The program is built on a CNN model, which has shown excellent performance in image recognition; the model is trained on facial-expression data, and the trained model infers the user's emotion by recognizing the facial expression captured from a webcam. The system then plays a song matched to that emotion so as to reinforce it, thereby helping to soothe and relieve the user's emotional state.

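A minimal sketch of the pipeline this abstract describes: grab a webcam frame, classify the facial expression, and map the predicted emotion to a playlist. The label set, the 48x48 input size, the playlist mapping, and the stub classifier are all assumptions standing in for the paper's trained CNN.

```python
import cv2
import numpy as np

EMOTIONS = ["angry", "happy", "neutral", "sad"]            # assumed label set
PLAYLISTS = {"angry": "calm.m3u", "happy": "upbeat.m3u",
             "neutral": "ambient.m3u", "sad": "consoling.m3u"}

def stub_cnn(face_batch):
    """Placeholder for the paper's trained CNN; returns uniform scores."""
    return np.full((face_batch.shape[0], len(EMOTIONS)), 0.25)

def classify_frame(frame):
    """Grayscale, resize to the assumed 48x48 input, and classify."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = cv2.resize(gray, (48, 48)).astype(np.float32) / 255.0
    probs = stub_cnn(face[None, :, :, None])
    return EMOTIONS[int(np.argmax(probs[0]))]

cap = cv2.VideoCapture(0)                                  # default webcam
ok, frame = cap.read()
cap.release()
if ok:
    emotion = classify_frame(frame)
    print(f"detected: {emotion} -> playing {PLAYLISTS[emotion]}")
```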

Design for Mood-Matched Music Based on Deep Learning Emotion Recognition (딥러닝 감정 인식 기반 배경음악 매칭 설계)

  • Chung, Moonsik;Moon, Nammee
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.834-836 / 2021
  • We design a system that classifies human emotions accurately through multimodal emotion recognition and matches music that suits those emotions. For multimodal emotion recognition, emotions are classified using the IEMOCAP (Interactive Emotional Dyadic Motion Capture) dataset, and a system is built that matches music to the mood of the classified emotion. With a system whose multimodal recognition accuracy improves on unimodal recognition, we study music matching that fits the emotional mood of videos containing text, speech, and facial expressions.
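
The abstract does not specify the fusion method, so the following is a hypothetical late-fusion sketch: per-modality class probabilities from text, speech, and video models are averaged. The four-class label set mirrors common IEMOCAP setups; the per-modality scores are dummies standing in for real model outputs.

```python
import numpy as np

LABELS = ["angry", "happy", "neutral", "sad"]

def late_fusion(*probs, weights=None):
    """Weighted average of per-modality class-probability vectors."""
    stack = np.stack(probs)
    w = np.ones(len(stack)) if weights is None else np.asarray(weights, float)
    return (w[:, None] * stack).sum(axis=0) / w.sum()

text_p  = np.array([0.10, 0.20, 0.30, 0.40])   # stand-ins for model outputs
audio_p = np.array([0.05, 0.15, 0.30, 0.50])
video_p = np.array([0.05, 0.10, 0.35, 0.50])

fused = late_fusion(text_p, audio_p, video_p)
print("fused emotion:", LABELS[int(fused.argmax())])   # -> sad
```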

A Study on the Performance of Music Retrieval Based on the Emotion Recognition (감정 인식을 통한 음악 검색 성능 분석)

  • Seo, Jin Soo
    • The Journal of the Acoustical Society of Korea / v.34 no.3 / pp.247-255 / 2015
  • This paper presents a study on the performance of music search based on automatically recognized music-emotion labels. As with other media such as speech, images, and video, a song can evoke certain emotions in its listeners, and when people look for songs to listen to, the emotions evoked by songs can be an important consideration. However, little research has been done on how music-emotion labels perform in music search. In this paper, we utilize the three axes of human music perception (valence, activity, tension) and five basic emotion labels (happiness, sadness, tenderness, anger, fear) in measuring music similarity for search. Experiments were conducted on both genre and singer datasets. The search accuracy of the proposed emotion-based music search was up to 75% of that of conventional feature-based music search, and by combining the emotion-based method with the feature-based method we achieved up to a 14% improvement in search accuracy.
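
A sketch of the similarity computation in the spirit of the paper: each song gets an 8-dimensional emotion vector (the three perceptual axes plus the five basic emotion strengths), and a combined score linearly mixes the emotion distance with a conventional acoustic-feature distance. The mixing weight and the example vectors are illustrative assumptions, not values from the paper.

```python
import numpy as np

DIMS = ["valence", "activity", "tension",
        "happiness", "sadness", "tenderness", "anger", "fear"]

def emotion_distance(a, b):
    """Euclidean distance between two music-emotion vectors."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def combined_distance(emo_d, feat_d, alpha=0.3):
    """Linear mix of emotion and acoustic-feature distances."""
    return alpha * emo_d + (1.0 - alpha) * feat_d

query = np.array([0.8, 0.7, 0.2, 0.9, 0.0, 0.3, 0.1, 0.0])
song  = np.array([0.7, 0.8, 0.3, 0.8, 0.1, 0.2, 0.1, 0.1])

emo_d = emotion_distance(query, song)
print(f"emotion distance: {emo_d:.3f}, "
      f"combined: {combined_distance(emo_d, feat_d=0.5):.3f}")
```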

A Music Recommendation System based on Fuzzy Inference with User Emotion and Environments (사용자 감정 및 환경을 고려한 퍼지추론 기반 음악추천 시스템)

  • 임성수;조성배
    • Proceedings of the Korean Information Science Society Conference / 2004.10b / pp.541-543 / 2004
  • With the popularization of the Internet, a vast amount of music information has become available online. Accordingly, services that recommend suitable music to users are growing in importance alongside services that merely make music easy to access. This paper proposes an artificial DJ that recognizes the user's situation and recommends appropriate music through dialogue with the user. The artificial DJ reads indoor temperature, humidity, illuminance, and noise from sensors and obtains weather information from the Internet; to infer the user's emotion, it analyzes the sentences the user types and locates the emotion in the Activation-Evaluation Space, thereby recognizing the user's surroundings. It then identifies the user's preferences, builds IF-THEN rules, and recommends music by fuzzy inference using algebraic operators. A survey of ten subjects showed the proposed method to be useful.

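An illustrative fuzzy IF-THEN sketch in the spirit of the artificial DJ: sensor readings are fuzzified with triangular membership functions, AND is the algebraic product, and rules recommending the same genre are combined with the algebraic sum. The membership functions, rules, and genre are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def alg_and(u, v):
    """Algebraic product, used as fuzzy AND."""
    return u * v

def alg_or(u, v):
    """Algebraic sum, used to combine rules with the same conclusion."""
    return u + v - u * v

temp, noise = 28.0, 35.0            # example sensor readings (deg C, dB)
hot   = tri(temp, 22, 30, 38)       # membership of "hot"   -> 0.75
quiet = tri(noise, 0, 20, 50)       # membership of "quiet" -> 0.50

# R1: IF hot AND quiet THEN calm_acoustic;  R2: IF quiet THEN calm_acoustic
calm_acoustic = alg_or(alg_and(hot, quiet), quiet)
print(f"calm_acoustic recommendation strength: {calm_acoustic:.2f}")
```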

Enhancing Music Recommendation Systems Through Emotion Recognition and User Behavior Analysis

  • Qi Zhang
    • Journal of the Korea Society of Computer and Information / v.29 no.5 / pp.177-187 / 2024
  • Existing music recommendation systems do not sufficiently consider the discrepancy between the emotions a song's lyrics are intended to convey and the emotions users actually feel. In this study, we generate topic vectors for lyrics and user comments using the LDA model, and construct a user preference model by combining user behavior trajectories, reflecting time-decay effects and playback frequency, with statistical characteristics. Empirical analysis shows that the proposed model recommends music more accurately than existing models that rely solely on lyrics. This research presents a novel methodology for improving personalized music recommendation by integrating emotion recognition and user behavior analysis.
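
A sketch of the time-decay idea in the user preference model: each listened song carries an LDA topic vector, and plays are weighted by playback count and an exponential recency decay. The decay rate, topic dimensionality, and data are illustrative assumptions.

```python
import numpy as np

def preference_vector(topic_vecs, days_ago, play_counts, decay=0.05):
    """Topic-vector average weighted by play count and recency decay."""
    w = np.asarray(play_counts, float) * np.exp(-decay * np.asarray(days_ago, float))
    v = np.asarray(topic_vecs, float)
    return (w[:, None] * v).sum(axis=0) / w.sum()

topics = [[0.7, 0.2, 0.1],          # LDA topic distribution per listened song
          [0.1, 0.8, 0.1],
          [0.2, 0.2, 0.6]]
pref = preference_vector(topics, days_ago=[1, 30, 90], play_counts=[5, 2, 10])
print("user preference over topics:", np.round(pref, 3))
```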

Music classification system through emotion recognition based on regression model of music signal and electroencephalogram features (음악신호와 뇌파 특징의 회귀 모델 기반 감정 인식을 통한 음악 분류 시스템)

  • Lee, Ju-Hwan;Kim, Jin-Young;Jeong, Dong-Ki;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.41 no.2 / pp.115-121 / 2022
  • In this paper, we propose a system that classifies music according to user emotion using electroencephalogram (EEG) features that appear while listening to music. The proposed system learns the relationship between emotional EEG features extracted from EEG signals and auditory features extracted from music signals through a deep regression neural network. Based on this regression model, the system automatically generates the EEG features mapped to the auditory characteristics of the input music and classifies the music by feeding these features to an attention-based deep neural network. The experimental results confirm the classification accuracy of the proposed automatic music classification framework.
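
A minimal PyTorch sketch of the two-stage idea: a regression network maps per-frame audio features into an EEG feature space, and a simple attention layer pools the generated sequence for classification. All dimensions and the particular attention form are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AudioToEEG(nn.Module):
    """Regresses per-frame audio features onto an EEG feature space."""
    def __init__(self, audio_dim=40, eeg_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(audio_dim, 64), nn.ReLU(),
                                 nn.Linear(64, eeg_dim))

    def forward(self, x):            # x: (batch, frames, audio_dim)
        return self.net(x)           # -> (batch, frames, eeg_dim)

class AttnClassifier(nn.Module):
    """Attention-pooled emotion classifier over the generated sequence."""
    def __init__(self, eeg_dim=16, n_classes=4):
        super().__init__()
        self.score = nn.Linear(eeg_dim, 1)       # per-frame attention score
        self.out = nn.Linear(eeg_dim, n_classes)

    def forward(self, h):            # h: (batch, frames, eeg_dim)
        w = torch.softmax(self.score(h), dim=1)  # weights over frames
        return self.out((w * h).sum(dim=1))      # pooled -> class logits

audio = torch.randn(2, 100, 40)      # dummy batch: 2 clips, 100 frames
logits = AttnClassifier()(AudioToEEG()(audio))
print(logits.shape)                  # torch.Size([2, 4])
```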

Smart Home Automation System Using Emotion and Behavior Recognition (감정과 행동인식을 활용한 스마트홈 자동화 시스템)

  • Lee, Seung-Hui;Lee, Seung-Bin;Ryu, Sang-Uk;Lee, Hye-Won
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.1051-1054 / 2021
  • This system automates a smart home through emotion recognition and behavior recognition of residents captured by a home CCTV camera: it plays music and adjusts lighting to match the recognized emotion, and controls IoT devices with simple gestures. Its distinguishing feature is that the emotion-recognition and behavior-recognition services are designed and implemented as a microservice architecture so that the IoT devices can operate in real time. The result is a system that makes the home environment more convenient and valuable within the smart home.
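
A toy dispatch sketch of the automation rules the abstract describes: recognized (emotion, gesture) events map to IoT actions. The event names and actions are invented; in the paper, the recognizers run as separate microservices that would feed such a controller.

```python
EMOTION_ACTIONS = {"sad": ["play soothing playlist", "dim lights to 30%"],
                   "happy": ["play upbeat playlist", "set lights warm white"]}
GESTURE_ACTIONS = {"raise_hand": "toggle TV", "swipe_left": "next track"}

def on_event(emotion=None, gesture=None):
    """Return the IoT commands triggered by one recognition event."""
    actions = list(EMOTION_ACTIONS.get(emotion, []))
    if gesture in GESTURE_ACTIONS:
        actions.append(GESTURE_ACTIONS[gesture])
    return actions

print(on_event(emotion="sad", gesture="swipe_left"))
```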

Case study of Music & Imagery for Woman with Depression (우울한 내담자를 위한 MI(Music & Imagery) 치료사례)

  • Song, In Ryeong
    • Journal of Music and Human Behavior / v.5 no.1 / pp.67-90 / 2008
  • This case study applied MI techniques that offer imagery experiences drawing on a depressed client's inner resources and help put those experiences into words, with the imagery used at a supportive level of therapy to foster positive change. MI, short for 'Music and Imagery', is a form of the psychotherapeutic method GIM (Guided Imagery and Music); with suitable music it enables clients to enter their inner world and to explore, confront, discern, and resolve what they find there. Supportive-level MI uses only music at a safe level. The introduction of an individual session can evoke a specific feeling, subject, word, or image, and those images are guided toward positive experiences. The first step of an MI session is a prelude that, like an initial interview, sets a concrete goal. The second step is a transition in which the client's story can be expressed concretely. The third step is induction and music listening; tension-relaxation techniques help the client form imagery more easily, and the music invites the exploration of varied imagery. The last step is the process of drawing the imagery and discussing the personal imagery experience with the therapist, which empowers the client by expanding the positive experience. Client A's goals were rapport building (empathy, understanding, and support), the search for positive resources (childhood, family), and positive support for her emotions. The music had to use simple tones, repeated melodies, and steady rhythms, organized harmonically, reflecting both the therapist's and the client's preferences. In sessions 1 and 2, client A relied on defense mechanisms and could not control her emotions because of depression, but after session 3 she was able to experience support and understanding. After session 4 she became stable, shifted from negative to positive emotions, and discovered her spontaneity; by session 6 she recognized that positive times lay ahead of her. For client B, the goals were rapport building (empathy, understanding, and support), exploration of issues and positive resources (childhood, family), and expression and insight (present, future). The music in sessions 1 and 2 was comfortable and well organized, but from session 3 its development grew larger, the main melody varied with rises and falls in pitch, and classical and romantic repertoire was used. Client B had been avoiding difficult personal relationships by retreating into religious relationships, yet in sessions 1 and 2 she had supportive, empathic experiences because the music was her favorite and supportive. After session 3 she recognized and faced her present issues, though with ambivalence between avoidance and confrontation. After session 4 she experienced emotional change related to her depression and faced her issues, and in sessions 5 and 6 she worked to build the will for a healthy life, a fair attitude, mental strength, and a problem-solving attitude for the future. In this way, the MI program addressed the clients' actual issues more directly than a GIM program: MI can work on the client's presenting issue without approaching the unconscious as GIM does, can use a wider variety of music with shorter and more structured listening, and lets the client's emotions be expressed well. It can therefore serve as a corrective and complementary program for children, adolescents, and adults.


On the Importance of Tonal Features for Speech Emotion Recognition (음성 감정인식에서의 톤 정보의 중요성 연구)

  • Lee, Jung-In;Kang, Hong-Goo
    • Journal of Broadcast Engineering / v.18 no.5 / pp.713-721 / 2013
  • This paper examines the efficiency of chroma-based tonal features for speech emotion recognition. Just as the tonality caused by major or minor keys affects the perception of musical mood, the tonality of speech affects the perception of the emotional state of spoken utterances. To justify this assertion, subjective listening tests were carried out using signals synthesized from chroma features; the results show that tonality contributes especially to the perception of negative emotions such as anger and sadness. In automatic emotion recognition tests, the modified chroma-based tonal features produce a noticeable improvement in accuracy when they supplement conventional log-frequency power coefficient (LFPC) based spectral features.
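
A sketch of the feature setup the paper studies: chroma-based tonal features computed alongside log-frequency spectral features and concatenated per utterance. librosa's chroma_stft stands in for the chroma computation, and log-mel energies stand in for the paper's LFPC features; the test tone is synthetic.

```python
import numpy as np
import librosa

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220.0 * t)            # 1 s synthetic tone (A3)

chroma = librosa.feature.chroma_stft(y=y, sr=sr)   # (12, frames) tonal part
logmel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))

# Utterance-level vector: mean spectral features plus mean chroma.
feat = np.concatenate([logmel.mean(axis=1), chroma.mean(axis=1)])
print(feat.shape)                                  # (128 + 12,) = (140,)
```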

Real-time Background Music System for Immersive Dialogue in Metaverse based on Dialogue Emotion (메타버스 대화의 몰입감 증진을 위한 대화 감정 기반 실시간 배경음악 시스템 구현)

  • Kirak Kim;Sangah Lee;Nahyeon Kim;Moonryul Jung
    • Journal of the Korea Computer Graphics Society / v.29 no.4 / pp.1-6 / 2023
  • Background music is often used to enhance immersion in metaverse environments. However, it is mostly pre-matched and repeated, which can be distracting because it does not align well with rapidly changing user-interactive content. We therefore implemented a system that provides a more immersive metaverse conversation experience by 1) developing a regression neural network that extracts emotion from an utterance, trained on KEMDy20, a Korean multimodal emotion dataset; 2) selecting music corresponding to the extracted emotion using the DEAM dataset, in which music is tagged with arousal-valence levels; and 3) combining these with a virtual space where users can hold real-time conversations with avatars.
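
A sketch of step 2 of this pipeline: choose the track whose arousal-valence tag (as in DEAM) lies nearest to the arousal-valence point predicted from the utterance. The predicted point and the tiny track table are invented for illustration.

```python
import numpy as np

TRACKS = {"track_01": (0.8, 0.7),   # (arousal, valence) tags, DEAM-style
          "track_02": (0.2, 0.3),
          "track_03": (0.5, 0.9)}

def nearest_track(pred, tracks=TRACKS):
    """Track id minimizing Euclidean distance in arousal-valence space."""
    p = np.asarray(pred, float)
    return min(tracks, key=lambda k: np.linalg.norm(p - np.asarray(tracks[k])))

predicted_av = (0.3, 0.25)          # e.g. output of the utterance regressor
print(nearest_track(predicted_av))  # -> track_02
```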