• Title/Summary/Keyword: Music Generation

Search results: 111 (processing time: 0.026 seconds)

A Survey on the Use of Music by the Baby Boomer Generation (베이비부머 세대의 음악활용에 대한 조사연구)

  • Jeon, So-Won;Park, Hye-Young
    • Journal of Digital Convergence
    • /
    • v.18 no.12
    • /
    • pp.37-46
    • /
    • 2020
  • The purpose of this survey was to examine the trends, purposes, and overall needs regarding the use of music by the baby boomer generation. Online and offline surveys were conducted with 87 participants aged 57 to 66 living in five cities in Korea. As a result, first, the baby boomer generation mostly listened to popular music on their mobile phones when resting alone, and the purposes of music use were, in order, happiness, hobby, confidence, and fulfillment. Second, the results showed that the baby boomer generation hoped to participate in singing and lyric discussion in a club setting. Third, regarding the purpose of music use by gender, female participants focused significantly more on volunteering than male participants. This study is meaningful in that it provides basic data for developing music programs that consider the characteristics of the baby boomer generation.

Automatic Music-Story Video Generation Using Music Files and Photos in Automobile Multimedia System (자동차 멀티미디어 시스템에서의 사진과 음악을 이용한 음악스토리 비디오 자동생성 기술)

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.9 no.5
    • /
    • pp.80-86
    • /
    • 2010
  • This paper presents an automated music-story video generation technique as one of the entertainment features equipped in the multimedia system of a vehicle. The system automatically creates stories that accompany music, using photos stored on the user's mobile phone, by connecting the phone to the vehicle's multimedia system. Users watch the generated music-story video while listening to music that matches their mood. The performance of the automated music-story video generation is measured by the accuracies of music classification, photo classification, and text-keyword extraction, and by the results of a user MOS test.

Implementation of Melody Generation Model Through Weight Adaptation of Music Information Based on Music Transformer (Music Transformer 기반 음악 정보의 가중치 변형을 통한 멜로디 생성 모델 구현)

  • Seunga Cho;Jaeho Lee
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.5
    • /
    • pp.217-223
    • /
    • 2023
  • In this paper, we propose a new model for the conditional generation of music that considers key and rhythm, fundamental elements of music. MIDI sheet music is converted into WAV format, which is then transformed into a Mel spectrogram using the Short-Time Fourier Transform (STFT). From this representation, key and rhythm are classified by two Convolutional Neural Networks (CNNs), and this information is then fed into the Music Transformer. The key and rhythm information is combined by differentially multiplying the weights with the embedding vectors of the MIDI events. Several experiments are conducted, including a process for determining the optimal weights. This research represents a new effort to integrate essential elements into music generation; it explains the detailed structure and operating principles of the model and verifies its effects and potential through experiments. In this study, the accuracy of rhythm classification reached 94.7%, the accuracy of key classification reached 92.1%, and the negative log-likelihood based on the embedding-vector weights was 3.01.
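The conditioning step described above can be illustrated with a minimal sketch: scale each MIDI-event embedding by weighted key and rhythm embedding vectors. The dimensions, weights, and the exact combination formula below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 8                                  # embedding dimension (illustrative)
event_emb = rng.normal(size=(4, d_model))    # embeddings for 4 MIDI events
key_emb = rng.normal(size=(d_model,))        # embedding of the classified key
rhythm_emb = rng.normal(size=(d_model,))     # embedding of the classified rhythm

w_key, w_rhythm = 0.3, 0.2                   # condition weights (would be tuned)

# Modulate each event embedding element-wise by the weighted condition vectors.
conditioned = event_emb * (1.0 + w_key * key_emb + w_rhythm * rhythm_emb)

print(conditioned.shape)                     # same shape as the event embeddings
```

The `1.0 +` term keeps the event embedding dominant when the condition weights are small, so the weights act as a dial on how strongly key and rhythm steer generation.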

Music Generation Method by DNA as a Game Background Music (DNA염기배열에 의한 게임 배경 음악 생성방법)

  • Park, Young-B.;Hwang, Cheol-Ho
    • Journal of Korea Game Society
    • /
    • v.1 no.1
    • /
    • pp.88-93
    • /
    • 2001
  • It is becoming easier to copy digital media, and illegal copying of digital media infringes copyright. As a result, it is getting harder to find good game background music. In this study, an automatic music generation method based on DNA base sequences is proposed. Through this method, game background music can be provided very easily. Since thousands of DNA sequences have already been identified, thousands of pieces of game background music can be generated with this method.
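The core idea, mapping the four DNA bases to musical material, can be sketched as follows. The base-to-note mapping here is my own illustrative choice, not the one used in the paper.

```python
# Map each DNA base (A, C, G, T) to a note name; any base sequence
# then deterministically yields a melody.
BASE_TO_NOTE = {"A": "C4", "C": "E4", "G": "G4", "T": "A4"}

def dna_to_melody(sequence: str) -> list[str]:
    """Convert a DNA base string into a list of note names, skipping unknowns."""
    return [BASE_TO_NOTE[b] for b in sequence.upper() if b in BASE_TO_NOTE]

melody = dna_to_melody("ACGTGGA")
print(melody)  # ['C4', 'E4', 'G4', 'A4', 'G4', 'G4', 'C4']
```

Because the mapping is deterministic, every known DNA sequence yields a distinct, reproducible background track.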


A Study on the amusement Experience of Mobile Music Play in the MZ Generation (MZ세대의 모바일 음악재생에 대한 유희적 경험 연구)

  • Lee, Ji-Su;Choe, Jong-Hoon
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.12
    • /
    • pp.177-183
    • /
    • 2021
  • Unlike the value-oriented tendency of the older generation, the MZ generation, emerging as a new consumer group, recognizes consumption as a kind of play and values individual happiness and satisfaction, revealing clearly playful characteristics compared to other generations. This study therefore identified users' needs for interaction elements that can add a playful stimulus to the music-player UI, a key function of music streaming apps, one of the universal play activities of the MZ generation. Through previous studies, the concepts and characteristics of microinteraction and gamification as play elements, the relationship between play and art, and the playful characteristics of the MZ generation were summarized. In addition, the current state was identified through case analysis of existing music apps' player UIs, and user interviews were conducted using the contextual inquiry method. Afterward, analysis of user behavior patterns confirmed a positive need to apply playful elements to the music-player UI, and on this basis the study found that playful interaction can provide an immersive music listening experience.

Music Recognition Using Audio Fingerprint: A Survey (오디오 Fingerprint를 이용한 음악인식 연구 동향)

  • Lee, Dong-Hyun;Lim, Min-Kyu;Kim, Ji-Hwan
    • Phonetics and Speech Sciences
    • /
    • v.4 no.1
    • /
    • pp.77-87
    • /
    • 2012
  • Interest in music recognition has grown dramatically since NHN and Daum released their mobile applications for music recognition in 2010. Methods of music recognition based on audio analysis fall into two categories: music recognition using audio fingerprints and Query-by-Singing/Humming (QBSH). While music recognition using an audio fingerprint takes recorded music as its input, QBSH takes a user-hummed melody. In this paper, research trends in music recognition using audio fingerprints are described, focusing on two methods: one based on fingerprint generation from energy differences between consecutive bands, and the other based on hash-key generation from pairs of spectral peaks. Details from the representative papers of each method are introduced.
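The first family of methods mentioned above can be sketched compactly: each fingerprint bit records whether the energy difference between two adjacent frequency bands increases or decreases from one frame to the next. The band energies below are random stand-ins for real spectrogram values, and the exact bit formula is one common formulation, not necessarily the one in the surveyed papers.

```python
import numpy as np

def fingerprint_bits(energy: np.ndarray) -> np.ndarray:
    """energy: (frames, bands) band-energy matrix -> (frames-1, bands-1) bit matrix."""
    # Difference between adjacent bands within each frame...
    diff = energy[:, :-1] - energy[:, 1:]
    # ...then the sign of its change across consecutive frames gives the bits.
    return (diff[1:] - diff[:-1] > 0).astype(np.uint8)

rng = np.random.default_rng(42)
energy = rng.random((5, 33))     # 5 frames, 33 bands -> 32-bit sub-fingerprints
bits = fingerprint_bits(energy)
print(bits.shape)                # one 32-bit row per frame transition
```

Matching then reduces to counting bit errors (Hamming distance) between the query's sub-fingerprints and those stored in the database.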

Application and Research of Monte Carlo Sampling Algorithm in Music Generation

  • MIN, Jun;WANG, Lei;PANG, Junwei;HAN, Huihui;Li, Dongyang;ZHANG, Maoqing;HUANG, Yantai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.10
    • /
    • pp.3355-3372
    • /
    • 2022
  • Composing music is an inspired yet challenging task: the process involves many considerations, such as assigning pitches, determining rhythm, and arranging accompaniment. Algorithmic composition aims to develop algorithms for music composition, and algorithmic composition using artificial intelligence technologies has recently received considerable attention. In particular, computational intelligence is widely used and achieves promising results in the creation of music. This paper surveys music generation based on the Monte Carlo (MC) algorithm. First, MIDI files are converted into numeric data; the logistic fitting method is used to fit the time series and obtain its distribution pattern. Besides the time series, the converted data also include duration, pitch, and velocity. Second, MC simulation is used to estimate their respective distributions. The two main control parameters are the number of discrete samples and the standard deviation. These parameters are processed, the data are converted back to a MIDI file, and the result is compared with the output of an LSTM neural network to evaluate the music comprehensively.
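The sampling step can be illustrated with a minimal sketch: draw note attributes from empirical distributions estimated from the training data. The pitch and duration distributions below are illustrative stand-ins, not values from the paper.

```python
import random

random.seed(7)

# Empirical distributions (illustrative): MIDI pitch numbers of the C major
# scale with observed counts, and note durations in fractions of a whole note.
pitches = [60, 62, 64, 65, 67, 69, 71]
pitch_weights = [4, 2, 3, 1, 4, 2, 1]
durations = [0.25, 0.5, 1.0]
duration_weights = [5, 3, 2]

def sample_notes(n: int) -> list[tuple[int, float]]:
    """Monte Carlo sampling of n (pitch, duration) pairs from the distributions."""
    ps = random.choices(pitches, weights=pitch_weights, k=n)
    ds = random.choices(durations, weights=duration_weights, k=n)
    return list(zip(ps, ds))

notes = sample_notes(8)
print(len(notes))
```

In the full pipeline, the sampled attribute sequences would be written back to a MIDI file (e.g. with a library such as `mido`) for playback and comparison against the LSTM baseline.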

Seeking for Underlying Meaning of the 'house' and Characteristics in Music Video - Analyzing Seotaiji and Boys and BTS Music Video in Perspective of Generation - (뮤직비디오에 나타난 '집'의 의미와 성격 - 서태지와 아이들, 방탄소년단 작품에 대한 세대론적 접근 -)

  • Kil, Hye Bin;Ahn, Soong Beum
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.5
    • /
    • pp.24-34
    • /
    • 2019
  • This study compares in depth the songs performed by two groups, one by 'Seo Taiji and Boys' (X Generation) and the other by 'BTS' (C Generation), based on the discourse about the 'X Generation' in the 1990s and the 'C Generation' in the 2010s. It focuses specifically on the nature of the 'home' that holds great significance in each music video and seeks its sociocultural meaning. Based on the analysis, the original performance by 'Seo Taiji and Boys' demonstrated a vertical structure of enlightenment and discipline and narrated the story with a plot of 'maturity'. The meaning of 'home' in the original version shifts from a target of resistance to a subject of internalization. The remake music video by BTS demonstrated a horizontal structure of empathy and solidarity and narrated the story with a plot of 'pursuit/discovery'. The 'home' here can be considered the very life of a person who maintains his or her self-identity.

Alternative Music - Ambiguity of Genre & Beat Generation (얼터너티브 음악 - 장르의 모호함과 비트 제너레이션)

  • Kim, Sung Soo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.9
    • /
    • pp.4212-4217
    • /
    • 2013
  • Since genres of popular music began to be classified, alternative music has been the form with the most complex and diverse sub-genres. This paper focuses mainly on alternative rock, which began to attract attention in the 1990s, and analyses the causes of the genre's ambiguity. From this we can examine the inevitable limitations of traditional classification based on musical form alone.

Stylized Image Generation based on Music-image Synesthesia Emotional Style Transfer using CNN Network

  • Xing, Baixi;Dou, Jian;Huang, Qing;Si, Huahao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.4
    • /
    • pp.1464-1485
    • /
    • 2021
  • The emotional style of a multimedia artwork is abstract content information. This study explores an emotional style transfer method to find a possible way of matching music with images of an appropriate emotional style. Deep Convolutional Neural Networks (DCNNs) can capture style and provide an iterative style-transfer solution for affective image generation. Here, we learn image emotion features via DCNNs and map the affective style onto other images. We set the image emotion feature as the style target in this style transfer problem and conducted experiments on affective image generation for eight emotion categories: dignified, dreaming, sad, vigorous, soothing, exciting, joyous, and graceful. A user study tested the synesthetic emotional style transfer results against ground-truth user perception triggered by the music-image pair stimuli. The transferred affective images for music-image emotional synesthesia proved effective according to the user study results.
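The style target mentioned above is conventionally represented by the Gram matrix of CNN feature maps, the standard style representation in neural style transfer; a minimal sketch follows, with random arrays standing in for real DCNN feature maps (the paper's actual network and loss may differ).

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """features: (channels, height, width) -> (channels, channels) Gram matrix."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)   # channel-by-channel feature correlations

rng = np.random.default_rng(0)
content_feat = rng.normal(size=(16, 8, 8))   # feature maps of the image being stylized
style_feat = rng.normal(size=(16, 8, 8))     # feature maps of the emotion exemplar

# Style loss: mean squared difference between the two Gram matrices;
# iterative transfer updates the image to drive this loss down.
style_loss = np.mean((gram_matrix(content_feat) - gram_matrix(style_feat)) ** 2)
print(style_loss.shape)
```

Because the Gram matrix discards spatial layout and keeps only feature correlations, it captures texture-like "style" (here, emotional style) independently of image content.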