• Title/Summary/Keyword: Music Key


Music Key Identification using Chroma Features and Hidden Markov Models

  • Kanyange, Pamela;Sin, Bong-Kee
    • Journal of Korea Multimedia Society / v.20 no.9 / pp.1502-1508 / 2017
  • A musical key is a fundamental concept in Western music theory. It is a collective characterization of the pitches and chords that together create the musical perception of an entire piece, and it is based on the group of scale pitches from which the music is constructed. Each key specifies which seven primary notes, out of the twelve possible chromatic notes, are used. This paper presents a method that identifies the key of a song from a sequence of chroma features using Hidden Markov Models. Given an input song, a sequence of chroma features is computed and then classified into one of the 24 keys using discrete Hidden Markov Models. The proposed method can help musicians and disc jockeys mix segments of tracks into a medley. When tested on 120 songs, the success rate of key identification reached about 87.5%.
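The pipeline described in this abstract — chroma frames scored against 24 key classes — can be sketched as follows. This is a minimal illustration, not the paper's trained discrete HMMs: each key is reduced to a single-state model whose emission distribution is a normalized Krumhansl-Kessler key profile, a common hand-crafted stand-in for learned distributions. All values and names here are illustrative assumptions.

```python
import numpy as np

# Krumhansl-Kessler key profiles (major and minor); a hand-crafted stand-in
# for the emission distributions the paper learns with discrete HMMs.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

PITCHES = ["C", "C#", "D", "D#", "E", "F",
           "F#", "G", "G#", "A", "A#", "B"]

def key_profiles():
    """Return a (24, 12) matrix: 12 major keys followed by 12 minor keys."""
    profs = [np.roll(MAJOR, k) for k in range(12)]
    profs += [np.roll(MINOR, k) for k in range(12)]
    profs = np.array(profs)
    return profs / profs.sum(axis=1, keepdims=True)  # normalize to emission probs

def identify_key(chroma_seq):
    """Score a sequence of 12-dim chroma frames against all 24 keys.

    Each key is treated as a single-state model; the log-likelihood of the
    sequence is the sum of per-frame log emission scores.
    """
    profs = key_profiles()                                # (24, 12)
    chroma = np.asarray(chroma_seq, float)
    chroma = chroma / chroma.sum(axis=1, keepdims=True)
    loglik = (chroma @ np.log(profs).T).sum(axis=0)       # (24,)
    best = int(np.argmax(loglik))
    mode = "major" if best < 12 else "minor"
    return f"{PITCHES[best % 12]} {mode}"

# A toy C-major chroma sequence: energy concentrated on C, E, G.
frames = np.tile([1, 0, 0, 0, .8, 0, 0, .9, 0, 0, 0, 0], (8, 1)) + 0.01
print(identify_key(frames))  # "C major"
```

A real system would compute the chroma frames from audio (e.g. by folding an STFT onto the 12 pitch classes) and would model temporal structure with actual HMM states and transitions, which is what distinguishes the paper's method from this template match.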

Implementation of Melody Generation Model Through Weight Adaptation of Music Information Based on Music Transformer (Music Transformer 기반 음악 정보의 가중치 변형을 통한 멜로디 생성 모델 구현)

  • Seunga Cho;Jaeho Lee
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.5 / pp.217-223 / 2023
  • In this paper, we propose a new model for the conditional generation of music that considers key and rhythm, two fundamental elements of music. MIDI sheet music is converted into WAV format, which is then transformed into a Mel spectrogram using the Short-Time Fourier Transform (STFT). From this representation, key and rhythm are classified by two Convolutional Neural Networks (CNNs), and the resulting information is fed back into the Music Transformer. The key and rhythm information is combined with the embedding vectors of the MIDI events by differential weight multiplication. Several experiments are conducted, including a procedure for determining the optimal weights. This research is a new effort to integrate these essential elements into music generation; it explains the detailed structure and operating principles of the model and verifies its effects and potential through experiments. In this study, the accuracy of rhythm classification reached 94.7%, the accuracy of key classification reached 92.1%, and the negative log-likelihood based on the weighted embedding vectors was 3.01.
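One plausible reading of the weight-based conditioning step can be sketched as below. The abstract does not give the exact fusion formula, so this is a guessed additive scheme with tunable per-condition weights, not the paper's actual model; all dimensions, names, and weight values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16            # embedding dimension (illustrative)
N_EVENTS = 8      # length of a toy MIDI event sequence

# Hypothetical learned embeddings: one per MIDI event, plus one vector each
# for the key class and the rhythm class predicted by the two CNNs.
event_emb = rng.normal(size=(N_EVENTS, D))
key_emb = rng.normal(size=D)
rhythm_emb = rng.normal(size=D)

def fuse(event_emb, key_emb, rhythm_emb, w_key=0.3, w_rhythm=0.2):
    """Blend condition vectors into each MIDI event embedding.

    w_key and w_rhythm play the role of the paper's tunable weights,
    whose best values were reportedly found by experiment.
    """
    return event_emb + w_key * key_emb + w_rhythm * rhythm_emb

fused = fuse(event_emb, key_emb, rhythm_emb)
print(fused.shape)  # (8, 16)
```

The conditioned sequence would then be consumed by the Transformer decoder in place of the raw event embeddings; sweeping `w_key` and `w_rhythm` corresponds to the weight-selection experiments the abstract mentions.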

Music Recognition Using Audio Fingerprint: A Survey (오디오 Fingerprint를 이용한 음악인식 연구 동향)

  • Lee, Dong-Hyun;Lim, Min-Kyu;Kim, Ji-Hwan
    • Phonetics and Speech Sciences / v.4 no.1 / pp.77-87 / 2012
  • Interest in music recognition has grown dramatically since NHN and Daum released their mobile applications for music recognition in 2010. Methods of music recognition based on audio analysis fall into two categories: music recognition using audio fingerprints and Query-by-Singing/Humming (QBSH). While fingerprint-based music recognition takes recorded music as its input, QBSH takes a user-hummed melody. In this paper, research trends in music recognition using audio fingerprints are described, focusing on two methods: one based on fingerprint generation from the energy difference between consecutive bands, and the other based on hash key generation from peak points. Details from the representative papers of each method are introduced.
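The first family of methods the survey covers — fingerprint bits from band-energy differences — can be sketched in a few lines. This is a generic illustration of the energy-difference idea (bit = sign of the change in adjacent-band energy difference between consecutive frames), not any specific system's exact formula; the toy energy matrix is made up.

```python
import numpy as np

def fingerprint_bits(band_energy):
    """Derive fingerprint bits from a (frames, bands) energy matrix.

    Bit (n, m) is 1 when the adjacent-band energy difference grows from
    frame n-1 to frame n:  E(n,m)-E(n,m+1) > E(n-1,m)-E(n-1,m+1).
    """
    e = np.asarray(band_energy, float)
    diff = e[:, :-1] - e[:, 1:]                 # adjacent-band differences per frame
    bits = (diff[1:] > diff[:-1]).astype(int)   # compare consecutive frames
    return bits

# Two frames, three bands: frame 0 diffs (0.5, 0.3), frame 1 diffs (-0.5, 0.8).
energy = np.array([[1.0, 0.5, 0.2],
                   [0.4, 0.9, 0.1]])
print(fingerprint_bits(energy))  # [[0 1]]
```

Matching a query then reduces to comparing these bit matrices (e.g. by Hamming distance) against a database of fingerprints, which is what makes the representation robust to noise and compression.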

A Study on the Trend of Korean Pop Music Preference Through Digital Music Market (디지털 음악 시장을 통해 본 한국 대중가요 선호경향에 관한 연구)

  • Chung, Ji-Yun;Kim, Myoung-Jun
    • Journal of Digital Contents Society / v.18 no.6 / pp.1025-1032 / 2017
  • Recently, the domestic popular song market has grown mainly through digital sound sources. Analyzing the top-100 music charts from 2012 to 2016 through digital sound sources and musical scores shows that the average annual BPM fell by 11.26 over the five years. The style of music has diversified every year, and the share of hip-hop doubled from 8.5% in 2012 to 17.8% in 2015. Dance music and ballads both have high preference rates, but the two are inversely proportional, as is the ratio of female solo acts to male groups. Notably, the relationship between BPM and major/minor keys is that 81.42% of slow-tempo songs are in a major key, while 53.85% of fast-tempo songs are in a minor key. For TV drama OSTs, solo singers' music was preferred, and the music style was 80% pop and 20% ballad.

An fMRI Study on the Differences in the Brain Regions Activated by an Identical Audio-Visual Clip Using Major and Minor Key Arrangements (동일한 영상자극을 이용한 장조음악과 단조음악에 의해 유발된 뇌 활성화의 차이 : fMRI 연구)

  • Lee, Chang-Kyu;Eum, Young-Ji;Kim, Yeon-Kyu;Watanuki, Shigeki;Sohn, Jin-Hun
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.05a / pp.109-112 / 2009
  • The purpose of this study was to examine the differences in brain activation evoked by music arranged in a major versus a minor key, presented with an identical motion film during fMRI testing. Part of the audio-visual combinations composed by Iwamiya and Sano were used as the study stimuli. The audio-visual clip was originally developed by combining a short motion segment of the animation "The Snowman" with the jazz tune "Avalon" rewritten in a classical style and arranged in both major and minor keys. Twenty-seven Japanese male graduate and undergraduate students participated in the study. Brain regions more activated by the major key than the minor key, given the identical motion film, were the left cerebellum, the right fusiform gyrus, the right superior occipital, the left superior orbitofrontal, the right pallidum, the left precuneus, and the bilateral thalamus. Conversely, regions more activated by the minor key than the major key were the right medial frontal, the left inferior orbitofrontal, the bilateral superior parietal, the left postcentral, and the right precuneus. The study showed a difference in the brain regions activated by the two stimuli (i.e., major key and minor key) while controlling for the visual aspect of the experiment. These findings imply that the brain systematically differs in how it processes music written in major and minor keys. (Supported by the User Science Institute of Kyushu University, Japan, and the Korea Science and Engineering Foundation.)

ENGLISH RESTRUCTURING AND A USE OF MUSIC IN TEACHING ENGLISH PRONUNCIATION

  • Kim, Key-Seop
    • Proceedings of the KSPS conference / 2000.07a / pp.117-134 / 2000
  • This study has two aims: one is to clarify the restructuring of English in utterance, and the other is to relate it to teaching English pronunciation for listening and speaking through music and song, by suggesting a model 10-15 minute pronunciation-class syllabus for every class period. Generally, English utterances are restructured by stress-timed rhythm, irrespective of syntactic boundaries. The rhythmic units are thus arranged in isochronous groups, formed by attaching clitics to a host or head, usually leftwards and sometimes rightwards, which results in linking, contraction, reduction, sound change, and rhythm adjustment in utterance, just as in music and song. With this restructuring in focus, a model English pronunciation class syllabus is proposed for every period of a lesson or unit. It relates the focused pronunciation factor(s) to integrated skills, making use of teaching techniques and music.

The experimental study on the influence of chamber music teaching on the mental health of college students in music universities

  • Wu, Tianyi
    • International Journal of Advanced Culture Technology / v.10 no.4 / pp.277-285 / 2022
  • The purpose of the study was to examine the effects of teaching chamber music courses on the mental health of college students in music schools. The key results are as follows. There was a significant difference in the overall level of mental health between the experimental and control classes after the experiment. The overall mental health levels of male and female college students in the experimental class each differed significantly after the experiment. There was no significant difference in the ten SCL-90 factors in the control class before and after the experiment, while there was a significant difference in the ten SCL-90 factors in the experimental class before and after the experiment. The experimental teaching of chamber music courses improved the mental health of female college students more than that of male college students. We conclude that teaching chamber music courses can significantly improve the mental health of college students in music schools.

Successful Business Model on Music Industry: SK Telecom MelOn (음악 산업에 있어 성공적 비즈니스 모델인 SK Telecom의 멜론(MelOn))

  • Yoo, Pil Hwa;Lee, Sukekyu;Kim, Kyoungsik
    • Asia Marketing Journal / v.8 no.3 / pp.141-159 / 2006
  • On November 16th, 2004, SK Telecom Co., Ltd. launched 'MelOn', a ubiquitous music service, for the first time in the world. As a new business model in Korea, it was a great success in the domestic music industry. Its key success factors are not only the differentiated features of the service but also superior technical know-how, a customer-oriented spirit, and marketing-mix activities compared to its many competitors. The case shows how MelOn developed, introduces what an on-line music service is, identifies the key success factors, and presents visible outcomes. It furthermore discusses the business implications and future challenges.

Determining Key Features of Recognition Korean Traditional Music Using Spectrogram

  • Kim Jae Chun;Kwak Kyung Sup
    • The Journal of the Acoustical Society of Korea / v.24 no.2E / pp.67-70 / 2005
  • To realize a traditional music recognition system, characteristics pertinent to Far East Asian music must be identified. Using spectrograms, several distinct attributes of Korean traditional music are surveyed: the frequency distribution, beat cycle, and frequency energy intensity within the samples each have distinct characteristics of their own. A preliminary experiment toward a Korean traditional music recognition system is performed. Using these characteristics of Korean traditional music, a classification accuracy of 94.5% is achieved. As Korea, Japan, and China share the same musical roots, both in instruments and in playing style, analyzing Korean traditional music can be helpful in understanding Far East Asian traditional music.
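The spectrogram representation underlying this kind of feature analysis can be computed directly with a short-time Fourier transform. The sketch below is a generic Hann-windowed magnitude spectrogram, not the paper's specific feature extractor; the frame size, hop, and test tone are illustrative choices.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, n_fft // 2 + 1)

# A 440 Hz tone at 8 kHz: bin resolution is 8000/256 = 31.25 Hz,
# so energy should peak near bin 440/31.25 ≈ 14.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(int(np.argmax(spec.mean(axis=0))))
```

Features like the frequency distribution and energy intensity mentioned in the abstract are then statistics computed over this time-frequency matrix, while beat cycle estimates come from periodicities along its time axis.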

Extraction of Chord and Tempo from Polyphonic Music Using Sinusoidal Modeling

  • Kim, Do-Hyoung;Chung, Jae-Ho
    • The Journal of the Acoustical Society of Korea / v.22 no.4E / pp.141-149 / 2003
  • As music in digital form has become widely used, many people have become interested in automatically extracting the intrinsic information of music itself, such as the key of a piece, chord progression, melody progression, and tempo. Although some studies have been attempted, consistent and reliable musical information extraction has not been achieved. In this paper, we propose a method to extract chord and tempo information from general polyphonic music signals. A chord can be expressed as a combination of musical notes, and each note in turn consists of several frequency components; extracting chord information therefore requires analyzing the frequency components in the musical signal. In this study, we utilize sinusoidal modeling, which uses sinusoids corresponding to the frequencies of musical tones, and we show reliable chord extraction results. We also find that the tempo of music, one of its notable features, interactively supports the chord extraction when the two are used together. The proposed musical feature extraction scheme can be used in many application fields, such as digital music services queried by musical features, the operation of music databases, and music players with a chord-display function.
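The step from detected sinusoid frequencies to a chord label can be illustrated as below. This is a deliberately simplified sketch of the idea — map each sinusoid to a pitch class and match against major/minor triad templates — not the paper's sinusoidal-modeling algorithm; the note frequencies and template set are assumptions.

```python
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_class(freq_hz):
    """Map a frequency to its pitch class (0 = C), via the MIDI note number."""
    midi = 69 + 12 * np.log2(freq_hz / 440.0)   # 440 Hz = A4 = MIDI 69
    return int(round(midi)) % 12

def match_chord(freqs):
    """Match detected sinusoid frequencies against major/minor triad templates."""
    classes = {pitch_class(f) for f in freqs}
    best, best_hits = None, -1
    for root in range(12):
        for name, intervals in (("maj", (0, 4, 7)), ("min", (0, 3, 7))):
            hits = sum(((root + i) % 12) in classes for i in intervals)
            if hits > best_hits:
                best, best_hits = f"{NOTE_NAMES[root]} {name}", hits
    return best

# Sinusoidal partials at C4, E4, G4 — a C major triad.
print(match_chord([261.63, 329.63, 392.0]))  # "C maj"
```

In the paper's setting, the input frequencies would come from the sinusoidal model fitted to the polyphonic signal, and the tempo estimate would help segment the signal into frames over which one chord is assumed to hold.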