• Title/Summary/Keyword: Based Music

Search Results: 1,314

Automatic Music Summarization Using Similarity Measure Based on Multi-Level Vector Quantization (다중레벨 벡터양자화 기반의 유사도를 이용한 자동 음악요약)

  • Kim, Sung-Tak;Kim, Sang-Ho;Kim, Hoi-Rin
    • The Journal of the Acoustical Society of Korea, v.26 no.2E, pp.39-43, 2007
  • Music summarization refers to a technique which automatically extracts the most important and representative segments of music content. In this paper, we propose and evaluate a technique which provides the repeated part of music content as a music summary. To extract a repeated segment, the proposed algorithm uses a weighted sum of similarity measures based on multi-level vector quantization, for either fixed-length or optimal-length summaries. Two similarity measures are proposed: a count-based measure and a distance-based measure, which use, respectively, the number of identical codewords and the Mahalanobis distance of features that have the same codeword at the same position in the compared segments. Fixed-length summaries are evaluated by measuring the overlap between hand-annotated repeated parts and automatically generated ones; optimal-length summaries are evaluated by how much of the repeated parts of the music content they include. From experiments we observed that, in terms of summary length, optimal-length summaries capture the repeated parts of music content more effectively than fixed-length summaries.
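The count-based measure described above can be sketched in a few lines: at each vector-quantization level, count the frame positions where two segments share a codeword, then combine the levels with a weighted sum. The codeword sequences, the two-level codebook, and the equal weights below are invented for illustration, not taken from the paper.

```python
# Count-based similarity across multiple VQ levels (illustrative sketch).

def count_similarity(seq_a, seq_b):
    """Fraction of frame positions where both segments hold the same codeword."""
    assert len(seq_a) == len(seq_b)
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return matches / len(seq_a)

def multilevel_similarity(levels_a, levels_b, weights):
    """Weighted sum of per-level count-based similarities."""
    return sum(w * count_similarity(a, b)
               for w, a, b in zip(weights, levels_a, levels_b))

# Two 8-frame segments quantized with 2 codebook levels (coarse, fine).
segment_a = [[0, 1, 1, 2, 0, 3, 1, 2], [4, 5, 5, 6, 4, 7, 5, 6]]
segment_b = [[0, 1, 2, 2, 0, 3, 1, 1], [4, 5, 6, 6, 4, 7, 5, 5]]
score = multilevel_similarity(segment_a, segment_b, weights=[0.5, 0.5])
```

A segment pair with a high weighted score would be reported as a repeated part; the distance-based variant would additionally weight each matching position by a Mahalanobis distance.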

HMM-based Music Identification System for Copyright Protection (저작권 보호를 위한 HMM기반의 음악 식별 시스템)

  • Kim, Hee-Dong;Kim, Do-Hyun;Kim, Ji-Hwan
    • Phonetics and Speech Sciences, v.1 no.1, pp.63-67, 2009
  • In this paper, in order to protect music copyrights, we propose a music identification system that is scalable in the number of registered pieces and robust to signal-level variations of the registered music. For its implementation, we define the new concepts of 'music word' and 'music phoneme' as recognition units from which 'music acoustic models' are constructed. With these concepts, we apply the HMM-based framework used in continuous speech recognition to music identification. Each music file is transformed into a sequence of 39-dimensional vectors, represented as ordered states with Gaussian mixtures and trained using the Baum-Welch re-estimation method. A music file with a suspicious copyright is likewise transformed into a vector sequence, and the most probable registered file is identified with the Viterbi algorithm over the music identification network. We implemented the system for 1,000 MP3 files and tested it under variations of MP3 bit rate and music speed. The proposed system demonstrates robust performance under these signal variations; in addition, because it is HMM-based, its scalability is independent of the number of registered music files.
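As a rough illustration of the identification step, the sketch below scores a query frame sequence against each registered model with a Viterbi pass over a strict left-to-right state chain and picks the best-scoring model. It simplifies the paper's setup heavily: one-dimensional single-Gaussian emissions instead of 39-dimensional Gaussian mixtures, transition probabilities omitted, and the models and query frames invented.

```python
import math

def log_gauss(x, mean, var):
    """Log-density of a 1-D Gaussian (stand-in for GMM emission scores)."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def viterbi_score(frames, states):
    """Best log-likelihood of frames through a left-to-right state chain.

    Transition probabilities are omitted for brevity (treated as uniform).
    """
    NEG = float("-inf")
    score = [NEG] * len(states)       # score[j]: best log-prob ending in state j
    score[0] = log_gauss(frames[0], *states[0])
    for x in frames[1:]:
        new = [NEG] * len(states)
        for j, (mean, var) in enumerate(states):
            stay = score[j]
            move = score[j - 1] if j > 0 else NEG
            best = max(stay, move)
            if best > NEG:
                new[j] = best + log_gauss(x, mean, var)
        score = new
    return score[-1]                  # must end in the final state

# Two registered "pieces", each modelled by two (mean, variance) states.
models = {
    "song_a": [(0.0, 1.0), (5.0, 1.0)],
    "song_b": [(10.0, 1.0), (-5.0, 1.0)],
}
query = [0.1, -0.2, 4.8, 5.1]  # resembles song_a's state sequence
best = max(models, key=lambda name: viterbi_score(query, models[name]))
```

Because each piece is scored independently, adding a registered piece only adds one more model to score, which mirrors the scalability claim in the abstract.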

Development of User Music Recognition System For Online Music Management Service (온라인 음악 관리 서비스를 위한 사용자 음원 인식 시스템 개발)

  • Sung, Bo-Kyung;Ko, Il-Ju
    • Journal of the Korea Society of Computer and Information, v.15 no.11, pp.91-99, 2010
  • Recognizing user-owned resources has recently become necessary for personalized digital content services. In particular, online music services need to recognize a user's music files in order to analyze the user's taste, recommend music, and provide music-related information. Such services typically identify user music from tag information, but recognition errors arise when tags are altered or removed. Content-based music recognition, which works on the music signal itself, has been researched to solve these problems. In this paper, we propose content-based recognition of user music on the Internet using features extracted from the music signal. Features are extracted after preprocessing suited to the structure of content-based recognition, and matching is performed on a music server against stored features, so user music can be recognized independently of tag data. To validate the proposed method, 600 songs were collected and each converted to five audio quality levels; the resulting 3,000 test songs were matched against a server holding 300,000 songs. The average recognition rate was 85%. The proposed content-based method overcomes the weaknesses of tag-based recognition, and its performance shows that it could be applied to online music services in practice.

Korean Traditional Music Genre Classification Using Sample and MIDI Phrases

  • Lee, JongSeol;Lee, MyeongChun;Jang, Dalwon;Yoon, Kyoungro
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.4, pp.1869-1886, 2018
  • This paper proposes a MIDI- and audio-based music genre classification method for Korean traditional music. There are many traditional instruments in Korea, and most of the traditional songs played on them have similar patterns and rhythms. Although music information processing tasks such as music genre classification and audio melody extraction have been studied, most studies have focused on pop, jazz, rock, and other universal genres; there are few studies on Korean traditional music because of the lack of datasets. This paper analyzes raw audio and MIDI phrases of Korean traditional music performed on Korean traditional instruments. The classified samples and MIDI, based on our classification system, will be used to construct a database or to implement our Kontakt-based instrument library; thus, a management system for a Korean traditional music library can be built on this classification system. Appropriate feature sets for raw audio and MIDI phrases are proposed, and classification results based on machine learning algorithms such as support vector machines, multi-layer perceptrons, decision trees, and random forests are outlined in this paper.
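The classifiers named above (SVM, multi-layer perceptron, decision tree, random forest) all require external libraries, so as a dependency-free stand-in the sketch below classifies invented 2-D feature vectors with a nearest-centroid rule; the genre labels and feature values are placeholders, not the paper's feature sets.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def nearest_centroid(query, centroids):
    """Return the label whose class centroid is closest to the query."""
    return min(centroids, key=lambda c: math.dist(query, centroids[c]))

# Invented 2-D feature vectors per genre (placeholders for real audio/MIDI features).
training = {
    "pansori": [[0.10, 0.90], [0.20, 0.80], [0.15, 0.85]],
    "sanjo":   [[0.80, 0.20], [0.90, 0.10], [0.85, 0.25]],
}
centroids = {genre: centroid(v) for genre, v in training.items()}
label = nearest_centroid([0.12, 0.88], centroids)
```

Swapping in one of the paper's actual classifiers would only change the model-fitting and prediction calls; the feature-extraction-then-classify pipeline stays the same.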

A Study on the Implementation of the System of Content-based Retrieval of Music Data (내용 기반 음원 검출 시스템 구현에 관한 연구)

  • Hur, Tai-Kwan;Cho, Hwang-Won;Nam, Gi-Pyo;Lee, Jae-Hyun;Lee, Seok-Pil;Park, Sung-Joo;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society, v.12 no.11, pp.1581-1592, 2009
  • Nowadays we can hear various kinds of music everywhere and at any time. If a user wants to find a piece of music heard before in a street or cafe but does not know its title, it is difficult to find; this is a limitation of previous music retrieval systems. To overcome this, we research a method of content-based retrieval of music data based on recorded humming, a recorded part of the music, or a played musical instrument. In this paper, we investigated previous content-based retrieval methods from papers, systems, and patents, and based on them we research a content-based retrieval method. That is, when the query is recorded humming or music, we extract frequency information from both the recorded query and the stored music data using the FFT; when the query is a played musical instrument, we use a MIDI file. By using dynamic programming matching, the error caused by the disparity in length between the input source and the stored music data can be reduced.
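The dynamic-programming matching mentioned above is essentially dynamic time warping: a minimal sketch, assuming the query and the stored melody have already been reduced to pitch sequences. The MIDI note numbers below are invented examples.

```python
# Dynamic time warping: align a hummed pitch sequence of one length against
# a stored melody of another length and return the accumulated cost.

def dtw_distance(query, reference):
    """Classic DTW with unit steps (match / insertion / deletion)."""
    INF = float("inf")
    n, m = len(query), len(reference)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - reference[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # deletion
                                 d[i][j - 1],      # insertion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

melody = [60, 62, 64, 65, 64, 62, 60]           # stored melody (MIDI notes)
humming = [60, 62, 62, 64, 65, 64, 62, 62, 60]  # slower, stretched query
cost = dtw_distance(humming, melody)
```

Because the warping path may stay on the same reference note for several query frames, a hummed query sung slower than the stored melody still aligns at low cost.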

A Study on the Performance of Music Retrieval Based on the Emotion Recognition (감정 인식을 통한 음악 검색 성능 분석)

  • Seo, Jin Soo
    • The Journal of the Acoustical Society of Korea, v.34 no.3, pp.247-255, 2015
  • This paper presents a study on the performance of music search based on automatically recognized music-emotion labels. As with other media data such as speech, images, and video, a song can evoke certain emotions in its listeners, and when people look for songs to listen to, the emotions evoked by the songs can be an important consideration. However, very little study has been done on how well music-emotion labels perform in music search. In this paper, we utilize the three axes of human music perception (valence, activity, tension) and the five basic emotion labels (happiness, sadness, tenderness, anger, fear) in measuring music similarity for search. Experiments were conducted on both genre and singer datasets. The search accuracy of the proposed emotion-based music search was up to 75% of that of the conventional feature-based music search, and by combining the emotion-based method with the feature-based method, we achieved up to 14% improvement in search accuracy.
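A minimal sketch of the emotion-based similarity described above, assuming each song has already been assigned scores on the three perception axes and the five basic-emotion labels. The 8-dimensional vectors and the Euclidean metric are illustrative choices, not the paper's exact formulation.

```python
import math

def emotion_distance(song_a, song_b):
    """Euclidean distance between two 8-dimensional emotion vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(song_a, song_b)))

#           val  act  ten  hap  sad  tnd  ang  fear   (invented scores)
ballad   = [0.2, 0.1, 0.3, 0.1, 0.8, 0.7, 0.0, 0.1]
ballad_2 = [0.3, 0.2, 0.3, 0.2, 0.7, 0.6, 0.1, 0.1]
metal    = [0.4, 0.9, 0.9, 0.1, 0.1, 0.0, 0.8, 0.4]

near = emotion_distance(ballad, ballad_2)   # similar mood
far = emotion_distance(ballad, metal)       # dissimilar mood
```

The combined method reported in the abstract would blend this distance with a conventional acoustic-feature distance, e.g. a weighted sum of the two.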

Comparative Analysis of and Future Directions for AI-Based Music Composition Programs (인공지능 기반 작곡 프로그램의 비교분석과 앞으로 나아가야 할 방향에 관하여)

  • Eun Ji Park
    • The Journal of the Convergence on Culture Technology, v.9 no.4, pp.309-314, 2023
  • This study examines the development and limitations of current artificial intelligence (AI) music composition programs. AI music composition programs have progressed significantly owing to deep learning technology. However, they possess limitations pertaining to the creative aspects of music. In this study, we collect, compare, and analyze information on existing AI-based music composition programs and explore their technical orientation, musical concept, and drawbacks to delineate future directions for AI music composition programs. Furthermore, this study emphasizes the importance of developing AI music composition programs that create "personalized" music, aligning with the era of personalization. Ultimately, for AI-based composition programs, it is critical to extensively research how music, as an output, can touch the listeners and implement appropriate changes. By doing so, AI-based music composition programs are expected to form a new structure in and advance the music industry.

Use of Music Technology in Music Therapy (음악치료에서의 음악테크놀로지 활용)

  • Park, Ye Seul
    • Journal of Music and Human Behavior, v.12 no.2, pp.61-77, 2015
  • The purpose of this study was to investigate music therapists' use and perception of computer-based music technology. Questionnaires were distributed either electronically or in person to 367 credentialed music therapists. Of the 367 distributed questionnaires, 101 were returned and 61 were analyzed after excluding 40 incomplete responses. The survey comprised two sections: the use of music technology and the perceived importance of music technology in music therapy practice. The results showed that 65.6% of the respondents had used music technology in their clinical practice. The most frequently used tool was Finale, followed by GarageBand and Cubase. With regard to the areas of use, music technology was implemented primarily with adolescents for musical or emotional goals, and was applied most frequently as a musical resource. In addition, most respondents showed a positive attitude toward music technology and added that they would need training to use it in their clinical practice. These results provide practical information on how music therapists use and perceive computer-based music technology, and on its implications for music therapy clinical practice.

Automatic Music Recommendation System based on Music Characteristics

  • Kim, Sang-Ho;Kim, Sung-Tak;Kwon, Suk-Bong;Ji, Mi-Kyong;Kim, Hoi-Rin;Yoon, Jeong-Hyun;Lee, Han-Kyu
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집), 2007.02a, pp.268-273, 2007
  • In this paper, we present effective methods for an automatic music recommendation system that recommends music using signal processing technology. Conventional recommendation systems use users' music downloading patterns, which do not consider the acoustic characteristics of the music. Some methods compute similarities between songs to find similar music for recommendation, but the features used for the similarities are not closely related to musical characteristics. Our proposed method therefore uses high-level music characteristics such as rhythm pattern, timbre, and lyrics. In addition, it stores the features of the songs each individual has queried, in order to recommend music based on individual taste. Experiments show that the proposed method finds similar music more effectively than a conventional method. The experimental results also show that the method could be used in real-time applications, since the processing time for calculating similarities and recommending music is fast enough for commercial purposes.
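The recommendation step described above can be sketched as a weighted sum of per-characteristic similarities between a query song and each catalogue song. The characteristics, weights, and similarity scores below are all invented for illustration.

```python
# Rank catalogue songs by a weighted combination of high-level similarities
# (rhythm pattern, timbre, lyrics) to the user's query song.

WEIGHTS = {"rhythm": 0.4, "timbre": 0.4, "lyrics": 0.2}  # illustrative weights

def overall_similarity(sims):
    """Weighted sum of per-characteristic similarity scores in [0, 1]."""
    return sum(WEIGHTS[k] * sims[k] for k in WEIGHTS)

# Precomputed similarities of each catalogue song to the query (invented).
catalogue = {
    "song_x": {"rhythm": 0.9, "timbre": 0.8, "lyrics": 0.3},
    "song_y": {"rhythm": 0.2, "timbre": 0.3, "lyrics": 0.9},
}
ranked = sorted(catalogue, key=lambda s: overall_similarity(catalogue[s]),
                reverse=True)
```

Storing the feature vectors of each user's past queries, as the abstract describes, would let the query side of this comparison reflect individual taste rather than a single song.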

Music Recognition Using Audio Fingerprint: A Survey (오디오 Fingerprint를 이용한 음악인식 연구 동향)

  • Lee, Dong-Hyun;Lim, Min-Kyu;Kim, Ji-Hwan
    • Phonetics and Speech Sciences, v.4 no.1, pp.77-87, 2012
  • Interest in music recognition grew dramatically after NHN and Daum released their mobile applications for music recognition in 2010. Methods of music recognition based on audio analysis fall into two categories: music recognition using audio fingerprints and Query-by-Singing/Humming (QBSH). While fingerprint-based music recognition takes recorded music as its input, QBSH takes a user-hummed melody. In this paper, research trends in fingerprint-based music recognition are described, focusing on two methods: one based on fingerprint generation from the energy difference between consecutive bands, and the other based on hash-key generation between peak points. Details presented in the representative papers of each method are introduced.
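The first surveyed approach, fingerprint bits from band-energy differences, can be sketched as follows: each bit is the sign of the difference-of-differences of band energies across consecutive bands and consecutive frames. The band energies below are invented numbers, not the output of a real filter bank.

```python
# One fingerprint bit per adjacent band pair: compare the inter-band energy
# difference of the current frame against that of the previous frame.

def fingerprint_bits(prev_frame, cur_frame):
    """Bit m is 1 iff the band-difference E(m) - E(m+1) grew since last frame."""
    bits = []
    for m in range(len(cur_frame) - 1):
        diff = ((cur_frame[m] - cur_frame[m + 1])
                - (prev_frame[m] - prev_frame[m + 1]))
        bits.append(1 if diff > 0 else 0)
    return bits

prev = [0.9, 0.4, 0.6, 0.2, 0.5]  # band energies of the previous frame
cur = [0.8, 0.7, 0.3, 0.4, 0.1]   # band energies of the current frame
bits = fingerprint_bits(prev, cur)
```

Concatenating these bits over successive frames yields a compact binary fingerprint that can be matched against a database by Hamming distance, which is what makes the scheme robust to moderate signal-level variations.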