• Title/Summary/Keyword: music information

Search Results: 1,116

Development of a Music Score Editor based on MusicXML (MusicXML 기반의 악보 편집기 개발)

  • Khan, Najeeb Ullah;Lee, Jung-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.2
    • /
    • pp.77-90
    • /
    • 2014
  • In the past, composers made music with classical instruments such as the piano, violin, guitar, flute, and drums. With the advent of digital technology, many software programs were developed that allow musicians to compose tunes on personal computers. Many file formats were introduced, such as NIFF, SMDL, and MIDI, but none besides MIDI was successful. Recently, MusicXML has emerged as a de facto standard for the computer representation of music. This paper briefly describes the structure of the MusicXML format and presents the development of a music score editor based on it. We implemented MusicXML-based score-editing software in C#, and a feasibility test demonstrated the efficiency of the proposed method.
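The note structure that a MusicXML editor must read can be illustrated with a short sketch. The paper's editor is written in C#; the Python fragment below is only a minimal illustration of the format, using a hypothetical two-note score rather than any file from the paper.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal MusicXML fragment: one measure containing two notes.
SCORE = """<score-partwise version="3.1">
  <part id="P1">
    <measure number="1">
      <note><pitch><step>C</step><octave>4</octave></pitch><duration>4</duration></note>
      <note><pitch><step>G</step><octave>4</octave></pitch><duration>4</duration></note>
    </measure>
  </part>
</score-partwise>"""

def extract_notes(xml_text):
    """Return a (step, octave, duration) tuple for every <note> element."""
    root = ET.fromstring(xml_text)
    notes = []
    for note in root.iter("note"):
        pitch = note.find("pitch")
        notes.append((pitch.findtext("step"),
                      int(pitch.findtext("octave")),
                      int(note.findtext("duration"))))
    return notes

print(extract_notes(SCORE))  # [('C', 4, 4), ('G', 4, 4)]
```

A real editor would also handle rests, ties, and `<attributes>` such as key and time signatures, but the pitch/duration hierarchy shown here is the core of the format.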

Development of Music Recommendation System based on Customer Sentiment Analysis (소비자 감성 분석 기반의 음악 추천 알고리즘 개발)

  • Lee, Seung Jun;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.197-217
    • /
    • 2018
  • Music is one of the most creative acts, expressing human sentiment through sound. Because music readily evokes empathy in listeners, it can either heighten or dampen their emotions. Sentiment is therefore a primary factor in searching for and recommending music, yet few existing recommendation systems are based on customer sentiment. Algorithms used in previous systems are mostly user-based, relying on play histories and playlists: distances between songs are computed from basic attributes such as genre, singer, and beat, and similar songs are filtered out as recommendations. Such methods suffer from limitations like the filter bubble; for example, a user who listens only to rock music is unlikely to be recommended hip-hop or R&B tracks that convey a similar sentiment. In this study, we focus on the sentiment of the music itself and develop a methodology that defines a new index for music recommendation. Concretely, we propose the "SWEMS" index and use it to extract a "Sentiment Pattern" for each song in our dataset. We expect the index and pattern to be useful not only for recommendation but also as components of predictive models.
Because the system is based on the emotional adjectives people typically feel while listening to music, we first collected as many such adjectives as possible, drawing on related prior studies, social metrics, and qualitative interviews. This yielded 134 adjectives, narrowed to a final set of 60 through several selection steps. Based on these adjectives, a survey was conducted in which expert panels of avid listeners rated the sentiment of each song; no information other than the adjective ratings was collected. The rated songs were divided into popular and unpopular groups, and the variables most relevant to popularity were derived. These variables were reclassified through factor analysis, and weights were assigned to the adjectives belonging to each factor. We define the extracted factors as the "SWEMS" index, which expresses the sentiment of a song as a numeric score. To implement the algorithm, we applied Case-Based Reasoning, chosen because its problem-solving process resembles human reasoning. Using each song's SWEMS index, the algorithm recommends songs whose factor values lie close, in Euclidean distance, to a given emotion value. The SWEMS index also lets us draw a "Sentiment Pattern" for each song, and we found that songs evoking similar emotions show similar patterns. Through these patterns, we can suggest new groupings of music that differ from conventional genres. This research helps quantify qualitative data, and the algorithms can quantify content itself, helping users find similar content more quickly.
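The Euclidean-distance recommendation step the abstract describes can be sketched briefly. The sentiment vectors below are hypothetical stand-ins (the paper's actual SWEMS factor values are not given here), and the song names are invented for illustration.

```python
import math

# Hypothetical SWEMS-style sentiment vectors, one per song (values invented).
library = {
    "song_a": [0.9, 0.1, 0.4],
    "song_b": [0.8, 0.2, 0.5],
    "song_c": [0.1, 0.9, 0.2],
}

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def recommend(query, k=2):
    """Rank songs by Euclidean distance to the query vector, closest first."""
    return sorted(library, key=lambda s: euclidean(library[s], query))[:k]

print(recommend([0.88, 0.12, 0.42]))  # ['song_a', 'song_b']
```

In the paper's pipeline the query vector would come from the factor scores of a seed song or a target emotion, and the distance ranking plays the role of case retrieval in Case-Based Reasoning.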

A Study on Performance Analysis of High Resolution DOA Method based on MUSIC (MUSIC을 근간으로 하는 고해상도 DOA 방법의 성능분석에 관한 연구)

  • 이일근;최인경;김영집;강철신
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.2
    • /
    • pp.345-353
    • /
    • 1994
  • This paper proposes a high-resolution direction-finding method called 'averaged MUSIC'. The method uses a new sample array covariance matrix whose diagonals are obtained by averaging the corresponding diagonal entries of the sample covariance matrix used in standard MUSIC. Based on statistical analysis, the paper also shows that the proposed method yields higher-resolution direction-of-arrival estimation than MUSIC in cases such as low signal-to-noise ratio, closely spaced signal sources, and a limited number of sensors.
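The abstract's diagonal-averaging step admits a natural reading: each diagonal of the sample covariance matrix is replaced by the average of its entries, enforcing a Toeplitz structure before the usual MUSIC eigendecomposition. The sketch below implements that reading on a toy real-valued matrix; it is our interpretation of the abstract, not the paper's verified algorithm.

```python
def toeplitz_average(R):
    """Replace each diagonal of a square matrix by the mean of its entries.

    This is one plausible reading of the 'averaged' covariance used by
    averaged MUSIC: averaging along diagonals yields a Toeplitz matrix.
    """
    n = len(R)
    diags = {}
    for i in range(n):
        for j in range(n):
            diags.setdefault(j - i, []).append(R[i][j])
    means = {d: sum(v) / len(v) for d, v in diags.items()}
    return [[means[j - i] for j in range(n)] for i in range(n)]

# Toy sample covariance (a real covariance would be Hermitian).
R = [[2.0, 1.0, 0.0],
     [3.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
print(toeplitz_average(R))  # each diagonal collapsed to its mean
```

The averaged matrix would then be passed to the standard MUSIC steps: eigendecomposition, noise-subspace projection, and a pseudospectrum search over candidate arrival angles.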


Implementation of SimMusic Language on Lego Mindstorms NXT (Lego Mindstorms NXT 상에서 SimMusic Language 구현)

  • Shin, Suyong Christina;Heo, Yujeong;Kim, Hyunsoo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2016.10a
    • /
    • pp.8-9
    • /
    • 2016
  • This study defines the SimMusic language, which enables music playback on Lego Mindstorms NXT. A score to be played is written in the SimMusic language, and a SimMusic player assembled from Lego Mindstorms NXT reads the SimMusic program and plays the music. Because the process by which the player renders a score is modeled on how a program executes in a computer architecture, this work gives non-majors an accessible way to build a foundation in computer science.

The Classification of Music Styles on the Basis of Spectral Contrast Features

  • Wang, Yan-bing
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.1
    • /
    • pp.9-14
    • /
    • 2017
  • In this paper, we propose using octave-based spectral contrast features to characterize music clips. These features capture the relative spectral distribution rather than the average spectrum. Experiments show that spectral contrast features perform well in music style classification, and a comparative experiment shows that they distinguish music styles better than the MFCC features commonly used in earlier style classification systems.
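Spectral contrast is commonly computed as the difference between the spectral peak and the spectral valley within each octave band. The sketch below shows that peak-minus-valley idea on a toy magnitude spectrum; the band edges, the `alpha` fraction, and the spectrum values are illustrative choices, not the paper's parameters.

```python
def spectral_contrast(magnitudes, band_edges, alpha=0.2):
    """Peak-minus-valley contrast per band.

    For each band, sort the magnitudes and take the mean of the top
    alpha-fraction (peak) minus the mean of the bottom alpha-fraction
    (valley). Returns one contrast value per band.
    """
    contrasts = []
    for lo, hi in zip(band_edges, band_edges[1:]):
        band = sorted(magnitudes[lo:hi])
        k = max(1, int(alpha * len(band)))          # at least one bin
        valley = sum(band[:k]) / k
        peak = sum(band[-k:]) / k
        contrasts.append(peak - valley)
    return contrasts

spectrum = [1, 9, 2, 8, 1, 7, 3, 6]                # toy magnitude spectrum
print(spectral_contrast(spectrum, [0, 4, 8]))      # [8.0, 6.0]
```

In a full system, the magnitudes would come from an FFT of each audio frame, the band edges would follow octave spacing, and the per-band contrast values (often with per-band means) would form the feature vector fed to the classifier.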

Computer-Supported Piano Performance Science (컴퓨터지원 피아노 연주과학)

  • Roh, Kyeong Won;Eum, Hee Jung;Kim, Hee-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.12
    • /
    • pp.1738-1741
    • /
    • 2019
  • Music performance techniques have traditionally been taught through apprenticeship. This transfer of technique, which relies on imitating experience and actual performance without scientific evidence, has cost pianists more time and effort than necessary. If players in the field, collaborating with scientists, discover universally applicable principles of piano-playing technique, they can avoid such errors and open a new paradigm in the development of piano technique. This is why music performance science is needed. It has been little studied in Korea but has been active abroad since the mid-1990s. Its core discipline is expected to be computer science, which is well suited to data analysis. In this paper, we introduce music performance science for pianists and show how computers can support it.

Analysis of Musical Characteristic Which is Liked by Variable Age Group (다양한 연령층이 좋아하는 음악특성 분석)

  • Yoon, Sang-Hoon;Kyon, Doo-Heon;Bae, Myong-Jin
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.989-990
    • /
    • 2008
  • Most popular music is produced in genres and styles targeted at particular age groups. In general, young people in their teens and twenties like dance and techno, while people over 40 prefer trot. In this paper, we analyzed the characteristics of the music preferred by each age group. Through this analysis, we also identified the factors, independent of age, shared by music popular across all age groups: slow lyrics, a fast beat, a repeated and simple melody, and a frequency characteristic rich in the mid-range.


Natural Language Queries for Music Information Retrieval (음악정보 검색에서 이용자 자연어 질의의 정확성 연구)

  • Lee, Jin-Ha
    • Journal of the Korean Society for Information Management
    • /
    • v.25 no.4
    • /
    • pp.149-164
    • /
    • 2008
  • Our limited understanding of real-life music information queries is an impediment to developing music information retrieval (MIR) systems that meet the needs of real users. This study aims to contribute to developing a theorized understanding of how people seek music information by an empirical investigation of real-life queries, in particular, focusing on the accuracy of user-provided information and users' uncertainty expressions. This study found that much of users' information is inaccurate; users made various syntactic and semantic errors in providing this information. Despite these inaccuracies and uncertainties, many queries were successful in eliciting correct answers. A theory from pragmatics is suggested as a partial explanation for the unexpected success of inaccurate queries.

Multiclass Music Classification Approach Based on Genre and Emotion

  • Jonghwa Kim
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.27-32
    • /
    • 2024
  • Reliable, fine-grained musical metadata are required for efficient search of rapidly growing music collections. In particular, since the primary motives for listening to music are its emotional effect, diversion, and the memories it awakens, emotion classification is as crucial as genre classification. In this paper, as an initial step toward a "ground-truth" dataset for music emotion and genre classification, we carefully built a music corpus labeled by a large number of ordinary people. To verify the dataset's suitability through classification results, we extracted features according to the MPEG-7 audio standard and applied machine learning models, both statistical and deep-neural-network based, to classify the dataset automatically. With standard hyperparameter settings, we reached 93% accuracy for genre classification and 80% for emotion classification, and we believe our dataset can serve as a meaningful benchmark in this research field.
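One of the simplest statistical classifiers that could be applied to such labeled feature vectors is nearest-centroid classification. The sketch below uses invented 2-D vectors standing in for MPEG-7 descriptors and invented genre labels; it illustrates the general approach, not the paper's models or results.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def nearest_centroid(train, sample):
    """train: {label: list of feature vectors}. Return the label whose
    class centroid is closest to the sample in Euclidean distance."""
    cents = {label: centroid(vecs) for label, vecs in train.items()}
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, sample)))
    return min(cents, key=lambda label: dist(cents[label]))

# Hypothetical 2-D feature vectors standing in for MPEG-7 audio descriptors.
train = {
    "rock":      [[0.9, 0.2], [0.8, 0.3]],
    "classical": [[0.1, 0.9], [0.2, 0.8]],
}
print(nearest_centroid(train, [0.85, 0.25]))  # rock
```

Real experiments of this kind would use far higher-dimensional features and stronger models (e.g. deep networks, as the abstract mentions), but the train-on-labeled-vectors, predict-by-proximity pattern is the same.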

Detection of Music Mood for Context-aware Music Recommendation (상황인지 음악추천을 위한 음악 분위기 검출)

  • Lee, Jong-In;Yeo, Dong-Gyu;Kim, Byeong-Man
    • The KIPS Transactions:PartB
    • /
    • v.17B no.4
    • /
    • pp.263-274
    • /
    • 2010
  • To provide a context-aware music recommendation service, we first need to identify the music mood a user prefers in a given situation or context. Among the various characteristics of music, mood is closely related to human emotion. Based on this relationship, some researchers have studied music mood detection by manually selecting a representative segment of a piece and classifying its mood. Although such approaches classify mood well, the manual intervention makes them difficult to apply to new music, and detection is further complicated because mood usually varies over time. To address these problems, this paper presents an automatic method for classifying music mood. First, a whole piece is segmented, using structural information, into groups with similar characteristics. The mood of each segment is then detected, and each individual's mood preference is modeled by regression based on Thayer's two-dimensional mood model. Experimental results show that the proposed method achieves 80% or higher accuracy.
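Thayer's two-dimensional model places moods on valence (pleasant vs. unpleasant) and arousal (energetic vs. calm) axes, so a regression output in that plane can be mapped to a mood by quadrant. The sketch below shows that final mapping step; the quadrant labels are common illustrative names, not necessarily the paper's exact mood classes.

```python
def thayer_mood(valence, arousal):
    """Map a (valence, arousal) point in [-1, 1] x [-1, 1] to a quadrant
    of Thayer's 2-D mood model. Labels are illustrative."""
    if arousal >= 0:
        return "exuberant" if valence >= 0 else "anxious"
    return "contented" if valence >= 0 else "depressed"

print(thayer_mood(0.6, 0.7))    # exuberant: pleasant and energetic
print(thayer_mood(-0.4, -0.5))  # depressed: unpleasant and calm
```

In the paper's pipeline, the regression model predicts the (valence, arousal) coordinates per segment from audio features, and a per-user mapping like this turns those coordinates into mood labels for recommendation.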