• Title/Summary/Keyword: Music genre classification (음악장르구분)

Search results: 24

Development of Music Classification of Light and Shade using VCM and Beat Tracking (VCM과 Beat Tracking을 이용한 음악의 명암 분류 기법 개발)

  • Park, Seung-Min; Park, Jun-Heong; Lee, Young-Hwan; Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.20 no.6, pp.884-889, 2010
  • Music genre classification has been widely studied in recent years. However, experts apply different criteria when assigning genres, which makes it difficult to obtain consistent results, and whenever a new style emerges the genre taxonomy has to be redefined. This paper therefore classifies music not by genre but by an emotional attribute: the brightness (light) or darkness (shade) that listeners perceive. The proposed system applies VCM (Variance Considered Machines) to this light/shade classification using three musical features: beat, timbre, and note. Labels were collected through listener surveys and used to train the VCM, and the trained classifier was then compared against the survey results. Notes are extracted in MATLAB: the signal is sampled at regular intervals, each segment is analyzed with the FFT, and the per-band average magnitudes are used to quantify the overall pitch distribution. Timbre is quantified from differences in the cumulative frequency distribution over the entire frequency range. Applying the VCM to these three features and comparing its output with the survey labels, the system separated music into the two classes (light and shade) in 95.4% agreement with the survey results.
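
The note-extraction step summarized above (sampling at regular intervals, FFT per segment, per-band average magnitudes) was implemented by the authors in MATLAB. A rough NumPy equivalent, with an assumed frame length, band count, and a simple brightness proxy that are not taken from the paper, might look like this:

```python
import numpy as np

def band_profile(signal, sr, frame_len=2048, n_bands=16):
    """Split the signal into fixed-length frames, FFT each frame, and
    average the magnitude spectrum within equal-width frequency bands."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    mags = np.abs(np.fft.rfft(frames, axis=1))           # (n_frames, frame_len//2 + 1)
    bands = np.array_split(mags.mean(axis=0), n_bands)   # average over time, then band
    return np.array([b.mean() for b in bands])

def brightness(profile):
    """Assumed 'light vs. shade' proxy: centroid of the band profile,
    i.e. how much of the energy sits in the higher bands."""
    idx = np.arange(len(profile))
    return float((idx * profile).sum() / (profile.sum() + 1e-12))

# synthetic 3-second test signal (440 Hz tone plus a weaker octave)
sr = 22050
t = np.arange(sr * 3) / sr
signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 880 * t)
print(brightness(band_profile(signal, sr)))
```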

The Study on Expressive Methods for Vocal Improvisation using Articulation and Syllables (Articulation과 Syllables를 이용한 보컬즉흥연주 표현에 관한 연구)

  • Bang, Hyun-Seung
    • Proceedings of the KAIS Fall Conference, 2011.05b, pp.694-697, 2011
  • Improvisation in jazz is a performance practice that has developed alongside the genre itself and can be regarded as its defining, symbolic element. For most listeners, improvisation is considered the most essential component of jazz, and jazz musicians often use the words 'jazz' and 'improvisation' almost interchangeably. It is true that jazz vocalists, unlike instrumentalists, are often perceived as not being required to improvise. Nevertheless, even if its weight differs, vocal improvisation is also a symbolic and essential element that distinguishes jazz vocalists from vocalists in other genres. For vocalists, beyond music theory, one more element is needed to sing improvised melodies: scat syllables, the means by which improvised phrases are articulated. This paper studies the use of scat syllables, focusing on the notes and performance techniques commonly used in improvised melodic expression.

A Research on the Audio Utilization Method for Generating Movie Genre Metadata (영화 장르 메타데이터 생성을 위한 오디오 활용 방법에 대한 연구)

  • Yong, Sung-Jung; Park, Hyo-Gyeong; You, Yeon-Hwi; Moon, Il-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2021.10a, pp.284-286, 2021
  • With the continued growth of the Internet and digital media, platforms have emerged that store large amounts of media data and provide personalized services online. The companies operating these services recommend movies matched to individual tastes in order to promote media consumption, and each of them studies a variety of recommendation algorithms. Movies are divided into genres such as action, melodrama, horror, and drama, and a film's audio (music, sound effects, voice) is an important production element. In this research, based on movie trailers, we extract audio for each genre, examine what the audio of each genre has in common, distinguish movie genres through supervised learning, and propose a way to use the results for generating genre metadata.
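
The paper itself does not publish code, but the pipeline it describes, extracting clip-level audio features from trailers and training a supervised classifier on genre labels, can be sketched as follows. librosa features and scikit-learn's SVC are assumptions here, and the file paths and labels are placeholders.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def clip_features(path, sr=22050):
    """Summarize one trailer audio clip as a fixed-length feature vector."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness
    # mean and std over time give a clip-level summary (28 values)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           centroid.mean(axis=1), centroid.std(axis=1)])

# Hypothetical usage over a labelled collection of trailer audio:
# X = np.array([clip_features(p) for p in trailer_paths])
# y = np.array(genre_labels)          # e.g. "action", "melodrama", "horror", "drama"
# clf = SVC(kernel="rbf").fit(X, y)
# print(clf.predict(clip_features("new_trailer.wav").reshape(1, -1)))
```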

A Study on the Efficient Feature Vector Extraction for Music Information Retrieval System (음악 정보검색 시스템을 위한 효율적인 특징 벡터 추출에 관한 연구)

  • 윤원중; 이강규; 박규식
    • The Journal of the Acoustical Society of Korea, v.23 no.7, pp.532-539, 2004
  • In this paper, we propose a content-based music information retrieval (MIR) system based on the query-by-example (QBE) method. The system retrieves queried music from a database of 240 files, built by collecting 60 samples for each of four genres: Classical, Hiphop, Jazz, and Rock. From each query signal it extracts a 60-dimensional feature vector including STFT-based spectral centroid, rolloff, and flux as well as LPC, MFCC, and beat information, and retrieves the queried music from the trained database using the Euclidean distance measure. To choose the optimal features from the 60-dimensional vector, the SFS feature-selection method is applied to obtain a 10-dimensional optimal feature set, which the proposed system then uses. Experimental results verify the superior performance of the proposed system, which achieves a hit rate of 84% and an MRR of 0.63, roughly a 10% improvement over previous methods. Additional experiments on performance with respect to random query portions and query lengths are reported, and a serious instability problem in system performance is pointed out.
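
To make the retrieval and evaluation steps concrete, the sketch below ranks database feature vectors by Euclidean distance to a query and computes a top-k hit rate and MRR. The exact hit-rate and MRR definitions in the paper may differ, and the 10-dimensional vectors here are random placeholders standing in for the SFS-selected features.

```python
import numpy as np

def retrieve(query_vec, db_vecs):
    """Rank database items by Euclidean distance to the query (closest first)."""
    return np.argsort(np.linalg.norm(db_vecs - query_vec, axis=1))

def hit_rate_and_mrr(queries, labels, db_vecs, db_labels, top_k=5):
    """Hit rate: fraction of queries whose genre appears in the top-k results.
    MRR: mean reciprocal rank of the first result with the correct genre."""
    hits, rr = 0, []
    for q, lab in zip(queries, labels):
        ranked_labels = db_labels[retrieve(q, db_vecs)]
        first = np.nonzero(ranked_labels == lab)[0]
        rank = first[0] + 1 if first.size else np.inf
        hits += int(rank <= top_k)
        rr.append(0.0 if np.isinf(rank) else 1.0 / rank)
    return hits / len(queries), float(np.mean(rr))

# placeholder database: 240 tracks, 60 per genre, 10-dimensional feature vectors
db_vecs = np.random.rand(240, 10)
db_labels = np.repeat(np.array(["classical", "hiphop", "jazz", "rock"]), 60)
queries, q_labels = db_vecs[:4] + 0.01, db_labels[:4]
print(hit_rate_and_mrr(queries, q_labels, db_vecs, db_labels))
```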

A Study on Music Contents Recommendation Service using Emotional Words (감성어휘를 이용한 음악콘텐츠 추천 서비스의 연구)

  • Jang, Eun-Ji
    • Proceedings of the Korea Contents Association Conference, 2008.05a, pp.43-48, 2008
  • This study discusses information processing methods that use an emotion filter in particular. Existing web-based music recommendation services classify music by tune, melody, atmosphere, and genre before recommending it, and thus tend to bore users by repeatedly recommending songs of the same genre with a similar feel. To overcome this weakness, the service proposed in this study uses an emotion filter: it extracts emotional words that reflect human sensibility and matches them against song lyrics, so that it can recommend songs and lyrics appropriate to the user's current emotional state. The process begins when the user inputs their current emotional state, choosing from seven representative emotion categories, including love, separation, joy, sorrow/gloom, happiness/lonesomeness, and anger. Once the emotion is entered, the service matches the corresponding emotional words against the lyrics, ranks the lyrics by priority, and recommends the songs and their lyrics to the user.
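
The abstract describes matching emotional words against lyrics and ranking the results, but gives no algorithm. A minimal sketch of that step, with an invented emotion lexicon, toy song data, and a simple word-overlap score, could look like this:

```python
# Hypothetical emotion lexicon: emotion category -> words expected in lyrics.
EMOTION_WORDS = {
    "joy": {"smile", "dance", "sunshine", "laugh"},
    "separation": {"goodbye", "leave", "alone", "miss"},
    # ... remaining categories
}

# Hypothetical catalogue of (title, lyrics) pairs.
SONGS = [
    ("Song A", "we dance and laugh in the sunshine"),
    ("Song B", "you said goodbye and left me alone"),
]

def recommend(emotion, songs, top_n=10):
    """Score each song by how many lexicon words appear in its lyrics,
    then return the highest-scoring titles first."""
    words = EMOTION_WORDS[emotion]
    scored = []
    for title, lyrics in songs:
        score = len(words & set(lyrics.lower().split()))
        scored.append((score, title))
    scored.sort(reverse=True)
    return [title for score, title in scored[:top_n] if score > 0]

print(recommend("separation", SONGS))
```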

A Study on the Music Retrieval System using MPEG-7 Audio Low-Level Descriptors (MPEG-7 오디오 하위 서술자를 이용한 음악 검색 방법에 관한 연구)

  • Park Mansoo; Park Chuleui; Kim Hoi-Rin; Kang Kyeongok
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2003.11a, pp.215-218, 2003
  • This paper proposes a music retrieval algorithm based on audio features built from the audio descriptors defined in MPEG-7. Timbral features in particular make it easy to distinguish tone colors, so they can be used not only for music retrieval but also for music genre classification and query by humming. If a feature vector that captures the representative characteristics of an audio signal can be constructed through this work, it could later serve as the audio component of retrieval algorithms in multimodal systems. To suit a broadcasting system, the search scope in this paper is restricted to the O.S.T album of a given piece of content: using only a partial audio clip arbitrarily selected by the user, the system retrieves the corresponding music from within that content's entire O.S.T album. We propose a way of combining MPEG-7 audio descriptors into an audio feature vector and pursue improved performance through distance- or ratio-based computation, as well as through changes in how the reference music templates are constructed. Performance evaluation with a k-NN classifier shows that IFCR (Intra-Feature Component Ratio), which uses ratios of timbral spectral features, outperforms the Euclidean distance method.
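
The abstract compares Euclidean distance against IFCR, a ratio-based measure over timbral spectral features, but does not define IFCR precisely. The sketch below shows a plain Euclidean nearest-template match next to one plausible reading of a component-ratio measure, purely for illustration; the actual IFCR formula may differ, and the vectors are placeholders rather than real MPEG-7 descriptors.

```python
import numpy as np

def euclidean_nn(query, refs):
    """Index of the reference template closest to the query (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(refs - query, axis=1)))

def ratio_nn(query, refs, eps=1e-9):
    """One possible component-ratio similarity: per feature component take
    min/max between query and reference, then average; higher means more similar.
    This is an illustrative guess, not the paper's IFCR definition."""
    lo = np.minimum(refs, query)
    hi = np.maximum(refs, query) + eps
    return int(np.argmax((lo / hi).mean(axis=1)))

# placeholder reference templates, one per track in an O.S.T album
refs = np.random.rand(12, 8)
query = refs[3] * 1.05   # a short clip resembling track 3
print(euclidean_nn(query, refs), ratio_nn(query, refs))
```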

An Analysis of Timbre Comparison between Jeongak Daegeum and Sanjo Daegeum (정악대금과 산조대금의 음색 특징 분석)

  • Sung, Ki-Young
    • Journal of Korea Entertainment Industry Association, v.14 no.3, pp.229-236, 2020
  • This paper analyzes the timbre of the daegeum, one of Korea's most representative wind instruments. The daegeum is used in two main forms: the Jeongak daegeum, played in court and classical ensemble music, and the Sanjo daegeum, played mainly in sanjo, sinawi, and folk music. The two instruments serve different musical genres because of modifications to pipe length and finger-hole placement, which allow the Sanjo daegeum to play faster passages than the Jeongak daegeum and to apply a wider range of techniques, and because the resulting difference in tone lets performers choose the instrument that best matches the music. For the timbre analysis, recordings were made of each instrument playing its low-octave, natural, and high-octave positions at the same dynamic, and the overtone structure was examined visually with a spectrogram and a spectrum analyzer. The analysis shows that the Jeongak daegeum, rich in low-frequency content, suits solemn music such as court music, while the Sanjo daegeum, with relatively clear high-frequency content, is well suited to bright music such as solo (sanjo) performance.
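
The overtone comparison described above could be approximated with a short spectral-analysis script; the sketch below reads a recording, takes the FFT, and averages the magnitude around the first few harmonics of a known fundamental. The file names, fundamental frequency, and bandwidth are hypothetical, and this is not the analysis tool used in the paper.

```python
import numpy as np
import librosa

def harmonic_profile(path, f0, n_harmonics=8):
    """Relative magnitude of the first few overtones of a sustained note,
    estimated from the FFT of the whole recording."""
    y, sr = librosa.load(path, sr=None, mono=True)
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    profile = []
    for k in range(1, n_harmonics + 1):
        band = (freqs > k * f0 * 0.97) & (freqs < k * f0 * 1.03)  # narrow band around harmonic k
        profile.append(spectrum[band].mean() if band.any() else 0.0)
    profile = np.array(profile)
    return profile / (profile.max() or 1.0)   # normalize to the strongest partial

# hypothetical recordings of the same pitch on each instrument
# print(harmonic_profile("jeongak_daegeum.wav", f0=587.3))
# print(harmonic_profile("sanjo_daegeum.wav", f0=587.3))
```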

Content-based Music Information Retrieval using Pitch Histogram (Pitch 히스토그램을 이용한 내용기반 음악 정보 검색)

  • 박만수; 박철의; 김회린; 강경옥
    • Journal of Broadcast Engineering, v.9 no.1, pp.2-7, 2004
  • In this paper, we propose a content-based music information retrieval technique using several MPEG-7 low-level descriptors. Pitch information and timbral features in particular can be applied to music genre classification, music retrieval, and QBH (query by humming), because they can model the stochastic pattern and timbral character of a music signal. In this work, we restrict the music domain to the O.S.T of a movie or soap opera so that the method can be applied in a broadcasting system: when background music catches the user's ear, the user can retrieve information about the unknown music using only an audio clip of a few seconds extracted from the video content. We propose an audio feature set organized from MPEG-7 descriptors and distance functions based on vector distance or ratio computation. We observe that the feature set organized from pitch information is superior to the timbral spectral feature set, and that IFCR (Intra-Feature Component Ratio) performs better than ED (Euclidean distance) as a distance function. For the music recognition evaluation, k-NN is used as the classifier.
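
A pitch histogram of the kind this abstract relies on can be approximated from a chroma representation. The sketch below uses librosa's chromagram as a stand-in for the MPEG-7 descriptors actually used in the paper, so it illustrates the idea rather than reproducing the method; the file names are placeholders.

```python
import numpy as np
import librosa

def pitch_histogram(path):
    """Approximate pitch-class histogram: average chroma energy per
    semitone class, normalized to sum to 1."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)   # shape (12, n_frames)
    hist = chroma.mean(axis=1)
    return hist / hist.sum()

def match(query_hist, ref_hists):
    """Return reference indices sorted by Euclidean distance to the query."""
    dists = [np.linalg.norm(query_hist - h) for h in ref_hists]
    return np.argsort(dists)

# hypothetical usage: match a few-second clip against O.S.T track histograms
# clip = pitch_histogram("clip_5s.wav")
# refs = [pitch_histogram(p) for p in ["ost_track01.wav", "ost_track02.wav"]]
# print(match(clip, refs))
```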

The Identity of the Hyangje Samhyunyukgak (향제 삼현육각의 특징)

  • Im, Hye-Jung
    • (The) Research of the performance art and culture, no.39, pp.749-774, 2019
  • As academic interest in the Hyangje Samhyunyukgak grows, the task of identifying what defines it must come first. This paper discusses the characteristics that distinguish the Hyangje (regional) Samhyunyukgak from the court-style Samhyunyukgak. The first concerns instrumentation: in the Hyangje Samhyunyukgak the composition of the ensemble is flexible, with instruments added or removed depending on circumstances such as the region or the available players. The second concerns repertoire: which piece is selected in a given situation is closely tied to the function the music serves, and the piece showing the greatest variation in this respect is Geosangak(거상악), for which a wide range of pieces were played depending on the region and the situation. Finally, the paper considers the origins of the Hyangje Samhyunyukgak repertoire: comparing pieces such as Ginyeombul(긴염불), Gutgeori(굿거리), and Taryeong(타령), it is difficult to rule out a relationship with local music genres entirely, and this common ground appears to be closely related to the jangdan (rhythmic patterns).

A Study on Abstract Synesthesia for Visual Music (비쥬얼 뮤직에 나타난 추상적 공감각에 관한 연구)

  • Kim, Ho
    • The Journal of the Korea Contents Association, v.16 no.8, pp.484-492, 2016
  • The role of music in a moving image can be divided into a supplementary function, supporting the image's narrative, and an independent function, in which the music itself becomes the subject and leads the image. Perceiving sound through hearing and then rendering it visually is called visual music. Since the 19th century, artists have continually attempted to synchronize image with music through colored hearing, and in the 20th century the development of cinema allowed many artists to move beyond three-dimensional expression toward time-based concepts of movement. In this process, artists with a strong experimental spirit inferred correlations between sound and image and pioneered the new genre of visual music. As a result, the emphasis has shifted from listening to watching, and diverse works are being produced through experimental combinations of music and image. This paper examines the aesthetic characteristics of modern visual music and comparatively analyzes how visual music using color is employed across fields such as film, animation, music video, and media art.