• Title/Summary/Keyword: Music Recognition

Search Results: 173

Design of an Optimal Multiplexed Filter and Analysis of Similarity Discrimination for Music Notation Recognition (음악기보 인식을 위한 다중필터의 설계 및 유사판별 성능분석)

  • Yeun, Jin-Seon;Kim, Nam
    • Journal of the Korean Institute of Telematics and Electronics D
    • /
    • v.34D no.6
    • /
    • pp.65-74
    • /
    • 1997
  • In this paper, an SA-multiplexed filter is designed using simulated annealing (SA) to recognize music notation patterns that vary in size, shape, and position and contain many similar shapes, for an optical pattern recognition system. The filter produces correlation results at the desired location and, after a learning process, can identify patterns of the same class and distinguish between similar classes for scale- or rotation-variant music notation patterns. The filter is also optimized with SA to analyze similarity discrimination at the acquired position, enhancing both optical diffraction efficiency and peak beam intensity. Compared with the POF (phase-only filter) and the cosine-BPOF (cosine binary phase-only filter), the proposed filter shows excellent discrimination capability even when the difference rate is as low as 0.1%.
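The abstract relies on simulated annealing to tune the filter. As a rough illustration of the optimization loop involved (the cooling schedule, and the cost and neighbor functions supplied by the caller, are generic assumptions, not the paper's filter design):

```python
import math
import random

def simulated_annealing(cost, state, neighbor, t0=1.0, cooling=0.995,
                        steps=2000, seed=0):
    """Generic simulated annealing: accept any improving move, and accept
    a worsening move with probability exp(-dC/T), cooling T each step."""
    rng = random.Random(seed)
    best = cur = state
    best_c = cur_c = cost(cur)
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        c = cost(cand)
        # worsening moves are accepted with probability e^(-(c - cur_c)/T)
        if c < cur_c or rng.random() < math.exp(-(c - cur_c) / max(t, 1e-12)):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= cooling  # geometric cooling schedule
    return best, best_c
```

In the paper's setting the state would be the filter coefficients and the cost a measure of discrimination between similar notation classes; the toy loop above only shows the accept/cool mechanics.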


Design and Implementation of the Effective Staff-Line Recognition Using Tilt-Correction Through Preview Analysis (프리뷰 분석에 기반한 악보 기울기 보정을 통한 효과적인 오선 인식 기법의 설계 및 구현)

  • Kim, Seongryong;Kim, Taehee;Kim, Misun;Lee, Boram;Kim, Geunjeoung;Lee, Sangjun
    • Journal of IKEEE
    • /
    • v.18 no.3
    • /
    • pp.362-367
    • /
    • 2014
  • Music score recognition applications running on smartphones, which have become a necessity of modern life, have already been released on the market. These applications have several limitations; in particular, the recognition rate for printed music scores is low, so many errors occur when the score is played. The major factor lowering the recognition rate is poor tilt-correction of the captured staff lines. In this paper, we propose an efficient method that automatically captures the printed music score through preview analysis, which increases the recognition rate via tilt-correction.
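The abstract does not detail the tilt-correction step, but a common way to estimate staff-line tilt is a projection-profile search: the horizontal row-sum histogram is sharpest when the staff lines are level. A minimal sketch under that assumption (not necessarily the paper's exact method):

```python
import numpy as np

def estimate_skew(binary_img, max_deg=5.0, step=0.25):
    """Estimate staff-line tilt by vertically shearing the image at
    candidate angles and keeping the angle whose row-sum projection has
    the highest variance (staff lines level -> sharp projection peaks).
    Returns the correction angle in degrees."""
    h, w = binary_img.shape
    xs = np.arange(w)
    best_angle, best_score = 0.0, -1.0
    for deg in np.arange(-max_deg, max_deg + step, step):
        shift = np.round(xs * np.tan(np.radians(deg))).astype(int)
        sheared = np.zeros_like(binary_img)
        for x in range(w):
            # shift each column vertically (shear approximation of rotation)
            sheared[:, x] = np.roll(binary_img[:, x], shift[x])
        score = np.var(sheared.sum(axis=1))
        if score > best_score:
            best_angle, best_score = deg, score
    return best_angle
```

For small angles the column-wise shear is a cheap stand-in for true rotation, which keeps the search fast enough to run on a live camera preview.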

Music classification system through emotion recognition based on regression model of music signal and electroencephalogram features (음악신호와 뇌파 특징의 회귀 모델 기반 감정 인식을 통한 음악 분류 시스템)

  • Lee, Ju-Hwan;Kim, Jin-Young;Jeong, Dong-Ki;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.2
    • /
    • pp.115-121
    • /
    • 2022
  • In this paper, we propose a music classification system according to user emotions, using Electroencephalogram (EEG) features that appear when listening to music. In the proposed system, the relationship between the emotional EEG features extracted from EEG signals and the auditory features extracted from music signals is learned through a deep regression neural network. The system, based on this regression model, automatically generates EEG features mapped to the auditory characteristics of the input music and classifies music by applying these features to an attention-based deep neural network. The experimental results demonstrate the music classification accuracy of the proposed automatic music classification framework.

A Covariance-matching-based Model for Musical Symbol Recognition

  • Do, Luu-Ngoc;Yang, Hyung-Jeong;Kim, Soo-Hyung;Lee, Guee-Sang;Dinh, Cong Minh
    • Smart Media Journal
    • /
    • v.7 no.2
    • /
    • pp.23-33
    • /
    • 2018
  • A musical sheet is read by optical music recognition (OMR) systems that automatically recognize and reconstruct the read data to convert them into a machine-readable format such as XML so that the music can be played. This process, however, is very challenging due to the large variety of musical styles, symbol notations, and other distortions. In this paper, we present a model for the recognition of musical symbols through the use of a mobile application, whereby a camera is used to capture the input image; additional difficulties therefore arise from variations in illumination and from distortions. In our proposed model, we first generate a line adjacency graph (LAG) to remove the staff lines and to perform primitive detection. After symbol segmentation using the primitive information, we use a covariance-matching method to estimate the similarity between every symbol and pre-defined templates. This method generates the three hypotheses with the highest likelihood scores. We also add a global consistency check (time measurements) to verify the three hypotheses against the structure of the musical sheet; one of the three hypotheses is chosen in a final decision. The experiments show that our proposed method yields promising results.
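The covariance-matching step can be sketched with region covariance descriptors: each symbol patch is summarized by the covariance of per-pixel features, and patches are compared with a distance between covariance matrices. The feature set and log-Euclidean distance below are common choices for this technique, not necessarily the paper's exact ones:

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of a grayscale patch: per pixel, stack the
    features (x, y, intensity, |dx|, |dy|) and take their 5x5 covariance."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                      np.abs(dx).ravel(), np.abs(dy).ravel()])
    return np.cov(feats)

def covariance_distance(c1, c2):
    """Log-Euclidean distance between two covariance descriptors."""
    def logm(c):
        w, v = np.linalg.eigh(c + 1e-6 * np.eye(c.shape[0]))
        return v @ np.diag(np.log(np.maximum(w, 1e-12))) @ v.T
    return np.linalg.norm(logm(c1) - logm(c2), "fro")

def top_hypotheses(symbol, templates, k=3):
    """Return the k template names closest to the segmented symbol,
    mirroring the paper's three-highest-scoring hypotheses."""
    c = region_covariance(symbol)
    return sorted(templates, key=lambda name: covariance_distance(
        c, region_covariance(templates[name])))[:k]
```

The descriptor is invariant to where the symbol sits in the image, which is one reason covariance matching copes well with camera-captured scores.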

Super-resolution in Music Score Images by Instance Normalization

  • Tran, Minh-Trieu;Lee, Guee-Sang
    • Smart Media Journal
    • /
    • v.8 no.4
    • /
    • pp.64-71
    • /
    • 2019
  • The performance of an OMR (Optical Music Recognition) system is usually determined by the characteristics of the input music score images, and low resolution is one of the main factors leading to degraded image quality. In this paper, we handle the low-resolution problem using a super-resolution technique. We propose the use of a deep neural network with instance normalization to improve the quality of music score images. Instance normalization has proven beneficial in single-image enhancement and works better than batch normalization here, which shows the effectiveness of shifting the mean and variance of deep features at the instance level. The proposed method provides an end-to-end mapping from low-resolution to high-resolution images, producing new images whose resolution is four times higher than that of the originals. Our model has been evaluated on the "DeepScores" dataset and outperforms other existing methods.
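The key claim, shifting mean and variance at the instance level rather than across the batch, can be made concrete in a few lines of NumPy (a sketch of the normalization itself, not the paper's full super-resolution network):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each (sample, channel) feature map independently.
    x has shape (N, C, H, W); statistics are taken over H and W only,
    so every image keeps its own contrast statistics."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def batch_norm(x, eps=1e-5):
    """Batch normalization, by contrast, shares one mean/variance per
    channel across the entire batch."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```

After `instance_norm`, every feature map individually has zero mean and unit variance, whereas `batch_norm` lets one image's statistics leak into another's, which is the behavior the abstract argues against for image enhancement.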

A Study on the Variation of Music Characteristics based on User Controlled Music Emotion (음악 감성의 사용자 조절에 따른 음악의 특성 변형에 관한 연구)

  • Nguyen, Van Loi;Xubin, Xubin;Kim, Donglim;Lim, Younghwan
    • The Journal of the Korea Contents Association
    • /
    • v.17 no.3
    • /
    • pp.421-430
    • /
    • 2017
  • In this paper, research results on changing music emotion are described. Our goal was to provide a method by which a human user can change the emotion of music, and then to transform the original music into music whose emotion matches the changed emotion. For this purpose, a method of changing the emotion of playing music on a two-dimensional plane is described; the original music should then be transformed into music whose emotion equals the changed one. As a first step, a method of deciding which music factors should be changed, and by how much, is presented. Finally, the experimental method of editing the sound with an audio editor to change the emotion is described. There are many research results on the recognition of music emotion, but attempts to change music emotion are very rare, so this paper opens another avenue of research in the music emotion field.

Multiple Regression-Based Music Emotion Classification Technique (다중 회귀 기반의 음악 감성 분류 기법)

  • Lee, Dong-Hyun;Park, Jung-Wook;Seo, Yeong-Seok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.6
    • /
    • pp.239-248
    • /
    • 2018
  • Many new technologies are being studied with the arrival of the 4th industrial revolution, and emotional intelligence is one of the popular issues. Researchers have focused on emotional analysis for music services, based on artificial intelligence and pattern recognition, but they do not consider how to recommend suitable music according to the specific emotion of the user, which is a practical issue for music-related IoT applications. Thus, in this paper, we propose a probability-based music emotion classification technique that makes it possible to classify music with high precision over a range of emotions when developing music-related services. For user emotion recognition, Russell's model, one of the popular emotion models, is referenced. As music features, the average amplitude, peak average, number of wavelengths, average wavelength, and beats per minute were extracted. Multiple regressions were derived using regression analysis on the collected data, and probability-based emotion classification was carried out. In two experiments, the emotion matching rate of the proposed technique was 70.94% and 86.21%, compared with 66.83% and 76.85% for the survey participants. These results show that the proposed technique improves music classification.
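A hedged sketch of the pipeline the abstract describes: least-squares regressions map the five hand-crafted audio features onto Russell's valence-arousal plane, and the plane is then carved into coarse emotion classes. The quadrant labels and the synthetic fit below are illustrative, not the paper's actual regression coefficients:

```python
import numpy as np

# Five hand-crafted features named in the abstract.
FEATURES = ["avg_amplitude", "peak_average", "num_wavelengths",
            "avg_wavelength", "bpm"]

def fit_regression(X, y):
    """Ordinary least squares with an intercept; one such regression is
    fitted per emotion axis (valence, arousal)."""
    X1 = np.hstack([np.ones((len(X), 1)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def predict(coef, x):
    """Predicted position of a track on one emotion axis."""
    return float(coef[0] + x @ coef[1:])

def quadrant(valence, arousal):
    """Coarse emotion classes on Russell's circumplex (labels illustrative)."""
    if arousal >= 0:
        return "happy/excited" if valence >= 0 else "angry/tense"
    return "calm/relaxed" if valence >= 0 else "sad/depressed"
```

A track would be classified by fitting one `coef` vector for valence and one for arousal on labeled data, then calling `quadrant(predict(cv, x), predict(ca, x))` on new feature vectors.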

Samulnori Musicians' Experiences of Object Relations With Their Instruments (사물놀이 연주자의 악기 대상관계 경험)

  • Kim, Cheonsa;Kim, Kyoungsuk
    • Journal of Music and Human Behavior
    • /
    • v.18 no.2
    • /
    • pp.87-107
    • /
    • 2021
  • The purpose of this research was to explore the phenomenon of object relations with musical instruments as experienced by professional Samulnori musicians. The researcher conducted in-depth individual interviews with five Samulnori players, who also completed questionnaires with open-ended questions. The data were analyzed using Giorgi's (2004) phenomenological methodology. The analysis yielded 121 semantic units, seven subcategories, and three main categories: transitional object, object of expression and recognition of internal desires, and object for recognition of others and communication. These results suggest that the ensemble format of Samulnori promotes the development of the musician's object relationship and can externalize the player's internalized representational system and interaction style. This study is significant in that it reveals the endopsychic functional relationship between a musician and their instrument and provides a basis for the use of Samulnori instruments in music therapy.

Emotion-based music visualization using LED lighting control system (LED조명 시스템을 이용한 음악 감성 시각화에 대한 연구)

  • Nguyen, Van Loi;Kim, Donglim;Lim, Younghwan
    • Journal of Korea Game Society
    • /
    • v.17 no.3
    • /
    • pp.45-52
    • /
    • 2017
  • This paper proposes a new strategy for emotion-based music visualization. An emotional LED lighting control system is proposed to help audiences enhance the musical experience. In the system, emotion in music is recognized by a proposed algorithm using a dimensional approach. The algorithm uses music emotion variation detection to overcome some weaknesses of Thayer's model in detecting emotion in a one-second music segment. In addition, the IRI color model is combined with Thayer's model to determine LED light colors corresponding to 36 different music emotions, which are represented on the LED lighting control system through colors and animations. The accuracy of the music emotion visualization reached over 60%.
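The 36-emotion-to-color mapping can be sketched as angular binning on the two-dimensional emotion plane, with each bin assigned a hue. Both the binning and the even hue assignment below are illustrative assumptions, not the paper's IRI-based mapping:

```python
import math

def emotion_sector(valence, arousal, sectors=36):
    """Map a point on the valence-arousal plane to one of `sectors`
    angular bins (36 bins matches the 36 music emotions in the abstract)."""
    angle = math.degrees(math.atan2(arousal, valence)) % 360
    return int(angle // (360 / sectors))

def sector_to_hue(sector, sectors=36):
    """Spread the sectors evenly around the HSV hue circle (0-360 deg),
    so each emotion bin gets a distinct LED color."""
    return sector * 360 / sectors
```

With 36 sectors each bin spans 10 degrees of the emotion plane, so per-second emotion estimates move the LED hue smoothly as the music's valence and arousal drift.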

Musical Genre Classification Based on Deep Residual Auto-Encoder and Support Vector Machine

  • Xue Han;Wenzhuo Chen;Changjian Zhou
    • Journal of Information Processing Systems
    • /
    • v.20 no.1
    • /
    • pp.13-23
    • /
    • 2024
  • Music brings pleasure and relaxation to people. Therefore, it is necessary to classify musical genres based on scenes. Identifying favorite musical genres from massive music data is a time-consuming and laborious task. Recent studies have suggested that machine learning algorithms are effective in distinguishing between various musical genres. However, meeting the actual requirements in terms of accuracy or timeliness is challenging. In this study, a hybrid machine learning model that combines a deep residual auto-encoder (DRAE) and support vector machine (SVM) for musical genre recognition was proposed. Eight manually extracted features from the Mel-frequency cepstral coefficients (MFCC) were employed in the preprocessing stage as the hybrid music data source. During the training stage, DRAE was employed to extract feature maps, which were then used as input for the SVM classifier. The experimental results indicated that this method achieved a 91.54% F1-score and 91.58% top-1 accuracy, outperforming existing approaches. This novel approach leverages deep architecture and conventional machine learning algorithms and provides a new horizon for musical genre classification tasks.
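The DRAE front end can be sketched as a stack of dense layers with residual (skip) connections whose bottleneck code becomes the SVM's input. Everything below (layer sizes, initialization, the projection used on each skip path) is illustrative, not the paper's architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

class ResidualAutoEncoder:
    """Toy dense residual encoder: each layer computes
    relu(W x + b) + P x, where P projects the input onto the layer's
    output size so the skip connection can be added. The final code
    vector is what a downstream SVM would classify."""

    def __init__(self, dims, rng):
        # dims e.g. [40, 32, 16]: MFCC-derived stats -> hidden -> code
        self.W = [rng.standard_normal((o, i)) * 0.1
                  for i, o in zip(dims[:-1], dims[1:])]
        self.P = [rng.standard_normal((o, i)) * 0.1  # skip projections
                  for i, o in zip(dims[:-1], dims[1:])]
        self.b = [np.zeros(o) for o in dims[1:]]

    def encode(self, x):
        for W, P, b in zip(self.W, self.P, self.b):
            x = relu(W @ x + b) + P @ x  # residual connection
        return x
```

In the full pipeline these codes would be trained with a matching decoder and reconstruction loss, then frozen and fed as feature maps into an SVM classifier over the genre labels.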