• Title/Summary/Keyword: Music Performance


Haptic-based Music Experience Technology Trends for the Hearing Impaired (청각장애인을 위한 햅틱 기반 음악 실감 기술 동향)

  • Y.M. Song; S.Y. Shin; C.Y. Jeong; M.S. Kim
    • Electronics and Telecommunications Trends, v.39 no.1, pp.74-82, 2024
  • Music is a means of emotional expression and self-expression. It also allows people to interact with others and communicate with the world. Music may be considered inaccessible to people with hearing impairment, who are sometimes excluded from the music community. We explore trends in technologies and research that enable everyone to access and enjoy music through experiences that leverage new and innovative technological approaches and bridge the gap between people with and without hearing impairment. Various aspects of haptic systems are being studied, but most work is performance-oriented and focuses only on technical functions. As the research matures, more detailed studies that converge multiple senses are being attempted. These studies are likely to evolve into influential research areas that can positively affect people's lives in terms of accessibility and inclusion by providing tailored functions and stimuli to specific users, including those with hearing impairment.

Camera-based Music Score Recognition Using Inverse Filter

  • Nguyen, Tam; Kim, SooHyung; Yang, HyungJeong; Lee, GueeSang
    • International Journal of Contents, v.10 no.4, pp.11-17, 2014
  • The influence of the acquisition environment on music score images captured by a camera has not yet been seriously examined. Existing Optical Music Recognition (OMR) systems attempt to recognize music score images captured by a scanner under ideal conditions. Therefore, when such systems process images affected by distortion, different viewpoints, or suboptimal illumination, their performance, in terms of recognition accuracy and processing time, is unacceptable for practical deployment. In this paper, a novel, lightweight but effective approach for dealing with the issues caused by camera-based music scores is proposed. Based on staff line information, musical rules, run-length coding, and projection, all regions of interest are determined. Templates created from an inverse filter are then used to recognize the music symbols. Therefore, fragmentation and deformation problems, as well as missed recognitions, can be overcome using the developed method. The system was evaluated on a dataset consisting of real images captured by a smartphone. The achieved recognition rate and processing time were competitive with state-of-the-art works. In addition, the system was designed to be lightweight compared with other approaches, which mostly adopt machine learning algorithms, to allow deployment on portable devices with limited computing resources.
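
The staff-line detection that such OMR pipelines build on can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (not the authors' implementation) of locating staff lines in a binarized score image via horizontal projection and run merging; the function name and the `line_ratio` threshold are assumptions.

```python
import numpy as np

def find_staff_lines(binary_img, line_ratio=0.6):
    """Locate candidate staff-line rows in a binarized score image.

    binary_img: 2-D array, 1 = ink, 0 = background.
    line_ratio: fraction of the page width a row must cover to count
                as a staff line (assumed value, not from the paper).
    """
    height, width = binary_img.shape
    # Horizontal projection: number of ink pixels per row.
    row_sums = binary_img.sum(axis=1)
    candidate_rows = np.where(row_sums >= line_ratio * width)[0]

    # Merge adjacent candidate rows into single staff lines,
    # since a printed line is usually a few pixels thick.
    lines = []
    for r in candidate_rows:
        if lines and r - lines[-1][-1] <= 1:
            lines[-1].append(r)
        else:
            lines.append([r])
    # Return the center row of each detected line.
    return [int(np.mean(group)) for group in lines]
```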

Music Recommendation Technique Using Metadata (메타데이터를 이용한 음악 추천 기법)

  • Lee, Hye-in; Youn, Sung-dae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2018.05a, pp.75-78, 2018
  • Recently, the amount of music available for listening has been increasing exponentially due to the growth of the digital music market. Because of this, online music service users have had difficulty choosing their favorite music and have wasted a lot of time. In this paper, we propose a recommendation technique to minimize the difficulty of selection and to reduce wasted time. The proposed technique uses an item-based collaborative filtering algorithm that can recommend items without using personal information. For more accurate recommendation, the user's preference is predicted using the metadata of the music source, and the top-N music items with the highest predicted preference are finally recommended. Experimental results show that the proposed method performs better when the metadata is used than when it is not.
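
As a rough illustration of item-based collaborative filtering combined with metadata, the sketch below computes item-item similarity from a user-item rating matrix, blends it with a precomputed metadata similarity, and returns a top-N list. All names, the blending weight, and the data layout are assumptions, not the authors' implementation.

```python
import numpy as np

def top_n_recommend(ratings, meta_sim, user_idx, n=5, alpha=0.5):
    """Item-based CF blended with metadata similarity (illustrative sketch).

    ratings:  (num_users, num_items) matrix, 0 = not rated.
    meta_sim: (num_items, num_items) similarity from music metadata
              (e.g., genre/artist overlap), assumed to be precomputed.
    alpha:    blend weight between rating-based and metadata similarity.
    """
    # Cosine similarity between item rating columns.
    norms = np.linalg.norm(ratings, axis=0) + 1e-9
    rating_sim = (ratings.T @ ratings) / np.outer(norms, norms)

    # Blend collaborative and metadata similarities.
    sim = alpha * rating_sim + (1 - alpha) * meta_sim

    user_ratings = ratings[user_idx]
    rated = user_ratings > 0
    # Predicted preference: similarity-weighted average of the user's rated items.
    scores = (sim[:, rated] @ user_ratings[rated]
              / (np.abs(sim[:, rated]).sum(axis=1) + 1e-9))
    scores[rated] = -np.inf          # do not recommend already-rated items
    return np.argsort(scores)[::-1][:n]
```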


Musical Identity Online: A "Netnographic" Perspective of Online Communities

  • Strubel, Jessica; Pookulangara, Sanjukta; Murray, Amber
    • International Journal of Costume and Fashion, v.13 no.2, pp.15-29, 2013
  • Today's technology enables consumers to trade millions of dollars, conduct online banking, access entertainment, and do countless other activities at the click of a button. Online social networks (OSN) have become a cultural phenomenon that allows for individualistic consumerism. Consumers are increasingly utilizing OSN to share ideas, build communities, and contact fellow consumers who are similar to themselves. The relevance of online communities to the music industry is immense, especially because musicians now use social media to build global audiences. Not only is information about music and performance disseminated online, but musical commodities are also sold and traded online. Online music communities allow consumers to adopt and create new identities through the purchase of subcultural commodities. Given the growing economic importance of online music communities, it is important to get a holistic view of subcultural communities online. This study applied content analysis to online music community websites, using the netnography methodology developed by Kozinets for data collection, to analyze consumers' purchasing and consumption of subcultural commodities online as they relate to the formation of subcultural identities. Findings showed that subcultural items are predominantly purchased online, especially digital music, and that there is a need for more custom craft items. The authors present a new conceptual taxonomy of online subcultural consumer classifications based on online behavior patterns.

Attention-based CNN-BiGRU for Bengali Music Emotion Classification

  • Subhasish Ghosh; Omar Faruk Riad
    • International Journal of Computer Science & Network Security, v.23 no.9, pp.47-54, 2023
  • For Bengali music emotion classification, deep learning models, particularly CNNs and RNNs, are frequently used, but previous research has suffered from low accuracy and overfitting. In this research, an attention-based Conv1D and BiGRU model is designed for music emotion classification, and comparative experiments show that the proposed model classifies emotions more accurately. We propose a Conv1D and BiGRU model with an attention mechanism for emotion classification on our Bengali music dataset. Preprocessing of the WAV files makes use of MFCCs. To reduce the dimensionality of the feature space, contextual features are extracted by two Conv1D layers. Dropout is used to mitigate overfitting. Two bidirectional GRU networks update past and future emotion representations of the output from the Conv1D layers. The two BiGRU layers are connected to an attention mechanism that assigns greater weight to informative MFCC feature vectors, and this attention mechanism increases the accuracy of the proposed classification model. The resulting vector is finally classified into four emotion classes (Angry, Happy, Relax, Sad) using a dense, fully connected layer with softmax activation. The proposed Conv1D+BiGRU+Attention model classifies emotions in the Bengali music dataset more effectively than baseline methods, achieving 95% accuracy.
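
A minimal sketch of a Conv1D + BiGRU + attention architecture of the kind described in the abstract, written with tf.keras; the layer sizes, the MFCC input shape, and the use of the built-in Attention layer are assumptions rather than the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_emotion_model(time_steps=216, n_mfcc=40, n_classes=4):
    """Conv1D -> dropout -> BiGRU x2 -> attention -> softmax over 4 emotions."""
    inputs = layers.Input(shape=(time_steps, n_mfcc))        # MFCC sequence

    x = layers.Conv1D(64, kernel_size=5, activation="relu", padding="same")(inputs)
    x = layers.Conv1D(64, kernel_size=5, activation="relu", padding="same")(x)
    x = layers.Dropout(0.3)(x)                               # reduce overfitting

    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)

    # Self-attention over time steps, then pool to a single vector.
    attn = layers.Attention()([x, x])
    x = layers.GlobalAveragePooling1D()(attn)

    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```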

A relevance-based pairwise chromagram similarity for improving cover song retrieval accuracy (커버곡 검색 정확도 향상을 위한 적합도 기반 크로마그램 쌍별 유사도)

  • Jin Soo Seo
    • The Journal of the Acoustical Society of Korea, v.43 no.2, pp.200-206, 2024
  • Computing music similarity is an indispensable component in developing music search services. This paper proposes a relevance weight for each chromagram vector in computing a music similarity function for cover song identification, in order to boost identification accuracy. We derive a music similarity function using relevance weights based on the probabilistic relevance model, where higher weights are assigned to less frequently occurring, discriminant chromagram vectors and lower weights to more frequently occurring ones. Experimental results on two cover song datasets show that the proposed music similarity improves cover song identification performance.
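
The idea of weighting chromagram frames by how discriminative they are can be sketched with an IDF-style weight. The snippet below is a simplified stand-in for the paper's probabilistic relevance model: it quantizes frames only by their dominant pitch class and uses a log inverse-frequency weight, both of which are illustrative assumptions.

```python
import numpy as np

def chroma_relevance_weights(chroma):
    """Assign higher weights to rarely occurring chroma frames.

    chroma: (num_frames, 12) chromagram; each frame is L2-normalized.
    Each frame is quantized by its dominant pitch class, and an IDF-like
    weight serves as a crude proxy for a probabilistic relevance weight.
    """
    dominant = chroma.argmax(axis=1)
    codes, counts = np.unique(dominant, return_counts=True)
    freq = dict(zip(codes, counts))
    num_frames = len(chroma)
    # Rare codes -> large weight, frequent codes -> small weight.
    return np.array([np.log(1 + num_frames / freq[d]) for d in dominant])

def weighted_cross_similarity(chroma_a, chroma_b):
    """Pairwise cosine similarity between frames, scaled by relevance weights."""
    w_a = chroma_relevance_weights(chroma_a)
    sim = chroma_a @ chroma_b.T            # frames are already normalized
    return sim * w_a[:, None]              # emphasize discriminant frames
```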

Application of computer methods in music composition using smart nanobeams

  • Ying Shi; Maryam Shokravi; X. Chen
    • Advances in Nano Research, v.17 no.3, pp.285-291, 2024
  • The paper considers a new application of computer methods in music composition using smart nanobeams: an integration of advanced computational techniques with specially designed materials for enhanced performance capabilities in music composition. The research exploits particular properties of smart nanobeams embedded with piezoelectric materials that modulate and control sound vibrations in real time. With the help of numerical simulations and optimization algorithms, the study determines the effects of changes in nanobeam length and thickness and in the applied voltage on the acoustical properties and tone quality of musical instruments. By means of piezo-elasticity theory, the governing equations of the nanobeam system are derived and solved numerically to predict the dynamic behavior of the system under different conditions. Results show that manipulating these parameters allows great control over the pitch, timbre, and resonance of the instrument; such a system offers new ways for composers and performers to create music. The research also validates the computational model against available theoretical data, demonstrating its accuracy and possible applications. The work thus marks a large step toward the intersection of music composition and smart material technology; when further developed, smart nanobeams could revolutionize the process of composing and performing music on such instruments.
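
As a very rough worked example of the kind of beam dynamics solved numerically in such studies, the snippet below evaluates the classical Euler-Bernoulli natural frequencies of a cantilever beam. It ignores the piezoelectric coupling and nanoscale effects that the paper's governing equations include, and all material values are assumptions.

```python
import numpy as np

def cantilever_frequencies(E, I, rho, A, L, n_modes=3):
    """Natural frequencies (Hz) of a clamped-free Euler-Bernoulli beam.

    E: Young's modulus [Pa], I: second moment of area [m^4],
    rho: density [kg/m^3], A: cross-sectional area [m^2], L: length [m].
    """
    # First roots of the cantilever characteristic equation cos(bL)cosh(bL) = -1.
    beta_L = np.array([1.8751, 4.6941, 7.8548])[:n_modes]
    omega = (beta_L / L) ** 2 * np.sqrt(E * I / (rho * A))   # rad/s
    return omega / (2 * np.pi)

# Example with assumed silicon micro-beam values: longer or thinner beams
# lower the frequencies, which is how pitch would be tuned.
print(cantilever_frequencies(E=169e9, I=8.3e-26, rho=2330, A=1e-12, L=50e-6))
```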

A Multiple Case Study on the Relationship Between School Music Experiences and Motivation for Music Engagement Among Adults in 20s (학교 음악 경험과 20대 성인의 음악 생활화 동기에 관한 다중사례 연구)

  • Choi, Chi Hyun; Jung, Joo Yeon
    • Journal of Music and Human Behavior, v.21 no.1, pp.1-27, 2024
  • This study investigates the link between music integration in the lives of adults in their twenties and their school music experiences. Ten individuals in their twenties were interviewed to explore their experiences based on the fundamental psychological needs of self-determination theory (autonomy, competence, and relatedness). Participants were categorized into an active music engagement group (5 individuals) and an inactive group (5 individuals) for individual interviews. Transcripts were analyzed following the five steps of the grounded theory data analysis technique. Results indicated a strong connection between music activities during school years and current motivation for music integration, associated with the fulfillment of the psychological needs outlined in self-determination theory. In particular, this study identified instructional methods, school music activities, and performance evaluations as closely related to autonomy, competence, and relatedness. It offers a comprehensive analysis of how experiences in these areas during school music activities correlate with values and motivations for music integration in adulthood. Additionally, the study suggests ways to promote the voluntary incorporation of music into life through positive experiences of autonomy, competence, and relatedness in music activities.

Speech/Music Discrimination Using Spectral Peak Track Analysis (스펙트럴 피크 트랙 분석을 이용한 음성/음악 분류)

  • Keum, Ji-Soo; Lee, Hyon-Soo
    • Proceedings of the IEEK Conference, 2006.06a, pp.243-244, 2006
  • In this study, we propose a speech/music discrimination method using spectral peak track analysis. The proposed method uses the duration of spectral peak tracks that remain in the same frequency channel as the feature parameter and applies a duration threshold to discriminate speech from music. Experimental results show that the correct discrimination ratio varies with the threshold, but the method achieves performance comparable to other methods and is computationally efficient.
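
A minimal sketch of the feature described above: track how long the dominant spectral peak stays in the same frequency channel, and use the average track duration to separate sustained musical tones from rapidly varying speech. The STFT parameters and the duration threshold below are assumptions, not the values used in the paper.

```python
import numpy as np

def peak_track_durations(signal, n_fft=1024, hop=256):
    """Duration (in frames) of runs where the dominant peak bin is unchanged."""
    # Magnitude spectrogram via short-time FFT with a Hann window.
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    peak_bins = mags.argmax(axis=1)              # dominant channel per frame

    durations, run = [], 1
    for prev, cur in zip(peak_bins[:-1], peak_bins[1:]):
        if cur == prev:
            run += 1
        else:
            durations.append(run)
            run = 1
    durations.append(run)
    return np.array(durations)

def is_music(signal, duration_threshold=8):
    """Music tends to hold stable peaks (long tracks); speech does not."""
    return peak_track_durations(signal).mean() >= duration_threshold
```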


An Efficient Frequent Melody Indexing Method to Improve Performance of Query-By-Humming System (허밍 질의 처리 시스템의 성능 향상을 위한 효율적인 빈번 멜로디 인덱싱 방법)

  • You, Jin-Hee; Park, Sang-Hyun
    • Journal of KIISE: Databases, v.34 no.4, pp.283-303, 2007
  • Recently, the study of efficient ways to store and retrieve enormous amounts of music data has become one of the important issues in multimedia databases. The most common method of MIR (Music Information Retrieval) is a text-based approach that uses text information to search for the desired music. However, if users do not remember a keyword for the music, such systems cannot give them correct answers. Moreover, since these systems are implemented only for exact matching between the query and music data, they cannot mine any information on similar music data and are therefore inappropriate for similarity matching of music data. To solve this problem, we propose an Efficient Query-By-Humming System (EQBHS) with a content-based indexing method that efficiently stores and retrieves music when a user queries with imperfect humming. To accelerate query processing in EQBHS, we design indices for significant melodies, which are 1) frequent melodies occurring many times in a single piece of music, on the assumption that users hum what they can easily remember, and 2) melodies partitioned by rests. In addition, we propose an error-tolerant mapping method from notes to characters to make searching efficient, along with a frequent melody extraction algorithm. We verified the assumption about frequent melodies through a questionnaire and compared the performance of the proposed EQBHS with an N-gram approach through various experiments on a number of music data.
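
Note-to-character mapping with error-tolerant matching can be illustrated with a contour encoding and edit distance. The Parsons-style U/D/S alphabet and the matching below are illustrative assumptions, not the paper's actual mapping or index structure.

```python
def contour_string(pitches):
    """Map a note sequence to characters: U(p), D(own), S(ame)."""
    out = []
    for prev, cur in zip(pitches[:-1], pitches[1:]):
        out.append("U" if cur > prev else "D" if cur < prev else "S")
    return "".join(out)

def edit_distance(a, b):
    """Levenshtein distance: tolerates humming errors (wrong/extra/missing notes)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

# A hummed query is matched against indexed melodies; the melody with the
# smallest edit distance is returned as the best candidate.
query = contour_string([60, 62, 64, 64, 62])        # hypothetical MIDI pitches
melodies = {"song_a": "UUSD", "song_b": "DDUS"}
best = min(melodies, key=lambda k: edit_distance(query, melodies[k]))
```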