• Title/Summary/Keyword: Music Engineering

611 search results

Analysis of Musical Characteristic Which is Liked by Variable Age Group (다양한 연령층이 좋아하는 음악특성 분석)

  • Yoon, Sang-Hoon;Kyon, Doo-Heon;Bae, Myong-Jin
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.989-990
    • /
    • 2008
  • Most popular music is produced by genre and tailored to a particular age group. In general, people in their teens and twenties like dance and techno, while people over 40 like trot. In this paper, we analyze the characteristics of the music preferred by each age group. Through this analysis, we also identify the factors shared by music that is popular across all age groups regardless of age: slow lyrics, a fast beat, a simple and repeated melody, and a frequency characteristic rich in the mid-tone range.


A Study on Signal Analysis of Korean Traditional Music Instrument, Kayakeum and Piri (국악 악기 가야금과 피리의 신호 분석에 관한 연구)

  • Lee Sang-Min;Lee Jong-Seok;Lee Kwang-Hyung
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.247-250
    • /
    • 1999
  • Like other music, Korean traditional music weaves the sounds of many instruments into beautiful compound melodies. In this paper, we separate the melodies played by two instruments, the Kayakeum and the Piri (a Korean pipe), by analyzing their audio signals. Each pitch of the Kayakeum and the Piri has a unique frequency component, so their melodies can be transcribed separately into sheet notation and MIDI codes. We expect this work to benefit everyone studying and teaching Korean music.
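The abstract's key observation, that each pitch of these instruments has a unique frequency component which can be mapped to notation and MIDI, can be sketched with a simple FFT peak search. This is a minimal illustration, not the paper's method; the synthesized sine tone stands in for a real Kayakeum recording:

```python
import numpy as np

def detect_pitch_hz(signal, sr):
    """Return the dominant frequency of a mono signal via an FFT peak search."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return freqs[np.argmax(spectrum)]

def hz_to_midi(f):
    """Convert a frequency in Hz to the nearest MIDI note number (A4 = 440 Hz = 69)."""
    return int(round(69 + 12 * np.log2(f / 440.0)))

# Synthesize one second of A4 (440 Hz) as a stand-in for a recorded instrument tone.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)

f0 = detect_pitch_hz(tone, sr)
print(hz_to_midi(f0))  # 69
```

Real instrument tones contain harmonics, so a production transcriber would track the fundamental rather than the strongest bin, but the mapping from frequency to MIDI note is the same.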


A Study on ISAR Imaging Algorithm for Radar Target Recognition (표적 구분을 위한 ISAR 영상 기법에 대한 연구)

  • Park, Jong-Il;Kim, Kyung-Tae
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.19 no.3
    • /
    • pp.294-303
    • /
    • 2008
  • ISAR (Inverse Synthetic Aperture Radar) images represent the 2-D (two-dimensional) spatial distribution of the RCS (Radar Cross Section) of an object, and they can be applied to the problem of target identification. The traditional approach to ISAR imaging is the 2-D IFFT (Inverse Fast Fourier Transform). However, the 2-D IFFT yields low-resolution ISAR images, especially when the measured frequency bandwidth and angular region are limited. To improve on the resolution of the Fourier transform, various high-resolution spectral estimation approaches have been applied to ISAR imaging, such as the AR (Auto-Regressive), MUSIC (Multiple Signal Classification), and Modified MUSIC algorithms. In this study, these high-resolution spectral estimators, as well as the 2-D IFFT approach, are combined with a recently developed ISAR image classification algorithm, and their performances are carefully analyzed and compared in the framework of radar target recognition.
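The conventional imaging step the abstract describes, a 2-D IFFT of the measured frequency-angle data, can be sketched as follows. The scatterer positions, frequency band, and angle window are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical stepped-frequency measurement: two point scatterers observed
# over a frequency band and a small aspect-angle window.
c = 3e8
freqs = np.linspace(9e9, 10e9, 64)            # Hz
angles = np.deg2rad(np.linspace(-2, 2, 64))   # rad
scatterers = [(1.5, 0.0), (-1.5, 0.5)]        # (x, y) positions in metres

F, A = np.meshgrid(freqs, angles, indexing="ij")
data = np.zeros_like(F, dtype=complex)
for x, y in scatterers:
    # Far-field phase history of one point scatterer (small-angle model).
    data += np.exp(-1j * 4 * np.pi * F / c * (x * np.cos(A) + y * np.sin(A)))

# Conventional ISAR image: 2-D IFFT of the frequency-angle data.
image = np.abs(np.fft.ifft2(data))
peak = np.unravel_index(np.argmax(image), image.shape)
print(peak)
```

The bright peaks in `image` correspond to the scatterer positions in range/cross-range cells; the limited 1 GHz band and 4-degree window are exactly what caps the resolution of this approach, which motivates the MUSIC-style estimators the paper compares.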

A Study on the Music Therapy Management Model Based on Text Mining (텍스트 마이닝 기반의 음악치료 관리 모델에 관한 연구)

  • Park, Seong-Hyun;Kim, Jae-Woong;Kim, Dong-Hyun;Cho, Han-Jin
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.8
    • /
    • pp.15-20
    • /
    • 2019
  • Music therapy has shown many benefits in treating children with disabilities and mental conditions, yet no systematic framework for managing treatment has been established. For a music therapist to treat accurately, diverse music therapy cases and treatment-history data must be analyzed; in practice, several factors make it difficult to give each client or patient the most appropriate treatment. In this paper, we propose a music therapy knowledge management model that converges existing therapy data with text mining technology. Using the proposed model, similar cases can be retrieved, enabling accurate and effective treatment based on specific, reliable data related to the patient or client. This can be expected to maximize the original purpose and effect of music therapy and to help treat more patients.
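A minimal sketch of the retrieval idea (similar-case search over therapy records via text mining) could look like the following. The mini-corpus, TF-IDF weighting, and cosine ranking are assumptions for illustration, not the paper's actual pipeline:

```python
import math
from collections import Counter

# Hypothetical mini-corpus of past therapy-session notes; the paper's model
# would mine real treatment records instead of these placeholder strings.
cases = [
    "anxiety reduced after slow tempo piano improvisation",
    "child with autism responded to rhythmic drum imitation",
    "depression eased through lyric analysis and singing",
]
query = "child client calmed by slow piano pieces"

def tfidf(docs):
    """Return one {term: tf-idf weight} dict per document."""
    toks = [d.split() for d in docs]
    n = len(toks)
    df = Counter(t for doc in toks for t in set(doc))
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in toks]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values()))
    return dot / ((norm(u) * norm(v)) or 1.0)

# Vectorize the corpus together with the query, then rank past cases.
*case_vecs, query_vec = tfidf(cases + [query])
best = max(range(len(cases)), key=lambda i: cosine(query_vec, case_vecs[i]))
print(cases[best])
```

The first case wins here because it shares the most weighted terms with the query; a real system would add Korean morphological analysis and a far larger record base.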

A Music Recommendation Method Using Emotional States by Contextual Information

  • Kim, Dong-Joo;Lim, Kwon-Mook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.10
    • /
    • pp.69-76
    • /
    • 2015
  • A user's selection of music is largely influenced by private taste as well as emotional state; it is an unconscious projection of the user's emotion. We therefore regard the music a user selects as a proxy for that emotional state. In this paper, we infer users' emotional states from the music they select in a specific context, and analyze the correlation between that context and the emotional state. To extract emotional states from music, the proposed method performs morphological analysis on the lyrics of user-selected songs to extract emotional words that represent each song, and learns the weights of a linear classifier over the emotional features of the extracted words. The regularities learned by the classifier are used to calculate predictive weights for unseen music, using the weights of music chosen by other users in contexts similar to the active user's. Finally, we propose a method that recommends pieces of music matched to the user's context and emotional state. Experimental results show that the proposed method is more accurate than the traditional collaborative filtering method.
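The classifier step described above, emotional words as features with a linear model learning per-word weights, might be sketched like this. The vocabulary, songs, and labels are invented; the paper's actual features come from morphological analysis of lyrics:

```python
import numpy as np

# Toy illustration: emotional words extracted from lyrics become binary
# features, and a linear classifier learns a weight per word.
vocab = ["tears", "lonely", "night", "dance", "party", "sunshine"]
# Rows: songs as bag-of-emotional-words; labels: 0 = sad context, 1 = happy.
X = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 1],
              [0, 0, 1, 1, 1, 0]], dtype=float)
y = np.array([0, 0, 1, 1], dtype=float)

# Train a tiny logistic-regression classifier by gradient descent.
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.5 * X.T @ (y - p) / len(y)

# Score an unseen song's lyrics against the learned emotional weights.
new_song = np.array([0, 0, 0, 1, 0, 1], dtype=float)  # "dance", "sunshine"
happy_prob = 1.0 / (1.0 + np.exp(-new_song @ w))
print(round(happy_prob, 2))
```

The learned weights play the role of the paper's "regularities": once trained, any candidate song can be scored from its emotional words alone, which is what allows predictive weights for virtual (unseen) music.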

Musical Instrument Conversion based Music Ensemble Application Development for Smartphone (스마트폰을 위한 악기 변환 기반의 음악 합주 애플리케이션 개발)

  • Jang, Won;Cho, Hyo-Jin;Shin, Seong-Hyeon;Park, Hochong
    • Journal of Broadcast Engineering
    • /
    • v.22 no.2
    • /
    • pp.173-181
    • /
    • 2017
  • In this paper, we propose a musical-instrument-conversion-based music ensemble application for smartphones. To create ensemble music with the virtual instruments provided by conventional smartphone applications, the user must know how to play each instrument; moreover, playing naturally is impossible when the smartphone screen cannot show the entire instrument. To solve this problem, we propose a smartphone application that records music played on an acoustic guitar, converts it to the sounds of other instruments, applies effects to the converted sound, and mixes all the sounds into a final ensemble. With the proposed application, the user can create ensemble music by playing only the acoustic guitar.

The Statistical Performance Analysis of Satellite Tracking Algorithm for Mobile TT&C (이동위성 관제용 위성 위치 탐지 알고리즘의 통계적 성능 분석)

  • Lee, Yun-Soo;Lee, Byung-Seub;Chung, Won-Chan
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.18 no.12
    • /
    • pp.1352-1358
    • /
    • 2007
  • This paper addresses the statistical characteristics of the MUSIC algorithm, which has been suggested as a satellite direction-finding algorithm. If the MUSIC algorithm is adopted as the satellite direction detection method in a mobile TT&C system, its statistical performance will be closely related to the overall performance of the system. We therefore examine the statistical characteristics of the algorithm's parameters with respect to SNR and data length, and analyze their ultimate effect on satellite direction finding.
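For readers unfamiliar with MUSIC, a minimal one-source direction-finding sketch is shown below. The array geometry, noise level, and snapshot count are illustrative assumptions, not the paper's mobile TT&C setup:

```python
import numpy as np

# Minimal 1-D MUSIC sketch: estimate the arrival angle of one signal at a
# uniform linear array with half-wavelength element spacing.
rng = np.random.default_rng(0)
m, snapshots = 8, 200          # array elements, time samples
true_deg = 20.0

def steering(theta_deg):
    """Array response vector for a plane wave from angle theta (d = lambda/2)."""
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

# Simulate snapshots: one narrowband source plus white noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((m, snapshots)) +
               1j * rng.standard_normal((m, snapshots)))
X = np.outer(steering(true_deg), s) + noise

# Sample covariance and its eigendecomposition (eigenvalues ascending).
R = X @ X.conj().T / snapshots
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, :-1]   # noise subspace: all but the largest eigenvector

# MUSIC pseudospectrum: peaks where steering vectors are orthogonal to En.
grid = np.arange(-90.0, 90.0, 0.1)
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
est_deg = grid[int(np.argmax(spectrum))]
print(est_deg)
```

The statistical behaviour the paper studies shows up directly here: lowering the SNR (larger noise factor) or shortening `snapshots` degrades the covariance estimate and widens the spread of `est_deg` around the true angle.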

A Study on Visual and Auditory Emotion under Color and Music Stimuli (색과 음악 자극에 의한 시청각 감성지표에 관한 연구)

  • 김남균;김지훈;유충기
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.5
    • /
    • pp.539-546
    • /
    • 1998
  • The purpose of this study is to estimate human emotion quantitatively under color and music stimuli and to examine the correlation between color and music sensibility. Physiological signals (electroencephalogram, electrocardiogram, galvanic skin conductivity, and respiration rate) were measured to compare color and music sensibilities. The subjects' sensibility was investigated using factor analysis and a 20-item semantic differential method (7-interval scale). The results showed that red, yellow, and violet provoked active, exciting sensations, much as dance, rock, and blues music did, while blue, cyan, and pink were deeply involved in tranquil, restful emotions, as were classical and ballad music.


Analysis and Prevention of Contents Exposure in Music Streaming (음악 스트리밍 서비스에서 음원과 메타데이터 노출 분석력 및 방지 방안)

  • Jung, Woo-sik;Nam, Hyun-gyu;Lee, Young-seok
    • KNOM Review
    • /
    • v.21 no.2
    • /
    • pp.10-17
    • /
    • 2018
  • With the popularization of smart devices and the development of the wireless Internet, music content is increasingly consumed by streaming rather than downloading. In this paper, we analyze the music sources and metadata exposed in network traffic for 19 music streaming services in Korea and abroad, and propose preventive measures. Our analysis found that all 19 services exposed the metadata of the music being streamed. To prevent such exposure, we propose protection methods for music and metadata, such as certificate pinning.
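Certificate pinning, the countermeasure named above, boils down to comparing the server certificate presented during the TLS handshake against a hash shipped with the client. A minimal sketch with stand-in bytes (no live connection) might look like:

```python
import hashlib

def pin_matches(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Compare a server certificate (DER bytes) against a pinned SHA-256 hash.

    A real client would obtain der_cert from the TLS handshake, e.g. via
    ssl.SSLSocket.getpeercert(binary_form=True), and refuse to send any
    request when the hash does not match the value shipped in the app.
    """
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

# Illustration with placeholder bytes instead of a live TLS connection.
fake_cert = b"placeholder DER certificate bytes"
pinned = hashlib.sha256(fake_cert).hexdigest()
print(pin_matches(fake_cert, pinned))                # True
print(pin_matches(b"attacker certificate", pinned))  # False
```

Pinning defeats the interception setups used in this kind of traffic analysis: a proxy's substituted certificate hashes to a different value, so the client aborts before any music or metadata crosses the wire.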

Attention-based CNN-BiGRU for Bengali Music Emotion Classification

  • Subhasish Ghosh;Omar Faruk Riad
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.9
    • /
    • pp.47-54
    • /
    • 2023
  • For Bengali music emotion classification, deep learning models, particularly CNNs and RNNs, are frequently used, but previous research has suffered from low accuracy and overfitting. In this research, an attention-based Conv1D and BiGRU model is designed for music emotion classification of our Bengali music dataset, and comparative experiments show that the proposed model classifies emotions more accurately. Preprocessing of the .wav files uses MFCCs. Contextual features are extracted by two Conv1D layers, which also reduce the dimensionality of the feature space, and dropout is used to mitigate overfitting. Two bidirectional GRU networks update past and future emotion representations of the output from the Conv1D layers, and the two BiGRU layers are connected to an attention mechanism that gives greater weight to the more informative MFCC feature vectors; this attention mechanism increases the accuracy of the proposed model. The resulting vector is finally classified into four emotion classes, Angry, Happy, Relax, and Sad, by a dense, fully connected layer with softmax activation. The proposed Conv1D+BiGRU+Attention model classifies emotions in our Bengali music dataset more effectively than baseline methods, achieving 95% accuracy.
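The attention step the abstract credits for the accuracy gain can be sketched in isolation: score each time step of the BiGRU output, softmax the scores, and pool the sequence into one fixed-size vector for the final dense layer. Shapes and values below are invented placeholders, not the paper's trained parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 50, 16                     # time steps, feature size per step
H = rng.standard_normal((T, d))   # stand-in for BiGRU hidden states over MFCC frames
w = rng.standard_normal(d)        # learned attention parameter vector

scores = H @ w                    # one relevance score per frame
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()              # softmax attention weights (sum to 1)
context = alpha @ H               # weighted sum -> fixed-size context vector

print(context.shape)  # (16,)
```

Frames whose hidden states align with `w` dominate `context`, which is how the mechanism lets some MFCC feature vectors contribute more than others before softmax classification.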