• Title/Summary/Keyword: Music Engineering


Localization and size estimation for breaks in nuclear power plants

  • Lin, Ting-Han; Chen, Ching; Wu, Shun-Chi; Wang, Te-Chuan; Ferng, Yuh-Ming
    • Nuclear Engineering and Technology, v.54 no.1, pp.193-206, 2022
  • Several algorithms for nuclear power plant (NPP) break event detection, isolation, localization, and size estimation are proposed. A break event can be promptly detected and isolated after its occurrence by simultaneously monitoring changes in the sensing readings and by employing an interquartile range-based isolation scheme. By treating the multi-sensor data block of a break as rank-one, the break can be located as the position whose lead field vector is most orthogonal to the noise subspace of that data block, using the Multiple Signal Classification (MUSIC) algorithm; a sketch of this localization step is given below. Owing to the flexibility of deep neural networks in selecting the best regression model for the available data, the break size can be estimated from multi-sensor recordings of the break regardless of the sensor types. The efficacy of the proposed algorithms was evaluated using data generated by the Maanshan NPP simulator. The experimental results demonstrated that the MUSIC method could distinguish between two nearby breaks; however, if the two breaks were both close together and small, it could locate them incorrectly. The break sizes estimated by the proposed deep learning model were close to their actual values, but relative errors of more than 8% were observed when estimating the sizes of small breaks.
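
A minimal numpy sketch of the MUSIC-style localization step described in the abstract, assuming the sensing readings have already been collected into a multi-sensor data block and that a lead field vector is available for each candidate break position; the names `data_block` and `lead_fields` are illustrative, not the paper's.

```python
import numpy as np

def music_localize(data_block, lead_fields, signal_rank=1):
    """Locate a break as the candidate position whose lead field vector is
    most orthogonal to the noise subspace of the sensor data block.

    data_block  : (n_sensors, n_samples) array of sensing readings
    lead_fields : dict mapping candidate position -> (n_sensors,) lead field vector
    """
    # Eigen-decompose the sample covariance of the multi-sensor block.
    cov = data_block @ data_block.T / data_block.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    # With a rank-one break signal, all but the largest eigenvector(s)
    # span the noise subspace.
    noise_subspace = eigvecs[:, :-signal_rank]

    scores = {}
    for pos, a in lead_fields.items():
        a = a / np.linalg.norm(a)
        # MUSIC pseudo-spectrum: large when a is orthogonal to the noise subspace.
        scores[pos] = 1.0 / (np.linalg.norm(noise_subspace.T @ a) ** 2 + 1e-12)
    return max(scores, key=scores.get), scores
```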

Music Genre Classification using Spikegram and Deep Neural Network (스파이크그램과 심층 신경망을 이용한 음악 장르 분류)

  • Jang, Woo-Jin; Yun, Ho-Won; Shin, Seong-Hyeon; Cho, Hyo-Jin; Jang, Won; Park, Hochong
    • Journal of Broadcast Engineering, v.22 no.6, pp.693-701, 2017
  • In this paper, we propose a new method for music genre classification using the spikegram and a deep neural network. The human auditory system encodes the input sound in the time and frequency domains so as to maximize the amount of sound information delivered to the brain using minimum energy and resources. The spikegram is a method of analyzing a waveform based on this encoding function of the auditory system. In the proposed method, we analyze the signal using the spikegram and extract a feature vector composed of the key information for genre classification, which is then used as the input to the neural network; a sketch of this classification stage appears below. We measure music genre classification performance on the GTZAN dataset, which consists of 10 music genres, and confirm that the proposed method performs well with a low-dimensional feature vector compared with current state-of-the-art methods.
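
As a rough illustration of the classification stage, the sketch below feeds an assumed low-dimensional, spikegram-derived feature matrix into a small fully connected network via scikit-learn's `MLPClassifier`; the random placeholder arrays stand in for the paper's actual GTZAN features, and the network size is arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Assume `features` holds low-dimensional spikegram-derived vectors (one row
# per clip) and `labels` holds the 10 GTZAN genre indices. The spikegram
# feature extraction itself is not reproduced here.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 40))      # placeholder feature vectors
labels = rng.integers(0, 10, size=1000)     # placeholder genre labels

X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                          test_size=0.2, random_state=0)

# A small fully connected network standing in for the paper's deep neural network.
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print("genre accuracy:", clf.score(X_te, y_te))
```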

The Effect of Stress Reduction of Human Body by the Vibroacoustic Equipment (음향진동장치에 의한 인체의 스트레스 저감 효과)

  • Moon, D.H.; Kim, Y.W.
    • Journal of Power System Engineering, v.11 no.2, pp.32-37, 2007
  • The present study describes the effects of music and vibroacoustic stimuli on the relaxation of the human body. We carried out an experiment on six subjects, three men and three women. We recorded the electroencephalogram (EEG) of all subjects before and after exposure to either a loud noise or to meditation music combined with acoustic vibration. The vibroacoustic device transmitted the meditation music to the body as vibration between 20 Hz and 250 Hz. The experimental results confirmed that the meditation music and vibroacoustic stimuli reduced stress in the human body, as the alpha wave increased continuously during and after these stimuli.


A Study of Methods of Rest for Reduction of The Night Shift Workers' Workload (야간작업자의 작업부담경감을 위한 휴식방법)

  • 김대호; 박근상
    • Journal of Korean Society of Industrial and Systems Engineering, v.23 no.57, pp.1-10, 2000
  • The purpose of this paper is to propose a method of rest that reduces the workload of night shift workers. The experiment consisted of 10 minutes of preparation, 45 minutes of work, 10 minutes of rest, a second 45-minute work period, and a second 10-minute rest, carried out between 2 and 4 a.m., when workers' physiological functions are at their lowest. Four rest patterns were set up: (1) non-action rest, (2) non-action rest + listening to music, (3) action rest + non-action rest, and (4) action rest + non-action rest + listening to music. Heart rate (R-R interval), critical flicker fusion frequency (CFF), blood pressure, oral temperature, reaction time, and error rate were used as criteria for work performance. As a result, action rest + non-action rest and action rest + non-action rest + listening to music were more effective in reducing the workload of additional work than non-action rest and non-action rest + listening to music.


Music Recommendation Technique Using Metadata (메타데이터를 이용한 음악 추천 기법)

  • Lee, Hye-in; Youn, Sung-dae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2018.05a, pp.75-78, 2018
  • Recently, the amount of music available for listening has been increasing exponentially due to the growth of the digital music market. As a result, online music service users have difficulty choosing the music they like and waste a great deal of time. In this paper, we propose a recommendation technique that minimizes this difficulty of selection and reduces wasted time. The proposed technique uses an item-based collaborative filtering algorithm, which can recommend items without using personal information. For more accurate recommendation, the user's preference is predicted using the metadata of the music source, and the top-N pieces of music with the highest predicted preference are finally recommended; a sketch of the item-based filtering step is given below. Experimental results show that using the metadata improves recommendation performance compared with not using it.
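
A minimal numpy sketch of item-based collaborative filtering as referenced above; the toy rating matrix, the cosine similarity, and the way a metadata-derived score would be blended in are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def item_based_topn(ratings, target_user, n=5):
    """Item-based collaborative filtering sketch: predict a user's preference
    for unheard tracks from item-item cosine similarity, then return the
    top-N tracks. `ratings` is a (users x tracks) matrix of play counts or
    scores; metadata-derived scores could be blended in the same way."""
    norms = np.linalg.norm(ratings, axis=0, keepdims=True) + 1e-12
    sim = (ratings.T @ ratings) / (norms.T @ norms)   # item-item cosine similarity
    user = ratings[target_user]
    # Weighted sum of the user's known scores over similar items.
    pred = sim @ user / (np.abs(sim).sum(axis=1) + 1e-12)
    pred[user > 0] = -np.inf                          # do not re-recommend known tracks
    return np.argsort(pred)[::-1][:n]

# Example: 4 users x 6 tracks, recommend 2 tracks for user 0.
R = np.array([[5, 0, 3, 0, 0, 1],
              [4, 2, 0, 1, 0, 0],
              [0, 5, 4, 0, 3, 0],
              [1, 0, 0, 4, 5, 2]], dtype=float)
print(item_based_topn(R, target_user=0, n=2))
```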


A relevance-based pairwise chromagram similarity for improving cover song retrieval accuracy (커버곡 검색 정확도 향상을 위한 적합도 기반 크로마그램 쌍별 유사도)

  • Jin Soo Seo
    • The Journal of the Acoustical Society of Korea, v.43 no.2, pp.200-206, 2024
  • Computing music similarity is an indispensable component in developing a music search service. This paper proposes a relevance weight for each chromagram vector, used in computing a music similarity function for cover song identification, in order to boost identification accuracy. We derive a music similarity function using relevance weights based on the probabilistic relevance model, where higher weights are assigned to less frequently occurring, more discriminant chromagram vectors and lower weights to more frequently occurring ones; a simplified sketch of such a weighted similarity is given below. Experimental results on two cover music datasets show that the proposed music similarity improves cover song identification performance.
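
The sketch below illustrates the general idea of weighting chromagram frames by how rarely they occur, using an IDF-style weight as a stand-in for the paper's probabilistic-relevance weights; the codebook, document frequencies, and best-match scoring are assumptions for illustration only.

```python
import numpy as np

def relevance_weighted_similarity(query_chroma, ref_chroma, codebook, doc_freq, n_docs):
    """Sketch of a relevance-weighted pairwise chromagram similarity.
    Each query frame gets a higher weight when its nearest codebook entry
    occurs in few songs (an IDF-style stand-in for probabilistic relevance).

    query_chroma, ref_chroma : (n_frames, 12) chroma matrices
    codebook                 : (k, 12) chroma codewords
    doc_freq                 : (k,) number of songs containing each codeword
    """
    # Assign each query frame to its nearest codeword and weight it by rarity.
    dists = np.linalg.norm(query_chroma[:, None, :] - codebook[None, :, :], axis=2)
    codes = dists.argmin(axis=1)
    weights = np.log((n_docs + 1) / (doc_freq[codes] + 1))

    # Pairwise cosine similarity between query and reference frames.
    q = query_chroma / (np.linalg.norm(query_chroma, axis=1, keepdims=True) + 1e-12)
    r = ref_chroma / (np.linalg.norm(ref_chroma, axis=1, keepdims=True) + 1e-12)
    pairwise = q @ r.T

    # Each query frame contributes its best reference match, scaled by its weight.
    return float((weights * pairwise.max(axis=1)).sum() / (weights.sum() + 1e-12))
```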

Content-based Music Information Retrieval using Pitch Histogram (Pitch 히스토그램을 이용한 내용기반 음악 정보 검색)

  • 박만수; 박철의; 김회린; 강경옥
    • Journal of Broadcast Engineering, v.9 no.1, pp.2-7, 2004
  • In this paper, we propose a content-based music information retrieval technique using several MPEG-7 low-level descriptors. In particular, pitch information and timbral features can be applied to music genre classification, music retrieval, and QBH (Query By Humming), because they can model the stochastic pattern or timbral information of a music signal. In this work, we restricted the music domain to the soundtracks (O.S.T.) of movies and soap operas so that the method can be applied to a broadcasting system: the user can retrieve information about unknown music using only an audio clip of a few seconds extracted from the video content when background music catches the user's ear. We propose an audio feature set organized from MPEG-7 descriptors and distance functions based on vector distance or ratio computation. We observed that the feature set organized from pitch information is superior to the timbral spectral feature set, and that IFCR (Intra-Feature Component Ratio) is a better vector distance function than ED (Euclidean Distance). To evaluate music recognition, k-NN is used as the classifier; a sketch of a pitch histogram with a ratio-based distance appears below.
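
A small sketch of a pitch histogram feature compared with a ratio-based distance; the semitone binning and the `ratio_distance` function are hedged stand-ins, since the exact MPEG-7 descriptors and the IFCR definition are specified in the paper itself.

```python
import numpy as np

def pitch_histogram(pitches_hz, n_bins=88, fmin=27.5):
    """Fold a sequence of pitch estimates (Hz) into a normalized semitone
    histogram, a stand-in for the pitch-based features used in the paper."""
    midi = 12 * np.log2(np.asarray(pitches_hz) / fmin)
    hist, _ = np.histogram(midi, bins=n_bins, range=(0, n_bins))
    return hist / (hist.sum() + 1e-12)

def ratio_distance(h1, h2):
    """Ratio-based dissimilarity (a hedged stand-in for IFCR): bins that
    agree give a component ratio near 1, so similar histograms score near 0."""
    mask = (h1 + h2) > 0
    ratios = np.minimum(h1[mask], h2[mask]) / (np.maximum(h1[mask], h2[mask]) + 1e-12)
    return 1.0 - float(ratios.mean()) if mask.any() else 0.0

def euclidean_distance(h1, h2):
    return float(np.linalg.norm(h1 - h2))

# Query clip vs. two reference tracks; the nearer one (1-NN) is the match.
query = pitch_histogram([220, 220, 247, 262, 262, 294])
refs = [pitch_histogram([220, 247, 262, 294]), pitch_histogram([330, 349, 392, 440])]
print(min(range(len(refs)), key=lambda i: ratio_distance(query, refs[i])))
```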

Investigation of Timbre-related Music Feature Learning using Separated Vocal Signals (분리된 보컬을 활용한 음색기반 음악 특성 탐색 연구)

  • Lee, Seungjin
    • Journal of Broadcast Engineering, v.24 no.6, pp.1024-1034, 2019
  • Preference for music is determined by a variety of factors, and identifying features that reflect specific factors is important for music recommendation. In this paper, we propose a method that extracts singing-voice-related music features, reflecting various musical characteristics, by using a model trained for singer identification. Such a model can be trained on music sources that contain background accompaniment, but this may degrade singer identification performance. To mitigate this problem, this study first separates the background accompaniment and creates a dataset composed of separated vocals, using a model structure proven in SiSEC (the Signal Separation and Evaluation Campaign). Finally, we use the separated vocals to discover singing-voice-related music features that reflect the singer's voice, and we compare the effect of source separation against existing methods that use the music source without separation; a sketch of this pipeline is given below.
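
A sketch of the overall pipeline under stated assumptions: `separate_vocals` is a placeholder for an actual SiSEC-style separation model, random arrays stand in for real vocal features, and the hidden-layer activations of a small scikit-learn network are reused as timbre-related features.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def separate_vocals(mixture):
    """Placeholder for a source-separation model (e.g., a SiSEC system);
    here it simply returns its input."""
    return mixture

# Assume per-clip feature vectors and singer labels are available;
# random arrays stand in for real data.
rng = np.random.default_rng(0)
clips = rng.normal(size=(500, 64))                 # mixture features per clip
vocals = np.array([separate_vocals(c) for c in clips])
singers = rng.integers(0, 20, size=500)            # 20 hypothetical singers

# Train a singer-identification network on the separated vocals.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(vocals, singers)

# Reuse the hidden-layer activations as timbre-related music features
# (ReLU is MLPClassifier's default activation).
timbre_features = np.maximum(0, vocals @ clf.coefs_[0] + clf.intercepts_[0])
print(timbre_features.shape)    # (500, 32)
```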

A Study on "A Midsummer Night's Palace" Using VR Sound Engineering Technology

  • Seok, MooHyun; Kim, HyungGi
    • International Journal of Contents, v.16 no.4, pp.68-77, 2020
  • VR (Virtual Reality) content makes the audience perceive a virtual space as real through the virtual Z axis, which, by exploiting the distance between the viewer's eyes, creates a sense of space that cannot be created in 2D. This visual change has led to the need for corresponding changes in the sound and sound sources inserted into VR content. However, studies aimed at increasing immersion in VR content still focus mainly on the scientific and visual fields. This is because composing and producing VR sound requires expertise in two areas: sound-based engineering and computer-based interactive sound engineering. Sound-based engineering has difficulty reflecting changes in user interaction or in time and space, because the sound effects, script sound, and background music are directed according to the storyboard organized by the director; however, it has the advantage that the sound effects, script sound, and background music are produced in a single track and no coding phase is needed. Computer-based interactive sound engineering, on the other hand, produces the sound effects, script sound, and background music as separate files. It can increase immersion by reflecting user interaction or time and space, but it can also suffer from noise cancelling and sound collisions. Therefore, in this study, the following methods were devised and used to produce the sound for the VR content "A Midsummer Night" so as to take advantage of each sound-making technology. First, the storyboard is analyzed according to the user's interaction, to identify the sound effects, script sound, and background music required for each interaction. Second, the sounds are classified and analyzed as 'simultaneous sounds' and 'individual sounds'. Third, interaction coding is carried out for the sound effects, script sound, and background music in the simultaneous and individual sound categories; a toy sketch of this simultaneous/individual split is given below. The content is then completed by applying the sound to the video. Through this process, sound quality inhibitors such as noise cancelling can be removed while producing sound that fits the user's interaction and the time and space.
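
A toy Python sketch of the simultaneous/individual split described above: cues marked simultaneous start with the scene, while individual cues are bound to hypothetical interaction events; the cue names and the `play` stub are illustrative and not tied to any particular audio engine.

```python
import time

# Hypothetical cue table derived from a storyboard analysis:
# 'simultaneous' cues play together on the scene timeline, while
# 'individual' cues are bound to user-interaction events.
CUES = {
    "simultaneous": ["background_music", "forest_ambience"],
    "individual": {
        "gaze_at_fairy": "fairy_whisper",
        "grab_flower": "flower_chime",
    },
}

def play(cue_name):
    # Stand-in for an actual audio engine call.
    print(f"[{time.strftime('%H:%M:%S')}] playing: {cue_name}")

def start_scene():
    # Simultaneous sounds start once with the scene.
    for cue in CUES["simultaneous"]:
        play(cue)

def on_interaction(event):
    # Individual sounds are triggered by user interaction, which lets the
    # mix follow the user rather than a fixed storyboard timeline.
    cue = CUES["individual"].get(event)
    if cue:
        play(cue)

start_scene()
on_interaction("grab_flower")
```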