• Title/Summary/Keyword: Musical noise


Baleen Whale Sound Synthesis using a Modified Spectral Modeling (수정된 스펙트럴 모델링을 이용한 수염고래 소리 합성)

  • Jun, Hee-Sung;Dhar, Pranab K.;Kim, Cheol-Hong;Kim, Jong-Myon
    • The KIPS Transactions:PartB, v.17B no.1, pp.69-78, 2010
  • Spectral modeling synthesis (SMS) is a powerful tool for musical sound modeling. It treats a sound as the combination of a deterministic and a stochastic component. The deterministic component is represented by a series of sinusoids described by amplitude, frequency, and phase functions; the stochastic component is represented by a series of magnitude spectrum envelopes that function as a time-varying filter excited by white noise. These representations allow a synthesized sound to retain all the perceptual characteristics of the original. However, for complex sounds such as whale sounds, conventional SMS can introduce considerable phase variations in the deterministic component when the partial frequencies of successive frames differ, because it uses the calculated phase to synthesize the deterministic component. As a result, it does not provide good spectral matching between the original and synthesized spectra in the higher frequency region. To overcome this problem, we propose a modified SMS that achieves good spectral matching by calculating the complex residual spectrum in the frequency domain and using the original phase information to synthesize the deterministic component. Analysis and simulation results for synthesized whale sounds suggest that the proposed method is comparable to conventional SMS in both the time and frequency domains while providing better spectral matching.
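The deterministic-plus-stochastic decomposition the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the fixed per-partial (amplitude, frequency, phase) triples, and the short FIR filter standing in for the magnitude-spectrum-envelope filtering are all assumptions.

```python
import math
import random

def synthesize_sms_frame(partials, noise_gain, fir_taps, sr=16000, n=1024):
    """Sketch of SMS-style synthesis (hypothetical helper): a sum of
    sinusoids (deterministic) plus filtered white noise (stochastic)."""
    # Deterministic component: one sinusoid per (amplitude, frequency,
    # phase) triple; a full SMS system lets these vary frame to frame.
    det = [sum(a * math.sin(2 * math.pi * f * i / sr + p)
               for a, f, p in partials) for i in range(n)]
    # Stochastic component: white noise shaped by a simple FIR filter,
    # standing in for the time-varying spectral-envelope filter.
    rng = random.Random(0)
    white = [rng.gauss(0.0, 1.0) for _ in range(n + len(fir_taps) - 1)]
    sto = [noise_gain * sum(t * white[i + k] for k, t in enumerate(fir_taps))
           for i in range(n)]
    return [d + s for d, s in zip(det, sto)]

# Two partials (220 Hz and 440 Hz) plus a weak low-pass noise floor.
frame = synthesize_sms_frame([(0.8, 220.0, 0.0), (0.4, 440.0, 0.5)],
                             noise_gain=0.05, fir_taps=[0.25, 0.5, 0.25])
```

The paper's modification concerns how the phase of the deterministic part is obtained (original phase rather than calculated phase); this sketch only shows the baseline two-component model.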

Music Recommendation System in Public Space, DJ Robot, based on Context-awareness and Musical Properties (상황인식 및 음원 속성에 따른 공간 설치형 음악 추천 시스템, DJ로봇)

  • Kim, Byung-O;Han, Dong-Soong
    • The Journal of the Korea Contents Association, v.10 no.6, pp.286-296, 2010
  • The DJ robot was developed to meet the demands of music services that are changing rapidly in the digital and networked era. Existing studies generally develop music services on the premise of a personalized environment and equipment, whereas the DJ robot is premised on open spaces shared by the public, giving priority to traditional spaces and music. As worldwide interest in and demand for South Korean cultural content grows, the industrial use of content based on traditional or uniquely Korean characteristics is also increasing. The DJ robot is composed of two modules: one detects changes in the external environment, and the other sets the properties of the music using psychology, emotional engineering, and related methods. The robot measures temperature, humidity, illumination, wind, noise, and other environmental factors, and the objectivity of the music selection is ensured through repeated experiments and verification with human sensibility ergonomics based on the Hevner Adjective Circle. By using traditional music as background music, the DJ robot aims to make the soundscape of traditional spaces more beautiful and to contribute to the revival and prosperity of traditional music.
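The two-module design above (environment sensing feeding a music-property stage) can be illustrated with a toy rule table. The thresholds, the function name, and the specific sensor-to-mood rules are purely illustrative assumptions; only the mood labels are drawn from the Hevner Adjective Circle the abstract mentions.

```python
# Hypothetical sensing-to-mood stage: map environmental readings to a
# mood label from the Hevner Adjective Circle. The paper derives its
# mapping from repeated experiments; these thresholds are made up.
def suggest_mood(temperature_c, illumination_lux, noise_db):
    if noise_db > 70:
        return "vigorous"   # loud, crowded space
    if illumination_lux < 100:
        return "dreamy"     # dim, quiet ambience
    if temperature_c > 28:
        return "graceful"   # warm afternoon
    return "serene"

mood = suggest_mood(temperature_c=22, illumination_lux=500, noise_db=45)
```

A real system would validate such a mapping against listener responses, as the abstract describes, rather than hard-code it.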

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kwangjin, Kim;Chilwoo, Lee
    • Smart Media Journal, v.11 no.10, pp.65-75, 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of results such as text, images, and music. In this paper, we propose a method for preprocessing audio data, using the Niko's MIDI Pack sound-source files as the data set, and for generating music with a Bi-LSTM. Based on a generated root note, multiple hidden layers create new notes suitable for the musical composition, and an attention mechanism applied to the output gate of the decoder weights the factors that affect the data coming from the encoder. Settings such as the loss function and optimization method are applied as parameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses note pitches separated into treble and bass clefs, note lengths, rests, rest lengths, and chords to improve the efficiency and predictive power of the MIDI deep-learning process. The learned model generates sound that follows the development of a musical scale, distinct from noise, and aims to contribute to generating harmonically stable music.
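The multi-channel preprocessing described above (pitches split across treble and bass clefs, note lengths, rests) can be sketched as a simple event encoder. The event-tuple format, the clef split at middle C, and the function name are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical encoder: turn (pitch, duration) note events into the
# parallel channel sequences an LSTM-style model could consume.
MIDDLE_C = 60  # MIDI note number used here to split bass from treble

def encode_events(events):
    """events: list of (midi_pitch_or_None, duration_ticks) tuples,
    where a None pitch marks a rest. Returns one list per channel."""
    treble, bass, durations, rest_flags = [], [], [], []
    for pitch, dur in events:
        is_rest = pitch is None
        rest_flags.append(1 if is_rest else 0)
        durations.append(dur)
        # Split pitches across clefs; 0 is a "silent" placeholder token.
        treble.append(pitch if (not is_rest and pitch >= MIDDLE_C) else 0)
        bass.append(pitch if (not is_rest and pitch < MIDDLE_C) else 0)
    return {"treble": treble, "bass": bass,
            "duration": durations, "rest": rest_flags}

# E4 note, a rest, then a C3 note.
channels = encode_events([(64, 480), (None, 240), (48, 480)])
```

Each channel would then be fed to its own embedding/input of the multi-channel Bi-LSTM; the attention and decoder stages are beyond this sketch.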