• Title/Summary/Keyword: 코드작곡 (chord composition)

12 search results, processing time 0.014 seconds

The Association of Auditive and Visual Meanings in the Film <The Imitation Game> (영화 <이미테이션 게임>에 나타난 청각적·시각적 의미의 연합)

  • Ahn, Soo Hwan
    • The Journal of the Korea Contents Association / v.21 no.7 / pp.83-92 / 2021
  • The goal of this paper is to explore the relationship between the visual and auditive meanings of the film "The Imitation Game" (2014), directed by Morten Tyldum, with music composed by Alexandre Desplat. The author adopted Lawrence Zbikowski's approach to analyzing how film music and visual information together create a combined, meaningful signal for the audience. Zbikowski used Conceptual Integration Networks (CIN), devised by Gilles Fauconnier and Mark Turner, to analyze latent meanings produced by the association of visual and auditive material. The author therefore applied the CIN methodology to study the combination of music and imagery in "The Imitation Game." Desplat, the composer, used the Aeolian mode rather than functional tonality to represent the main character, and varied the Aeolian melodic line to match the atmosphere of the imagery and narrative. Irregular meter and minor chords were associated with unstable emotion, while familiar intervals and major chords were associated with stable feelings. Desplat also applied instrumental diversity and extreme dynamic changes to evoke positive or negative cognition. The author thus shows how the meanings of auditive materials and visual information combine to emphasize the encoded messages of the film.

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kwangjin Kim; Chilwoo Lee
    • Smart Media Journal / v.11 no.10 / pp.65-75 / 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of output such as text, images, and music. In this paper, we propose a method for preprocessing MIDI data, using the Niko's MIDI Pack sound-source files as a data set, and for generating music with a Bi-LSTM. Starting from the generated root note, stacked hidden layers create new notes suitable for the composition, and an attention mechanism applied to the decoder's output gate weights the factors that most affect the data received from the encoder. Settings such as the loss function and optimization method are tuned as parameters to improve the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses note pitches obtained by separating the treble and bass clefs, note lengths, rests, rest lengths, and chords to improve the efficiency and predictive quality of the MIDI deep-learning process. The trained model generates sound that follows scale-based musical development, distinct from noise, and aims to contribute to generating harmonically stable music.
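The abstract above describes tokenizing MIDI events (pitches, note lengths, rests, rest lengths) into sequences that a Bi-LSTM can learn to predict. A minimal sketch of that preprocessing step is below; the event vocabulary, window size, and toy melody are illustrative assumptions, not the paper's actual treatment of Niko's MIDI Pack.

```python
# Sketch of sequence preprocessing for a note-prediction model.
# Assumptions (not from the paper): events are (pitch, duration) pairs,
# pitch None marks a rest, and training samples are fixed-length windows
# paired with the next event as the prediction target.

def build_vocab(events):
    """Map each distinct (pitch, duration) event to an integer token."""
    vocab = {}
    for ev in events:
        if ev not in vocab:
            vocab[ev] = len(vocab)
    return vocab

def make_windows(tokens, window=4):
    """Slide a fixed window over the token stream: each sample pairs
    `window` consecutive tokens with the token that follows them."""
    samples = []
    for i in range(len(tokens) - window):
        samples.append((tokens[i:i + window], tokens[i + window]))
    return samples

# Toy melody: pitches are MIDI note numbers, None is a rest; durations in beats.
melody = [(60, 1.0), (62, 0.5), (None, 0.5), (64, 1.0), (60, 1.0), (62, 0.5)]
vocab = build_vocab(melody)
tokens = [vocab[ev] for ev in melody]
print(tokens)                       # → [0, 1, 2, 3, 0, 1]
print(make_windows(tokens))         # → [([0, 1, 2, 3], 0), ([1, 2, 3, 0], 1)]
```

The resulting (window, target) pairs are what a bidirectional LSTM with attention would consume as input/label batches; separate token streams per clef would give the multi-channel variant the abstract mentions.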