• Title/Summary/Keyword: Chord composition


Rule-Based Generation of Four-Part Chorus Applied With Chord Progression Learning Model (화성 진행 학습 모델을 적용한 규칙 기반의 4성부 합창 음악 생성)

  • Cho, Won Ik;Kim, Jeung Hun;Cheon, Sung Jun;Kim, Nam Soo
    • The Journal of Korean Institute of Communications and Information Sciences, v.41 no.11, pp.1456-1462, 2016
  • In this paper, we apply a chord progression learning model to the rule-based generation of a four-part chorus. Given a 32-note melody line, the proposed system completes the four-part chorus according to the rules of harmony, predicting the chord progression with a CRBM model. The training data were collected from various harmony textbooks, and chord progressions were extracted as key-independent features so as to use the given data effectively. The output piece obtained with the proposed learning model showed a more natural progression than the piece produced by the rule-based approach alone.
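
The key-independent feature idea can be made concrete with a small sketch: encoding each chord root as a semitone offset from the tonic lets progressions from different keys share one representation, so every textbook example contributes to the same training distribution. The following Python sketch is an illustration under that assumption, not the authors' actual feature extractor; the table and function names are hypothetical.

```python
# Illustrative sketch (not the paper's code): key-independent chord features,
# encoding each chord root as a semitone offset from the tonic of the key.

PITCH_CLASSES = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
                 'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}

def key_independent_progression(chord_roots, key):
    """Map chord roots to semitone offsets from the tonic of `key`."""
    tonic = PITCH_CLASSES[key]
    return [(PITCH_CLASSES[root] - tonic) % 12 for root in chord_roots]

# The same I-IV-V-I progression in two keys yields identical features:
print(key_independent_progression(['C', 'F', 'G', 'C'], 'C'))  # [0, 5, 7, 0]
print(key_independent_progression(['D', 'G', 'A', 'D'], 'D'))  # [0, 5, 7, 0]
```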

Harmonic Compositions and Progressions for Tonal Characteristics Based on Emotion Vocabulary (정서 어휘에 반영된 선율 특성에 적합한 화음 구성과 전개)

  • Yi, Soo Yon;Chong, Hyun Ju
    • Journal of the Korea Convergence Society, v.8 no.9, pp.265-270, 2017
  • This study aimed to investigate harmonic compositions and progressions appropriate for emotion vocabulary. In Study 1, eight professional music therapists were asked to provide harmonic compositions and progressions reflecting the tonal characteristics of emotion vocabulary, together with their rationales. Various attributes of the harmonic compositions and progressions were examined, and a content analysis was conducted. In Study 2, the data obtained in Study 1 were evaluated for validity by 124 music therapy and music majors. The first study showed that 'happy' vocabulary used major, tonic, and consecutive chord changes; 'angry' vocabulary used minor, augmented, 9th, 11th, and unresolved 7th chord progressions; and 'sad' vocabulary used minor, diminished, and chromatic chord progressions. In the second study, there was a statistically significant difference for the 'happy' vocabulary. These results provide basic evidence for musical ideas about harmonic compositions and progressions that better communicate the emotional aspects of lyrical messages when composing the melody of a song.
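
The Study 1 findings amount to a lookup from emotion words to harmonic attributes. A minimal sketch of how such a mapping could be encoded is shown below; the dictionary simply restates the findings quoted in the abstract, and the names are hypothetical, not from the paper.

```python
# Hypothetical lookup restating the harmonic attributes Study 1 associated
# with each emotion word (a summary of the abstract, not validated code).
EMOTION_CHORD_MAP = {
    'happy': ['major', 'tonic', 'consecutive chord changes'],
    'angry': ['minor', 'augmented', '9th', '11th', 'unresolved 7th'],
    'sad':   ['minor', 'diminished', 'chromatic progressions'],
}

def suggest_harmony(emotion):
    """Return the reported harmonic attributes for an emotion word, if any."""
    return EMOTION_CHORD_MAP.get(emotion.lower())

print(suggest_harmony('sad'))  # ['minor', 'diminished', 'chromatic progressions']
```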

Analysis of Musical Characteristics and Changes in Different Periods on Yoon-Sang's Music (윤상의 곡에 나타난 음악적 특징과 시대별 변화)

  • Park, Ji-Eun;Chung, Jae-Youn
    • Journal of Korea Entertainment Industry Association, v.15 no.1, pp.63-73, 2021
  • This study analyzes the music of Yoon-Sang as a piece of musical research, the most fundamental approach among academic studies of Korean popular music. Yoon-Sang is a representative composer whose career spans from the 1980s to the present. Analysis of 21 songs composed by Yoon-Sang showed that they are mostly characterized by tonal music, in which chord relationships develop around a tonal center. The reason his music does not sound uniform despite pursuing stability is that he adds chromatic chord progressions on top of diatonic chords and melodies. Among the techniques that add chromatic color, the dominant 7th chord and the diminished 7th chord are used the most. Along with these chords, chromatic intervals appear not only in chord progressions but also in melodies. The successive ascending or descending movement of the bass line is a composition and arrangement technique found in every song. One formal change over time is that the number of measures in the pre-chorus and interlude, which were of great importance in his songs of the 1990s, decreased over the years. With regard to harmonic changes, modulation between sections appears in two of his songs created in the 2010s: Yoon-Sang's music had one strong tonality overall, but beginning in the 2010s it started to have more than two tonalities, which is a significant change in his music.
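
The diatonic-versus-chromatic distinction that drives this analysis can be made concrete with a short sketch: a chord counts as chromatic here when any of its pitch classes falls outside the major scale of the song's key. The sketch below is a generic illustration under that assumption, not part of the study.

```python
# Generic sketch of the diatonic/chromatic distinction: a chord is diatonic
# if every one of its pitch classes lies in the tonic major scale.

MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of a major scale

def is_diatonic(chord_pitch_classes, tonic_pc):
    """True if every pitch class of the chord lies in the tonic major scale."""
    scale = {(tonic_pc + step) % 12 for step in MAJOR_SCALE_STEPS}
    return all(pc % 12 in scale for pc in chord_pitch_classes)

# In C major (tonic_pc=0): G7 = G-B-D-F is diatonic, while a diminished
# 7th chord on C# (C#-E-G-Bb) adds chromatic color.
print(is_diatonic([7, 11, 2, 5], 0))   # True  (G7)
print(is_diatonic([1, 4, 7, 10], 0))   # False (C#dim7)
```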

Automatic Generation of a Configured Song with Hierarchical Artificial Neural Networks (계층적 인공신경망을 이용한 구성을 갖춘 곡의 자동생성)

  • Kim, Kyung-Hwan;Jung, Sung Hoon
    • Journal of Digital Contents Society, v.18 no.4, pp.641-647, 2017
  • In this paper, we propose a method to automatically generate a configured song, with melodies composed of front/middle/last parts, using hierarchical artificial neural networks for automatic composition. In the first layer, an artificial neural network learns an existing song or a random melody and outputs a song after rhythm post-processing. In the second layer, the melody created by the first-layer network is learned by three artificial neural networks responsible for the front/middle/last parts, in order to produce a configured song. In the second-layer networks, we applied a measure-identity method to generate repetition, so that the song contains repeated phrases; the song is then completed after rhythm, chord, and tonality post-processing. Experiments confirmed that the proposed method generates configured songs well.
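
A structural sketch of the two-layer pipeline, under stated assumptions, is given below: the stand-in generator functions replace the actual neural networks, and the measure-identity pattern is a hypothetical example of how repeating selected measures could produce recurring phrases.

```python
# Structural sketch (assumptions, not the authors' code) of the two-layer
# pipeline: a first-layer generator yields a base melody, three second-layer
# generators specialize it into front/middle/last parts, and "measure
# identity" repeats selected measures to create recurring phrases.

import random

def first_layer_generate(n_notes=16, seed=None):
    """Stand-in for the layer-1 network: emit a random melody (MIDI pitches)."""
    rng = random.Random(seed)
    return [rng.randint(60, 72) for _ in range(n_notes)]

def second_layer_generate(base, part):
    """Stand-in for a layer-2 part network: vary the base melody per part."""
    shift = {'front': 0, 'middle': 4, 'last': -2}[part]
    return [p + shift for p in base]

def apply_measure_identity(melody, notes_per_measure=4, pattern=(0, 1, 0, 2)):
    """Repeat measures by an identity pattern ((0, 1, 0, 2) -> ABAC-like)."""
    measures = [melody[i:i + notes_per_measure]
                for i in range(0, len(melody), notes_per_measure)]
    return [note for idx in pattern for note in measures[idx]]

base = first_layer_generate(seed=1)
song = []
for part in ('front', 'middle', 'last'):
    song += apply_measure_identity(second_layer_generate(base, part))
print(song)
```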

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kim, Kwangjin;Lee, Chilwoo
    • Smart Media Journal, v.11 no.10, pp.65-75, 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of results such as text, images, and music. In this paper, we propose a method for preprocessing MIDI data, using the Niko's MIDI Pack sound-source files as a dataset, and for generating music with a Bi-LSTM model. Based on the generated root note, multiple hidden layers are stacked to create new notes suitable for the musical composition, and an attention mechanism is applied at the decoder output to weight the factors that influence the data passed in from the encoder. Settings such as the loss function and the optimization method are applied as parameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that takes as input the pitches of notes (obtained by separating the treble and bass clefs), note lengths, rests, rest lengths, and chords, to improve the efficiency and predictive quality of the MIDI deep-learning process. The trained model generates output that follows musical scale development and is distinct from noise, and we aim to contribute to the generation of harmonically stable music.
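
A minimal PyTorch sketch of a bidirectional LSTM with dot-product attention pooling is shown below; the layer sizes, the five-feature input (standing in for the paper's pitch/length/rest/chord channels), and the attention placement are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch, assuming a 5-feature note-event input and a 128-way pitch
# vocabulary; sizes and attention placement are illustrative, not the paper's.

import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, n_features=5, hidden=64, n_pitches=128):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_pitches)

    def forward(self, x):
        # x: (batch, time, n_features) sequence of note-event features
        h, _ = self.encoder(x)                        # (batch, time, 2*hidden)
        # Dot-product attention over time: weight each step by its
        # similarity to the final step, then pool into one context vector.
        query = h[:, -1:, :]                          # (batch, 1, 2*hidden)
        scores = torch.softmax(
            (h @ query.transpose(1, 2)).squeeze(-1), dim=1)   # (batch, time)
        context = (scores.unsqueeze(-1) * h).sum(dim=1)       # (batch, 2*hidden)
        return self.out(context)                      # next-pitch logits

model = BiLSTMAttention()
dummy = torch.randn(2, 16, 5)      # 2 sequences of 16 note events
logits = model(dummy)
print(logits.shape)                # torch.Size([2, 128])
```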