Automatic Composition using Time Series Embedding of RNN Auto-Encoder

  • Kim, Kyung Hwan (Dept. of Electronics and Information Eng., Hansung University);
  • Jung, Sung Hoon (Dept. of Electronics and Information Eng., Hansung University)
  • Received : 2018.07.10
  • Accepted : 2018.07.24
  • Published : 2018.08.31


In this paper, we propose an automatic composition method that uses the time-series embedding of an RNN Auto-Encoder. The RNN Auto-Encoder learns existing songs, and new songs can then be composed from the trained RNN decoder. Once a song is fully trained in the RNN Auto-Encoder, it is embedded into the vector values of the RNN nodes in the Auto-Encoder. If we train many songs and then apply a specific vector to the decoder of the Auto-Encoder, we obtain a new song that combines the features of the trained songs according to the given vector. Extensive experiments showed that our method worked well and generated various songs depending on the chosen composition vector.
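The pipeline described above — encode each song into a fixed vector, then drive the decoder with a (possibly blended) vector to generate a new melody — can be sketched as follows. This is a minimal illustration with untrained random weights, not the authors' implementation; the note alphabet, sequence length, and the 50/50 blend of two embeddings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

HID = 16      # size of the hidden state, i.e. the song embedding
VOCAB = 12    # hypothetical note alphabet (e.g., pitch classes)

# Randomly initialised weights stand in for a trained model.
W_xh = rng.normal(0, 0.1, (HID, VOCAB))   # input -> hidden
W_hh = rng.normal(0, 0.1, (HID, HID))     # hidden -> hidden
W_hy = rng.normal(0, 0.1, (VOCAB, HID))   # hidden -> output notes

def encode(song):
    """Run the encoder RNN over a note sequence; the final hidden
    state serves as the song's embedding (composition vector)."""
    h = np.zeros(HID)
    for note in song:
        x = np.eye(VOCAB)[note]           # one-hot note input
        h = np.tanh(W_xh @ x + W_hh @ h)
    return h

def decode(h, length):
    """Unroll the decoder from an embedding, emitting one note per
    step via a greedy argmax over the output layer."""
    notes = []
    for _ in range(length):
        h = np.tanh(W_hh @ h)             # free-running decoder step
        notes.append(int(np.argmax(W_hy @ h)))
    return notes

song_a = [0, 4, 7, 4, 0, 4, 7, 11]        # toy melodies
song_b = [2, 5, 9, 5, 2, 5, 9, 0]

z_a, z_b = encode(song_a), encode(song_b)
z_mix = 0.5 * z_a + 0.5 * z_b             # blended composition vector
new_song = decode(z_mix, length=8)
print(new_song)                           # eight notes from the blend
```

Varying the mixing weights (here fixed at 0.5/0.5) changes how strongly each trained song's features appear in the output, which is the sense in which selecting different composition vectors yields different songs.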



Supported by: Hansung University

