Funding
The research of Chohong Min was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant No. 2019R1A6A1A11051177). The research of Byungjoon Lee was supported by a POSCO Science Fellowship of the POSCO TJ Park Foundation and NRF Grant No. 2020R1A2C4002378.
References
- Y. Goldberg and G. Hirst. Neural Network Methods in Natural Language Processing. Morgan & Claypool Publishers, 2017.
- T. Mikolov, S. Kombrink, L. Burget, J. Cernocky, and S. Khudanpur. Extensions of recurrent neural network language model. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5528-5531, 2011.
- K. Yao, G. Zweig, M.Y. Hwang, Y. Shi, and D. Yu. Recurrent neural networks for language understanding. In Interspeech, pages 2524-2528, 2013.
- I. Sutskever, J. Martens, and G.E. Hinton. Generating text with recurrent neural networks. In International Conference on Machine Learning (ICML), pages 1017-1024, 2011.
- A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6645-6649, 2013.
- H. Sak, A. Senior, and F. Beaufays. Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. Computing Research Repository (CoRR), pages 338-342, 2014.
- S. Liu, N. Yang, M. Li, and M. Zhou. A recursive recurrent neural network for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1491-1500, 2014.
- D. Mandic and J. Chambers. Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. Wiley, 2001.
- A. Rather, A. Agarwal, and V. Sastry. Recurrent neural network and a hybrid model for prediction of stock returns. Expert Systems with Applications, pages 3234-3241, 2015.
- S. Saha and G. Raghava. Prediction of continuous B-cell epitopes in an antigen using recurrent neural network. Proteins, pages 40-48, 2006.
- S. Amari. Backpropagation and stochastic gradient descent method. Neurocomputing, pages 185-196, 1993.
- L. Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of the 19th International Conference on Computational Statistics (COMPSTAT), pages 177-186, 2010.
- L. Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade: Second Edition, pages 421-436. Springer Berlin Heidelberg, 2012.
- Y. Chauvin and D.E. Rumelhart. Backpropagation: Theory, Architectures, and Applications. Developments in Connectionist Theory Series. Taylor & Francis, 2013.
- Y. Bengio, P. Frasconi, and P. Simard. The problem of learning long-term dependencies in recurrent networks. In IEEE International Conference on Neural Networks, pages 1183-1188, 1993.
- R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
- S. Strogatz. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Addison-Wesley, 1994.
- F.H. Croom. Principles of Topology. Dover Books on Mathematics. Dover Publications, 2016.