Project Information
Funding agency: Ministry of Science, ICT and Future Planning
References
- B.-T. Zhang, "SNU Videome Project: Human-level Machine Learning from Videos," Communications of the Korean Institute of Information Scientists and Engineers, Vol. 29, No. 2, pp. 17-31, 2011. (in Korean)
- M.-O. Heo, K.-M. Kim, and B.-T. Zhang, "Story Learning Methods from Cartoon Videos via Consecutive Event Embedding," KIISE Winter Conference 2016, pp. 600-602, 2016. (in Korean)
- R. Kiros, Y. Zhu, R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, S. Fidler, "Skip-thought Vectors," NIPS, 2015.
- S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer, "Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks," NIPS, pp. 1171-1179, 2015.
- B. Kybartas and R. Bidarra, "A Survey on Story Generation Techniques for Authoring Computational Narratives," IEEE Transactions on Computational Intelligence and AI in Games, 2016.
- I. Mani, "Computational Narratology," The Living Handbook of Narratology, 2014.
- M. A. Finlayson, Learning Narrative Structure from Annotated Folktales, PhD thesis, Massachusetts Institute of Technology, 2012.
- B. O'Neill and M. Riedl, "Dramatis: A Computational Model of Suspense," Proc. of the 28th AAAI Conference on Artificial Intelligence, Vol. 2, pp. 944-950, 2014.
- B. Li and M. Riedl, "Scheherazade: Crowd-powered Interactive Narrative Generation," Proc. of the 29th AAAI Conference on Artificial Intelligence, 2015.
- K. Pichotta and R. J. Mooney, "Using Sentence-Level LSTM Language Models for Script Inference," ACL, 2016.
- K. Pichotta and R. J. Mooney, "Learning Statistical Scripts with LSTM Recurrent Neural Networks," AAAI, 2016.
- L. J. Martin, P. Ammanabrolu, W. Hancock, S. Singh, B. Harrison, and M. Riedl, "Event Representations for Automated Story Generation with Deep Neural Nets," Proc. of the KDD 2017 Workshop on Machine Learning for Creativity, Halifax, Nova Scotia, Canada, 2017.
- A. Salway, M. Graham, E. Tomadaki, Y. Xu, "Linking Video and Text via Representations of Narrative," AAAI Spring Symposium on Intelligent Multimedia Knowledge Management, pp. 104-112, 2003.
- K. He, et al., "Deep Residual Learning for Image Recognition," CVPR, 2016.
- K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention," ICML, pp. 2048-2057, 2015.
- J.-W. Ha, K.-M. Kim, and B.-T. Zhang, "Automated Construction of Visual-linguistic Knowledge via Concept Learning from Cartoon Videos," AAAI, 2015.
- K. Simonyan and A. Zisserman, "Two-stream Convolutional Networks for Action Recognition in Videos," NIPS, pp. 568-576, 2014.
- N. Srivastava, E. Mansimov, and R. Salakhutdinov, "Unsupervised Learning of Video Representations using LSTMs," ICML, pp. 843-852, 2015.
- C. Vondrick, H. Pirsiavash, and A. Torralba, "Generating Videos with Scene Dynamics," NIPS, pp. 613-621, 2016.
- C. Vondrick, H. Pirsiavash, and A. Torralba, "Anticipating Visual Representations with Unlabeled Video," CVPR, pp. 98-106, 2016.
- T. Cour, C. Jordan, E. Miltsakaki, and B. Taskar, "Movie/script: Alignment and Parsing of Video and Text Transcription," ECCV, 2008.
- Y. Zhu, R. Kiros, R. Zemel, R. Salakhutdinov, R. Urtasun, A. Torralba, and S. Fidler, "Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books," ICCV, pp. 19-27, 2015.
- D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," ICLR, 2015.