References
- P. Rajpurkar et al., SQuAD: 100,000+ questions for machine comprehension of text, arXiv preprint arXiv:1606.05250, 2016.
- F. Hill et al., The Goldilocks principle: reading children's books with explicit memory representations, arXiv preprint arXiv:1511.02301, 2015.
- T. Nguyen et al., MS MARCO: A human generated machine reading comprehension dataset, arXiv preprint arXiv:1611.09268, 2016.
- D. Chen et al., Reading Wikipedia to answer open-domain questions, arXiv preprint arXiv:1704.00051, 2017.
- D. Weissenborn, G. Wiese, and L. Seiffe, Making neural QA as simple as possible but not simpler, in Proc. 21st Conf. Comput. Nat. Lang. Learning (CoNLL 2017), Vancouver, Canada, 2017.
- W. Wang et al., Gated self-matching networks for reading comprehension and question answering, in Proc. 55th Annu. Meeting Assoc. Comput. Linguistics, Vancouver, Canada, July 2017, pp. 189-198.
- Y. Cui et al., Attention-over-attention neural networks for reading comprehension, arXiv preprint arXiv:1607.04423, 2016.
- M. Seo et al., Bidirectional attention flow for machine comprehension, arXiv preprint arXiv:1611.01603, 2016.
- S. Wang and J. Jiang, Machine comprehension using match-LSTM and answer pointer, arXiv preprint arXiv:1608.07905, 2016.
- O. Vinyals, M. Fortunato, and N. Jaitly, Pointer networks, in Adv. Neural Inform. Process. Syst., Montreal, Canada, 2015, pp. 2674-2682.
- D. Bahdanau et al., Neural machine translation by jointly learning to align and translate, in Proc. ICLR '15, arXiv:1409.0473, 2015.
- K. Cho et al., Learning phrase representations using RNN encoder-decoder for statistical machine translation, in Proc. EMNLP '14, Doha, Qatar, Oct. 25-29, 2014.
- S. Hochreiter and J. Schmidhuber, Long short-term memory, Neural Comput. 9 (1997), no. 8, 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
- T. Lei and Y. Zhang, Training RNNs as fast as CNNs, arXiv preprint arXiv:1709.02755, 2017.
- C. Lee, J. Kim, and J. Kim, Korean dependency parsing using deep learning, in Proc. KIISE HCLT, 2014, pp. 87-91 (in Korean).
- Y. Kim, Convolutional neural networks for sentence classification, in Proc. EMNLP '14, Doha, Qatar, Oct. 25-29, 2014.
- D. Kingma and J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980, 2014.
- K. Lee et al., Learning recurrent span representations for extractive question answering, arXiv preprint arXiv:1611.01436, 2017.
- Z. Wang et al., Multi-perspective context matching for machine comprehension, arXiv preprint arXiv:1612.04211, 2016.
- Z. Chen et al., Smarnet: Teaching machines to read and comprehend like human, arXiv preprint arXiv:1710.02772, 2017.
- J. Pennington, R. Socher, and C. Manning, GloVe: Global vectors for word representation, in Proc. EMNLP '14, Doha, Qatar, Oct. 25-29, 2014, pp. 1532-1543.
- M. E. Peters et al., Deep contextualized word representations, in Proc. Int. Conf. Learning Representations, 2018. https://openreview.net/pdf?id=S1p31z-Ab