http://dx.doi.org/10.3745/KTSDE.2021.10.8.319

Graph Reasoning and Context Fusion for Multi-Task, Multi-Hop Question Answering  

Lee, Sangui (Dept. of Computer Science, Kyonggi University)
Kim, Incheol (Dept. of Computer Science, Kyonggi University)
Publication Information
KIPS Transactions on Software and Data Engineering / v.10, no.8, 2021, pp. 319-330
Abstract
Recently, multi-task, multi-hop question answering has been studied extensively in the field of open-domain natural language question answering. In this paper, we propose a novel deep neural network model that uses hierarchical graphs to answer such multi-task, multi-hop questions effectively. The proposed model extracts different levels of contextual information from multiple paragraphs using hierarchical graphs and graph neural networks, and then utilizes that information to predict the answer type, supporting sentences, and answer span simultaneously. Through experiments on the HotpotQA benchmark dataset, we demonstrate the high performance and positive effects of the proposed model.
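The abstract's architecture can be sketched in miniature: a small hierarchical graph with paragraph, sentence, and entity nodes, one GAT-style attention propagation step, and three toy linear heads for answer-type classification, supporting-sentence scoring, and span scoring. Everything here (the node layout, the `gat_layer` helper, the random features and weights) is an illustrative assumption, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden size

# Toy hierarchical graph: 2 paragraphs, 4 sentences, 3 entities.
# Node ids: 0-1 paragraphs, 2-5 sentences, 6-8 entities.
edges = [(0, 2), (0, 3), (1, 4), (1, 5),   # paragraph -> sentence
         (2, 6), (3, 7), (4, 8),           # sentence  -> entity
         (0, 1)]                           # paragraph <-> paragraph
N = 9
adj = np.eye(N)                            # self-loops
for u, v in edges:
    adj[u, v] = adj[v, u] = 1.0

H = rng.standard_normal((N, D))            # initial node features

def gat_layer(H, adj, W, a):
    """One GAT-style step: score each edge with a learned vector `a`,
    softmax over neighbors, then take a weighted sum of transformed features."""
    Z = H @ W                              # (N, D) transformed features
    s = Z @ a[:D]                          # per-node "source" score term
    t = Z @ a[D:]                          # per-node "target" score term
    e = s[:, None] + t[None, :]            # (N, N) raw attention scores
    e = np.where(adj > 0, e, -1e9)         # mask non-neighbors
    e = e - e.max(axis=1, keepdims=True)   # numerical stability
    alpha = np.exp(e) * (adj > 0)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return np.tanh(alpha @ Z)

W = rng.standard_normal((D, D)) / np.sqrt(D)
a = rng.standard_normal(2 * D)
H1 = gat_layer(H, adj, W, a)               # context-fused node features

# Three toy linear heads over different node levels (multi-task outputs):
type_logits = H1[0:2].mean(axis=0) @ rng.standard_normal((D, 3))  # yes/no/span
sent_scores = H1[2:6] @ rng.standard_normal(D)  # supporting-sentence scores
span_scores = H1[6:9] @ rng.standard_normal(D)  # entity/span ranking

print(type_logits.shape, sent_scores.shape, span_scores.shape)
```

In a real model the node features would come from a pretrained encoder such as BERT or RoBERTa, several propagation layers would be stacked, and the span head would operate on token representations rather than entity nodes; this sketch only shows how one graph pass can feed three task heads at once.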
Keywords
Open Domain Question Answering; Multi-hop Reasoning; Multi-task Question; Hierarchical Graph; Graph Neural Network;
References
1 J. Welbl, P. Stenetorp, and S. Riedel, "Constructing datasets for multi-hop reading comprehension across documents," Transactions of the Association for Computational Linguistics, Vol.6, pp.287-302, 2018.
2 Y. Liu, et al., "RoBERTa: A robustly optimized BERT pretraining approach," arXiv preprint arXiv:1907.11692, 2019.
3 T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," In Proceedings of the International Conference on Learning Representations, 2013.
4 J. Pennington, R. Socher, and C. Manning, "GloVe: Global vectors for word representation," In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, pp.1532-1543, 2014.
5 A. Vaswani, et al., "Attention is all you need," In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, California, pp.6000-6010, 2017.
6 K. Nishida, K. Nishida, M. Nagata, A. Otsuka, I. Saito, H. Asano, and J. Tomita, "Answering while summarizing: Multi-task learning for multi-hop QA with evidence extraction," In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp.2335-2345, 2019.
7 T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 2017.
8 P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, "Graph attention networks," In Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
9 M. Tu, K. Huang, G. Wang, J. Huang, X. He, and B. Zhou, "Select, answer and explain: Interpretable multi-hop reading comprehension over multiple documents," In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, New York, USA, pp.9073-9080, 2020.
10 Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. Cohen, R. Salakhutdinov, and C. D. Manning, "HotpotQA: A dataset for diverse, explainable multi-hop question answering," In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, pp.2369-2380, 2018.
11 L. Qiu, Y. Xiao, Y. Qu, H. Zhou, L. Li, W. Zhang, and Y. Yu, "Dynamically fused graph network for multi-hop reasoning," In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp.6140-6150, 2019.
12 J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota, pp.4171-4186, 2019.
13 M. Zhang, F. Li, Y. Wang, Z. Zhang, Y. Zhou, and X. Li, "Coarse and fine granularity graph reasoning for interpretable multi-hop question answering," IEEE Access, Vol.8, pp.56755-56765, 2020.
14 A. Asai, K. Hashimoto, H. Hajishirzi, R. Socher, and C. Xiong, "Learning to retrieve reasoning paths over Wikipedia graph for question answering," In Proceedings of the International Conference on Learning Representations, 2020.
15 Y. Fang, S. Sun, Z. Gan, R. Pillai, S. Wang, and J. Liu, "Hierarchical graph network for multi-hop question answering," In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pp.8823-8838, 2020.
16 M. Tu, G. Wang, J. Huang, Y. Tang, X. He, and B. Zhou, "Multi-hop reading comprehension across multiple documents by reasoning over heterogeneous graphs," In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, pp.2704-2713, 2019.
17 N. D. Cao, W. Aziz, and I. Titov, "Question answering by reasoning across documents with graph convolutional networks," In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, Minnesota, pp.2306-2317, 2019.
18 Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, "ALBERT: A lite BERT for self-supervised learning of language representations," In Proceedings of the International Conference on Learning Representations, 2020.
19 M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi, "Bidirectional attention flow for machine comprehension," In Proceedings of the International Conference on Learning Representations, 2017.