Document Summarization Considering Entailment Relation between Sentences


  • 권영대 (Sungkyunkwan University, Department of Semiconductor Systems Engineering) ;
  • 김누리 (Sungkyunkwan University, Department of Computer Engineering) ;
  • 이지형 (Sungkyunkwan University, Department of Computer Engineering)
  • Received : 2016.09.20
  • Accepted : 2016.11.14
  • Published : 2017.02.15

Abstract

Document summarization aims to generate a summary that is coherent and contains the sentences most relevant to the document. In this study, we implemented a document summarization method that extracts highly related sentences from a whole document by considering both the similarities and the entailment relations between sentences. Accordingly, we propose a new algorithm, TextRank-NLI, which combines a Recurrent Neural Network-based Natural Language Inference model with the graph-based ranking algorithm used in single-document extractive summarization. To evaluate the performance of the new algorithm, we conducted experiments on the same dataset used for the TextRank algorithm. The results indicate that TextRank-NLI improves performance by about 2.3% compared to TextRank.

A document summary should consist of sentences that connect to one another and form a single coherent text. To achieve this, we consider both the similarity and the entailment relation between sentences so that we can extract sentences that are strongly related and semantically and conceptually well connected within the document. We propose TextRank-NLI, a new algorithm for single-document extractive summarization that combines a Recurrent Neural Network-based sentence-relation inference model with a graph-based ranking algorithm. To evaluate the new algorithm, we compared its performance with the existing TextRank summarization algorithm on the same dataset, and confirmed an improvement of about 2.3% over the existing algorithm.
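The idea in the abstract can be sketched roughly as follows: sentences become graph nodes, edge weights mix a lexical similarity with an entailment score, and a PageRank-style iteration ranks the sentences. This is a minimal illustration, not the paper's method: the entailment scorer below is a word-overlap stand-in for the RNN-based NLI model, and the mixing weight `alpha` is an assumed parameter.

```python
# Hedged sketch of the TextRank-NLI idea: rank sentences by PageRank over a
# graph whose edge weights combine similarity and an (approximated) entailment
# score. The entailment function here is a placeholder, NOT the paper's RNN NLI.
import math

def similarity(s1, s2):
    """TextRank-style word-overlap similarity between tokenized sentences."""
    overlap = len(set(s1) & set(s2))
    if overlap == 0 or len(s1) <= 1 or len(s2) <= 1:
        return 0.0
    return overlap / (math.log(len(s1)) + math.log(len(s2)))

def entailment_score(premise, hypothesis):
    """Placeholder for the RNN NLI model: fraction of hypothesis words
    covered by the premise (asymmetric, like entailment)."""
    h = set(hypothesis)
    return len(set(premise) & h) / len(h) if h else 0.0

def textrank_nli(sentences, alpha=0.5, d=0.85, iters=50):
    """Return sentence indices ranked by a PageRank over mixed edge weights.
    alpha (similarity vs. entailment mix) and d (damping) are assumptions."""
    toks = [s.lower().split() for s in sentences]
    n = len(toks)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                w[i][j] = (alpha * similarity(toks[i], toks[j])
                           + (1 - alpha) * entailment_score(toks[i], toks[j]))
    scores = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                out_j = sum(w[j])
                if w[j][i] > 0 and out_j > 0:
                    rank += w[j][i] / out_j * scores[j]
            new.append((1 - d) + d * rank)
        scores = new
    return sorted(range(n), key=lambda i: -scores[i])
```

An extractive summary would then keep the top-k ranked sentences in their original document order; evaluation in the paper uses ROUGE-style n-gram co-occurrence.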


Acknowledgement

Supported by: National Research Foundation of Korea (한국연구재단)

References

  1. John Gantz, David Reinsel, The Digital Universe in 2020, [Online]. Available: https://www.emc.com/collateral/analyst-reports/idc-the-digital-universe-in-2020.pdf, 2012.
  2. Mihalcea, Rada, and Paul Tarau, "TextRank: Bringing order into texts," Association for Computational Linguistics, 2004.
  3. MacCartney, Bill, "Natural language inference," PhD diss., Stanford University, 2009.
  4. DUC. Document understanding conference 2002, [Online]. Available: http://www-nlpir.nist.gov/projects/duc/, 2002.
  5. Cheng, Jianpeng, Li Dong, and Mirella Lapata, "Long short-term memory-networks for machine reading," arXiv preprint arXiv:1601.06733, 2016.
  6. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning, "A large annotated corpus for learning natural language inference," Proc. of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2015.
  7. Pennington, Jeffrey, Richard Socher, and Christopher D. Manning, "Glove: Global Vectors for Word Representation," EMNLP, Vol. 14, pp. 1532-1543, 2014.
  8. Lin, Chin-Yew, and Eduard Hovy, "Automatic evaluation of summaries using n-gram co-occurrence statistics," Proc. of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Vol. 1, pp. 71-78, 2003.
  9. S. Brin and L. Page, "The anatomy of a large-scale hypertextual Web search engine," Computer Networks and ISDN Systems, Vol. 30, No. 1-7, pp. 107-117, 1998. https://doi.org/10.1016/S0169-7552(98)00110-X
  10. Ramos, Juan, "Using tf-idf to determine word relevance in document queries," Proc. of the first instructional conference on machine learning, 2003.
  11. Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean, "Distributed representations of words and phrases and their compositionality," Advances in neural information processing systems, pp. 3111-3119, 2013.