Fig. 1. Architecture of SummaRuNNer
Fig. 2. Validation Test
Table 1. Linguistic Analysis Features
Table 2. Number of Documents in CNN/Daily Mail
Table 3. Results of Full-Length F1
Table 4. Example Summary
References
- R. Nallapati, et al., "Abstractive text summarization using sequence-to-sequence RNNs and beyond," arXiv preprint arXiv:1602.06023, 2016.
- R. Nallapati, F. Zhai, and B. Zhou, "SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents," in Thirty-First AAAI Conference on Artificial Intelligence, 2017.
- I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems, 2014.
- A. Jadhav and V. Rajan, "Extractive summarization with SWAP-NET: Sentences and words from alternating pointer networks," in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2018.
- A. Nenkova and L. Vanderwende, "The impact of frequency on summarization," Microsoft Research, Redmond, Washington, Tech. Rep. MSR-TR-2005-101, 2005.
- E. Filatova and V. Hatzivassiloglou, "Event-based extractive summarization," 2004.
- G. Erkan and D. R. Radev, "LexRank: Graph-based lexical centrality as salience in text summarization," Journal of Artificial Intelligence Research, Vol.22, pp.457-479, 2004. https://doi.org/10.1613/jair.1523
- D. R. Radev, et al., "Centroid-based summarization of multiple documents," Information Processing & Management, Vol.40, No.6, pp.919-938, 2004. https://doi.org/10.1016/j.ipm.2003.10.006
- R. McDonald, "A study of global inference algorithms in multi-document summarization," in European Conference on Information Retrieval, Springer, 2007.
- D. Shen, et al., "Document summarization using conditional random fields," in IJCAI, Vol.7, pp.2862-2867, 2007.
- H. P. Edmundson, "New methods in automatic extracting," Journal of the ACM (JACM), Vol.16, No.2, pp.264-285, 1969. https://doi.org/10.1145/321510.321519
- J. Cheng and M. Lapata, "Neural summarization by extracting sentences and words," arXiv preprint arXiv:1603.07252, 2016.
- K. Cho, et al., "Learning phrase representations using RNN encoder-decoder for statistical machine translation," arXiv preprint arXiv:1406.1078, 2014.
- Y. Bengio, et al., "A neural probabilistic language model," Journal of Machine Learning Research, Vol.3(Feb), pp.1137-1155, 2003.
- T. Mikolov, et al., "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013.
- S. Menaka and N. Radha, "Text classification using keyword extraction technique," International Journal of Advanced Research in Computer Science and Software Engineering, Vol.3, No.12, 2013.
- A. Hulth, "Improved automatic keyword extraction given more linguistic knowledge," in Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2003.
- M. Wu, et al., "Event-based summarization using time features," in International Conference on Intelligent Text Processing and Computational Linguistics, Springer, 2007.
- W. Li, et al., "Extractive summarization using inter- and intra-event relevance," in Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, 2006.
- K. M. Hermann, et al., "Teaching machines to read and comprehend," in Advances in Neural Information Processing Systems, 2015.
- C. Manning, et al., "The Stanford CoreNLP natural language processing toolkit," in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2014.
- D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
- C.-Y. Lin, "ROUGE: A package for automatic evaluation of summaries," in Text Summarization Branches Out, pp.74-81, 2004.