
A study on Korean multi-turn response generation using generative and retrieval models

  • Lee, Hodong (AI BUD, Samsung SDS)
  • Lee, Jongmin (AI BUD, Samsung SDS)
  • Seo, Jaehyung (Department of Computer Science and Engineering, Korea University)
  • Jang, Yoonna (Department of Computer Science and Engineering, Korea University)
  • Lim, Heuiseok (Department of Computer Science and Engineering, Korea University)
  • Received : 2021.10.15
  • Accepted : 2022.01.20
  • Published : 2022.01.28

Abstract

Recent deep learning-based natural language processing (NLP) research achieves excellent performance across most NLP tasks through pre-trained language models. In particular, auto-encoder-based language models have demonstrated strong performance and broad applicability in various Korean language understanding tasks. However, decoder-based Korean generative models still struggle even with simple sentence generation, and both detailed research and trainable data remain scarce in dialogue, the domain where generative models are most commonly used. Therefore, this paper constructs multi-turn dialogue data for Korean generative models, improves the dialogue ability of a generative model through transfer learning, and compares and analyzes its performance. In addition, we propose a method that supplements the model's insufficient dialogue generation ability by extracting recommended response candidates from external knowledge with a retrieval model.
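The proposed pipeline pairs a retrieval step over external knowledge with a decoder-based generative model. The sketch below is a minimal, hedged illustration of that retrieve-then-generate idea, not the authors' exact implementation: it scores an external pool of candidate responses with BM25 (via the rank_bm25 package) and prepends the top candidate to the dialogue history before decoding with a publicly available Korean GPT-2 checkpoint. The toy response pool, the skt/kogpt2-base-v2 model name, the whitespace tokenization, and the prompt format are all illustrative assumptions.

```python
# Minimal retrieve-then-generate sketch for Korean multi-turn dialogue.
# The response pool, model name, and prompt format are illustrative
# assumptions, not the paper's exact setup.
from rank_bm25 import BM25Okapi
from transformers import AutoTokenizer, AutoModelForCausalLM

# External knowledge: a small pool of candidate responses (toy example).
response_pool = [
    "주말에는 한강 공원에서 산책하는 것을 추천해요.",
    "비가 올 때는 실내 전시회를 보러 가는 것도 좋아요.",
    "요즘 날씨가 좋아서 자전거 타기 딱 좋습니다.",
]

# Whitespace tokenization keeps the example simple; a Korean morphological
# analyzer would be a more realistic choice for BM25 indexing.
bm25 = BM25Okapi([response.split() for response in response_pool])

dialogue_history = ["주말에 뭐 하면 좋을까?", "야외 활동을 하고 싶어."]
query_tokens = " ".join(dialogue_history).split()

# Retrieve the top recommended response candidate from the external pool.
top_candidate = bm25.get_top_n(query_tokens, response_pool, n=1)[0]

# Condition a Korean decoder model on the dialogue history plus the candidate.
model_name = "skt/kogpt2-base-v2"  # assumed publicly available Korean GPT-2
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = " ".join(dialogue_history + [top_candidate])
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=32, num_beams=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In a fuller system, a dense retriever (for example, a poly-encoder) and a morphology-aware tokenizer would likely replace whitespace BM25, but the overall control flow of supplementing generation with retrieved candidates stays the same.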

