A Comparative Study on the Effectiveness of Segmentation Strategies for Korean Word and Sentence Classification Tasks

  • Kim, Jin-Sung (Department of Computer Science and Engineering, Korea University) ;
  • Kim, Gyeong-min (Department of Computer Science and Engineering, Korea University) ;
  • Son, Jun-young (Department of Computer Science and Engineering, Korea University) ;
  • Park, Jeongbae (Human-inspired Computing Research Center, Korea University) ;
  • Lim, Heui-seok (Department of Computer Science and Engineering, Korea University)
  • Received : 2021.10.25
  • Accepted : 2021.12.20
  • Published : 2021.12.28

Abstract

Constructing high-quality input features through effective segmentation is an essential step toward improving a language model's sentence comprehension, and the quality of these features directly affects downstream task performance. This paper comparatively studies segmentation strategies that effectively reflect the linguistic characteristics of Korean, from the perspective of word and sentence classification. Segmentation is divided into four types by linguistic unit: eojeol, morpheme, syllable, and jamo (subcharacter), and pre-training is carried out with the RoBERTa model architecture. By splitting the downstream tasks into a sentence-classification group and a word-classification group, we analyze tendencies within each group and differences between the groups. In the experiments, jamo-level segmentation outperforms the other strategies on sentence classification by up to +0.62% on NSMC, +2.38% on KorNLI, and +2.41% on KorSTS, while syllable-level segmentation performs best on word classification by up to +0.7% on NER and +0.61% on SRL, confirming the effectiveness of each scheme for its respective classification group.
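For readers unfamiliar with these units, the sketch below is illustrative only (not the authors' code; the example sentence and function names are our own) and shows how one Korean sentence decomposes at the eojeol, syllable, and jamo (subcharacter) levels. Morpheme-level segmentation additionally requires an external morphological analyzer (e.g., MeCab-ko), so it is only noted in a comment.

# -*- coding: utf-8 -*-
# Illustrative sketch of three of the four segmentation units compared in the
# paper: eojeol, syllable, and jamo (subcharacter). Morpheme-level segmentation
# needs an external analyzer (e.g., MeCab-ko) and is therefore not shown.

# Compatibility jamo tables used to decompose precomposed Hangul syllables.
CHO = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"            # 19 leading consonants
JUNG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"        # 21 vowels
JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 28 trailing consonants (incl. none)


def eojeol_split(sentence: str) -> list[str]:
    """Eojeol-level segmentation: whitespace-delimited word phrases."""
    return sentence.split()


def syllable_split(sentence: str) -> list[str]:
    """Syllable-level segmentation: one token per character (spaces dropped)."""
    return [ch for ch in sentence if not ch.isspace()]


def jamo_split(sentence: str) -> list[str]:
    """Jamo (subcharacter) segmentation via Unicode syllable arithmetic."""
    tokens = []
    for ch in sentence:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:                  # precomposed Hangul syllable
            idx = code - 0xAC00
            tokens.append(CHO[idx // (21 * 28)])
            tokens.append(JUNG[(idx % (21 * 28)) // 28])
            if idx % 28:                              # trailing consonant, if any
                tokens.append(JONG[idx % 28])
        elif not ch.isspace():
            tokens.append(ch)
    return tokens


if __name__ == "__main__":
    sent = "한국어 분절 전략"          # "Korean segmentation strategy" (example sentence, ours)
    print(eojeol_split(sent))    # ['한국어', '분절', '전략']
    print(syllable_split(sent))  # ['한', '국', '어', '분', '절', '전', '략']
    print(jamo_split(sent))      # ['ㅎ', 'ㅏ', 'ㄴ', 'ㄱ', 'ㅜ', 'ㄱ', 'ㅇ', 'ㅓ', ...]

In practice, unit streams like these are typically fed to a subword tokenizer (e.g., BPE or SentencePiece) to build the pre-training vocabulary.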

Keywords

Acknowledgement

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-0-01405) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1A6A1A03045425).
