Research on Subword Tokenization of Korean Neural Machine Translation and Proposal for Tokenization Method to Separate Jongsung from Syllables

  • Eo, Sugyeong (Department of Computer Science and Engineering, Korea University) ;
  • Park, Chanjun (Department of Computer Science and Engineering, Korea University) ;
  • Moon, Hyeonseok (Department of Computer Science and Engineering, Korea University) ;
  • Lim, Heuiseok (Department of Computer Science and Engineering, Korea University)
  • Received : 2021.01.13
  • Accepted : 2021.03.20
  • Published : 2021.03.28

Abstract

Since Neural Machine Translation (NMT) translates with a vocabulary of limited size, words that are not registered in the vocabulary may appear in its input. Subword tokenization was devised to alleviate this Out-of-Vocabulary (OOV) problem: it splits sentences into subword units smaller than words and composes words from those units. In this paper, we survey the common subword tokenization algorithms. Furthermore, in order to build a vocabulary that can cope with the effectively unbounded conjugation of Korean verbs and adjectives, we propose a new methodology that trains subword tokenization after separating the jongsung (coda) from each Korean syllable (a syllable is composed of a chosung/onset, a jungsung/nucleus, and an optional jongsung/coda). Experimental results show that the proposed methodology outperforms the existing subword tokenization methods.
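The coda separation described in the abstract follows directly from the Unicode Hangul composition rule: for a precomposed syllable at code point S, the coda index is (S - 0xAC00) mod 28, so the jongsung can be split off arithmetically. Below is a minimal Python sketch of this idea; the function name `separate_jongsung` and the choice to emit the coda as a trailing-jamo character are illustrative assumptions, not the authors' exact implementation.

```python
def separate_jongsung(text: str) -> str:
    """Detach the jongsung (coda) from each precomposed Hangul syllable,
    e.g. '먹' (ㅁ+ㅓ+ㄱ) becomes '머' followed by U+11A8 (trailing ㄱ),
    so subword training can treat stems and codas as separable units."""
    S_BASE, S_LAST, JONG_COUNT = 0xAC00, 0xD7A3, 28
    JONG_BASE = 0x11A7  # U+11A8 (first trailing jamo) corresponds to coda index 1
    out = []
    for ch in text:
        code = ord(ch)
        if S_BASE <= code <= S_LAST:            # precomposed Hangul syllable
            jong = (code - S_BASE) % JONG_COUNT  # 0 means the syllable has no coda
            if jong:
                out.append(chr(code - jong))       # the syllable with its coda removed
                out.append(chr(JONG_BASE + jong))  # the coda as its own character
                continue
        out.append(ch)  # non-Hangul characters and coda-less syllables pass through
    return "".join(out)

print(separate_jongsung("먹었다"))  # -> '머' + 'ᆨ' + '어' + 'ᆻ' + '다'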

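Subword tokenization would then be trained on the coda-separated corpus rather than on raw sentences. As a hedged illustration, the sketch below uses the open-source SentencePiece library, one common subword tokenization toolkit; the file names, vocab_size, and the choice of BPE are assumptions for the example, not the paper's reported configuration.

```python
import sentencepiece as spm

# Assumes separate_jongsung() from the previous sketch is in scope.
# 1. Preprocess the training corpus so every syllable's coda is split off.
with open("corpus.ko", encoding="utf-8") as src, \
     open("corpus.sep", "w", encoding="utf-8") as dst:
    for line in src:
        dst.write(separate_jongsung(line))

# 2. Train a subword model on the coda-separated text.
spm.SentencePieceTrainer.train(
    input="corpus.sep", model_prefix="ko_sep",
    vocab_size=32000, model_type="bpe",
)

# 3. Apply the same separation before encoding at inference time,
#    so training and inference see identically preprocessed text.
sp = spm.SentencePieceProcessor(model_file="ko_sep.model")
pieces = sp.encode(separate_jongsung("먹었다"), out_type=str)
print(pieces)
```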