http://dx.doi.org/10.15207/JKCS.2021.12.11.035

Study on Zero-shot based Quality Estimation  

Eo, Sugyeong (Combined Student, Department of Computer Science and Engineering, Korea University)
Park, Chanjun (Combined Student, Department of Computer Science and Engineering, Korea University)
Seo, Jaehyung (Combined Student, Department of Computer Science and Engineering, Korea University)
Moon, Hyeonseok (Combined Student, Department of Computer Science and Engineering, Korea University)
Lim, Heuiseok (Professor, Department of Computer Science and Engineering, Korea University)
Publication Information
Journal of the Korea Convergence Society, vol. 12, no. 11, 2021, pp. 35-43
Abstract
Recently, there has been growing interest in zero-shot cross-lingual transfer, which leverages cross-lingual language models (CLLMs) to perform downstream tasks in languages for which no task-specific training data exist. In this paper, we point out the data-centric limitations of quality estimation (QE) and apply zero-shot cross-lingual transfer to settings where QE data are difficult to construct. Few studies have addressed zero-shot QE; we fine-tune CLLMs on an English-German QE dataset and then perform zero-shot transfer to other language pairs, comparing several CLLMs against one another. We also perform zero-shot transfer to language pairs with resources of different sizes and analyze the results in light of the linguistic characteristics of each language. Experiments show that multilingual BART and multilingual BERT achieve the highest performance, enabling QE even for language pairs on which no QE training was performed at all.
Keywords
Quality estimation; Neural machine translation; Zero-shot; Language convergence; Natural language processing;
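As a rough illustration of the setup described in the abstract, the sketch below fine-tunes a multilingual encoder on English-German sentence-level QE framed as regression, then scores an unseen language pair with no further training. This is a minimal sketch, not the authors' exact configuration: the model name, example sentences, and quality label are illustrative assumptions (the paper compares several CLLMs, including multilingual BERT and multilingual BART).

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"  # assumed stand-in for the CLLMs compared in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=1  # single regression head predicting a sentence-level quality score
)

def encode(src, mt):
    # The source sentence and the MT hypothesis are encoded jointly as a sentence pair.
    return tokenizer(src, mt, truncation=True, padding=True, return_tensors="pt")

# Fine-tuning on English-German QE data (a single gradient step shown).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = encode(["The weather is nice today."], ["Das Wetter ist heute schoen."])
labels = torch.tensor([0.85])  # hypothetical human quality score; not from the paper
loss = model(**batch, labels=labels).loss  # MSE loss is used when num_labels == 1
loss.backward()
optimizer.step()

# Zero-shot transfer: the same model scores an unseen language pair
# (e.g., English-Korean) without ever seeing Korean QE supervision.
model.eval()
with torch.no_grad():
    score = model(**encode(["The weather is nice today."], ["오늘은 날씨가 좋다."])).logits
print(float(score))  # predicted quality for the unseen pair

Because the encoder's representations are shared across languages, the zero-shot step needs no architectural change: the regression head trained on English-German is simply reused on the new pair.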
Citations & Related Records
연도 인용수 순위
  • Reference
1 C. Park, S. Eo, H. Moon & H. S. Lim. (2021). Should we find another model?: Improving Neural Machine Translation Performance with ONE-Piece Tokenization Method without Model Modification. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers, 97-104. DOI : 10.18653/v1/2021.naacl-industry.13   DOI
2 C. Park, J. Seo, S. Lee, C. Lee, H. Moon, S. Eo & H. Lim. (2021). BTS: Back TranScription for Speech-to-Text Post-Processor using Text-to-Speech-to-Text. In Proceedings of the 8th Workshop on Asian Translation, 106-116. DOI : 10.18653/v1/2021.wat-1.10   DOI
3 C. Park, Y. Lee, C. Lee & H. Lim. (2020). Quality, not quantity?: Effect of parallel corpus quantity and quality on neural machine translation. In The 32st Annual Conference on Human Cognitive Language Technology, 363-368.
4 C. Park, Y. Yang, K. Park & H. Lim. (2020). Decoding strategies for improving low-resource machine translation. Electronics, 9(10), 1562. DOI : 10.3390/electronics9101562   DOI
5 H. Kim, J. H. Lee & S. H. Na. (2017). Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In Proceedings of the Second Conference on Machine Translation, 562-568. DOI : 10.18653/v1/W17-4763   DOI
6 G. Chen et al. (2021). Zero-shot Cross-lingual Transfer of Neural Machine Translation with Multilingual Pretrained Encoders. arXiv preprint arXiv:2104.08757.
7 Z. Chi, L. Dong, F. Wei, W. Wang, X. L. Mao & H. Huang. (2020). Cross-lingual natural language generation via pre-training. In Proceedings of the AAAI Conference on Artificial Intelligence, 7570-7577. DOI : 10.1609/aaai.v34i05.6256   DOI
8 D. Lee. (2020). Cross-lingual transformers for neural automatic post-editing. In Proceedings of the Fifth Conference on Machine Translation, 772-776.
9 K. Shah, T. Cohn & L. Specia. (2015). A bayesian non-linear method for feature selection in machine translation quality estimation. Machine Translation, 29(2), 101-125. DOI : 10.1007/s10590-014-9164-x   DOI
10 R. Soricut, N. Bach & Z. Wang. (2012). The SDL language weaver systems in the WMT12 quality estimation shared task. In Proceedings of the Seventh Workshop on Statistical Machine Translation, 145-151.
11 T. Pires, E. Schlinger & D. Garrette. (2019). How multilingual is multilingual BERT?. arXiv preprint arXiv:1906.01502. DOI : 10.18653/v1/p19-1493   DOI
12 L. Specia, K. Shah, J. G. De Souza & T. Cohn (2013). QuEst-A translation quality estimation framework. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 79-84.
13 K. Papineni, S. Roukos, T. Ward & W. J. Zhu. (2002). Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, 311-318. DOI : 10.3115/1073083.1073135   DOI
14 T. Ranasinghe, C. Orasan & R. Mitkov. (2020). TransQuest at WMT2020: Sentence-Level Direct Assessment. arXiv preprint arXiv:2010.05318.
15 S. Banerjee & A. Lavie. (2005). METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, 65-72. DOI : 10.3115/1626355.1626389   DOI
16 S. Eo, C. Park, H. Moon, J. Seo & H. Lim. (2021). Dealing with the Paradox of Quality Estimation. In Proceedings of the 4rd Workshop on Technologies for MT of Low Resource Languages, 1-10.
17 L. Specia, D. Raj & M. Turchi (2010). Machine translation evaluation versus quality estimation. Machine translation, 24(1), 39-50. DOI : 10.1007/s10590-010-9077-2   DOI
18 H. Kim, J. H. Lim, H. K. Kim & S. H. Na. (2019). QE BERT: bilingual BERT using multi-task learning for neural quality estimation. In Proceedings of the Fourth Conference on Machine Translation (3), 85-89. DOI : 10.18653/v1/W19-5407   DOI
19 M. Wang et al. (2020). Hw-tsc's participation at wmt 2020 quality estimation shared task. In Proceedings of the Fifth Conference on Machine Translation, 1056-1061.
20 R. N. Patel. (2016). Translation quality estimation using recurrent neural network. arXiv preprint arXiv:1610.04841. DOI : 10.18653/v1/W16-2389   DOI
21 S. Eo, C. Park, H. Moon, J. Seo & H. Lim. (2021). Comparative Analysis of Current Approaches to Quality Estimation for Neural Machine Translation. Applied Sciences, 11(14), 6584. DOI : 10.3390/app11146584   DOI
22 J. Devlin, M. W. Chang, K. Lee & K. Toutanova. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. DOI : 10.18653/v1/N19-1423   DOI
23 Z. Chi, L. Dong, S. Ma, S. H. X. L. Mao, H. Huang & F. Wei. (2021). mT6: Multilingual Pretrained Text-to-Text Transformer with Translation Pairs. arXiv preprint arXiv:2104.08692.
24 Y. Liu et al. (2020). Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8, 726-742.   DOI
25 J. Hu, S. Ruder, A. Siddhant, G. Neubig, O. Firat & M. Johnson. (2020, November). Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning, 4411-4421.
26 G. Campagna, A. Foryciarz, M. Moradshahi & M. S. Lam. (2020). Zero-shot transfer learning with synthesized data for multi-domain dialogue state tracking. arXiv preprint arXiv:2005.00891. DOI : 10.18653/v1/2020.acl-main.12   DOI
27 A. Lauscher, V. Ravishankar, I. Vulic & G. Glavas, (2020). From zero to hero: On the limitations of zero-shot cross-lingual transfer with multilingual transformers. arXiv preprint arXiv:2005.00633. DOI : 10.18653/v1/2020.emnlp-main.363   DOI
28 A. Conneau et al. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116. DOI : 10.18653/v1/P19-4007   DOI
29 L. Zhou, L. Ding & K. Takeda. (2020). Zero-shot translation quality estimation with explicit cross-lingual patterns. arXiv preprint arXiv:2010.04989.
30 S. Eo, C. Park, H. Moon, J. Seo & H. Lim. (2021). Research on Recent Quality Estimation. Journal of the Korea Convergence Society, 12(7), 37-44.   DOI
31 T. Wolf et al. (2019). Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
32 G. Lample & A. Conneau. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
33 H. Moon, C. Park, S. Eo, J. Park & H. Lim. (2021). Filter-mBART Based Neural Machine Translation Using Parallel Corpus Filtering. Journal of the Korea Convergence Society, 12(5), 1-7. DOI : 10.15207/JKCS.2021.12.5.001   DOI