Research on Recent Quality Estimation
Eo, Sugyeong (Department of Computer Science and Engineering, Korea University)
Park, Chanjun (Department of Computer Science and Engineering, Korea University)
Moon, Hyeonseok (Department of Computer Science and Engineering, Korea University)
Seo, Jaehyung (Department of Computer Science and Engineering, Korea University)
Lim, Heuiseok (Department of Computer Science and Engineering, Korea University)
[1] A. Vaswani et al. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).
[2] F. Kepler et al. (2019). Unbabel's participation in the WMT19 translation quality estimation shared task. arXiv preprint arXiv:1907.10352. DOI: 10.18653/v1/W19-5406
[3] H. Kim, J. H. Lim, H. K. Kim & S. H. Na. (2019). QE BERT: Bilingual BERT using multi-task learning for neural quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2) (pp. 85-89). DOI: 10.18653/v1/W19-5407
[4] T. Ranasinghe, C. Orasan & R. Mitkov. (2020). TransQuest at WMT2020: Sentence-level direct assessment. arXiv preprint arXiv:2010.05318.
[5] N. Q. Luong, B. Lecouteux & L. Besacier. (2013). LIG system for WMT13 QE task: Investigating the usefulness of features in word confidence estimation for MT. In Proceedings of the Eighth Workshop on Statistical Machine Translation (pp. 386-391).
[6] C. Hardmeier, J. Nivre & J. Tiedemann. (2012). Tree kernels for machine translation quality estimation. In Proceedings of the Seventh Workshop on Statistical Machine Translation (pp. 109-113). Association for Computational Linguistics.
[7] E. Fonseca, L. Yankovskaya, A. F. Martins, M. Fishel & C. Federmann. (2019). Findings of the WMT 2019 shared tasks on quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2) (pp. 1-10). DOI: 10.18653/v1/W19-5401
[8] T. Wolf et al. (2019). HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
[9] K. Cho et al. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. DOI: 10.3115/v1/D14-1179
[10] C. Park & H. Lim. (2020). A study on the performance improvement of machine translation using public Korean-English parallel corpus. Journal of Digital Convergence, 18(6), 271-277.
[11] G. Lample & A. Conneau. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
[12] L. Specia, C. Scarton & G. H. Paetzold. (2018). Quality estimation for machine translation. Synthesis Lectures on Human Language Technologies, 11(1), 1-162. DOI: 10.2200/S00854ED1V01Y201805HLT039
[13] L. Specia, D. Raj & M. Turchi. (2010). Machine translation evaluation versus quality estimation. Machine Translation, 24(1), 39-50. DOI: 10.1007/s10590-010-9077-2
[14] Y. Baek, Z. M. Kim, J. Moon, H. Kim & E. Park. (2020). PATQUEST: Papago translation quality estimation. In Proceedings of the Fifth Conference on Machine Translation (pp. 991-998).
[15] A. Conneau et al. (2019). Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
[16] Y. Liu et al. (2020). Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8, 726-742.
[17] R. Soricut, N. Bach & Z. Wang. (2012). The SDL Language Weaver systems in the WMT12 quality estimation shared task. In Proceedings of the Seventh Workshop on Statistical Machine Translation (pp. 145-151).
[18] S. Hochreiter & J. Schmidhuber. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780. DOI: 10.1162/neco.1997.9.8.1735
[19] R. N. Patel. (2016). Translation quality estimation using recurrent neural network. arXiv preprint arXiv:1610.04841. DOI: 10.18653/v1/W16-2389
[20] H. Kim, J. H. Lee & S. H. Na. (2017). Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In Proceedings of the Second Conference on Machine Translation (pp. 562-568). DOI: 10.18653/v1/W17-4763
[21] H. Kim & J. H. Lee. (2016). Recurrent neural network based translation quality estimation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers (pp. 787-792). DOI: 10.18653/v1/W16-2384
[22] M. Wang et al. (2020). HW-TSC's participation at WMT 2020 quality estimation shared task. In Proceedings of the Fifth Conference on Machine Translation (pp. 1056-1061).
[23] H. Wu et al. (2020). Tencent submission for WMT20 quality estimation shared task. In Proceedings of the Fifth Conference on Machine Translation (pp. 1062-1067).
[24] M. Snover, B. Dorr, R. Schwartz, L. Micciulla & J. Makhoul. (2006). A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas.
[25] D. Lee. (2020). Two-phase cross-lingual language model fine-tuning for machine translation quality estimation. In Proceedings of the Fifth Conference on Machine Translation (pp. 1024-1028).
[26] J. Wang, K. Fan, B. Li, F. Zhou, B. Chen, Y. Shi & L. Si. (2018). Alibaba submission for WMT18 quality estimation task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers (pp. 809-815). DOI: 10.18653/v1/W18-6465
[27] L. Specia, F. Blain, V. Logacheva, R. Astudillo & A. Martins. (2018). Findings of the WMT 2018 shared task on quality estimation. Association for Computational Linguistics. DOI: 10.18653/v1/W18-6451
[28] G. Wenzek et al. (2019). CCNet: Extracting high quality monolingual datasets from web crawl data. arXiv preprint arXiv:1911.00359.
[29] T. Pires, E. Schlinger & D. Garrette. (2019). How multilingual is multilingual BERT? arXiv preprint arXiv:1906.01502. DOI: 10.18653/v1/P19-1493
[30] M. Lewis et al. (2019). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. DOI: 10.18653/v1/2020.acl-main.703
[31] C. Park, Y. Yang, K. Park & H. Lim. (2020). Decoding strategies for improving low-resource machine translation. Electronics, 9(10), 1562.
[32] L. Specia, K. Shah, J. G. De Souza & T. Cohn. (2013). QuEst - A translation quality estimation framework. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations (pp. 79-84).
[33] J. Devlin, M. W. Chang, K. Lee & K. Toutanova. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. DOI: 10.18653/v1/N19-1423
[34] E. Bicici & A. Way. (2014). Referential translation machines for predicting translation quality. Association for Computational Linguistics.