Acknowledgement
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2018-0-01405) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1A6A1A03045425).
References
- M. H. Roh. (2011). Reading: the concept and important issues of education. KEDI Research Paper, 43(3), 1-43.
- Ministry of Education. (2015, January). Korean Language Curriculum. 2015 Revised Curriculum, 5, 1-178.
- S. H. Kim. (2014). Analysis of Students' Recognition of National Scholastic Aptitude Test for University Admission -With Focus on the 'Korean Language Section'-. Journal of CheongRam Korean Language Education, 49, 135-164.
- S. Y. Ryu. (2019). Critical Examination About CSAT Korean Language and Its Developmental Directions-Toward the Recovery of the Nature of the CSAT Evaluation-. New Language Education, 121, 353-380.
- R. Sennrich, B. Haddow & A. Birch. (2016). Improving Neural Machine Translation Models with Monolingual Data. Association for Computational Linguistics, 1, 86-96.
- A. Vaswani et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 6000-6010.
- D. Powers, D. Escoffery, & M. Duchnowski. (2015). Validating Automated Essay Scoring: A (Modest) Refinement of the "Gold Standard". Applied Measurement in Education, 28(2), 130-142. https://doi.org/10.1080/08957347.2014.1002920
- SKT Brain. (2019). KoBERT. GitHub Repository. https://github.com/SKTBrain/KoBERT
- K. Clark, M. Luong, Q. Le & C. Manning. (2020). ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.
- J. W. Park. (2020). KoELECTRA: Pretrained ELECTRA Model for Korean. GitHub Repository. https://github.com/monologg/KoELECTRA
- J. B. Lee. (2021). KcELECTRA: Korean comments ELECTRA. GitHub Repository. https://github.com/Beomi/KcELECTRA
- J. Wei & K. Zou. (2019). EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. Association for Computational Linguistics, 1, 6382-6388. DOI: 10.18653/v1/D19-1670
- P. Rajpurkar, J. Zhang, K. Lopyrev & P. Liang. (2016). SQuAD: 100,000+ Questions for Machine Comprehension of Text. EMNLP, 1, 2383-2392. DOI: 10.18653/v1/D16-1264
- Y. Y. Yang, S. W. Kang & J. Y. Seo. (2019). Improved Machine Reading Comprehension Using Data Validation for Weakly Labeled Data. IEEE Access, 8, 5667-5677. DOI: 10.1109/ACCESS.2019.2963569
- Y. R. Lee et al. (2009, July). Analysis of SAT and ACT. Seoul: KICE.
- C. Fellbaum. (2005). WordNet and wordnets. Oxford: Elsevier.
- J. H. Moon, H. C. Cho & E. J. Park. (2020). Revisiting Round-Trip Translation for Quality Estimation. http://arxiv.org/abs/2004.13937