http://dx.doi.org/10.13088/jiis.2022.28.4.251

The Effect of Domain Specificity on the Performance of Domain-Specific Pre-Trained Language Models  

Han, Minah (Graduate School of Business IT, Kookmin University)
Kim, Younha (Graduate School of Business IT, Kookmin University)
Kim, Namgyu (Graduate School of Business IT, Kookmin University)
Publication Information
Journal of Intelligence and Information Systems / v.28, no.4, 2022, pp. 251-273
Abstract
Research on applying deep learning to text analysis has continued steadily in recent years. In particular, studies have actively used pre-trained language models, trained on large corpora, to capture word meaning and to perform tasks such as summarization and sentiment classification. However, existing pre-trained language models are limited in that they do not understand specific domains well. Research has therefore shifted toward building language models specialized for particular domains. A domain-specific pre-trained language model understands the knowledge of its target domain better and shows performance improvements on various tasks in that field. However, domain-specific further pre-training is expensive because it requires acquiring corpus data for the target domain. Furthermore, in some domains the reported performance improvement after further pre-training is insignificant. It is therefore difficult to decide whether to develop a domain-specific pre-trained language model when it is unclear how much performance will actually improve. In this paper, we present a way to estimate, in advance, the performance improvement that further pre-training in a domain can be expected to yield, before actually performing the further pre-training. Specifically, we selected three domains and measured the increase in classification accuracy obtained by further pre-training in each. We also developed a new indicator that estimates a domain's specificity from the normalized frequencies of the keywords used in that domain. Finally, we performed classification in the three domains using both a general pre-trained language model and the corresponding domain-specific pre-trained language models. The results confirm that the higher the domain specificity index, the greater the performance improvement achieved through further pre-training.
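The abstract states that the domain specificity index is based on the normalized frequencies of a domain's keywords, but it does not give the exact formula. The following is a minimal Python sketch of one way such an index could be computed, using a log-ratio of normalized keyword frequencies in a domain corpus versus a general corpus; the function names, the top-k keyword selection, and the log-ratio formulation are illustrative assumptions, not the authors' actual method.

    # Hypothetical sketch of a domain specificity index based on normalized
    # keyword frequencies. Not the paper's exact formula.
    from collections import Counter
    import math
    import re

    def normalized_frequencies(texts):
        """Token counts normalized by the total number of tokens in the corpus."""
        counts = Counter()
        for text in texts:
            counts.update(re.findall(r"[\w']+", text.lower()))
        total = sum(counts.values())
        return {token: freq / total for token, freq in counts.items()}

    def domain_specificity_index(domain_texts, general_texts, top_k=1000):
        """Average log-ratio of normalized frequency in the domain corpus
        versus a general corpus, over the domain's top-k keywords (one
        plausible realization of a frequency-based specificity indicator)."""
        domain_freq = normalized_frequencies(domain_texts)
        general_freq = normalized_frequencies(general_texts)
        keywords = sorted(domain_freq, key=domain_freq.get, reverse=True)[:top_k]
        eps = 1e-9  # guards against keywords unseen in the general corpus
        ratios = [math.log((domain_freq[w] + eps) / (general_freq.get(w, 0.0) + eps))
                  for w in keywords]
        return sum(ratios) / len(ratios)

    # Usage (corpora are lists of strings): a higher value indicates vocabulary
    # that diverges more from general text, which, per the paper's finding,
    # suggests a larger expected gain from further pre-training.
    # dsi = domain_specificity_index(medical_abstracts, news_articles)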
Keywords
Pre-Trained Language Model; Further Pre-Training; Domain-Specific Pre-Trained Language Model; Domain Specificity Index