1. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
2. Kim, D., Park, J., Lee, D., Oh, S., Kwon, S., Lee, I., & Choi, D. (2022). KB-BERT: A Korean pre-trained language model specialized for the financial domain and its applications. Journal of Intelligence and Information Systems, 28(2), 191-206.
3. Yu, E., Seo, S., & Kim, N. (2021). Building a national R&D specialized language model through knowledge transfer based on further pre-training. Knowledge Management Research, 22(3), 91-106.
4. Araci, D. (2019). FinBERT: Financial sentiment analysis with pre-trained language models. arXiv preprint arXiv:1908.10063.
5. Chalkidis, I., Fergadiotis, M., Malakasiotis, P., Aletras, N., & Androutsopoulos, I. (2020). LEGAL-BERT: The muppets straight out of law school. arXiv preprint arXiv:2010.02559.
6. Le, H., Vial, L., Frej, J., Segonne, V., Coavoux, M., Lecouteux, B., Allauzen, A., Crabbé, B., Besacier, L., & Schwab, D. (2019). FlauBERT: Unsupervised language model pre-training for French. arXiv preprint arXiv:1912.05372.
7. Lee, J. (2020). KcBERT: Korean comments BERT. Proceedings of the 32nd Annual Conference on Human and Language Technology, 437-440.
8. Gururangan, S., Marasović, A., Swayamdipta, S., Lo, K., Beltagy, I., Downey, D., & Smith, N. A. (2020). Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964.
9. Kim, Y., & Kim, N. (2022). A methodology for hierarchical data classification using an autoencoder-based deeply supervised network. Journal of Intelligence and Information Systems, 25(3), 185-207.
10. Park, H., & Shin, K. (2020). Aspect-based sentiment analysis using BERT: Developing an aspect category sentiment classification model. Journal of Intelligence and Information Systems, 26(4), 1-25.
11. Munikar, M., Shakya, S., & Shrestha, A. (2019). Fine-grained sentiment classification using BERT. In 2019 Artificial Intelligence for Transforming Business and Society (AITB), 1, 1-5.
12. Lee, H., Yoon, J., Hwang, B., Joe, S., Min, S., & Gwon, Y. (2021, January). KoreALBERT: Pretraining a Lite BERT model for Korean language understanding. In 2020 25th International Conference on Pattern Recognition (ICPR), 5551-5557.
13. Gu, Y., Tinn, R., Cheng, H., Lucas, M., Usuyama, N., Liu, X., Naumann, T., Gao, J., & Poon, H. (2021). Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1), 1-23.
14. Beltagy, I., Lo, K., & Cohan, A. (2019). SciBERT: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676.
15. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
16. Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
17. Joshi, M., Chen, D., Liu, Y., Weld, D. S., Zettlemoyer, L., & Levy, O. (2020). SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8, 64-77.
18. Kaikhah, K. (2004). Automatic text summarization with neural networks. In 2004 2nd International IEEE Conference on 'Intelligent Systems', Proceedings (IEEE Cat. No. 04EX791), 1, 40-44.
19. Lee, C. H., Lee, Y. J., & Lee, D. H. (2020). A study of fine tuning pre-trained Korean BERT for question answering performance development. Journal of Information Technology Services, 19(5), 83-91. https://doi.org/10.9716/KITS.2020.19.5.083
20. Lim, J., & Kim, H. (2021). Korean dependency parsing using token-level contextual representations of pre-trained language models. Journal of KIISE, 48(1), 27-34.
21. Miller, D. (2019). Leveraging BERT for extractive text summarization on lectures. arXiv preprint arXiv:1906.04165.
22. Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., & Zettlemoyer, L. (2019). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
23. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
24. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26.
25. Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.
26. Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., & Kang, J. (2020). BioBERT: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4), 1234-1240.