References
- 김유영, 송민. (2016). Building a text mining-based sentiment classifier for movie review sentiment analysis [in Korean]. Journal of Intelligence and Information Systems, 22(3), 71-89. https://doi.org/10.13088/JIIS.2016.22.3.071
- 송민채, 신경식. (2018). Sentiment analysis using an LSTM based on embedding and an attention mechanism [in Korean]. Proceedings of the 2018 Korea Intelligent Information Systems Society Spring Conference, 107-108.
- 유소연, 임규건. (2021). News agenda analysis using text mining and semantic network analysis: Focusing on COVID-19-related emotions [in Korean]. Journal of Intelligence and Information Systems, 27(1), 47-64. https://doi.org/10.13088/JIIS.2021.27.1.047
- Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), 6000-6010.
- Tchalakova, M., Gerdemann, D., & Meurers, D. (2011). Automatic Sentiment Classification of Product Reviews Using Maximal Phrases Based Analysis. In Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2.011), 111-117.
- Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171-4186.
- Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., & Kang, J. (2020). BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4), 1234-1240. https://doi.org/10.1093/bioinformatics/btz682
- Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2227-2237.
- Gururangan, S., Marasović, A., Swayamdipta, S., Lo, K., Beltagy, I., Downey, D., & Smith, N. A. (2020). Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 8342-8360.
- Sachidananda, V., Kessler, J., & Lai, Y.-A. (2021). Efficient Domain Adaptation of Language Models via Adaptive Tokenization. In Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing, 155-165.
- Araci, D. (2019). FinBERT: Financial Sentiment Analysis with Pre-trained Language Models. arXiv.
- Demszky, D., Movshovitz-Attias, D., Ko, J., Cowen, A., Nemade, G., & Ravi, S. (2020). GoEmotions: A Dataset of Fine-Grained Emotions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 4040-4054.
- Borgeaud, S., Mensch, A., Hoffmann, J., Cai, T., Rutherford, E., Millican, K., van den Driessche, G., Lespiau, J.-B., Damoc, B., Clark, A., de Las Casas, D., Guy, A., Menick, J., Ring, R., Hennigan, T., Huang, S., Maggiore, L., Jones, C., Cassirer, A., Brock, A., Paganini, M., Irving, G., Vinyals, O., Osindero, S., Simonyan, K., Rae, J. W., Elsen, E., & Sifre, L. (2021). Improving language models by retrieving from trillions of tokens. arXiv.
- Wang, X., Gao, T., Zhu, Z., Zhang, Z., Liu, Z., Li, J., & Tang, J. (2021). KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation. Transactions of the Association for Computational Linguistics, 9, 176-194. https://doi.org/10.1162/tacl_a_00360
- Park, S., Moon, J., Kim, S., Cho, W. I., Han, J., Park, J., Song, C., Kim, J., Song, Y., Oh, T., Lee, J., Oh, J., Lyu, S., Jeong, Y., Lee, I., Seo, S., Lee, D., Kim, H., Lee, M., Jang, S., Do, S., Kim, S., Lim, K., Lee, J., Park, K., Shin, J., Kim, S., Park, L., Oh, A., Ha, J., & Cho, K. (2021). KLUE: Korean Language Understanding Evaluation. arXiv.
- Park, J. (2020). KoELECTRA: Pretrained ELECTRA Model for Korean. GitHub repository. https://github.com/monologg/KoELECTRA.
- Lim, S., Kim, M., & Lee, J. (2019). KorQuAD1.0: Korean QA Dataset for Machine Reading Comprehension. arXiv.
- Chalkidis, I., Fergadiotis, M., Malakasiotis, P., Aletras, N., & Androutsopoulos, I. (2020). LEGAL-BERT: The Muppets straight out of Law School. In Findings of the Association for Computational Linguistics: EMNLP 2020, 2898-2904.
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners. OpenAI Technical Report.
- Karamanolakis, G., Hsu, D., & Gravano, L. (2019). Leveraging Just a Few Keywords for Fine-Grained Aspect Detection Through Weakly Supervised Co-Training. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 4611-4621.
- Yao, X., Zheng, Y., Yang, X., & Yang, Z. (2021). NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework. arXiv.
- Han, W.-B., & Kando, N. (2019). Opinion Mining with Deep Contextualized Embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, 35-42.
- Firdaus, M., Jain, U., Ekbal, A., & Bhattacharyya, P. (2021). SEPRG: Sentiment aware Emotion controlled Personalized Response Generation. In Proceedings of the 14th International Conference on Natural Language Generation, 353-363.
- Beltagy, I., Lo, K., & Cohan, A. (2019). SciBERT: A Pretrained Language Model for Scientific Text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3615-3620.
- Yin, P., Neubig, G., Yih, W., & Riedel, S. (2020). TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 8413-8426.
- Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N., Presser, S., & Leahy, C. (2021). The Pile: An 800GB Dataset of Diverse Text for Language Modeling. arXiv.