References
- Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G. and Askell, A. (2020). Language Models are Few-Shot Learners, Advances in Neural Information Processing Systems, 33, 1877-1901.
- Chang, Y., Wang, X., Wang, J., Wu, Y., Yang, L., Zhu, K. and Xie, X. (2024). A Survey on Evaluation of Large Language Models, ACM Transactions on Intelligent Systems and Technology, 15(3), 1-45.
- Chen, Z., Mao, H., Li, H., Jin, W., Wen, H., Wei, X. and Tang, J. (2024). Exploring the Potential of Large Language Models in Learning on Graphs, ACM SIGKDD Explorations Newsletter, 25(2), 42-61.
- Han, M. A., Kim, Y. H. and Kim, N. G. (2022). The Effect of Domain Specificity on the Performance of Domain-specific Pre-trained Language Models, Journal of Intelligence and Information Systems, 28(4), 251-273.
- Han, N. E., Seo, S. and Um, J. H. (2023). A Proposal of Evaluation of Large Language Models Built Based on Research Data, Journal of the Korean Society for Information Management, 40(3), 77-98.
- Han, X., Zhang, Z., Ding, N., Gu, Y., Liu, X., Huo, Y., Qiu, J., Yao, Y., Zhang, A. and Zhang, L. (2021). Pre-trained Models: Past, Present and Future, AI Open, 2, 225-250.
- Heng, J., Teo, D. B. and Tan, L. F. (2023). The Impact of Chat Generative Pre-trained Transformer (ChatGPT) on Medical Education, Postgraduate Medical Journal, 99(1176), 1125-1127.
- Heo, H. D., Kang, D. G., Kim, Y. S. and Chun, S. H. (2024). A Study on the Intelligent Document Processing Platform for Document Data Informatization, The Journal of The Institute of Internet, Broadcasting and Communication, 24(1), 89-95.
- Jung, J. K., Choi, S. K. and Kwon, H. C. (2023). Combining sLLM and Re-ranking Strategies for an Efficient GEC Model, Korean Institute of Information Scientists and Engineers, 2023(12), 362-364.
- Kim, A., Muhn, M. and Nikolaev, V. (2023). Bloated Disclosures: Can ChatGPT Help Investors Process Financial Information?, General Economics, 13, 1-20.
- Kim, J. S. (2023). A Study on Fine-Tuning and Transfer Learning to Create a Sentiment Binary Classification Model in Korean Text, Journal of Korea Society of Industrial Information Systems, 27(5), 1-11.
- Kim, S., Shin, J., Yun, H. G., Lee, J., Choi, J. and Han, J. (2023). Technology Trends of Large Language Models in the Age of Generative AI, Korean Institute of Information Scientists and Engineers, 41(11), 25-33.
- Lee, C. H., Lee, Y. J. and Lee, D. H. (2020). A Study of Fine Tuning Pre-trained Korean BERT for Question Answering Performance Development, Journal of Information Technology Services, 19(5), 83-91.
- Lee, H. (2023). Innovations and Risk Factors of Generative AI in the Financial Industry, Global Financial Review, 4(1), 91-121.
- Nah, F. F. H., Zheng, R., Cai, J., Siau, K. and Chen, L. (2023). Generative AI and ChatGPT: Applications, Challenges, and AI-Human Collaboration, Journal of Information Technology Case and Application Research, 25(3), 277-304.
- Noh, S. (2024). Development of Large-Scale Language Models and Ways to Utilize Them in Financial Information Analysis, Capital Market Research Institute Issue Report, 19, 1-24.
- Park, N. and Lee, M. (2023). Empowering Emotion Classification Performance through Reasoning Dataset from Large-scale Language Model, Korean Computer and Information Society Academic Conference Papers, 31(2), 59-61.
- Park, S. and Kang, J. (2023). Analysis of Prompt Engineering Methodologies and Research Status to Improve Inference Capability of ChatGPT and Other Large Language Models, Journal of Intelligence and Information Systems, 29(4), 287-308.
- Seo, B., Lee, Y. H. and Cho, H. (2022). Creation and Use of the News Sentiment Index (NSI) using Machine Learning, National Accounts Review, 1, 1-15.
- Yu, Y. and Kim, H. (2023). Development of a Regulatory Q&A System for KAERI Utilizing Document Search Algorithms and Large Language Model, Journal of Korea Society of Industrial Information Systems, 28(5), 31-39.