Acknowledgement
This work was supported by the Agency for Defense Development under the Future Challenge Defense Technology Research and Development Program, funded by the Korean government (Defense Acquisition Program Administration) in 2023 (No. 915026201).
References
- R. Grishman, "Information extraction," in IEEE Intelligent Systems, Vol.30, No.5, pp.8-15, 2015, doi: 10.1109/MIS.2015.68.
- W. Xiang and B. Wang, "A survey of event extraction from text," in IEEE Access, Vol.7, pp.173111-173137, 2019, doi: 10.1109/ACCESS.2019.2956831.
- W. Liao and S. Veeramachaneni, "A simple semi-supervised algorithm for named entity recognition," In Proceedings of the NAACL HLT 2009 Workshop on Semi-Supervised Learning for Natural Language Processing, pp.58-65, 2009.
- D. Feng and H. Chen, "A small samples training framework for deep learning-based automatic information extraction: Case study of construction accident news reports analysis," Advanced Engineering Informatics, Vol.47, 101256, 2021.
- Y. Chang et al., "A survey on evaluation of large language models," arXiv preprint arXiv:2307.03109, 2023.
- J. Wei et al., "Emergent abilities of large language models," arXiv preprint arXiv:2206.07682, 2022.
- M. T. R. Laskar, M. S. Bari, M. Rahman, M. A. H. Bhuiyan, S. Joty, and J. X. Huang, "A systematic study and comprehensive evaluation of ChatGPT on benchmark datasets," arXiv preprint arXiv:2305.18486, 2023.
- J. Gou, B. Yu, S. J. Maybank, and D. Tao, "Knowledge distillation: A survey," International Journal of Computer Vision, Vol.129, No.6, pp.1789-1819, 2021.
- Y. Sari, M. F. Hassan, and N. Zamin, "Rule-based pattern extractor and named entity recognition: A hybrid approach," 2010 International Symposium on Information Technology, Kuala Lumpur, Malaysia, pp.563-568, 2010, doi: 10.1109/ITSIM.2010.5561392.
- C. Bizer et al., "DBpedia - A crystallization point for the Web of Data," Web Semantics: Science, Services and Agents on the World Wide Web, Vol.7, No.3, pp.154-165, 2009, doi: 10.1016/j.websem.2009.07.002.
- Q. C. Bui, D. Campos, E. van Mulligen, and J. Kors, "A fast rule-based approach for biomedical event extraction," In Proceedings of the BioNLP Shared Task 2013 Workshop, pp.104-108, 2013.
- S. Rao, D. Marcu, K. Knight, and H. Daumé III, "Biomedical event extraction using abstract meaning representation," In BioNLP 2017, pp.126-135, 2017.
- A. Vaswani et al., "Attention is all you need," Advances in Neural Information Processing Systems, Vol.30, 2017.
- J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
- S. Liu, Y. Chen, K. Liu, and J. Zhao, "Exploiting argument information to improve event detection via supervised attention mechanisms," In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol.1: Long Papers), pp.1789-1798, 2017.
- L. Zhao, L. Li, X. Zheng, and J. Zhang, "A BERT-based sentiment analysis and key entity detection approach for online financial texts," 2021 IEEE 24th International Conference on Computer Supported Cooperative Work in Design (CSCWD), Dalian, China, pp.1233-1238, 2021, doi: 10.1109/CSCWD49262.2021.9437616.
- S. Park et al., "KLUE: Korean language understanding evaluation," arXiv preprint arXiv:2105.09680, 2021.
- T. Brown et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, Vol.33, pp.1877-1901, 2020.
- J. Wei et al., "Chain-of-thought prompting elicits reasoning in large language models," Advances in Neural Information Processing Systems, Vol.35, pp.24824-24837, 2022.
- S. M. Xie, A. Raghunathan, P. Liang, and T. Ma, "An explanation of in-context learning as implicit Bayesian inference," arXiv preprint arXiv:2111.02080, 2021.
- O. Sainz, H. Qiu, O. L. de Lacalle, E. Agirre, and B. Min, "ZS4IE: A toolkit for zero-shot information extraction with simple verbalizations," arXiv preprint arXiv:2203.13602, 2022.
- B. Sharma, Y. Gao, T. Miller, M. M. Churpek, M. Afshar, and D. Dligach, "Multi-task training with in-domain language models for diagnostic reasoning," In Proceedings of the 5th Clinical Natural Language Processing Workshop, Toronto, Canada, Association for Computational Linguistics, pp.78-85, 2023.
- L. Ouyang et al., "Training language models to follow instructions with human feedback," Advances in Neural Information Processing Systems, Vol.35, pp.27730-27744, 2022.
- X. Wei et al., "Zero-shot information extraction via chatting with ChatGPT," arXiv preprint arXiv:2302.10205, 2023.
- C. Walker, S. Strassel, J. Medero, and K. Maeda, "ACE 2005 multilingual training corpus," Linguistic Data Consortium, Philadelphia, 57, 2006.
- M. Popović, "chrF: character n-gram F-score for automatic MT evaluation," In Proceedings of the Tenth Workshop on Statistical Machine Translation, pp.392-395, 2015.