Acknowledgement
This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155911, Artificial Intelligence Convergence Innovation Human Resources Development (Kyung Hee University)).
References
- Kim, S., Yoon, T., & Kang, J. (2023). A study on deriving dispute types through case-law analysis by stage of aging based on BERTopic and social network analysis. The Journal of Society for e-Business Studies, 28(1), 123-144.
- Lee, J., & Park, D. (2021). Are you a Machine or Human?: The effects of social robots' human-likeness and consumers' construal level on anthropomorphism. Journal of Intelligence and Information Systems, 27(1), 129-149. https://doi.org/10.13088/JIIS.2021.27.1.129
- Lee, J., Suh, B., & Kwon, Y. (2021). A study on the effect of artificial intelligence on decision making: Focusing on human-AI collaboration and decision-makers' personality traits. Journal of Intelligence and Information Systems, 27(3), 231-252. https://doi.org/10.13088/JIIS.2021.27.3.231
- Kim, H., & Kwon, O. (2023). A study on patterns of ageism toward older adults and the moderating effect of COVID-19 using sentiment analysis of news articles. The Journal of Society for e-Business Studies, 28(1), 55-76.
- Behnia, R., Ebrahimi, M. R., Pacheco, J., & Padmanabhan, B. (2022, November). EW-Tune: A Framework for Privately Fine-Tuning Large Language Models with Differential Privacy. In 2022 IEEE International Conference on Data Mining Workshops (ICDMW) (pp. 560-566). IEEE.
- Belk, R. (2021). Ethical issues in service robotics and artificial intelligence. The Service Industries Journal, 41(13-14), 860-876. https://doi.org/10.1080/02642069.2020.1727892
- Bellegarda, J. R. (2004). Statistical language model adaptation: review and perspectives. Speech Communication, 42(1), 93-108. https://doi.org/10.1016/j.specom.2003.08.002
- Breidbach, C. F., & Maglio, P. (2020). Accountable algorithms? The ethical implications of data-driven business models. Journal of Service Management, 31(2), 163-185. https://doi.org/10.1108/JOSM-03-2019-0073
- Chernyaeva, O., & Hong, T. (2022). The detection of online manipulated reviews using machine learning and GPT-3. Journal of Intelligence and Information Systems, 28(4), 347-364. https://doi.org/10.13088/JIIS.2022.28.4.347
- Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of Big Data-evolution, challenges and research agenda. International Journal of Information Management, 48, 63-71. https://doi.org/10.1016/j.ijinfomgt.2019.01.021
- Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., ... & Wright, R. (2023). "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.
- Guan, C., Wang, X., Zhang, Q., Chen, R., He, D., & Xie, X. (2019, May). Towards a deep and unified understanding of deep neural models in NLP. In International Conference on Machine Learning (pp. 2454-2463). PMLR.
- Hall, P., & Ellis, D. (2023). A systematic review of socio-technical gender bias in AI algorithms. Online Information Review.
- Khlaaf, H., Mishkin, P., Achiam, J., Krueger, G., & Brundage, M. (2022). A hazard analysis framework for code synthesis large language models. arXiv preprint arXiv:2207.14157.
- Khowaja, S. A., Khuwaja, P., & Dev, K. (2023). ChatGPT Needs SPADE (Sustainability, PrivAcy, Digital divide, and Ethics) Evaluation: A Review. arXiv preprint arXiv:2305.03123.
- Kushwaha, A. K., & Kar, A. K. (2021). MarkBot: A language model-driven chatbot for interactive marketing in post-modern world. Information Systems Frontiers, 1-18.
- Li, H., Moon, J. T., Purkayastha, S., Celi, L. A., Trivedi, H., & Gichoya, J. W. (2023). Ethics of large language models in medicine and medical research. The Lancet Digital Health.
- Li, Y., Tan, Z., & Liu, Y. (2023). Privacy-Preserving Prompt Tuning for Large Language Model Services. arXiv preprint arXiv:2305.06212.
- Livingston, M. (2020). Preventing racial bias in federal AI. Journal of Science Policy and Governance, 16.
- Lokman, A. S., & Ameedeen, M. A. (2019). Modern chatbot systems: A technical review. In Proceedings of the Future Technologies Conference (FTC) 2018: Volume 2 (pp. 1012-1023). Springer International Publishing.
- Marcus, G., & Davis, E. (2023). Large language models like ChatGPT say the darnedest things. The Road to AI We Can Trust.
- McKinsey & Company. (2023). What is generative AI?
- Melis, G., Dyer, C., & Blunsom, P. (2017). On the state of the art of evaluation in neural language models. arXiv preprint arXiv:1707.05589.
- Oppy, G., & Dowe, D. (2021). The Turing Test. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2021 ed.).
- Parrish, A., Chen, A., Nangia, N., Padmakumar, V., Phang, J., Thompson, J., Htut, P. M., & Bowman, S. R. (2022). BBQ: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 2086-2105). Association for Computational Linguistics.
- Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.
- Rotaru, V., Huang, Y., Li, T., Evans, J., & Chattopadhyay, I. (2022). Event-level prediction of urban crime reveals a signature of enforcement bias in US cities. Nature Human Behaviour, 6(8), 1056-1068. https://doi.org/10.1038/s41562-022-01372-0
- Rozado, D. (2023). Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems. Manhattan Institute.
- Rudinger, R., Naradowsky, J., Leonard, B., & Van Durme, B. (2018). Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) (pp. 8-14). Association for Computational Linguistics.
- Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1), 342-363. https://doi.org/10.37074/jalt.2023.6.1.9
- Ruiz, E., & Sedeno, E. (2023). Gender Bias in Artificial Intelligence. In Gender in AI and Robotics: The Gender Challenges from an Interdisciplinary Perspective (pp. 61-75). Cham: Springer International Publishing.
- Sheng, E., Chang, K.-W., Natarajan, P., & Peng, N. (2019). The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 3407-3412). Association for Computational Linguistics.
- Song, Z., Mellon, G., & Shen, Z. (2020). Relationship between racial bias exposure, financial literacy, and entrepreneurial intention: An empirical investigation. Journal of Artificial Intelligence and Machine Learning in Management, 4(1), 42-55.
- Turing, A. M. (1950). Computing machinery and intelligence. The Essential Turing: The Ideas That Gave Birth to the Computer Age, 433-464.
- Tyagi, N., & Bhushan, B. (2023). Demystifying the role of natural language processing (NLP) in smart city applications: Background, motivation, recent advances, and future research directions. Wireless Personal Communications, 130(2), 857-908. https://doi.org/10.1007/s11277-023-10312-8
- Varaprasad, R., & Mahalaxmi, G. (2022). Applications and techniques of natural language processing: An overview. IUP Journal of Computer Sciences, 16(3), 7-21.
- Varsha, P. S. (2023). How can we manage biases in artificial intelligence systems-A systematic literature review. International Journal of Information Management Data Insights, 3(1), 100165.
- Walkowiak, E. (2023). Digitalization and inclusiveness of HRM practices: The example of neurodiversity initiatives. Human Resource Management Journal.
- Wirtz, J., & Zeithaml, V. (2018). Cost-effective service excellence. Journal of the Academy of Marketing Science, 46(1), 59-80. https://doi.org/10.1007/s11747-017-0560-7