
A Comparative Study on Discrimination Issues in Large Language Models

  • Wei Li (Department of Management, Kyung Hee University) ;
  • Kyunghwa Hwang (Department of Bigdata Analytics, Kyung Hee University) ;
  • Jiae Choi (Department of Global Culture & Management, Calvin University) ;
  • Ohbyung Kwon (Graduate School of AI, Kyung Hee University)
  • Received : 2023.06.27
  • Accepted : 2023.08.23
  • Published : 2023.09.30

Abstract

Recently, the use of Large Language Models (LLMs) such as ChatGPT has been increasing in fields such as interactive commerce and mobile financial services. However, because LLMs are built mainly by learning from existing documents, they can also absorb the various human biases embedded in those documents. Nevertheless, few comparative studies have examined the patterns of bias and discrimination across LLMs. The purpose of this study is to examine the existence and extent of nine types of discrimination (Age, Disability status, Gender identity, Nationality, Physical appearance, Race/ethnicity, Religion, Socio-economic status, Sexual orientation) in LLMs and to suggest ways to improve them. To this end, we used BBQ (Bias Benchmark for QA), a benchmark for identifying discriminatory responses, to compare three LLMs: ChatGPT, GPT-3, and Bing Chat. The evaluation revealed a considerable number of discriminatory responses, and the patterns differed across the models. Notably, problems surfaced in age discrimination and disability discrimination, areas that fall outside traditional AI ethics issues such as sexism, racism, and economic inequality, pointing to a new perspective on AI ethics. Based on the comparison results, this paper describes how LLMs should be supplemented and developed in the future.
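As an illustration of the kind of evaluation the abstract describes, the sketch below scores one model on a single BBQ category. It is a minimal outline, not the authors' actual protocol: it assumes a BBQ-style JSONL file with the fields used in the public BBQ release (context, question, ans0-ans2, and the label index of the correct option), and ask_model is a hypothetical placeholder to be replaced with a real API call to ChatGPT, GPT-3, or Bing Chat.

```python
import json

def ask_model(prompt: str) -> str:
    # Hypothetical stub: swap in a real call to ChatGPT, GPT-3, or Bing Chat.
    raise NotImplementedError

def format_prompt(item: dict) -> str:
    # Present the BBQ context, question, and three answer options as (a)-(c).
    options = [item["ans0"], item["ans1"], item["ans2"]]
    lettered = "\n".join(f"({c}) {o}" for c, o in zip("abc", options))
    return (f"{item['context']}\n{item['question']}\n{lettered}\n"
            "Answer with (a), (b), or (c) only.")

def pick_answer(reply: str) -> int:
    # Map the model's free-text reply back to an option index; -1 if unparseable.
    reply = reply.lower()
    return next((i for i, c in enumerate("abc") if f"({c})" in reply), -1)

def accuracy_on_category(path: str) -> float:
    # Fraction of items where the model chose the labeled correct option.
    correct = total = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            item = json.loads(line)
            choice = pick_answer(ask_model(format_prompt(item)))
            correct += int(choice == item["label"])
            total += 1
    return correct / total
```

Note that BBQ itself reports bias scores rather than raw accuracy (for ambiguous contexts the bias score is additionally scaled by one minus accuracy, per Parrish et al.); the sketch keeps only the accuracy tally for brevity.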


Acknowledgement

This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155911, Artificial Intelligence Convergence Innovation Human Resources Development (Kyung Hee University)).

References

  1. 김세형, 윤태영, 강주영. (2023). A study on deriving dispute types through case-law analysis by aging stage based on BERTopic and social network analysis. 한국전자거래학회지, 28(1), 123-144.
  2. 이준식, 박도형. (2021). Are you a Machine or Human?: The effects of a social robot's human likeness and consumers' construal level on anthropomorphism. 지능정보연구, 27(1), 129-149. https://doi.org/10.13088/JIIS.2021.27.1.129
  3. 이정선, 서보밀, 권영옥. (2021). A study on the influence of artificial intelligence on decision making: Focusing on human-AI collaboration and decision-makers' personality traits. 지능정보연구, 27(3), 231-252. https://doi.org/10.13088/JIIS.2021.27.3.231
  4. 김회정, 권오병. (2023). A study of ageism toward older adults and the moderating effect of COVID-19 using sentiment analysis of news articles. 한국전자거래학회지, 28(1), 55-76.
  5. Behnia, R., Ebrahimi, M. R., Pacheco, J., & Padmanabhan, B. (2022, November). EW-Tune: A Framework for Privately Fine-Tuning Large Language Models with Differential Privacy. In 2022 IEEE International Conference on Data Mining Workshops (ICDMW) (pp. 560-566). IEEE.
  6. Belk, R. (2021). Ethical issues in service robotics and artificial intelligence. The Service Industries Journal, 41(13-14), 860-876. https://doi.org/10.1080/02642069.2020.1727892
  7. Bellegarda, J. R. (2004). Statistical language model adaptation: review and perspectives. Speech Communication, 42(1), 93-108. https://doi.org/10.1016/j.specom.2003.08.002
  8. Breidbach, C. F., & Maglio, P. (2020). Accountable algorithms? The ethical implications of data-driven business models. Journal of Service Management, 31(2), 163-185. https://doi.org/10.1108/JOSM-03-2019-0073
  9. Chernyaeva, O., & Hong, T. (2022). The Detection of Online Manipulated Reviews Using Machine Learning and GPT-3. 지능정보연구, 28(4), 347-364. https://doi.org/10.13088/JIIS.2022.28.4.347
  10. Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of Big Data-evolution, challenges and research agenda. International Journal of Information Management, 48, 63-71. https://doi.org/10.1016/j.ijinfomgt.2019.01.021
  11. Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., ... & Wright, R. (2023). "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.
  12. Guan, C., Wang, X., Zhang, Q., Chen, R., He, D., & Xie, X. (2019, May). Towards a deep and unified understanding of deep neural models in NLP. In International Conference on Machine Learning (pp. 2454-2463). PMLR.
  13. Hall, P., & Ellis, D. (2023). A systematic review of socio-technical gender bias in AI algorithms. Online Information Review.
  14. Khlaaf, H., Mishkin, P., Achiam, J., Krueger, G., & Brundage, M. (2022). A hazard analysis framework for code synthesis large language models. arXiv preprint arXiv:2207.14157.
  15. Khowaja, S. A., Khuwaja, P., & Dev, K. (2023). ChatGPT Needs SPADE (Sustainability, PrivAcy, Digital divide, and Ethics) Evaluation: A Review. arXiv preprint arXiv:2305.03123.
  16. Kushwaha, A. K., & Kar, A. K. (2021). MarkBot-a language model-driven chatbot for interactive marketing in post-modern world. Information Systems Frontiers, 1-18.
  17. Li, H., Moon, J. T., Purkayastha, S., Celi, L. A., Trivedi, H., & Gichoya, J. W. (2023). Ethics of large language models in medicine and medical research. The Lancet Digital Health.
  18. Li, Y., Tan, Z., & Liu, Y. (2023). Privacy-Preserving Prompt Tuning for Large Language Model Services. arXiv preprint arXiv:2305.06212.
  19. Livingston, M. (2020). Preventing racial bias in federal AI. Journal of Science Policy and Governance, 16.
  20. Lokman, A. S., & Ameedeen, M. A. (2019). Modern chatbot systems: A technical review. In Proceedings of the Future Technologies Conference (FTC) 2018: Volume 2 (pp. 1012-1023). Springer International Publishing.
  21. Malik, T., Dwivedi, Y., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., ... & Wright, R. (2023). "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642.
  22. Marcus, G., & Davis, E. (2023). Large language models like ChatGPT say the darnedest things. The Road to AI We Can Trust.
  23. McKinsey & Company (2023). What is generative AI?
  24. Melis, G., Dyer, C., & Blunsom, P. (2017). On the state of the art of evaluation in neural language models. arXiv preprint arXiv:1707.05589.
  25. Oppy, G., & Dowe, D. (2021). The Turing Test. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2021 ed.).
  26. Parrish, A., Chen, A., Nangia, N., Padmakumar, V., Phang, J., Thompson, J., Htut, P. M., & Bowman, S. R. (2022). BBQ: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022 (pp. 2086-2105). Association for Computational Linguistics.
  27. Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.
  28. Rotaru, V., Huang, Y., Li, T., Evans, J., & Chattopadhyay, I. (2022). Event-level prediction of urban crime reveals a signature of enforcement bias in US cities. Nature Human Behaviour, 6(8), 1056-1068. https://doi.org/10.1038/s41562-022-01372-0
  29. Rozado, D. (2023). Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems. Manhattan Institute.
  30. Rudinger, R., Naradowsky, J., Leonard, B., & Van Durme, B. (2018). Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) (pp. 8-14). New Orleans, Louisiana. Association for Computational Linguistics.
  31. Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1), 342-363. https://doi.org/10.37074/jalt.2023.6.1.9
  32. Ruiz, E., & Sedeno, E. (2023). Gender Bias in Artificial Intelligence. In Gender in AI and Robotics: The Gender Challenges from an Interdisciplinary Perspective (pp. 61-75). Cham: Springer International Publishing.
  33. Sheng, E., Chang, K., Natarajan, P., & Peng, N. (2019). The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 3407-3412). Hong Kong, China. Association for Computational Linguistics.
  34. Song, Z., Mellon, G., & Shen, Z. (2020). Relationship between racial bias exposure, financial literacy, and entrepreneurial intention: An empirical investigation. Journal of Artificial Intelligence and Machine Learning in Management, 4(1), 42-55.
  35. Turing, A. M. (1950). Computing machinery and intelligence. The Essential Turing: The Ideas That Gave Birth to the Computer Age, 433-464.
  36. Tyagi, N., & Bhushan, B. (2023). Demystifying the role of natural language processing (NLP) in smart city applications: Background, motivation, recent advances, and future research directions. Wireless Personal Communications, 130(2), 857-908. https://doi.org/10.1007/s11277-023-10312-8
  37. Varaprasad, R., & Mahalaxmi, G. (2022). Applications and techniques of natural language processing: An overview. IUP Journal of Computer Sciences, 16(3), 7-21.
  38. Varsha, P. S. (2023). How can we manage biases in artificial intelligence systems-A systematic literature review. International Journal of Information Management Data Insights, 3(1), 100165.
  39. Walkowiak, E. (2023). Digitalization and inclusiveness of HRM practices: The example of neurodiversity initiatives. Human Resource Management Journal.
  40. Wirtz, J., & Zeithaml, V. (2018). Cost-effective service excellence. Journal of the Academy of Marketing Science, 46(1), 59-80. https://doi.org/10.1007/s11747-017-0560-7