Is Mr. AI more responsible? The effect of anthropomorphism in the moral judgement toward AI's decision making

The effect of anthropomorphism on moral judgment of AI's decision making: Focusing on the theory of dyadic morality

  • Yoon-Bin Choi (Interdisciplinary Program in Cognitive Science, Seoul National University)
  • Dayk Jang (Gachon Startup College, Gachon University)
  • Received : 2022.09.14
  • Accepted : 2022.09.23
  • Published : 2022.12.31

Abstract

As artificial intelligence (AI) technology advances, the number of cases in which AI becomes the object or the subject of moral judgment is increasing, and this trend is expected to accelerate. Although the domain of AI in human society is expanding, relatively few studies have examined how people perceive and respond to AI. Three studies examined the effect of the anthropomorphism of AI on the moral responsibility attributed to it. We predicted that anthropomorphism would increase perceived responsibility and that perceived agency and perceived patiency of the AI would mediate this effect. Although the manipulation was not effective, multiple analyses confirmed the indirect effect of perceived patiency. In contrast, the effect of the AI's perceived agency was mixed, so the hypothesis was only partially supported by the overall results. This pattern indicates that, for the moral status of artificial agents, perceived patiency is relatively more critical than perceived agency. These results support the organic perspective on moral status, which emphasizes the importance of patiency, and show that patiency is more important than agency in anthropomorphism research on AI and robots.

As artificial intelligence technology advances, cases in which AI becomes the object or the subject of moral judgment are increasing, and this trend is expected to accelerate. AI has begun to be used actively in core areas of human society such as hiring and healthcare, yet comparatively little research has examined how people perceive and respond to AI when interacting with it. Through experiments in three contexts (hiring, healthcare, and law), this study examined whether and how the anthropomorphism of AI affects judgments of moral responsibility for the AI's decisions. We constructed and tested a mediation model with perceived agency and perceived experience, the key variables of the theory of dyadic morality, as mediators; specifically, we predicted that perceived anthropomorphism would increase the AI's moral responsibility and that perceived agency and perceived experience toward the AI would mediate this effect. Although the experimental manipulation was not effective, all experiments confirmed that perceived experience mediated the relationship between anthropomorphism and perceived moral responsibility. In contrast, the effect of perceived agency showed mixed results, so the hypothesis was only partially supported. These findings support the organic perspective, which argues for the importance of experience for moral status, and show that in anthropomorphism research on AI and robots experience matters more than agency.
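
The mediation claim above (anthropomorphism increasing perceived moral responsibility through perceived experience) can be made concrete with a worked example. The following sketch is not taken from the paper: the variable names, the simulated ratings, and the effect sizes are illustrative assumptions only, and it simply shows how an indirect effect of this kind is commonly estimated with regression paths and a percentile bootstrap.

    # A minimal sketch, assuming simulated data: estimate the indirect effect a*b of
    # anthropomorphism on perceived responsibility through perceived experience
    # (patiency), with a percentile bootstrap confidence interval.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 300

    # Hypothetical 7-point-scale ratings; values and effect sizes are made up.
    anthropomorphism = rng.normal(4.0, 1.0, n)
    patiency = 0.5 * anthropomorphism + rng.normal(0.0, 1.0, n)  # mediator
    responsibility = 0.6 * patiency + 0.1 * anthropomorphism + rng.normal(0.0, 1.0, n)

    def indirect_effect(x, m, y):
        # a-path: X -> M; b-path: M -> Y controlling for X; indirect effect = a * b.
        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
        b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
        return a * b

    # Percentile bootstrap: resample participants and re-estimate a*b each time.
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(anthropomorphism[idx], patiency[idx], responsibility[idx]))

    print("indirect effect:", round(indirect_effect(anthropomorphism, patiency, responsibility), 3))
    print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]).round(3))

If the bootstrap interval excludes zero, the indirect path through perceived experience is supported; the same procedure can be repeated with perceived agency as the mediator to compare the two paths.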

Keywords

References

  1. 권헌영 [Kwon, H. Y.] (2019). Artificial intelligence (AI) and the legal field: Ethical and regulatory considerations [in Korean]. 경제규제와 법 [Journal of Law and Economic Regulation], 12(2), 69-80.
  2. Adler-Milstein, J., Holmgren, A. J., Kralovec, P., Worzala, C., Searcy, T., & Patel, V. (2017). Electronic health record adoption in US hospitals: the emergence of a digital "advanced use" divide. Journal of the American Medical Informatics Association: JAMIA, 24(6), 1142-1148. https://doi.org/10.1093/jamia/ocx080
  3. Ajunwa, I., Friedler, S., Scheidegger, C. E., & Venkatasubramanian, S. (2016). Hiring by algorithm: predicting and preventing disparate impact. Available at SSRN.
  4. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica, May 23, 2016.
  5. Artificial intelligence: Go master Lee Se-dol wins against AlphaGo program (2016, March 13). BBC News Online. https://www.bbc.com/news/technology-35797102.
  6. Asaro, P. M. (2011). A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics. Robot ethics: The ethical and social implications of robotics, 169.
  7. Ayasdi (2018). Ayasdi for Payers: white paper. Ayasdi. https://s3.amazonaws.com/cdn.ayasdi.com/wp-content/uploads/2018/10/05102657/WP-Ayasdi-for-Payers.pdf
  8. Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34. https://doi.org/10.1016/j.cognition.2018.08.003
  9. Cantarero, K., Szarota, P., Stamkou, E., Navas, M., & Dominguez Espinosa, A. D. C. (2021). The effects of culture and moral foundations on moral judgments: The ethics of authority mediates the relationship between power distance and attitude towards lying to one's supervisor. Current Psychology, 40(2), 675-683.
  10. Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017, August). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd acm sigkdd international conference on knowledge discovery and data mining (pp. 797-806).
  11. Curry, O. S., Chesters, M. J., & Van Lissa, C. J. (2019). Mapping morality with a compass: Testing the theory of 'morality-as-cooperation' with a new questionnaire. Journal of Research in Personality, 78, 106-124. https://doi.org/10.1016/j.jrp.2018.10.008
  12. Dash, S., Shakyawar, S. K., Sharma, M., & Kaushik, S. (2019). Big data in healthcare: management, analysis and future prospects. Journal of Big Data, 6(1), 1-25. https://doi.org/10.1186/s40537-018-0162-3
  13. Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56-62. https://doi.org/10.1145/2844110
  14. Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114. https://doi.org/10.1037/xge0000033
  15. Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: a three-factor theory of anthropomorphism. Psychological review, 114(4), 864. https://doi.org/10.1037/0033-295X.114.4.864
  16. Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review, 5.
  17. Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology (Vol. 47, pp. 55-130). Academic Press.
  18. Graham, J., Haidt, J., Motyl, M., Meindl, P., Iskiwitch, C., & Mooijman, M. (2018). Moral foundations theory: On the advantages of moral pluralism over moral monism. In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 211-222). The Guilford Press.
  19. Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of personality and social psychology, 96(5), 1029. https://doi.org/10.1037/a0015141
  20. Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619. https://doi.org/10.1126/science.1134475
  21. Gray, K., Jenkins, A. C., Heberlein, A. S., & Wegner, D. M. (2011). Distortions of mind perception in psychopathology. Proceedings of the National Academy of Sciences, 108(2), 477-479. https://doi.org/10.1073/pnas.1015493108
  22. Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125-130. https://doi.org/10.1016/j.cognition.2012.06.007
  23. Gray, K., & Wegner, D. M. (2012). Morality takes two: Dyadic morality and mind perception.
  24. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108. https://doi.org/10.1126/science.1062872
  25. Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
  26. Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog?. Journal of personality and social psychology, 65(4), 613. https://doi.org/10.1037/0022-3514.65.4.613
  27. Haidt, J. (2001). The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychological review, 108(4), 814. https://doi.org/10.1037/0033-295X.108.4.814
  28. Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage.
  29. High-Level Expert Group on Artificial Intelligence (AI HLEG). (2019). Ethics guidelines for trustworthy AI. European Commission.
  30. Hollister, B., & Bonham, V. L. (2018). Should electronic health record-derived social and behavioral data be used in precision medicine research?. AMA journal of ethics, 20(9), 873-880.
  31. Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. Handbook of socialization theory and research, 347, 480.
  32. Kohlberg, L. (2016). 1. Stages of moral development as a basis for moral education. In C. Beck, B. Crittenden & E. Sullivan (Ed.), Moral Education (pp. 23-92). Toronto: University of Toronto Press. https://doi.org/10.3138/9781442656758-004
  33. Kuncel, N. R., Klieger, D. M., & Ones, D. S. (2014). In hiring, algorithms beat instinct. Harvard Business Review, 92(5), 32.
  34. Laakasuo, M., Palomaki, J., & Kobis, N. (2021). Moral uncanny valley: A robot's appearance moderates how its decisions are judged. International Journal of Social Robotics, 1-10.
  35. Larsen, R. R. (2020). Psychopathy as moral blindness: a qualifying exploration of the blindness-analogy in psychopathy theory and research. Philosophical Explorations, 23(3), 214-233. https://doi.org/10.1080/13869795.2020.1799662
  36. Lee, D. (2016, March 25). Tay: Microsoft issues apology over racist chatbot fiasco. BBC News Online. https://www.bbc.com/news/technology-35902104
  37. Li, M., & Suh, A. (2021, January). Machinelike or Humanlike? A Literature Review of Anthropomorphism in AI-Enabled Technology. In Proceedings of the 54th Hawaii International Conference on System Sciences (p. 4053).
  38. MacDorman, K. F. (2005, July). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it. In CogSci-2005 workshop: toward social mechanisms of android science (Vol. 106118).
  39. MacDorman, K. F., & Entezari, S. O. (2015). Individual differences predict sensitivity to the uncanny valley. Interaction Studies, 16(2), 141-172. https://doi.org/10.1075/is.16.2.01mac
  40. MacDorman, K. F., Green, R. D., Ho, C. C., & Koch, C. T. (2009). Too real for comfort? Uncanny responses to computer generated faces. Computers in human behavior, 25(3), 695-710. https://doi.org/10.1016/j.chb.2008.12.026
  41. Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015, March). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 117-124). IEEE.
  42. Malle, B. F., Scheutz, M., Forlizzi, J., & Voiklis, J. (2016, March). Which robot am I thinking about? The impact of action and appearance on people's evaluations of a moral robot. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 125-132). IEEE.
  43. Min, J., Kim, S., Park, Y., & Sohn, Y. W. (2018). A Comparative Study of Potential Job Candidates' Perceptions of an AI Recruiter and a Human Recruiter. Journal of the Korea Convergence Society, 9(5), 191-202. https://doi.org/10.15207/JKCS.2018.9.5.191
  44. Moosa, M. M., & Ud-Dean, S. M. (2010). Danger avoidance: An evolutionary explanation of uncanny valley. Biological Theory, 5(1), 12-14. https://doi.org/10.1162/BIOT_a_00016
  45. Mori, M. (1970). Bukimi no tani [the uncanny valley]. Energy, 7, 33-35.
  46. Morse, S. J. (2008). Psychopathy and criminal responsibility. Neuroethics, 1(3), 205-212. https://doi.org/10.1007/s12152-008-9021-9
  47. Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of social issues, 56(1), 81-103.
  48. Natarajan, M., & Gombolay, M. (2020, March). Effects of anthropomorphism and accountability on trust in human robot interaction. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 33-42).
  49. Newborn, M. (2012). Kasparov versus Deep Blue: Computer chess comes of age. Springer Science & Business Media.
  50. O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
  51. Oxford Dictionary. (n.d.). artificial intelligence. In Oxford English Dictionary. Retrieved October 28, 2021, from https://www.oed.com/viewdictionaryentry/Entry/271625
  52. Otting, S. K., & Maier, G. W. (2018). The importance of procedural justice in human-machine interactions: Intelligent systems as new decision agents in organizations. Computers in Human Behavior, 89, 27-39. https://doi.org/10.1016/j.chb.2018.07.022
  53. Savage, M. (2019, March 19). Meet Tengai, the job interview robot who won't judge you. BBC News Online. https://www.bbc.com/news/business-47442953
  54. Schein, C., & Gray, K. (2018). The theory of dyadic morality: Reinventing moral judgment by redefining harm. Personality and Social Psychology Review, 22(1), 32-70. https://doi.org/10.1177/1088868317698288
  55. Schein, C., Ritter, R. S., & Gray, K. (2016). Harm mediates the disgust-immorality link. Emotion, 16(6), 862. https://doi.org/10.1037/emo0000167
  56. Tollon, F. (2021). The artificial view: toward a non-anthropocentric account of moral patiency. Ethics and Information Technology, 23(2), 147-155. https://doi.org/10.1007/s10676-020-09540-4
  57. Torrance, S. (2006). The ethical status of artificial agents-with and without consciousness. Ethics of human interaction with robotic, bionic and AI systems: concepts and policies. Istituto Italiano per gli Studi Filosofici, Napoli, 60-66.
  58. Torrance, S. (2008). Ethics and consciousness in artificial agents. AI & Society, 22(4), 495-521. https://doi.org/10.1007/s00146-007-0091-8
  59. Verma, N., & Dombrowski, L. (2018, April). Confronting social criticisms: Challenges when adopting data-driven policing strategies. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-13).
  60. Wang, W. (2017). Smartphones as social actors? Social dispositional factors in assessing anthropomorphism. Computers in Human Behavior, 68, 334-344. https://doi.org/10.1016/j.chb.2016.11.022
  61. Wang, R., Harper, F. M., & Zhu, H. (2020, April). Factors influencing perceived fairness in algorithmic decision-making: Algorithm outcomes, development procedures, and individual differences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-14).
  62. Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219-232. https://doi.org/10.1177/1745691610369336
  63. Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113-117. https://doi.org/10.1016/j.jesp.2014.01.005
  64. Wegner, D. M., & Gray, K. (2017). The mind club: Who thinks, what feels, and why it matters. Penguin.
  65. Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2020). Robots at work: People prefer-and forgive-service robots with perceived feelings. Journal of Applied Psychology.