Uncanny Valley: Relationships Between Anthropomorphic Attribution to Robots, Mind Perception, and Moral Care

  • Hong Im Shin (School of Liberal Arts, Sunchon National University)
  • Received : 2021.07.25
  • Accepted : 2021.09.23
  • Published : 2021.12.31

Abstract

The attribution of human traits, emotions, and intentions to nonhuman entities such as robots is known as anthropomorphism. Two studies were conducted to examine whether human-robot interaction is affected by the anthropomorphic framing of robots. In Study 1, participants were presented with pictures of robots that varied in the human similarity of their appearance. According to the results, uncanny feelings toward a robot increased with higher levels of human similarity. Furthermore, participants attributed more mind to the humanlike android robot than to the machine-like robot. In Study 2, a robot was described in a priming story as either machine-like or humanlike, and it was examined whether the two conditions differed in mind attribution and moral care. According to the findings, participants attributed more mind to the robot when its behavior was framed anthropomorphically. Furthermore, the anthropomorphized robot received a higher level of moral care than the robot described in machine-like terms. This means that a humanlike appearance may increase uncanny feelings, whereas anthropomorphic attribution may facilitate social interaction between humans and robots. Limitations and implications for future research are discussed.

Anthropomorphism is the attribution of human traits, emotions, or intentions to nonhuman entities such as robots. This research analyzed how anthropomorphism, the attribution of uniquely human characteristics to robots, affects human-robot interaction. In Study 1, participants were presented with pictures of various robots, and uncanny feelings, mind perception, and moral care were assessed with self-report questionnaires as a function of the robots' appearance. Uncanny feelings were highest in the android condition, whose appearance most closely resembled a human, compared with the humanoid and mechanical-appearance conditions. In addition, mind perception was higher for the humanlike android robot than for the machine-like robot. Study 2 compared whether uncanny feelings, mind perception, and moral care differed between a condition in which the robot's attributes were anthropomorphized and a condition in which they were not. Mind perception and moral care were higher in the anthropomorphized condition, and the more experience participants perceived in the robot, the higher the level of moral care. These results suggest that a humanlike robot appearance increases uncanny feelings, whereas anthropomorphizing a robot's attributes increases mind perception and may facilitate human-robot interaction. The discussion addresses the implications of these differentiated effects of anthropomorphism on human-robot interaction, as well as the study's limitations and directions for future research.

Acknowledgement

This paper was supported by a research grant from Sunchon National University.
