http://dx.doi.org/10.19066/cogsci.2022.33.4.002

The effect of trust repair behavior on human-robot interaction  

Hoyoung Maeng (Interdisciplinary Program in Cognitive Science, Seoul National University)
Whani Kim (Department of Psychology, Seoul National University)
Jaeun Park (Department of Psychology, Seoul National University)
Sowon Hahn (Interdisciplinary Program in Cognitive Science, Seoul National University)
Publication Information
Korean Journal of Cognitive Science, v.33, no.4, 2022, pp. 205-228
Abstract
This study examined how a robot's social and relational repair behaviors affect human perception in human-robot interaction. In the experiment, participants rated their trust in the robot Nao after watching a video in which the robot made an error while interacting with a human and then attempted to restore trust. The trust repair behavior was manipulated across three conditions: an internal-attribution condition, in which the robot acknowledged the error and apologized; an external-attribution condition, in which the robot apologized but attributed the error to external causes; and a no-repair condition, in which the robot denied the error and took no corrective action. The results showed that the error was perceived as less serious, and the robot's ability was rated higher, when the robot apologized than when it did not. These findings indicate that human attitudes toward robots respond sensitively to how robots behave and how they handle their errors, suggesting that human perceptions of robots are malleable. In particular, the finding that robots are judged more trustworthy when they acknowledge and apologize for their own errors shows that human-like, socially polite behavior can promote positive human-robot interaction.
Keywords
human-robot interaction; robot failure; trust violation; trust repair