http://dx.doi.org/10.14695/KJSOS.2018.21.1.59

Applying Social Strategies for Breakdown Situations of Conversational Agents: A Case Study using Forewarning and Apology  

Lee, Yoomi (School of Design, Kyungil University)
Park, Sunjeong (Department of Industrial Design, KAIST)
Suk, Hyeon-Jeong (Department of Industrial Design, KAIST)
Publication Information
Science of Emotion and Sensibility, v.21, no.1, 2018, pp. 59-70
Abstract
With breakthroughs in speech recognition technology, conversational agents have become pervasive through smartphones and smart speakers. The accuracy of speech recognition has approached human-level performance, but agents still show limitations in understanding the underlying meaning or intention of utterances and in following long conversations. Accordingly, users experience various errors when interacting with conversational agents, which may negatively affect the user experience. In addition, for smart speakers, whose main interface is voice, the lack of system feedback and transparency has been reported as a major issue during use. There is therefore a strong need for research on how to help users better understand the capabilities of conversational agents and how to mitigate negative emotions in error situations. In this study, we applied two social strategies, "forewarning" and "apology", to a conversational agent and investigated how these strategies affect users' perceptions of the agent in breakdown situations. For the study, we created a series of demo videos of a user interacting with a conversational agent. After watching the demo videos, participants evaluated in an online survey how much they liked and trusted the agent. Responses from a total of 104 participants were analyzed, and the results were contrary to our expectations based on the literature: forewarning gave users a negative impression of the agent, especially of its reliability, and an apology in a breakdown situation did not affect users' perceptions. In follow-up in-depth interviews, participants explained that they perceived the smart speaker as a machine rather than as a human-like entity, and that for this reason the social strategies did not work. These results show that social strategies should be applied according to the perceptions that users hold toward agents.
Keywords
Conversational Agents; Smart Home; Agent-based Interface; Social Interface
  • Reference
1 Luger, E., & Sellen, A. (2016). "Like Having a Really bad PA": The Gulf between User Expectation and Experience of Conversational Agents. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5286-5297. DOI: 10.1145/2858036.2858288
2 Mennicken, S., Zihler, O., Juldaschewa, F., Molnar, V., Aggeler, D., & Huang, E. M. (2016). It's like living with a friendly stranger. Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 120-131. DOI: 10.1145/2971648.2971757
3 Nass, C., & Moon, Y. (2000). Machines and mindlessness: social responses to computers. Journal of Social Issues, 56(1), 81-103. DOI: 10.1111/0022-4537.00153   DOI
4 Park, J.Y. (2007) Effects of the interaction with computer agents on users' psychological experiences. Science of Emotion & Sensibility, 10(2), 155-168.
5 Price, R. (2017). Microsoft's AI is getting crazily good at speech recognition. Retrieved from http://uk.businessinsider.com/microsofts-speech-recognition-5-1-error-rate-human-level-accuracy-2017-8
6 Purington, A., Taft, J. G., Sannon, S., Bazarova, N. N., & Taylor, S. H. (2017). "Alexa is my new BFF": Social roles, user satisfaction, and personification of the Amazon echo. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, 2853-2859. DOI: 10.1145/3027063.3053246
7 Song, H.S., Kim, M.J., Jeong, S.H., Suk, H.J., Kwon, D.S., & Kim, M.S. (2008). The behavioral patterns of neutral affective state for service robot using video ethnography. Science of Emotion & Sensibility, 11(4), 629-636.
8 von der Putten, A. M., Kramer, N. C., Gratch, J., & Kang, S.-H. (2010). "It doesn't matter what you are!" Explaining social effects of agents and avatars. Computers in Human Behavior, 26(6), 1641-1650. DOI: 10.1016/j.chb.2010.06.012   DOI
9 Appel, J., von der Putten, A., Kramer, N. C., & Gratch, J. (2012). Does humanity matter? Analyzing the importance of social cues and perceived agency of a computer system for the emergence of social reactions during human-computer interaction. Advances in Human-Computer Interaction, 13(2012), 1-10. DOI: 10.1155/2012/324694
10 Woods, S., Walters, M., Koay, K. L., & Dautenhahn, K. (2006). Comparing human robot interaction scenarios using live and video based methods: towards a novel methodological approach. In Proceedings of the 9th International Workshop on Advanced Motion Control. 750-755. DOI: 10.1109/AMC.2006.1631754
11 Bickmore, T., & Cassell, J. (2001). Relational agents: a model and implementation of building user trust. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 396-403. DOI: 10.1145/365024.365304
12 Li, A. (2017). Google's speech recognition is now almost as accurate as humans. Retrieved from https://9to5google.com/2017/06/01/google-speech-recognition-humans/
13 Cassell, J. (2000). Embodied conversational interface agents. Communications of the ACM, 43(4), 70-78. DOI: 10.1145/332051.332075   DOI
14 Jarnow, J. (2016, April 8). Why our crazy-smart AI still sucks at transcribing speech. Retrieved from https://www.wired.com/2016/04/long-form-voice-transcription/
15 Lee, M. K., Kielser, S., Forlizzi, J., Srinivasa, S., & Rybski, P. (2010). Gracefully mitigating breakdowns in robotic services. Proceedings of the 5th ACM/IEEE International Conference on Human-robot Interaction, 203-210. DOI: 10.1145/1734454.1734544