A Study on the In-Vehicle Voice Interaction Structure Considering Implicit Cues of Conversation Persistence

  • Namkung, Kiechan (Industry Academic Cooperation Foundation, Kookmin University)
  • Received : 2020.12.08
  • Accepted : 2021.02.20
  • Published : 2021.02.28

Abstract

This study explores the conversational behavior of users of an in-vehicle voice interaction system. Its purpose is to identify the conversational elements that users expect in voice interactions with a system and to propose structural improvements that make those interactions more like conversations between people. To observe users' voice interaction behavior in the vehicle, data were collected through contextual inquiry and the interview transcripts were analyzed using open coding. This allowed us to examine the usefulness of the voice interaction features, which is important because perceived usefulness increases users' satisfaction with the features and the persistence of their use. The study is meaningful in that it analyzes users' experiential needs for the technology from the perspective of conversation, an interpersonal model.
