
An Arrangement Method of Voice and Sound Feedback According to the Operation: For Interaction of Domestic Appliance

Hong, Eun-ji (UX/Contents, Graduate School of Information, Yonsei University)
Hwang, Hae-jeong (UX/Contents, Graduate School of Information, Yonsei University)
Kang, Youn-ah (Techno-Art Division, Yonsei University)
Publication Information
Journal of the HCI Society of Korea, v.11, no.2, 2016, pp. 15-22
Abstract
The ways to interact with digital appliances are becoming more diverse. Users can control appliances with a remote control or a touch screen, and appliances can provide feedback in various ways, such as sound, voice, and visual signals. However, there is little research on how to determine which output method to use for feedback according to the user's input method. In this study, we designed an experiment to identify how to appropriately match the output method (voice or sound) to the user's input method (voice or button). We created four interaction types by combining the two input methods with the two output methods, and compared them in terms of usability, perceived satisfaction, preference, and suitability. Results reveal that the output method affects the ease of use and perceived satisfaction of the input method. The voice input method was rated as more satisfying with sound feedback than with voice feedback, whereas the keying input method was rated as more satisfying with voice feedback than with sound feedback. The keying input method was more dependent on the output method than the voice input method was. We also found that the feedback method of an appliance determines the perceived appropriateness of the interaction.
Keywords
Voice User Interface; Interaction; Speech Feedback; Sound Feedback; Perceived Satisfaction
Citations & Related Records
연도 인용수 순위