
NUI/NUX framework based on intuitive hand motion

  • Lee, Gwanghyung (Department of Computer Engineering, Sejong University) ;
  • Shin, Dongkyoo (Department of Computer Engineering, Sejong University) ;
  • Shin, Dongil (Department of Computer Engineering, Sejong University)
  • Received : 2014.02.07
  • Accepted : 2014.04.18
  • Published : 2014.06.30

Abstract

The natural user interface/experience (NUI/NUX) enables natural, motion-based interaction without devices or tools such as mice, keyboards, pens, and markers. Until now, typical motion-recognition methods have relied on markers, receiving the coordinates of each marker as relative input values and storing those coordinates in a database. However, recognizing motion accurately requires more markers, and attaching the markers and processing their data takes considerable time. In addition, because NUI/NUX frameworks have been developed without regard for intuitiveness, their most important property, usability problems arise and users are forced to memorize the conventions of many different frameworks. To address these problems, this paper presents a marker-free implementation that anyone can operate. We design a multimodal NUI/NUX framework that handles voice, body motion, and facial expression simultaneously, and we propose a new mouse-operation algorithm that recognizes intuitive hand gestures and maps them onto the monitor. The resulting "hand mouse" is implemented so that users can operate it easily and intuitively.
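The core of the proposed hand mouse is mapping a tracked hand position onto monitor coordinates. The abstract gives no pseudocode, so the sketch below illustrates one plausible form of such a mapping under assumed conventions: the tracker reports hand coordinates normalized to [0, 1], and only a central "active zone" of that range is rescaled to the full screen so the user need not stretch the arm to reach the edges. The function name, zone bounds, and screen size are illustrative, not taken from the paper.

```python
def hand_to_screen(hand_x, hand_y, screen_w=1920, screen_h=1080,
                   active_min=0.2, active_max=0.8):
    """Map a normalized hand position (0..1) to monitor pixel coordinates.

    Only the central active zone [active_min, active_max] of the tracked
    range is used; positions outside it clamp to the screen border.
    (Illustrative sketch; parameter values are assumptions.)
    """
    span = active_max - active_min
    # Clamp the raw hand position into the active zone.
    nx = min(max(hand_x, active_min), active_max)
    ny = min(max(hand_y, active_min), active_max)
    # Rescale the active zone to [0, 1], then to screen pixels.
    nx = (nx - active_min) / span
    ny = (ny - active_min) / span
    return int(nx * (screen_w - 1)), int(ny * (screen_h - 1))
```

In practice the raw cursor position would also be smoothed (e.g., with a moving average) before being handed to the OS, since per-frame tracking jitter otherwise makes the cursor shake.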

References

  1. I-Tsun Chiang, Jong-Chang Tsai, "Using Xbox 360 Kinect Games on Enhancing Visual Performance Skills on Institutionalized Older Adults with Wheelchairs", Fourth IEEE Int'l Conference on Digital Game and Intelligent Toy Enhanced Learning, pp. 263-267, 2012.
  2. Mohd Fairuz Shiratuddin, Kok Wai Wong, "Non-Contact Multi-Hand Gestures Interaction Techniques for Architectural Design in a Virtual Environment", International Conference on IT & Multimedia at UNITEN (ICIMU 2011), Malaysia, Nov 2011.
  3. J. Ohya, Y. Kitamura, et al., "Real-Time Reproduction of 3D Human Images in Virtual Space Teleconferencing", in Proc. of '93 IEEE Virtual Reality Annual Int. Symp., pp. 408-414, 1993.
  4. O. Bau, W. E. Mackay, "OctoPocus: A Dynamic Guide for Learning Gesture-Based Command Sets", UIST 2008.
  5. C. Henrique, Q. Forster, "Design of Gesture Vocabularies through Analysis of Recognizer Performance in Gesture Space", Intelligent Systems Design and Applications, pp. 641-646, 2007.
  6. Rick Kjeldsen, John Kender, "Toward the Use of Gesture in Traditional User Interfaces", Proceedings of the Second International Conference on Automatic Face and Gesture Recognition, Oct 14-16, 1996, pp. 151-156.
  7. In-Bae Jeon, Boo-Hee Nam, "Implementation of Hand Mouse Based on Depth Sensor of the Kinect", Proceedings of the KIEEME Annual Summer Conference, July 18-20, 2012.
  8. Jae-Sun Lee, Jae-Hwan Lee, Yon-Ho Myeong, Hyeon-Kyeong Seong, "Development of Motion Recognition Hand-Mouse Using OpenCV", graduation thesis, 2010.
  9. Elena Sanchez-Nielsen, Luis Anton-Canalis, Cayetano Guerra-Artal, "An Autonomous and User-Independent Hand Posture Recognition System for Vision-Based Interface Tasks", CAEPIA'05 Proceedings of the 11th Spanish Association Conference, pp. 113-122.
  10. Richard A. Bolt, "Put-That-There: Voice and Gesture at the Graphics Interface", International Conference on Computer Graphics and Interactive Techniques, Association for Computing Machinery, pp. 262-270, 1980.
  11. Virpi Roto, Effie Law, Arnold Vermeeren, Jettie Hoonhout, "User Experience White Paper: Bringing Clarity to the Concept of User Experience", Result from Dagstuhl Seminar on Demarcating User Experience, February 11, 2011.
  12. JinOk Kim, "Impact Analysis of Nonverbal Multimodals for Recognition of Emotion Expressed Virtual Humans", Journal of Korean Society for Internet Information, vol. 13, no. 5, pp. 9-19, Oct 2012. https://doi.org/10.7472/jksii.2012.13.5.9
  13. Hoyoung Hwang, Hyo-Joong Suh, "A Design and Implementation of Efficient Portable Braille Point System for the Visually Impaired Persons", Journal of Korean Society for Internet Information, vol. 9, no. 5, Oct 2008.
  14. Jae-Young Choi, Taeg-Keun Whangbo, Nak-Bin Kim, "A Study on Improvement of Face Recognition Rate with Transformation of Various Facial Poses and Expressions", Journal of Korean Society for Internet Information, vol. 5, no. 6, pp. 79-91, Dec 2004.

Cited by

  1. Web-based 3D Virtual Experience using Unity and Leap Motion vol.21, pp.2, 2016, https://doi.org/10.7315/CADCAM.2016.159
  2. NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features vol.16, pp.6, 2015, https://doi.org/10.7472/jksii.2015.16.6.11
  3. Hand-Mouse Interface Using Virtual Monitor Concept for Natural Interaction vol.5, 2017, https://doi.org/10.1109/ACCESS.2017.2768405