http://dx.doi.org/10.5391/JKIIS.2011.21.6.718

Visual Multi-touch Input Device Using Vision Camera  

Seo, Hyo-Dong (Dept. of Control and Robot Engineering, Kunsan National University)
Joo, Young-Hoon (Dept. of Control and Robot Engineering, Kunsan National University)
Publication Information
Journal of the Korean Institute of Intelligent Systems / v.21, no.6, 2011, pp. 718-723
Abstract
In this paper, we propose a visual multi-touch air input device using vision cameras. The implemented device provides a bare-handed interface that supports multi-touch operation. The proposed device is easy to apply to real-time systems because of its low computational load, and it is cheaper than existing methods based on glove data or 3-dimensional data because no additional equipment is required. To do this, we first propose an image processing algorithm based on the HSV color model and labeling of the obtained images. Also, to improve the accuracy of hand-gesture recognition, we propose a motion recognition algorithm based on geometric feature points, a skeleton model, and the Kalman filter. Finally, experiments show that the proposed device is applicable to remote controllers for video games, smart TVs, and other computer applications.
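The paper does not publish its HSV thresholds or labeling code, but the first stage it describes (HSV skin segmentation, connected-component labeling, and center-of-gravity (COG) extraction for each hand blob) can be sketched as follows. This is a minimal illustration using only the standard library; the function names and the skin-tone thresholds are assumptions for demonstration, not values from the paper.

```python
import colorsys
from collections import deque

def skin_mask(img_rgb):
    """Binary mask of skin-toned pixels; img_rgb is rows of (r, g, b) in [0, 255]."""
    mask = []
    for row in img_rgb:
        mrow = []
        for (r, g, b) in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            # Illustrative skin-tone thresholds (assumption, not from the paper).
            mrow.append(1 if (h < 0.14 and s > 0.2 and v > 0.35) else 0)
        mask.append(mrow)
    return mask

def label_components(mask):
    """4-connected component labeling via BFS; returns the label map and per-blob COGs."""
    H, W = len(mask), len(mask[0])
    labels = [[0] * W for _ in range(H)]
    cogs = {}
    next_label = 0
    for y in range(H):
        for x in range(W):
            if mask[y][x] and not labels[y][x]:
                next_label += 1
                labels[y][x] = next_label
                q = deque([(y, x)])
                ys = xs = n = 0
                while q:
                    cy, cx = q.popleft()
                    ys += cy; xs += cx; n += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                # Center of gravity of the blob = one candidate touch point.
                cogs[next_label] = (ys / n, xs / n)
    return labels, cogs
```

Each COG then serves as a candidate fingertip/touch coordinate; in the paper this is followed by geometric feature points and a skeleton model for gesture classification.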
Keywords
Visual multi-touch; Labeling; COG; Hand region; Kalman filter
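The abstract states that a Kalman filter is used to improve the accuracy of the recognized hand motion. The paper's filter parameters are not given here; the sketch below is a generic constant-velocity Kalman filter applied per coordinate of a tracked COG, with the class name and noise values chosen for illustration only.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of a tracked COG."""

    def __init__(self, q=1e-3, r=0.25):
        self.x = None              # state [position, velocity]
        self.P = [[1.0, 0.0],
                  [0.0, 1.0]]      # state covariance
        self.q, self.r = q, r      # process / measurement noise (tuning assumptions)

    def update(self, z, dt=1.0):
        """Fuse one noisy position measurement z; returns the filtered position."""
        if self.x is None:         # initialise on the first measurement
            self.x = [z, 0.0]
            return z
        p, v = self.x
        # Predict with the constant-velocity model x_k = F x_{k-1}, F = [[1, dt], [0, 1]].
        p += v * dt
        P = self.P
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + self.q]]
        # Correct with the measured position (measurement matrix H = [1, 0]).
        S = P[0][0] + self.r               # innovation covariance
        K0, K1 = P[0][0] / S, P[1][0] / S  # Kalman gain
        y = z - p                          # innovation
        self.x = [p + K0 * y, v + K1 * y]
        self.P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
                  [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        return self.x[0]
```

In a tracker of this kind, one filter per axis (x and y) smooths the jittery per-frame COG measurements, which is the usual reason a Kalman filter improves gesture-recognition stability.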
References
1 M. Kim and J. Yoon, "Implementation of Mobile Game Interface through an Operating Interface Analysis of Touch-screen Devices," Journal of Society of Design Science, No. 81, pp. 231-244, 2009.
2 J. Rantala, R. Raisamo, J. Lylykangas, V. Surakka, J. Raisamo, K. Salminen, T. Pakkanen, and A. Hippula, "Methods for Presenting Braille Characters on a Mobile Device with a Touchscreen and Tactile Feedback," IEEE Trans. on Haptics, Vol. 2, No. 1, pp. 28-39, 2009.
3 F. Wang, H. Deng, K. Ki, and Q. Ting, "A Study on Image Splicing Algorithm for Large Screen Multi-touch Technique," Machine Vision and Human-Machine Interface, pp. 526-529, April 2010.
4 V. I. Pavlovic, R. Sharma, and T. S. Huang, "Visual Interpretation of Hand Gestures for Human-computer Interaction: A Review," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 677-695, July 1997.
5 M. Ishikawa and H. Matsumura, "Recognition of a Hand-gesture Based on Self-organization Using a Data Glove," ICONIP '99, Vol. 2, pp. 739-745, 1999.
6 S. S. Fels and G. E. Hinton, "Glove-Talk: A Neural Network Interface between a Data Glove and a Speech Synthesizer," IEEE Trans. on Neural Networks, Vol. 4, No. 1, pp. 2-8, January 1993.
7 R. Cowie, "Emotion Recognition in Human-Computer Interaction," IEEE Signal Processing Magazine, Vol. 18, pp. 32-80, January 2001.
8 T. Huang and V. Pavlovic, "Hand Gesture Modeling, Analysis, and Synthesis," International Workshop on Automatic Face and Gesture Recognition, pp. 73-79, June 1995.
9 F. Karray, M. Alemzadeh, J. Saleh, and M. Arab, "Human-Computer Interaction: Overview on State of the Art," Int. Journal on Smart Sensing and Intelligent Systems, Vol. 1, No. 1, pp. 137-159, March 2008.
10 F. K. H. Quek, "Toward a Vision-Based Hand Gesture Interface," Conference on Virtual Reality Software and Technology, pp. 23-26, August 1994.
11 J. Han, "Low-Cost Multi-Touch Sensing through Frustrated Total Internal Reflection," Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, pp. 115-118, 2005.
12 S. Oviatt, "Advances in Robust Multimodal Interface Design," IEEE Computer Graphics and Applications, Vol. 23, pp. 62-68, October 2003.
13 M. Kolsch, R. Bane, T. Hollerer, and M. Turk, "Multimodal Interaction with a Wearable Augmented Reality System," IEEE Computer Graphics and Applications, Vol. 26, No. 3, pp. 62-71, June 2006.
14 C. Spence, "Cross-modal Attention and Multisensory Integration: Implications for Multimodal Interface Design," Int. Conference on Multimodal Interfaces, Vol. 5, November 2003.