• Title/Summary/Keyword: Hand gesture


Hand Gesture based Manipulation of Meeting Data in Teleconference (핸드제스처를 이용한 원격미팅 자료 인터페이스)

  • Song, Je-Hoon;Choi, Ki-Ho;Kim, Jong-Won;Lee, Yong-Gu
    • Korean Journal of Computational Design and Engineering / v.12 no.2 / pp.126-136 / 2007
  • Teleconferences have been used in business sectors to reduce traveling costs. Traditionally, specialized telephones that enabled multiparty conversations were used. With the introduction of high-speed networks, we now have high-definition video that adds more realism to the presence of counterparts who could be thousands of miles away. This paper presents a new technology that adds even more realism by telecommunicating with hand gestures. This technology is part of a teleconference system named SMS (Smart Meeting Space). In SMS, a person can use hand gestures to manipulate meeting data in the form of text, audio, video, or 3D shapes. For detecting hand gestures, a machine learning algorithm called SVM (Support Vector Machine) has been used. For the prototype system, a 3D interaction environment has been implemented with OpenGL™, in which a 3D human skull model can be grasped and moved in 6-DOF during a remote conversation between distant persons.
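The abstract names SVM as the gesture detector but gives no features or kernel; as a rough illustration only, here is a minimal hinge-loss linear SVM trained by subgradient descent on synthetic 2-D stand-in features (the class names in the comments are invented, not the paper's gesture set):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),   # class -1 (e.g. "release")
               rng.normal(+2.0, 0.5, (50, 2))])  # class +1 (e.g. "grasp")
y = np.array([-1] * 50 + [1] * 50)

w, b, lr, lam = np.zeros(2), 0.0, 0.1, 0.01      # weights, bias, step, L2 reg
for _ in range(200):
    viol = y * (X @ w + b) < 1                   # samples violating the margin
    grad_w = lam * w
    if viol.any():
        grad_w -= (y[viol][:, None] * X[viol]).mean(axis=0)
        b += lr * y[viol].mean()
    w -= lr * grad_w

pred = np.sign(X @ w + b)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

A real system would use richer hand-shape features and likely a kernel SVM, but the margin-maximizing decision rule is the same.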

Hand Gesture Segmentation Method using a Wrist-Worn Wearable Device

  • Lee, Dong-Woo;Son, Yong-Ki;Kim, Bae-Sun;Kim, Minkyu;Jeong, Hyun-Tae;Cho, Il-Yeon
    • Journal of the Ergonomics Society of Korea / v.34 no.5 / pp.541-548 / 2015
  • Objective: We introduce a hand gesture segmentation method using a wrist-worn wearable device that can recognize the simple gestures of clenching and unclenching one's fist. Background: There are many types of smart watches and fitness bands on the market, and most of them already adopt gesture interaction to provide ease of use. However, malfunctions often occur because it is difficult to distinguish the user's gesture commands from the motions of daily life. A simple and clear gesture segmentation method is needed to improve gesture interaction performance. Method: First, we defined the gestures of making a fist (start of a gesture command) and opening one's fist (end of a gesture command) as segmentation gestures. The gestures of clenching and unclenching one's fist are simple and intuitive, and we designed a single gesture consisting of making a fist, a command gesture, and opening one's fist, in that order. To detect the segmentation gestures at the bottom of the wrist, we used a wrist strap on which an array of infrared sensors (emitters and receivers) was mounted. When a user makes or opens a fist, the shape of the bottom of the wrist changes, which simultaneously changes the amount of reflected infrared light detected by the receiver sensors. Results: An experiment was conducted to evaluate gesture segmentation performance. 12 participants took part: 10 males and 2 females, with an average age of 38. The recognition rates of the segmentation gestures, clenching and unclenching one's fist, are 99.58% and 100%, respectively. Conclusion: Through the experiment, we evaluated gesture segmentation performance and its usability. The experimental results show the potential of the suggested segmentation method. Application: The results of this study can be used to develop guidelines to prevent injury in auto workers at mission assembly plants.
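The segmentation idea above, clench to start and unclench to end, detected from changes in reflected IR, can be sketched as a simple threshold rule. The sensor count, reading values, and thresholds below are assumptions, not the paper's:

```python
def segment(frames, clench_thr=0.6, open_thr=0.3):
    """Return (start, end) frame indices: start when the mean IR reflectance
    rises above clench_thr (fist made), end when it falls below open_thr
    (fist opened). Thresholds are illustrative, not calibrated values."""
    start = end = None
    for i, frame in enumerate(frames):
        level = sum(frame) / len(frame)        # mean over the sensor array
        if start is None and level > clench_thr:
            start = i                          # gesture command begins
        elif start is not None and end is None and level < open_thr:
            end = i                            # gesture command ends
    return start, end

# Simulated readings from a 4-sensor array: rest, clench, command, unclench.
frames = [[0.10] * 4, [0.20] * 4, [0.70] * 4, [0.80] * 4,
          [0.75] * 4, [0.20] * 4, [0.10] * 4]
result = segment(frames)
print(result)  # → (2, 5)
```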

Gesture Spotting by Web-Camera in Arbitrary Two Positions and Fuzzy Garbage Model (임의 두 지점의 웹 카메라와 퍼지 가비지 모델을 이용한 사용자의 의미 있는 동작 검출)

  • Yang, Seung-Eun
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.127-136 / 2012
  • Much research on vision-based hand gesture recognition has been conducted to enable users to operate various electronic devices more easily. To recognize hand gestures accurately, 3D position calculation and classification of meaningful gestures from similar ones must be performed. This paper describes a simple and cost-effective method for 3D position calculation and gesture spotting (the task of recognizing a meaningful gesture among similar meaningless ones). The 3D position is obtained by calculating the relative position of the two cameras through a pan/tilt module and a marker, regardless of where the cameras are placed. A fuzzy garbage model is proposed to provide a variable reference value for deciding whether a user gesture is a command gesture. The reference is obtained from a fuzzy command gesture model and a fuzzy garbage model, which return scores indicating the degree of membership in the command gesture and garbage gesture classes, respectively. Two-stage user adaptation is proposed to enhance performance: off-line (batch) adaptation for inter-personal differences and on-line (incremental) adaptation for intra-personal differences. An experiment was conducted with 5 different users. The recognition rate for commands (discriminating command gestures) is more than 95% when only one command-like meaningless gesture exists and more than 85% when the command is mixed with many other similar gestures.
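The variable reference value can be sketched as two fuzzy scores compared against each other: a command-model score and a garbage-model score. The Gaussian membership functions and their parameters below are assumptions, not the paper's trained models:

```python
import math

def membership(x, center, width):
    """Gaussian fuzzy membership (assumed shape, not from the paper)."""
    return math.exp(-((x - center) / width) ** 2)

def spot(feature, cmd_center=1.0, garbage_center=0.0, width=0.5):
    cmd = membership(feature, cmd_center, width)          # command-model score
    garbage = membership(feature, garbage_center, width)  # garbage-model score
    return ("command" if cmd > garbage else "garbage", cmd, garbage)

print(spot(0.9)[0])  # feature near the command model
print(spot(0.1)[0])  # feature near the garbage model
```

The garbage score acts as a per-input threshold, which is what makes the reference value "variable" rather than a fixed cutoff.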

A Personalized Hand Gesture Recognition System using Soft Computing Techniques (소프트 컴퓨팅 기법을 이용한 개인화된 손동작 인식 시스템)

  • Jeon, Moon-Jin;Do, Jun-Hyeong;Lee, Sang-Wan;Park, Kwang-Hyun;Bien, Zeung-Nam
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.53-59 / 2008
  • Recently, vision-based hand gesture recognition techniques have been developed to assist elderly and disabled people in controlling home appliances. The problems that frequently lower the hand gesture recognition rate are inter-person variation and intra-person variation. The recognition difficulty caused by inter-person variation can be handled with a user-dependent model and a model selection technique, and that caused by intra-person variation can be handled with fuzzy logic. In this paper, we propose a multivariate fuzzy decision tree learning and classification method for a hand motion recognition system for multiple users. When a user starts to use the system, the most appropriate recognition model is selected and used for that user.
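The model-selection step, picking the most appropriate user-dependent model when a user starts the system, can be sketched as a nearest-prototype choice. The prototypes, feature values, and distance metric are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Stored per-user feature prototypes (made-up values for illustration).
prototypes = {"userA": np.array([0.2, 0.8]),
              "userB": np.array([0.9, 0.1])}

def select_model(sample):
    """Pick the user model whose prototype is nearest to the new sample."""
    return min(prototypes, key=lambda u: np.linalg.norm(prototypes[u] - sample))

chosen = select_model(np.array([0.25, 0.7]))
print(chosen)  # → userA
```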

Finger Directivity Recognition Algorithm using Shape Decomposition (형상분해를 이용한 손가락 방향성 인식 알고리즘)

  • Choi, Jong-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.4 no.3 / pp.197-201 / 2011
  • The use of gestures provides an attractive alternative to cumbersome interfaces for human-computer interaction. This has motivated a very active research area concerned with computer-vision-based recognition of hand gestures. One of the most important issues in hand gesture recognition is recognizing the directivity of the fingers. The primitive elements extracted from a hand gesture contain very important information on finger directivity. In this paper, we propose a finger directivity recognition algorithm that uses the cross points of a circle and the sub-primitive elements. The radius of the circle is increased from the minimum radius enclosing the main primitive element until it includes the sub-primitive elements. Through experiments, we demonstrate the efficiency of the proposed algorithm.
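The directivity idea, growing a circle from the main primitive until it crosses a sub-primitive and reading the finger direction from the crossing point, reduces to an angle computation once the crossing point is known. The coordinates below are made up:

```python
import math

palm = (0.0, 0.0)        # centroid of the main primitive element (palm)
fingertip = (3.0, 3.0)   # crossing point on a sub-primitive element (finger)

dx, dy = fingertip[0] - palm[0], fingertip[1] - palm[1]
angle = math.degrees(math.atan2(dy, dx))
print(f"finger direction: {angle:.0f} degrees")  # → 45 degrees
```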

Part-based Hand Detection Using HOG (HOG를 이용한 파트 기반 손 검출 알고리즘)

  • Baek, Jeonghyun;Kim, Jisu;Yoon, Changyong;Kim, Dong-Yeon;Kim, Euntai
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.6 / pp.551-557 / 2013
  • In intelligent robot research, hand gesture recognition has been an important issue, and techniques that recognize simple gestures have been commercialized in smart phones and smart TVs for swiping the screen or controlling the volume. For gesture recognition, robust hand detection is necessary, but it is challenging because the hand's shape is complex and hard to detect against cluttered backgrounds and under varying illumination. In this paper, we propose an efficient hand detection algorithm for detecting a pointing hand, to recognize the place the user pointed at. To minimize false detections, ROIs are generated within a compact search region using the skin color detection result. The ROIs are verified by HOG-SVM, and the pointing direction is computed from the detection results for both the head-shoulder region and the hand. Experiments show that the proposed method performs well for hand detection.
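The HOG descriptor used for ROI verification can be approximated by a gradient-orientation histogram. This simplified numpy sketch (one histogram over the whole patch, no cells or blocks, synthetic image) only illustrates the idea; a real HOG-SVM pipeline would use cell/block normalization and a trained classifier:

```python
import numpy as np

def orientation_histogram(patch, bins=9):
    """Crude HOG-style descriptor: one orientation histogram per patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                               # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0         # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)          # L2-normalized

img = np.zeros((32, 32))
img[8:24, 14:18] = 1.0     # a bright vertical bar as a stand-in "hand" ROI
descriptor = orientation_histogram(img)
print(descriptor.shape)    # → (9,)
```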

A hand gesture recognition method for an intelligent smart home TV remote control system (스마트 홈에서의 TV 제어 시스템을 위한 손 제스처 인식 방법)

  • Kim, Dae-Hwan;Cho, Sang-Ho;Cheon, Young-Jae;Kim, Dai-Jin
    • Proceedings of the Korean Information Science Society Conference / 2007.10c / pp.516-520 / 2007
  • This paper presents an intuitive, simple, and easy smart home TV remote control system using hand gesture recognition. Hand candidate regions are detected by a cascading policy over parts of the human anatomy on the disparity map image, and the exact hand region is extracted by the graph-cuts algorithm using skin color information. Hand postures are represented by shape features extracted with a simple shape extraction method. We use forward spotting with accumulative HMMs for the smart home TV remote control system. Experimental results show that the proposed system achieves a good recognition rate of 97.33% for TV remote control in real time.
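Forward spotting with accumulative HMMs is more involved than can be shown here; as a loose illustration of the spotting idea only, this sketch accumulates made-up per-frame score differences between a gesture model and a non-gesture (threshold) model and fires when the accumulated score exceeds a margin:

```python
# Made-up per-frame log-likelihood differences: gesture model minus
# non-gesture (threshold) model. None of these values come from the paper.
frame_scores = [-0.5, -0.2, 0.4, 0.6, 0.3, -0.4]

acc, start = 0.0, None
for i, s in enumerate(frame_scores):
    acc = max(0.0, acc + s)        # reset while the non-gesture model wins
    if start is None and acc > 0.5:
        start = i                  # gesture spotted at this frame
print(start)  # → 3
```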


A Study of Hand Gesture Recognition for Human Computer Interface (컴퓨터 인터페이스를 위한 Hand Gesture 인식에 관한 연구)

  • Chang, Ho-Jung;Baek, Han-Wook;Chung, Chin-Hyun
    • Proceedings of the KIEE Conference / 2000.07d / pp.3041-3043 / 2000
  • GUI (graphical user interface) has been the dominant platform for HCI (human-computer interaction). The GUI-based style of interaction has made computers simpler and easier to use. However, GUI will not easily support the range of interaction necessary to meet users' needs for something natural, intuitive, and adaptive. In this paper we study an approach to tracking a hand in an image sequence and recognizing it in each video frame, replacing the mouse as a pointing device for virtual reality. An algorithm for real-time processing is proposed that estimates the position of the hand and segments it, considering the orientation of motion and the color distribution of the hand region.
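The skin-color cue mentioned above is often implemented as a per-pixel rule; the RGB thresholds below are a common rule of thumb, not the paper's trained color distribution:

```python
import numpy as np

def skin_mask(img):
    """Per-pixel skin test on an HxWx3 uint8 RGB image (rule-of-thumb
    thresholds, assumed for illustration)."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r > g) & (r > b) & (np.abs(r - g) > 15))

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 120, 90)    # one skin-like pixel
mask = skin_mask(img)
print(mask)
```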


Gesture Recognition based on Mixture-of-Experts for Wearable User Interface of Immersive Virtual Reality (몰입형 가상현실의 착용식 사용자 인터페이스를 위한 Mixture-of-Experts 기반 제스처 인식)

  • Yoon, Jong-Won;Min, Jun-Ki;Cho, Sung-Bae
    • Journal of the HCI Society of Korea / v.6 no.1 / pp.1-8 / 2011
  • As virtual reality has become an issue for providing immersive services, it has been actively investigated to develop user interfaces for immersive interaction. In this paper, we propose a gesture-recognition-based immersive user interface using an IR-LED-embedded helmet and data gloves in order to reflect the user's movements in the virtual reality environment effectively. The system recognizes the user's head movements with the IR-LED-embedded helmet and an IR signal transmitter, and the hand gestures with the data gathered from the data gloves. Hand gestures are difficult to recognize accurately with a general recognition model because human hands consist of many articulations, yielding a large variety of gestures, and users differ in hand size and hand movement. In this paper, we applied Mixture-of-Experts-based gesture recognition to handle the various hand gestures of multiple users accurately. The movement of the user's head is used to change the perspective in the virtual environment to match the movement in the real world, and the gestures of the user's hand can be used as inputs to the virtual environment. A head-mounted display (HMD) can be used with the proposed system to immerse the user in the virtual environment. In order to evaluate the usefulness of the proposed interface, we developed an interface for a virtual orchestra environment. The experiment verified that the user can use the system easily and intuitively while being entertained.
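The Mixture-of-Experts combination can be sketched in a few lines: a softmax gate weights each expert's class distribution per input. The experts and gate below are random stand-ins, not trained models from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=4)                                  # a gesture feature vector
experts = [rng.normal(size=(3, 4)) for _ in range(2)]   # two 3-class linear experts
gate = rng.normal(size=(2, 4))                          # gating-network weights

weights = softmax(gate @ x)                  # per-input trust in each expert
combined = sum(w * softmax(E @ x) for w, E in zip(weights, experts))
print(combined.round(3))                     # a valid 3-class distribution
```

Because the gate re-weights the experts for every input, different experts can specialize in different users or gesture styles, which is the property the paper relies on.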


HAND GESTURE INTERFACE FOR WEARABLE PC

  • Nishihara, Isao;Nakano, Shizuo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.664-667 / 2009
  • There is strong demand for wearable PC systems that can support the user outdoors. When we are outdoors, our movement makes it impossible to use traditional input devices such as keyboards and mice. We propose a hand gesture interface based on image processing to operate wearable PCs. A semi-transparent PC screen is displayed on the head-mounted display (HMD), and the user makes hand gestures to select icons on the screen. The user's hand is extracted from the images captured by a color camera mounted above the HMD. Since skin color can vary widely due to outdoor lighting effects, a key problem is accurately discriminating the hand from the background. The proposed method does not assume any fixed skin color space. First, the image is divided into blocks, and blocks with similar average color are linked. Contiguous regions are then subjected to hand recognition. Blocks on the edges of the hand region are subdivided for more accurate finger discrimination. A change in hand shape is recognized as hand movement. Our current input interface associates a hand grasp with a mouse click. Tests on a prototype system confirm that the proposed method recognizes hand gestures accurately at high speed. We intend to develop a wider range of recognizable gestures.
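The block-linking step, dividing the image into blocks and linking neighbouring blocks with similar average color, can be sketched as follows; the block size, similarity threshold, and greedy row-scan linking are assumptions, not the paper's exact procedure:

```python
import numpy as np

def block_means(img, bs):
    """Average color of each bs x bs block of an HxWx3 image."""
    h, w = img.shape[:2]
    return img.reshape(h // bs, bs, w // bs, bs, 3).mean(axis=(1, 3))

def link(means, thr=30.0):
    """Greedily give neighbouring blocks with similar means the same label."""
    h, w = means.shape[:2]
    labels = -np.ones((h, w), dtype=int)
    nxt = 0
    for i in range(h):
        for j in range(w):
            if j > 0 and np.linalg.norm(means[i, j] - means[i, j - 1]) < thr:
                labels[i, j] = labels[i, j - 1]
            elif i > 0 and np.linalg.norm(means[i, j] - means[i - 1, j]) < thr:
                labels[i, j] = labels[i - 1, j]
            else:
                labels[i, j] = nxt
                nxt += 1
    return labels

img = np.zeros((4, 8, 3))
img[:, 4:] = 200.0            # left half dark, right half bright
labels = link(block_means(img, 2))
print(labels)
```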
