• Title/Summary/Keyword: hand tracking

349 results

Tracking and Recognizing Hand Gestures using Kalman Filter and Continuous Dynamic Programming (연속DP와 칼만필터를 이용한 손동작의 추적 및 인식)

  • 문인혁;금영광
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.13-16
    • /
    • 2002
  • This paper proposes a method to track hand gestures and to recognize gesture patterns using a Kalman filter and continuous dynamic programming (CDP). The positions of the hands are predicted by the Kalman filter, and the pixels corresponding to the hands are extracted by a skin color filter. The center of gravity of the hand is used as the input pattern vector. The input gesture is then recognized by matching it against the reference gesture patterns using CDP. Experimental results on recognizing a circle-shaped gesture and intention gestures such as “Come on” and “Bye-bye” show that the proposed method is feasible for hand gesture-based human-computer interaction.
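As a rough illustration of the prediction step, the sketch below applies a constant-velocity Kalman filter independently to one coordinate of the hand centroid; the noise settings `q` and `r` are illustrative assumptions, not values from the paper.

```python
# Minimal 1-D constant-velocity Kalman filter for one coordinate of the
# hand centroid. Run one instance per axis (x and y) in practice.

class Kalman1D:
    def __init__(self, x0, q=1e-2, r=1.0):
        self.x = [x0, 0.0]                  # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + dt * v, v]            # constant-velocity motion model
        P = self.P
        # P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]
        return self.x[0]                    # predicted position

    def update(self, z):
        # The measurement is position only: H = [1, 0]
        s = self.P[0][0] + self.r           # innovation covariance
        k0 = self.P[0][0] / s               # Kalman gain
        k1 = self.P[1][0] / s
        y = z - self.x[0]                   # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        P = self.P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        return self.x[0]

kf = Kalman1D(x0=0.0)
track = []
for z in [1.0, 2.1, 2.9, 4.2, 5.0]:  # noisy x-coordinates from the skin filter
    kf.predict()
    track.append(kf.update(z))
```

The `predict` output can be used to constrain the skin-color search window for the next frame, which is the role prediction plays in the pipeline above.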


Block-based Multiple Cameras Hand-off for Continuous Object Tracking and Surveillance (연속적인 물체 추적과 감시를 위한 Block 기반 다중 카메라들 간의 Hand-off 기술)

  • Kim, Ji-Man;Kim, Dai-Jin
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2007.10c
    • /
    • pp.419-423
    • /
    • 2007
  • The importance of surveillance and security is growing, and efficient algorithms and systems for continuously tracking a moving object with multiple cameras are being actively developed. This paper proposes a hand-off technique between multiple cameras for continuous object tracking. First, several preprocessing steps are applied to detect moving objects. Then, to establish the correspondence between the detected regions, the master camera that best detects the object is selected, and the next master camera is predicted from the object's trajectory. The predicted camera information, together with color information, is used to confirm that the same object is being tracked. Experimental results show how the master camera changes as a specific object moves.
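A minimal sketch of the hand-off decision described above, assuming the master camera is simply the one with the largest detected blob and that identity is confirmed by color-histogram intersection; both are hypothetical simplifications, and the camera names and histograms are made up.

```python
# Master-camera selection: the camera whose blob for the target is largest.
def select_master(detections):
    """detections: {camera_id: blob area in pixels (0 if not seen)}."""
    return max(detections, key=detections.get)

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] between two color histograms of equal length."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / max(sum(h1), 1)

frame = {"cam_A": 1200, "cam_B": 3400, "cam_C": 0}
master = select_master(frame)

# Confirm the new master sees the same object as the previous one.
prev_hist = [10, 40, 30, 20]
new_hist = [12, 38, 28, 22]
same_object = histogram_intersection(prev_hist, new_hist) > 0.8
```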


Marker Classification by Sensor Fusion for Hand Pose Tracking in HMD Environments using MLP (HMD 환경에서 사용자 손의 자세 추정을 위한 MLP 기반 마커 분류)

  • Vu, Luc Cong;Choi, Eun-Seok;You, Bum-Jae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.10a
    • /
    • pp.920-922
    • /
    • 2018
  • This paper describes a method to classify simple circular artificial markers on the surfaces of a box on the back of the hand, in order to detect the pose of the user's hand for VR/AR applications using a Leap Motion camera and two IMU sensors. One IMU sensor is located in the box and the other is fixed to the camera. A multi-layer perceptron (MLP) is adopted to classify the artificial markers on each surface tracked by the camera, using the IMU sensor data. The method runs successfully in real time at 70 Hz on a PC.
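A forward pass of a one-hidden-layer MLP of the kind used for surface classification might look as follows; the weights are illustrative hand-picked values, not trained parameters, and the two-class setup is an assumption for the sake of a small example.

```python
import math

# One-hidden-layer MLP forward pass with tanh hidden units and a softmax
# output over surface classes.

def mlp_forward(x, W1, b1, W2, b2):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    logits = [sum(w * hi for w, hi in zip(row, h)) + b
              for row, b in zip(W2, b2)]
    exps = [math.exp(l - max(logits)) for l in logits]  # stable softmax
    s = sum(exps)
    return [e / s for e in exps]

# Hand-picked toy weights: class 0 fires on the first feature, class 1 on
# the second (stand-ins for fused camera/IMU features).
W1 = [[1.0, 0.0], [0.0, 1.0]]
b1 = [0.0, 0.0]
W2 = [[1.0, -1.0], [-1.0, 1.0]]
b2 = [0.0, 0.0]

probs = mlp_forward([2.0, 0.0], W1, b1, W2, b2)
```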

A Computer Vision Approach for Identifying Acupuncture Points on the Face and Hand Using the MediaPipe Framework (MediaPipe Framework를 이용한 얼굴과 손의 경혈 판별을 위한 Computer Vision 접근법)

  • Hadi S. Malekroodi;Myunggi Yi;Byeong-il Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.563-565
    • /
    • 2023
  • Acupuncture and acupressure apply needles or pressure to anatomical points for therapeutic benefit. The over 350 mapped acupuncture points in the human body can each treat various conditions, but anatomical variations make precisely locating these acupoints difficult. We propose a computer vision technique using the real-time hand and face tracking capabilities of the MediaPipe framework to identify acupoint locations. Our model detects anatomical facial and hand landmarks, and then maps these to corresponding acupoint regions. In summary, our proposed model facilitates precise acupoint localization for self-treatment and enhances practitioners' abilities to deliver targeted acupuncture and acupressure therapies.
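As a hedged sketch of the landmark-to-acupoint mapping, the snippet below approximates acupoint LI4 (Hegu) as the midpoint of MediaPipe hand landmarks 2 (THUMB_MCP) and 5 (INDEX_FINGER_MCP). The indices follow MediaPipe's official 21-landmark numbering, but the midpoint rule is an illustrative assumption, not the paper's exact mapping, and the dummy landmarks stand in for a real detection result.

```python
# Map detected hand landmarks to an approximate acupoint location.
# MediaPipe Hands reports 21 normalized (x, y) landmarks per hand.

def acupoint_li4(landmarks):
    """Hypothetical LI4 estimate: midpoint of thumb and index MCP joints."""
    (tx, ty), (ix, iy) = landmarks[2], landmarks[5]
    return ((tx + ix) / 2, (ty + iy) / 2)

# 21 dummy landmarks on a grid stand in for a real MediaPipe result.
hand = [(i * 0.05, i * 0.02) for i in range(21)]
li4 = acupoint_li4(hand)
```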

A Study on Hand Gesture Recognition with Low-Resolution Hand Images (저해상도 손 제스처 영상 인식에 대한 연구)

  • Ahn, Jung-Ho
    • Journal of Satellite, Information and Communications
    • /
    • v.9 no.1
    • /
    • pp.57-64
    • /
    • 2014
  • Recently, many human-friendly communication methods have been studied for human-machine interfaces (HMI) that require no physical devices. One of them is the vision-based gesture recognition this paper deals with. In this paper, we define some gestures for interacting with objects in a predefined virtual world and propose an efficient method to recognize them. For preprocessing, we detect and track both hands and extract their silhouettes from the low-resolution hand images captured by a webcam. We model skin color by two Gaussian distributions in RGB color space and use a blob-matching method to detect and track the hands. Applying the flood-fill algorithm, we extract the hand silhouettes and recognize the hand shapes Thumb-Up, Palm, and Cross by detecting and analyzing their modes. Then, by analyzing the context of the hand movement, we recognize five predefined one-hand or both-hand gestures. Assuming that a single main user appears, for accurate hand detection, the proposed gesture recognition method has proved its efficiency and accuracy in many real-time demos.
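A sketch of the two-Gaussian RGB skin model, assuming diagonal covariances: a pixel counts as skin if its likelihood under either Gaussian exceeds a threshold. The means, variances, and threshold here are illustrative, not the paper's fitted values.

```python
import math

# Two-mode Gaussian skin color model in RGB, with diagonal covariances.
# Each mode is a mean and per-channel variance; both are made-up numbers.
SKIN_MODES = [
    {"mean": (180.0, 120.0, 100.0), "var": (400.0, 300.0, 300.0)},
    {"mean": (220.0, 170.0, 150.0), "var": (500.0, 400.0, 400.0)},
]

def gaussian_pdf(rgb, mode):
    """Likelihood of an RGB pixel under one diagonal Gaussian mode."""
    p = 1.0
    for x, m, v in zip(rgb, mode["mean"], mode["var"]):
        p *= math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    return p

def is_skin(rgb, threshold=1e-9):
    """A pixel is skin if either Gaussian mode explains it well enough."""
    return max(gaussian_pdf(rgb, m) for m in SKIN_MODES) > threshold
```

Classifying every pixel this way yields the binary mask from which the blob matching and flood-fill steps extract the hand silhouettes.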

Hand Gesture Interface Using Mobile Camera Devices (모바일 카메라 기기를 이용한 손 제스처 인터페이스)

  • Lee, Chan-Su;Chun, Sung-Yong;Sohn, Myoung-Gyu;Lee, Sang-Heon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.5
    • /
    • pp.621-625
    • /
    • 2010
  • This paper presents a hand motion tracking method for a hand gesture interface using a camera in mobile devices such as smart phones and PDAs. When the camera moves according to the user's hand gesture, global optical flows are generated. Therefore, robust hand movement estimation is possible by considering the dominant optical flow, based on a histogram analysis of the motion direction. A continuous hand gesture is segmented into unit gestures by motion state estimation using the motion phase, which is determined by the velocity and acceleration of the estimated hand motion. Feature vectors are extracted during movement states, and hand gestures are recognized at the end state of each gesture. A support vector machine (SVM), a k-nearest neighbor classifier, and a normal Bayes classifier are used for classification. The SVM shows an 82% recognition rate for 14 hand gestures.
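The dominant-direction step can be sketched by quantizing each flow vector's angle into bins and taking the most populated bin; the 8-bin quantization is an illustrative choice, not necessarily the paper's.

```python
import math
from collections import Counter

# Estimate the dominant motion direction from a set of optical flow
# vectors by building a histogram over quantized angles.

def dominant_direction(flows, bins=8):
    """Return the center angle (radians) of the most populated angle bin."""
    half = math.pi / bins  # shift so bin 0 is centered on angle 0
    hist = Counter()
    for dx, dy in flows:
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int((angle + half) / (2 * math.pi) * bins) % bins] += 1
    b = hist.most_common(1)[0][0]
    return b * 2 * math.pi / bins

# Mostly rightward flow with two outlier vectors.
flows = [(1, 0.1), (1, -0.1), (0.9, 0.0), (-1, 0), (0, 1)]
```

Taking the histogram peak rather than the mean vector is what makes the estimate robust to outlier flows, as the abstract argues.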

Vision and Depth Information based Real-time Hand Interface Method Using Finger Joint Estimation (손가락 마디 추정을 이용한 비전 및 깊이 정보 기반 손 인터페이스 방법)

  • Park, Kiseo;Lee, Daeho;Park, Youngtae
    • Journal of Digital Convergence
    • /
    • v.11 no.7
    • /
    • pp.157-163
    • /
    • 2013
  • In this paper, we propose a vision and depth information based real-time hand gesture interface method using finger joint estimation. For this, the areas of the left and right hands are segmented after mapping the visual image onto the depth image, and labeling and boundary noise removal are performed. Then, the centroid point and rotation angle of each hand area are calculated. Afterwards, circles of increasing radius are expanded from the centroid point of the hand, and the midpoints of their crossings with the hand boundary are used to detect the joint points and end points of the fingers, from which the hand model is recognized. Experimental results show that our method enables fingertip distinction and recognizes various hand gestures quickly and accurately. In experiments on various hand poses with hidden fingers, using both hands, the accuracy was over 90% and the performance over 25 fps. The proposed method can be used as a contact-free input interface in HCI control, education, and game applications.
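The expanding-circle step can be sketched as follows: sample points on a circle around the hand centroid and count transitions between hand and background pixels, each in/out transition pair bracketing one finger crossing the circle. The toy mask below is hypothetical; a real system would use the segmented hand region.

```python
import math

# Count how many times a circle of radius r around (cx, cy) crosses the
# hand region, by sampling the circle and detecting in/out transitions.

def circle_crossings(mask, cx, cy, r, samples=360):
    """mask: set of (x, y) hand pixels. Returns the number of crossings."""
    inside = [(round(cx + r * math.cos(2 * math.pi * i / samples)),
               round(cy + r * math.sin(2 * math.pi * i / samples))) in mask
              for i in range(samples)]
    # Each crossing contributes one in->out and one out->in transition;
    # inside[i - 1] at i = 0 wraps around, so the scan is cyclic.
    return sum(1 for i in range(samples) if inside[i] != inside[i - 1]) // 2

# Toy mask: a 3-pixel-wide "finger" extending right from the palm center.
mask = {(x, y) for x in range(0, 20) for y in (-1, 0, 1)}
crossings = circle_crossings(mask, 0, 0, 10)
```

Repeating this for growing radii, and taking the midpoint of each in/out pair, yields candidate joint and end points along each finger.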

RealBook: A Tangible Electronic Book Based on the Interface of TouchFace-V (RealBook: TouchFace-V 인터페이스 기반 실감형 전자책)

  • Song, Dae-Hyeon;Bae, Ki-Tae;Lee, Chil-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.12
    • /
    • pp.551-559
    • /
    • 2013
  • In this paper, we propose RealBook, a tangible electronic book based on the TouchFace-V interface, which can recognize multi-touch and hand gestures. TouchFace-V applies projection technology to a flat surface such as a table, without spatial constraints. The system's configuration addresses the installation, calibration, and portability issues of most existing front-projected vision-based tabletop displays. It provides hand touch and gesture recognition through computer vision tracking, without sensors or traditional input devices. RealBook combines the analog sensibility of printed text with the multimedia effects of an e-book, and provides digitally created stories whose experiences and environments differ according to the choices users make through the book's interface. We propose RealBook as a new concept of electronic book that, together with the TouchFace-V interface, provides more direct viewing and natural, intuitive interaction through hand touch and gesture.

A Real-time Interactive Shadow Avatar with Facial Emotions (감정 표현이 가능한 실시간 반응형 그림자 아바타)

  • Lim, Yang-Mi;Lee, Jae-Won;Hong, Euy-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.10 no.4
    • /
    • pp.506-515
    • /
    • 2007
  • In this paper, we propose a Real-time Interactive Shadow Avatar (RISA) that can express facial emotions changing in response to the user's gestures. The avatar's shape is a virtual shadow constructed from a real-time sampled picture of the user's silhouette. Several predefined facial animations are overlaid on the face area of the virtual shadow, according to the type of hand gesture. We use the background subtraction method to separate the virtual shadow, and a simplified region-based tracking method is adopted for tracking hand positions and detecting hand gestures. In order to express smooth changes of emotion, we use a refined morphing method which uses many more frames than traditional dynamic emoticons. RISA can be directly applied to the area of interface media art, and we expect RISA's detection scheme to be utilized as an alternative media interface for DMB and camera phones, which need simple input devices, in the near future.
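A minimal background-subtraction sketch of the shadow-separation step, with grayscale frames as nested lists and an illustrative threshold; a real system would operate on camera images and maintain an adaptive background model.

```python
# The "virtual shadow" is the set of pixels whose difference from a
# reference background frame exceeds a threshold.

def shadow_mask(background, frame, threshold=30):
    """Per-pixel foreground mask for one grayscale frame."""
    return [[abs(f - b) > threshold for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

bg = [[100, 100, 100],
      [100, 100, 100]]
frame = [[100, 180, 100],
         [100, 175, 100]]
mask = shadow_mask(bg, frame)
```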


Depth Image based Chinese Learning Machine System Using Adjusted Chain Code (깊이 영상 기반 적응적 체인 코드를 이용한 한자 학습 시스템)

  • Kim, Kisang;Choi, Hyung-Il
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.12
    • /
    • pp.545-554
    • /
    • 2014
  • In this paper, we propose an online Chinese character learning system with a depth camera, in which the system presents a Chinese character on a screen and the user draws the presented character with a hand gesture. We develop a hand tracking method and suggest the adjusted chain code to represent the constituent strokes of a Chinese character. For hand tracking, a fingertip is detected and verified. The adjusted chain code is designed to contain information on the order and relative length of each constituent stroke, as well as information on the directional variation of the sample points. Such information is very efficient for real-time matching and for checking incorrectly drawn parts of a stroke.
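An 8-direction chain code over stroke sample points, plus the per-step direction changes that capture the directional variation mentioned above, might be sketched as below; the bin boundaries are an illustrative choice, not the paper's exact encoding.

```python
import math

# Encode a drawn stroke as an 8-direction chain code (0 = right, 2 = up,
# 4 = left, 6 = down, odd codes = diagonals).

def chain_code(points):
    """points: fingertip samples [(x, y), ...]. Returns one code per step."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int((angle + math.pi / 8) / (math.pi / 4)) % 8)
    return codes

def direction_changes(codes):
    """Smallest signed difference between consecutive codes, in [-4, 3]."""
    return [((b - a + 4) % 8) - 4 for a, b in zip(codes, codes[1:])]

# A stroke that goes right twice, then up twice.
stroke = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
codes = chain_code(stroke)
```

Matching the change sequence rather than raw codes makes the comparison tolerant of where a stroke starts, which helps the real-time matching the abstract describes.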