• Title/Abstract/Keywords: hand gesture

Search results: 401 items

HandButton: Gesture Recognition of Transceiver-free Object by Using Wireless Networks

  • Zhang, Dian; Zheng, Weiling
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 10, No. 2, pp. 787-806, 2016
  • Traditional radio-based gesture recognition approaches usually require the target to carry a device (e.g., an EMG sensor or an accelerometer). However, this requirement cannot be satisfied in many applications. For example, in a smart home, users may want to switch the light on or off with a specific hand gesture instead of finding and pressing a button, especially in a dark area; they will not carry any device in this scenario. To overcome this drawback, we propose three algorithms that recognize the target gesture (mainly the human hand gesture) without any carried device, based only on the Radio Signal Strength Indicator (RSSI). Our platform uses only six TelosB sensor nodes and is very easy to deploy. Experimental results show that the successful recognition ratio of our system reaches around 80%.
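
The paper itself gives no code; as a rough illustration of the general RSSI-based idea (not the authors' three algorithms), windowed RSSI traces from the fixed radio links can be summarized into simple statistics and fed to an off-the-shelf classifier. The feature choices and the k-NN classifier below are assumptions made for this sketch.

```python
# Sketch: gesture classification from RSSI windows (illustrative, not the paper's algorithm).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def rssi_features(window):
    """Summarize one RSSI window (samples x links) into a feature vector."""
    w = np.asarray(window, dtype=float)
    return np.concatenate([
        w.mean(axis=0),                            # per-link mean signal level
        w.std(axis=0),                             # per-link fluctuation
        np.abs(np.diff(w, axis=0)).mean(axis=0),   # per-link motion energy
    ])

def train(X_train, y_train):
    """X_train: list of RSSI windows; y_train: gesture labels (assumed available)."""
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit([rssi_features(w) for w in X_train], y_train)
    return clf
```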

Hand Gesture Based Pet Robot Control

  • 박세현; 김태의; 권경수
    • 한국산업정보학회논문지, Vol. 13, No. 4, pp. 145-154, 2008
  • In this paper, we propose a system that controls a pet robot by recognizing the user's hand gestures in image sequences acquired from a camera mounted on the robot. The proposed system consists of four stages: hand detection, feature extraction, gesture recognition, and robot control. First, the hand region is detected in the input image using a skin color model defined in HSI color space together with connected-component analysis. Next, features are extracted from the shape and motion of the hand region over the image sequence; hand shape is taken into account to distinguish meaningful gestures. The hand gesture is then recognized by a Hidden Markov Model that takes as input the symbols quantized from the hand motion. Finally, the pet robot performs the command corresponding to the recognized gesture. Gestures such as 'sit', 'stand', 'lie down', and 'shake' were defined as commands for controlling the pet robot. Experimental results show that a user can control the pet robot with gestures using the proposed system.

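A minimal sketch of the detection stage described above, assuming OpenCV: skin-color thresholding followed by connected-component analysis. OpenCV's HSV space stands in for the paper's HSI model, and the threshold ranges are illustrative guesses, not the paper's values.

```python
# Sketch: skin-color hand segmentation + connected components (illustrative).
import cv2
import numpy as np

def detect_hand(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))   # rough skin range (assumed)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None                                         # no skin blob found
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])    # biggest skin blob = hand
    return centroids[largest], (labels == largest).astype(np.uint8)
```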

Proposal of Camera Gesture Recognition System Using Motion Recognition Algorithm

  • Moon, Yu-Sung; Kim, Jung-Won
    • 전기전자학회논문지, Vol. 26, No. 1, pp. 133-136, 2022
  • This paper concerns motion gesture recognition and proposes the following improvement over the flaws of the current system: a motion gesture recognition system and algorithm that uses video images of the entire hand and reads its motion gestures to raise recognition accuracy. The system comprises an image capturing unit that acquires images of the area relevant to gesture reading, a motion extraction unit that extracts the motion area from the image, and a hand gesture recognition unit that reads the motion gestures in the extracted area. The proposed motion gesture algorithm achieves a 20% improvement over the current system.
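
The motion extraction unit is described only abstractly; the sketch below shows one common way to extract the moving area, simple frame differencing with OpenCV. This is an assumption about the mechanism, not the paper's algorithm.

```python
# Sketch: extracting the moving region between consecutive frames (illustrative).
import cv2

def motion_region(prev_gray, curr_gray, thresh=25):
    """Return a binary mask of pixels that changed between two grayscale frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return cv2.dilate(mask, None, iterations=2)   # close small gaps in the motion area
```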

Study on Intelligent Autonomous Navigation of an Avatar Using Hand Gesture Recognition

  • 김종성; 박광현; 김정배; 도준형; 송경준; 민병의; 변증남
    • 대한전자공학회 1999년도 추계종합학술대회 논문집, pp. 483-486, 1999
  • In this paper, we present a real-time hand gesture recognition system that controls the motion of a human avatar in a virtual environment based on pre-defined dynamic hand gesture commands. Each motion of the avatar consists of elementary motions that are produced by solving inverse kinematics for the target posture and interpolating joint angles for human-like motion. To reduce the learning time of the recognition system, we use a Fuzzy Min-Max Neural Network (FMMNN) for the classification of hand postures.

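For readers unfamiliar with FMMNNs: classification is based on the membership of an input pattern in learned hyperboxes. The sketch below shows a simplified Simpson-style membership function; the exact formulation and the hyperbox learning (expansion/contraction) steps of the paper's network are not reproduced here.

```python
# Sketch: membership of a pattern in a fuzzy min-max hyperbox (simplified, illustrative).
import numpy as np

def hyperbox_membership(x, V, W, gamma=4.0):
    """V and W are the hyperbox min/max corners; gamma controls decay outside the box.
    Returns 1.0 for points inside the box, decaying toward 0 with distance outside."""
    x, V, W = map(np.asarray, (x, V, W))
    below = np.maximum(0.0, V - x)    # how far x falls below the box on each dimension
    above = np.maximum(0.0, x - W)    # how far x rises above the box on each dimension
    return float(np.mean(1.0 - np.minimum(1.0, gamma * (below + above))))
```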

3D Virtual Reality Game with Deep Learning-based Hand Gesture Recognition

  • 이병희; 오동한; 김태영
    • 한국컴퓨터그래픽스학회논문지, Vol. 24, No. 5, pp. 41-48, 2018
  • The most natural way to heighten immersion and support free interaction in a virtual environment is to provide a gesture interface based on the user's hands. However, existing studies on hand gesture recognition either require specialized sensors or equipment or show low recognition rates. This paper proposes a 3D DenseNet convolutional neural network model that recognizes hand gestures with no sensor or equipment other than an RGB camera for gesture input, and introduces a virtual reality game built on it. Experiments on four static and six dynamic hand gesture interfaces showed a recognition rate of 94.2% at an average speed of 50 ms, demonstrating that the model can serve as a real-time user interface for virtual reality games. The results of this work can be applied as a hand gesture interface not only in games but also in various fields such as education, medicine, and shopping.
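
The paper's model is a 3D DenseNet; as a much smaller stand-in, the PyTorch sketch below shows the general shape of a 3D convolutional classifier that consumes short RGB clips. The class name, layer sizes, and class count are illustrative assumptions, not the paper's architecture.

```python
# Sketch: a tiny 3D convolutional gesture classifier (illustrative; the paper's
# 3D DenseNet is considerably deeper than this stand-in).
import torch
import torch.nn as nn

class TinyGestureNet3D(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),   # (B, 3, T, H, W) -> (B, 16, T, H, W)
            nn.BatchNorm3d(16), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                      # global pooling over time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip):                              # clip: (B, 3, T, H, W)
        return self.classifier(self.features(clip).flatten(1))
```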

An Efficient Hand Gesture Recognition Method Using a Two-Stream 3D Convolutional Neural Network Structure

  • 최현종; 노대철; 김태영
    • 한국차세대컴퓨팅학회논문지, Vol. 14, No. 6, pp. 66-74, 2018
  • Recently, research on hand gesture recognition for increasing immersion and providing free interaction in virtual environments has been active. However, existing work either requires specialized sensors or equipment or shows low recognition rates. This paper proposes a deep learning based method for recognizing static and dynamic hand gestures that needs no sensor or equipment other than a camera. A sequence of hand gesture images is converted into high-frequency images; the RGB gesture images and their high-frequency counterparts are then each used to train a DenseNet 3D convolutional neural network. Experiments on six static and nine dynamic hand gesture interfaces showed an average recognition rate of 92.6%, a 4.6% improvement over the original DenseNet. To validate the result, we implemented a 3D defense game; gestures were recognized in 34 ms on average, showing that the method can serve as a real-time user interface for virtual reality applications.
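
The abstract does not specify the high-pass transform; as one plausible reading, the sketch below builds the second stream by Laplacian filtering each frame with OpenCV. The grayscale conversion and normalization steps are assumptions. Each stream (RGB clip, high-frequency clip) would then feed its own 3D CNN, with the two predictions fused, e.g., by averaging class scores.

```python
# Sketch: building a high-frequency stream from RGB frames (illustrative).
import cv2
import numpy as np

def high_frequency_clip(frames_bgr):
    """Return Laplacian high-pass versions of a list of BGR frames."""
    out = []
    for f in frames_bgr:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        hp = cv2.Laplacian(gray, cv2.CV_32F, ksize=3)   # emphasize edges and motion detail
        hp = cv2.normalize(hp, None, 0, 255, cv2.NORM_MINMAX)
        out.append(hp.astype(np.uint8))
    return out
```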

A Robust Fingertip Extraction and Extended CAMSHIFT Based Hand Gesture Recognition for Natural Human-like Human-Robot Interaction

  • 이래경; 안수용; 오세영
    • 제어로봇시스템학회논문지, Vol. 18, No. 4, pp. 328-336, 2012
  • In this paper, we propose robust fingertip extraction and extended Continuously Adaptive Mean Shift (CAMSHIFT) based hand gesture recognition for natural, human-like HRI (Human-Robot Interaction). First, for efficient and rapid hand detection, hand candidate regions are segmented by combining a robust $YC_bC_r$ skin color model with Haar-like feature based AdaBoost detection. Using the extracted hand candidate regions, we estimate the palm region and fingertip position from distance-transform based voting and the geometrical features of hands. From the hand orientation and palm center position, we find the optimal fingertip position and its orientation. Then, using the extended CAMSHIFT, we reliably track the 2D hand gesture trajectory with the extracted fingertip. Finally, we apply conditional density propagation (CONDENSATION) to recognize the pre-defined temporal motion trajectories. Experimental results show that the proposed algorithm not only rapidly extracts the hand region with an accurately extracted fingertip and its angle, but also robustly tracks the hand under different illumination, size, and rotation conditions. Using these results, we successfully recognize multiple hand gestures.
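
A minimal sketch of two of the steps described above, assuming OpenCV: locating the palm center as the distance-transform maximum inside the hand mask, and one CAMSHIFT update on a color back-projection image. The paper's extensions to CAMSHIFT and its voting scheme are not reproduced.

```python
# Sketch: distance-transform palm center + one CAMSHIFT tracking step (illustrative).
import cv2

def palm_center(hand_mask):
    """hand_mask: 0/255 uint8 binary image. The palm center is approximated as the
    interior point farthest from the hand contour."""
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    _, radius, _, center = cv2.minMaxLoc(dist)   # max distance and its location
    return center, radius

def track_step(prob_image, window):
    """One CAMSHIFT update; prob_image is a skin-color back-projection,
    window is the (x, y, w, h) search window from the previous frame."""
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    rotated_box, window = cv2.CamShift(prob_image, window, criteria)
    return rotated_box, window
```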

Hybrid HMM for Transitional Gesture Classification in Thai Sign Language Translation

  • Jaruwanawat, Arunee; Chotikakamthorn, Nopporn; Werapan, Worawit
    • 제어로봇시스템학회 2004년도 ICCAS, pp. 1106-1110, 2004
  • A human sign language is generally composed of both static and dynamic gestures. Each gesture is represented by a hand shape, its position, and hand movement (for a dynamic gesture). One of the problems in automated sign language translation is segmenting a hand movement that is part of a transitional movement from one hand gesture to another. A transitional gesture conveys no meaning but serves as a connecting period between two consecutive gestures. Based on the observation that many dynamic gestures in the Thai sign language dictionary are of a quasi-periodic nature, a method was developed to differentiate between a (meaningful) dynamic gesture and a transitional movement. However, some meaningful dynamic gestures are non-periodic and cannot be distinguished from a transitional movement by signal quasi-periodicity alone. This paper proposes a hybrid method that combines the periodicity-based gesture segmentation method with an HMM-based gesture classifier. The HMM classifier is used to detect dynamic signs of a non-periodic nature. Combined with the periodicity-based segmentation, this hybrid scheme can identify segments of a transitional movement. In addition, by exploiting the quasi-periodic nature of many dynamic sign gestures, the dimensionality of the HMM part of the proposed method is significantly reduced, saving computation compared with a standard HMM-based method. The method's recognition performance is reported through experiments with real measurements.

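A minimal sketch of the quasi-periodicity idea, using the normalized autocorrelation of a 1D movement signal: quasi-periodic movement shows a strong repeated autocorrelation peak at a nonzero lag. The lag range and threshold are assumptions, and the paper's segmentation details and HMM classifier are not shown.

```python
# Sketch: flagging quasi-periodic hand movement via autocorrelation (illustrative).
import numpy as np

def is_quasi_periodic(signal, min_lag=5, threshold=0.6):
    """Return True if the signal's normalized autocorrelation has a strong
    peak at some lag >= min_lag, suggesting repetitive (periodic) movement."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]   # nonnegative lags only
    if ac[0] <= 0 or len(ac) <= min_lag:
        return False                                     # constant or too-short signal
    ac = ac / ac[0]                                      # normalize so lag 0 == 1
    return bool(np.max(ac[min_lag:]) > threshold)
```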

A Measurement System for 3D Hand-Drawn Gesture with a PHANToM™ Device

  • Ko, Seong-Young; Bang, Won-Chul; Kim, Sang-Youn
    • Journal of Information Processing Systems, Vol. 6, No. 3, pp. 347-358, 2010
  • This paper presents a measurement system for 3D hand-drawn gesture motion. Many pen-type input devices with Inertial Measurement Units (IMUs) have been developed to estimate 3D hand-drawn gestures from the measured acceleration and/or angular velocity of the device. The crucial step in developing these devices is to measure and analyze their motion or trajectory. To verify the trajectory estimated by an IMU-based input device, it is necessary to compare the estimated trajectory to the real trajectory. To measure the real trajectory of the pen-type device, a PHANToM™ haptic device is utilized because it can measure the 3D motion of the object in real time. Even though the PHANToM™ measures the position of the hand gesture well, poor initialization may produce a large amount of error. Therefore, this paper proposes a calibration method which can minimize measurement errors.
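
One standard way to realize such a trajectory comparison is to rigidly align the estimated trajectory to the PHANToM reference by least squares (Kabsch algorithm) before computing the error; this is an assumption about the calibration step, sketched below, not the paper's method.

```python
# Sketch: least-squares rigid alignment of an estimated 3D trajectory to a
# reference trajectory (Kabsch algorithm; illustrative assumption).
import numpy as np

def align(est, ref):
    """Find R, t minimizing ||R @ est_i + t - ref_i|| over corresponding 3D samples."""
    est, ref = np.asarray(est, float), np.asarray(ref, float)
    ce, cr = est.mean(axis=0), ref.mean(axis=0)
    H = (est - ce).T @ (ref - cr)                     # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cr - R @ ce
    return R, t
```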

A Framework for Designing Closed-loop Hand Gesture Interface Incorporating Compatibility between Human and Monocular Device

  • Lee, Hyun-Soo; Kim, Sang-Ho
    • 대한인간공학회지, Vol. 31, No. 4, pp. 533-540, 2012
  • Objective: This paper proposes a framework for hand gesture based interface design. Background: While the modeling of contact-based interfaces has focused on ergonomic interface design and real-time technology, the implementation of a contactless interface requires error-free classification as an essential precondition. These trends have led many studies to concentrate on the design and testing of feature vectors and learning models. Despite remarkable advances in this field, ignoring ergonomics and user cognition results in several problems, including uncomfortable user behaviors. Method: In order to incorporate compatibility, considering users' comfortable behaviors and the device's classification ability simultaneously, classification-oriented gestures are extracted using the suggested human-hand model and closed-loop classification procedures. From the extracted gestures, compatibility-oriented gestures are acquired through ergonomic and cognitive experiments. The obtained hand gestures are then converted into a series of hand behaviors, called Handycon, which is mapped onto several functions of a mobile device. Results: The Handycon model guarantees easy user behavior and supports fast understanding as well as a high classification rate. Conclusion and Application: The suggested framework contributes to the development of hand gesture based contactless interface models that consider compatibility between human and device. The suggested procedures can be applied effectively to other contactless interface designs.