• Title/Abstract/Keywords: Human Gesture Recognition

A Comparison Study on Related Work for Improving the Performance of Hand Gesture Recognition on Kinect Devices (키넥트의 손동작인식성능 개선방안 관련연구 분석)

  • Park, So-Hyun; Park, Eun-Young; Park, Young-Ho
    • Proceedings of the Korea Information Processing Society Conference / 2015.04a / pp.918-921 / 2015
  • Recently, HCI (Human-Computer Interaction) technologies that let machines and people interact have become increasingly important. In particular, studies using Kinect, a motion-recognition camera that recognizes the human skeleton, have been increasing rapidly among researchers. Kinect recognizes and analyzes the depth of the person and the background and then determines whether a person is present. However, it has the limitation that a person is hard to recognize when the person and the background lie at the same depth level. In this paper, we compare and analyze related papers that attempt to resolve this limitation.
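
A minimal sketch (not from the surveyed papers) of the depth-band foreground separation the abstract refers to, assuming a Kinect depth frame is already available as a NumPy array; the `near`/`far` values are illustrative. When the background falls into the same depth band as the person, the mask below can no longer isolate the person, which is exactly the limitation the compared papers address.

```python
import numpy as np

def segment_person_by_depth(depth_mm, near=500, far=2000):
    """Keep only pixels whose depth (in millimetres) lies in the band
    where the person is expected to stand; the rest is background.
    `near`/`far` are hypothetical values for illustration."""
    mask = (depth_mm > near) & (depth_mm < far)
    return mask.astype(np.uint8) * 255

# If background objects sit inside the same depth band as the person,
# the mask includes both and the person can no longer be separated.
```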

The Recognition of a Human Arm Gesture for a Game Interface (게임 인터페이스를 위한 사람 팔 제스처 인식 시스템)

  • Yeo, DongHyeon; Kim, KyungHan; Kim, HyunJung; Won, IlYong
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.1513-1516 / 2013
  • This study concerns human arm gesture recognition for games using various recently developed low-cost sensors and machine-learning algorithms. Ten gestures usable as game input were defined, and these gestures were preprocessed by tracking the arm-joint coordinates collected from the sensor. Considering the temporal nature of the data, an HMM (Hidden Markov Model) was used as the learning algorithm, and the usefulness of the proposed method was verified through experiments.
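
A hedged sketch of how temporal joint-coordinate sequences can be modeled with HMMs in the way the abstract describes; it uses the third-party hmmlearn package, and the gesture names, state count, and array shapes are assumptions, not the paper's setup.

```python
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def train_gesture_models(sequences_per_gesture, n_states=5):
    """Train one HMM per gesture class.
    sequences_per_gesture: dict gesture_name -> list of (T_i, D) arrays
    of preprocessed arm-joint coordinates."""
    models = {}
    for name, seqs in sequences_per_gesture.items():
        X = np.vstack(seqs)              # concatenate all frames
        lengths = [len(s) for s in seqs] # per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=100)
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, seq):
    """Assign the gesture whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda name: models[name].score(seq))
```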

Robust Hand Region Extraction Using a Joint-based Model (관절 기반의 모델을 활용한 강인한 손 영역 추출)

  • Jang, Seok-Woo; Kim, Sul-Ho; Kim, Gye-Young
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.9 / pp.525-531 / 2019
  • Efforts to utilize human gestures to implement a more natural and interactive interface between humans and computers have been ongoing in recent years. In this paper, we propose a new algorithm that accepts consecutive three-dimensional (3D) depth images, defines a hand model, and robustly extracts the human hand region based on six palm joints and 15 finger joints. The 3D depth images are then adaptively binarized to exclude areas of non-interest, such as the background, so that only the person's hand, the area of interest, is accurately extracted. Experimental results show that the presented algorithm detects the human hand region 2.4% more accurately than the existing method. The hand region extraction algorithm proposed in this paper is expected to be useful in various practical applications related to computer vision and image processing, such as gesture recognition, virtual reality implementation, 3D motion games, and sign recognition.
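
A minimal, hypothetical illustration of the adaptive binarization step mentioned in the abstract, using OpenCV on a depth frame normalized to 8 bits; the joint-based hand model itself is not reproduced here, and the block size and offset are placeholder values.

```python
import cv2
import numpy as np

def binarize_depth(depth_mm):
    """Adaptively binarize a depth frame so that background regions are
    suppressed and only candidate hand pixels remain (illustrative only)."""
    # Normalize the raw depth map to 8 bits for OpenCV thresholding.
    depth8 = cv2.normalize(depth_mm, None, 0, 255,
                           cv2.NORM_MINMAX).astype(np.uint8)
    # Adaptive threshold: each pixel is compared with its local mean,
    # which tolerates gradual depth variation across the scene.
    mask = cv2.adaptiveThreshold(depth8, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, 5)
    return mask
```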

Implementation of Multi-touch Tabletop Display for Human Computer Interaction (HCI 를 위한 멀티터치 테이블-탑 디스플레이 시스템 구현)

  • Kim, Song-Gook; Lee, Chil-Woo
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.553-560 / 2007
  • This paper describes a tabletop display system, and its implementation algorithm, that recognizes two-handed touch for real-time interaction. The proposed system is built on the FTIR (Frustrated Total Internal Reflection) mechanism and supports multi-touch, multi-user hand-gesture input. The system consists of a beam projector for image projection, an acrylic screen fitted with infrared LEDs, a diffuser, and an infrared camera for image acquisition. The set of gesture commands needed to control the system was defined by analyzing the degrees of freedom of input and output on the interaction table and by considering convenience, communicability, consistency, and completeness. The defined gestures are subdivided according to the number, position, and movement of the fingers the user places on the screen. Images captured by the infrared camera undergo simple morphological processing for noise removal and finger-region detection before entering the recognition stage, in which the input gesture commands are compared with predefined hand-gesture models. In detail, the number of fingers touching the screen is first determined and their regions are identified; the centroids of those regions are then extracted, and their angles and Euclidean distances are computed. Finally, the changes in the positions of the multi-touch points are compared with the information in the predefined models. The effectiveness of the proposed system is demonstrated by controlling Google-earth.
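
A sketch of the fingertip-detection pipeline the abstract outlines (threshold, morphological opening, blob centroids, pairwise distances and angles), assuming the FTIR camera frame arrives as an 8-bit grayscale image; the threshold and minimum blob area are invented values, not the paper's.

```python
import cv2
import numpy as np

def detect_fingertips(ir_frame, min_area=30):
    """Find fingertip contact blobs in an FTIR infrared frame and return
    their centroids (illustrative thresholds, not from the paper)."""
    _, binary = cv2.threshold(ir_frame, 60, 255, cv2.THRESH_BINARY)
    kernel = np.ones((3, 3), np.uint8)
    # Morphological opening removes small speckle noise.
    clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(clean)
    pts = [centroids[i] for i in range(1, n)
           if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return np.array(pts)

def pairwise_geometry(pts):
    """Euclidean distances and angles between touch points, which would
    then be matched against the predefined gesture models."""
    out = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = np.linalg.norm(pts[i] - pts[j])
            dy, dx = (pts[j] - pts[i])[1], (pts[j] - pts[i])[0]
            out.append((i, j, d, np.degrees(np.arctan2(dy, dx))))
    return out
```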

Development of Web-cam Game using Hand and Face Skin Color (손과 얼굴의 피부색을 이용한 웹캠 게임 개발)

  • Oh, Chi-Min; Aurrahman, Dhi; Islam, Md. Zahidul; Kim, Hyung-Gwan; Lee, Chil-Woo
    • Proceedings of the HCI Society of Korea Conference / 2008.02b / pp.60-63 / 2008
  • The Sony EyeToy, developed for the PlayStation 2, uses a webcam to detect the player: users see their own image on the television and become the player inside the game, a very different interface from an ordinary joystick-controlled video game. Although the EyeToy has already been commercialized, its interface method is still interesting and can be extended with techniques such as gesture recognition. In this paper, we developed a game interface that combines image processing for human hand and face detection with a game graphics module, and we implemented an example balloon-busting game to demonstrate the abilities of the interface. We plan to open this project to other developers for further development.
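
A small sketch of the kind of skin-color segmentation the abstract describes for hand and face detection; the YCrCb bounds below are common textbook values, not necessarily the ones used in the paper.

```python
import cv2
import numpy as np

# Commonly used Cr/Cb skin bounds; the paper's actual thresholds may differ.
LOWER = np.array([0, 133, 77], dtype=np.uint8)    # Y, Cr, Cb
UPPER = np.array([255, 173, 127], dtype=np.uint8)

def skin_mask(frame_bgr):
    """Return a binary mask of likely hand/face skin pixels in a webcam frame."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, LOWER, UPPER)
    # Smooth the mask so the game logic sees stable blobs.
    return cv2.medianBlur(mask, 5)
```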

Emotional Interface Technologies for Service Robot (서비스 로봇을 위한 감성인터페이스 기술)

  • Yang, Hyun-Seung; Seo, Yong-Ho; Jeong, Il-Woong; Han, Tae-Woo; Rho, Dong-Hyun
    • The Journal of Korea Robotics Society / v.1 no.1 / pp.58-65 / 2006
  • The emotional interface is an essential technology for a robot to provide proper service to the user. In this research, we developed emotional components for a service robot: a neural-network-based facial expression recognizer, emotion expression technologies based on 3D graphical facial expression and joint movements that take the user's reaction into account, and behavior selection technology for emotion expression. We used our humanoid robots, AMI and AMIET, as the test-beds of our emotional interface and studied the emotional interaction between a service robot and a user by integrating the developed technologies. The emotional interface technology can enhance the performance of friendly interaction with the service robot, increase the diversity of services and the added value of the robot for humans, promote market growth, and contribute to the popularization of robots.
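
The abstract describes an architecture rather than an algorithm, so the sketch below only illustrates the general idea of a behavior-selection component that maps a recognized user emotion to an expressive robot behavior; the emotion labels, behaviors, and confidence threshold are hypothetical and are not the AMI/AMIET implementation.

```python
# Hypothetical mapping from a recognized user emotion to a robot behavior.
BEHAVIORS = {
    "happy":   {"face": "smile",   "gesture": "nod"},
    "sad":     {"face": "concern", "gesture": "lean_forward"},
    "angry":   {"face": "calm",    "gesture": "step_back"},
    "neutral": {"face": "neutral", "gesture": "idle"},
}

def select_behavior(recognized_emotion, confidence, threshold=0.6):
    """Pick an expressive behavior for the robot; fall back to neutral
    when the facial-expression recognizer is not confident enough."""
    if confidence < threshold:
        return BEHAVIORS["neutral"]
    return BEHAVIORS.get(recognized_emotion, BEHAVIORS["neutral"])
```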

Smart Wrist Band Considering Wrist Skin Curvature Variation for Real-Time Hand Gesture Recognition (실시간 손 제스처 인식을 위하여 손목 피부 표면의 높낮이 변화를 고려한 스마트 손목 밴드)

  • Yun Kang; Joono Cheong
    • The Journal of Korea Robotics Society / v.18 no.1 / pp.18-28 / 2023
  • This study introduces a smart wrist band system that measures pressure caused by variations in wrist skin curvature due to finger motion. It is easy to put on and take off, and requires no prior adaptation or surgery to use. By analyzing the depth variation of wrist skin curvature during each finger motion, we determined, with anatomical consideration, the most suitable location in the wristband for each Force Sensitive Resistor (FSR). A 3D depth camera was used to investigate distinctive wrist locations, corresponding to the anatomically de-coupled thumb, index, and middle fingers, where the variations of wrist skin curvature appear independently. Sensors within the wristband were then attached at the corresponding points to measure their pressure changes and, ultimately, the finger motion. The smart wrist band was validated for its practicality through two demonstrative applications, one for real-time control of prosthetic robot hands and the other for natural human-computer interfacing. Other future human-related applications are also expected to benefit from the proposed smart wrist band system.
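
A hedged sketch of turning the three FSR pressure channels described in the abstract into a per-finger motion estimate; the channel order, calibration scheme, and threshold are assumptions for illustration only.

```python
import numpy as np

# One FSR per anatomically de-coupled finger (assumed channel order).
FINGERS = ("thumb", "index", "middle")

def calibrate(rest_samples):
    """Average FSR readings while the hand is relaxed to get a baseline.
    rest_samples: array of shape (N, 3)."""
    return np.mean(rest_samples, axis=0)

def detect_finger_flexion(fsr_reading, baseline, delta=0.15):
    """Report which fingers appear flexed, based on how much each wrist-skin
    pressure channel rises above its resting baseline."""
    rise = (fsr_reading - baseline) / (baseline + 1e-6)
    return {finger: bool(r > delta) for finger, r in zip(FINGERS, rise)}
```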

An Experimental Research on the Usability of Indirect Control using Finger Gesture Interaction in Three Dimensional Space (3차원 공간에서 손가락 제스쳐 인터랙션을 이용한 간접제어의 사용성에 관한 실험연구)

  • Ham, Kyung Sun; Lee, Dahye; Hong, Hee Jung; Park, Sungjae; Kim, Jinwoo
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.519-532 / 2014
  • Emerging technologies for natural computer interaction can give manufacturers new opportunities for product innovation. This paper studies a method of human communication based on finger-gesture interaction. As technological advances have been rapid over the last few decades, products and services using such interaction will soon be widespread. The purposes of this experiment are as follows: What is the usefulness of gesture interaction, and what is its cognitive impact on users? The finger-gesture interactions consist of poking, picking, and grasping. By measuring the usability of each gesture in 2D and 3D space, this study shows the effect of finger-gesture interaction. The 2D and 3D experimental tools were developed using LeapMotion technology. The experiment, involving 48 subjects, shows that there is no difference in usability between the gestures in 2D space, whereas a meaningful difference was found in 3D space. In addition, all gestures show better usability in 2D space than in 3D space. Interestingly, using a single finger performed better than using multiple fingers.
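
A minimal sketch, not the authors' analysis script, of how a reported 2D-versus-3D usability comparison for the same 48 subjects could be run as a paired t-test in SciPy; the score arrays below are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder usability scores, one value per subject and condition.
scores_2d = np.random.default_rng(0).normal(4.0, 0.5, 48)
scores_3d = np.random.default_rng(1).normal(3.6, 0.7, 48)

# Paired comparison: each subject performed the gestures in both spaces.
t_stat, p_value = stats.ttest_rel(scores_2d, scores_3d)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> meaningful difference
```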

Effects of Spatio-temporal Features of Dynamic Hand Gestures on Learning Accuracy in 3D-CNN (3D-CNN에서 동적 손 제스처의 시공간적 특징이 학습 정확성에 미치는 영향)

  • Yeongjee Chung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.145-151 / 2023
  • 3D-CNN is one of the deep learning techniques for learning time-series data. Because such three-dimensional learning generates many parameters, it requires high-performance machine learning and can strongly affect the learning rate. To improve the efficiency of learning dynamic hand gestures in the spatiotemporal domain with a 3D-CNN, it is necessary to find the optimal conditions of the input video data by analyzing the learning accuracy according to spatiotemporal changes of the input video data, without structural changes to the 3D-CNN model. First, the time ratio between dynamic hand-gesture actions is adjusted by setting the learning interval of image frames in the dynamic hand-gesture video data. Second, through 2D cross-correlation analysis between classes, the similarity between image frames of the input video data is measured and normalized to obtain an average value between frames, and the learning accuracy is analyzed. Based on this analysis, this work proposes two methods to effectively select input video data for 3D-CNN deep learning of dynamic hand gestures. Experimental results show that the learning interval of image data frames and the similarity of image frames between classes can affect the accuracy of the learning model.
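
A sketch, under assumptions about the data layout, of the two preprocessing ideas in the abstract: sampling video frames at a chosen interval and measuring a normalized 2D cross-correlation (evaluated here at zero lag) between frames as a similarity score. The shapes and interval value are illustrative.

```python
import numpy as np

def sample_frames(video, interval):
    """video: array of shape (T, H, W); keep every `interval`-th frame."""
    return video[::interval]

def normalized_xcorr(a, b):
    """Normalized 2D cross-correlation at zero lag between two frames,
    used as a similarity measure between image frames."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def mean_similarity(frames):
    """Average similarity between consecutive sampled frames of a clip."""
    vals = [normalized_xcorr(frames[i], frames[i + 1])
            for i in range(len(frames) - 1)]
    return float(np.mean(vals))
```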

Detection Accuracy Improvement of Hand Region using Kinect (키넥트를 이용한 손 영역 검출의 정확도 개선)

  • Kim, Heeae; Lee, Chang Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.11 / pp.2727-2732 / 2014
  • Recently, research on object tracking and recognition using Microsoft's Kinect has been actively conducted. In this environment, human hand detection and tracking is the most basic technique for human-computer interaction. This paper proposes a method of improving the accuracy of the detected hand region's boundary against a cluttered background. To do this, we combine the hand-detection results obtained using skin color with the depth image extracted from Kinect. The experimental results show that the proposed method increases the accuracy of hand region detection compared with the method that uses the depth image only. If the proposed method is applied to a sign language or gesture recognition system, it is expected to contribute greatly to accuracy improvement.
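
A minimal sketch of the fusion idea the abstract describes: intersecting a skin-color mask with a Kinect depth band so that skin-colored background clutter is rejected. It assumes the color and depth frames are already aligned, and all thresholds are illustrative rather than the paper's values.

```python
import cv2
import numpy as np

def hand_mask(frame_bgr, depth_mm, near=400, far=900):
    """Combine a skin-color mask with a depth band to refine the hand region.
    Assumes frame_bgr and depth_mm are registered to the same viewpoint."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, np.array([0, 133, 77], np.uint8),
                       np.array([255, 173, 127], np.uint8))
    depth = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    # Keep only pixels that are both skin-colored and inside the depth band.
    return cv2.bitwise_and(skin, depth)
```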