• Title/Summary/Keyword: hand gesture recognition

Search results: 311

Study on Signal Processing Method for Extracting Hand-Gesture Signals Using Sensors Measuring Surrounding Electric Field Disturbance (주변 전기장 측정센서를 이용한 손동작 신호 검출을 위한 신호처리시스템 연구)

  • Cheon, Woo Young;Kim, Young Chul
    • Smart Media Journal, v.6 no.2, pp.26-32, 2017
  • In this paper, we implement an LED lighting control system based on signal-detecting electric circuits, an essential element of NUI technology, using EPIC sensors that convert disturbances of the surrounding earth electric field into electric potential signals. We developed signal-detecting circuits that extract an individual signal from each EPIC sensor, whereas conventional EPIC-based development equipment provides only limited forms of signals. The signals extracted from our circuit improved both performance and flexibility in the feature extraction and pattern recognition stages. To demonstrate applicability to real systems, we designed a system that controls the on/off state and brightness of LED lights with four hand gestures. We also obtained faster pattern classification not only by developing an instruction system but also by using interface control signals.
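
The four-gesture LED control described above can be sketched as a simple command dispatch. This is a hypothetical illustration: the gesture labels, brightness steps, and state representation are assumptions, not taken from the paper.

```python
# Hypothetical sketch of a four-gesture LED controller; gesture names and
# the 10% brightness step are invented for illustration.
def apply_gesture(state, gesture):
    """Return a new (on, brightness) LED state for a recognized gesture."""
    on, brightness = state
    if gesture == "wave_on":
        return (True, brightness)
    if gesture == "wave_off":
        return (False, brightness)
    if gesture == "swipe_up":                 # brighten in 10% steps, capped at 100
        return (on, min(100, brightness + 10))
    if gesture == "swipe_down":               # dim in 10% steps, floored at 0
        return (on, max(0, brightness - 10))
    return state                              # unrecognized gesture: no change

state = (False, 50)
for g in ["wave_on", "swipe_up", "swipe_up", "swipe_down"]:
    state = apply_gesture(state, g)
print(state)  # (True, 60)
```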

NUI/NUX framework based on intuitive hand motion (직관적인 핸드 모션에 기반한 NUI/NUX 프레임워크)

  • Lee, Gwanghyung;Shin, Dongkyoo;Shin, Dongil
    • Journal of Internet Computing and Services, v.15 no.3, pp.11-19, 2014
  • The natural user interface/experience (NUI/NUX) provides a natural motion interface without devices or tools such as mice, keyboards, pens, and markers. Until now, typical motion recognition methods have used markers: the coordinates of each marker are received as relative data and stored in a database. However, recognizing motion accurately requires more markers, and attaching them and processing the data takes considerable time. Moreover, because existing NUI/NUX frameworks have been developed without intuitiveness, their most important quality, usability problems arise and users are forced to learn the conventions of many different frameworks. To address this problem, we avoided markers and built a system anyone can operate. We designed a multi-modal NUI/NUX framework that controls voice, body motion, and facial expression simultaneously, and we propose a new mouse-operation algorithm that recognizes intuitive hand gestures and maps them onto the monitor. We implemented it so that users can operate this "hand mouse" easily and intuitively.
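
The "hand mouse" mapping described above, projecting a detected hand position onto the monitor, can be sketched minimally. The linear mapping and clamping below are assumptions; the paper's actual algorithm is not reproduced here.

```python
# Illustrative camera-to-screen mapping for a marker-free "hand mouse".
# The linear scaling and clamping are assumptions, not the paper's method.
def hand_to_screen(hand_xy, cam_size, screen_size):
    """Map a hand position in camera coordinates to monitor pixels."""
    (hx, hy), (cw, ch), (sw, sh) = hand_xy, cam_size, screen_size
    x = int(hx / cw * (sw - 1))
    y = int(hy / ch * (sh - 1))
    # clamp so the cursor stays on screen even for noisy detections
    return (max(0, min(sw - 1, x)), max(0, min(sh - 1, y)))

print(hand_to_screen((320, 240), (640, 480), (1920, 1080)))  # (959, 539)
```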

Estimation of Critical Threshold for Rejection in HMM Based Recognition Systems (HMM 기반의 인식시스템에서의 거절기능 수행을 위한 임계 문턱값 추정)

  • 김인철;진성일
    • The Journal of the Acoustical Society of Korea, v.19 no.2, pp.90-94, 2000
  • In this paper, we propose an efficient method of estimating the critical threshold used to reject unreliable patterns in an HMM-based recognition system. Rejection methods based on anti-models, formulated as a statistical hypothesis test, decide whether to accept an input pattern by comparing the likelihood ratio of the HMM and anti-model to a critical threshold. Fixing a threshold on the probability of an HMM is difficult because the range of such probabilities varies severely depending on the chosen class model. We therefore estimate the critical threshold, which is highly class-dependent, using the likelihood scores of the training database. In our experiments, we applied the proposed threshold estimation method to an HMM-based 3D hand gesture recognition system and found that it successfully rejects unreliable input gestures regardless of the type of anti-model.
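
The class-dependent threshold idea in this abstract can be sketched as follows. The estimator used here (mean minus k sample standard deviations of each class's training likelihood ratios) is an illustrative assumption; the paper's exact estimator is not reproduced.

```python
import statistics

def estimate_thresholds(train_scores, k=2.0):
    """Per-class critical thresholds from training log-likelihood ratios.
    Threshold = mean - k * stdev of each class's training ratios; the
    exact estimator in the paper may differ (this is an illustrative choice)."""
    return {c: statistics.mean(r) - k * statistics.stdev(r)
            for c, r in train_scores.items()}

def accept(cls, log_p_model, log_p_anti, thresholds):
    """Accept the pattern if its likelihood ratio clears the class threshold."""
    return (log_p_model - log_p_anti) >= thresholds[cls]

# hypothetical training ratios for two gesture classes
train = {"gestureA": [5.1, 4.8, 5.4, 5.0], "gestureB": [2.0, 2.2, 1.9, 2.1]}
th = estimate_thresholds(th_k := 2.0) if False else estimate_thresholds(train)
print(accept("gestureA", -100.0, -104.9, th))  # ratio 4.9 clears threshold: True
print(accept("gestureA", -100.0, -102.0, th))  # ratio 2.0 is rejected: False
```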

Hand Gesture Recognition Result Using Dynamic Training (동적 학습을 이용한 손동작 인식 결과)

  • Jeoung, You-Sun;Park, Dong-Suk;Youn, Young-Ji;Shin, Bo-Kyoung;Kim, Hye-Min;Ra, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2007.06a, pp.861-864, 2007
  • In this paper, we propose a new algorithm for vision-based arm gesture recognition in a camera-projection system. The proposed method uses the Fourier transform to classify static arm gestures, and arm segmentation uses an improved background subtraction method. Most recognition methods require training and testing by the same subject and a training phase before interaction, but gesture recognition is needed even in varied, untrained situations. Therefore, in this paper, incomplete gestures detected during recognition are corrected and then applied. As a result, by recognizing gestures independently of the user, rapid online adaptation to a new user was possible.
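
The static-gesture classification by Fourier transform mentioned above can be illustrated with contour Fourier descriptors. The contour representation (complex points) and the choice to keep only a few magnitude coefficients are assumptions made for this sketch.

```python
import cmath

def fourier_descriptors(contour, n_coeffs=3):
    """DFT magnitudes of a closed contour given as complex points x + 1j*y.
    Dropping the k=0 (DC) term makes the descriptor translation-invariant;
    keep n_coeffs < len(contour) so no kept term aliases back to the DC term."""
    n = len(contour)
    coeffs = []
    for k in range(1, n_coeffs + 1):
        c = sum(z * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, z in enumerate(contour)) / n
        coeffs.append(abs(c))          # magnitudes are also rotation-invariant
    return coeffs

square = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
shifted = [z + (5 + 3j) for z in square]   # same shape, translated
fa, fb = fourier_descriptors(square), fourier_descriptors(shifted)
print(all(abs(a - b) < 1e-9 for a, b in zip(fa, fb)))  # True
```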

Implementation of Multi-touch Tabletop Display for Human Computer Interaction (HCI 를 위한 멀티터치 테이블-탑 디스플레이 시스템 구현)

  • Kim, Song-Gook;Lee, Chil-Woo
    • The HCI Society of Korea: Conference Proceedings, 2007.02a, pp.553-560, 2007
  • This paper describes a tabletop display system capable of real-time interaction by recognizing two-handed touch, along with its implementation algorithms. The proposed system is built on the FTIR (Frustrated Total Internal Reflection) mechanism and supports multi-touch, multi-user hand gesture input. The system consists of a beam projector for image projection, an acrylic screen with infrared LEDs attached, a diffuser, and an infrared camera for image acquisition. The set of gesture commands needed to control the system was defined by analyzing the input and output degrees of freedom of the interaction table and considering convenience, communication, consistency, and completeness. The defined gestures are subdivided by the number, position, and movement of the fingers the user places on the screen. Images captured by the infrared camera undergo simple morphological operations for noise removal and fingertip-region detection before the recognition stage. In recognition, input gesture commands are compared against predefined hand gesture models. Specifically, the number of fingers touching the screen is determined and their regions are located; the center points of those regions are extracted, and their angles and Euclidean distances are computed. The positional changes of the multi-touch points are then compared with the information of the predefined models. The effectiveness of the proposed system is demonstrated by controlling Google-earth.
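
The recognition geometry described above, extracting fingertip-region center points and computing their angles and Euclidean distances, can be sketched as follows. The centroids are assumed to be already extracted from the infrared image by the morphological step.

```python
import math

# Pairwise distance/angle features over fingertip centroids, as used to match
# multi-touch gestures against predefined models (a minimal sketch).
def touch_geometry(points):
    """Return (distance, angle_degrees) for each pair of touch centroids."""
    feats = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            dist = math.hypot(x2 - x1, y2 - y1)
            angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
            feats.append((dist, angle))
    return feats

# two fingertips 100 px apart on a horizontal line
print(touch_geometry([(100, 200), (200, 200)]))  # [(100.0, 0.0)]
```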

An ANN-based gesture recognition algorithm for smart-home applications

  • Huu, Phat Nguyen;Minh, Quang Tran;The, Hoang Lai
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.5, pp.1967-1983, 2020
  • The goal of this paper is to analyze and build an algorithm that recognizes hand gestures for smart home applications. The proposed algorithm combines image processing techniques with artificial neural network (ANN) approaches to help users interact with computers through common gestures. We use five gestures: Stop, Forward, Backward, Turn Left, and Turn Right. Users control devices through a camera connected to a computer; the algorithm analyzes the gestures and performs the appropriate action according to the user's request. The results show that the average accuracy of the proposed algorithm is 92.6 percent for images and more than 91 percent for video, both of which satisfy the performance requirements of real-world applications, specifically smart home services. The processing time is approximately 0.098 seconds on datasets of 10 frames/sec. However, the accuracy still depends on the number of training images (or videos) and their resolution.
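
A forward pass of a small fully connected network over the paper's five gesture classes might look like the sketch below. The layer sizes, tanh/softmax choices, and the random weights are illustrative assumptions; the paper's trained architecture is not given here.

```python
import math, random

# Illustrative ANN forward pass for five gesture classes; the architecture
# and weights are assumptions, not the paper's trained model.
GESTURES = ["Stop", "Forward", "Backward", "Turn Left", "Turn Right"]

def forward(x, w1, w2):
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    logits = [sum(wi * hi for wi, hi in zip(row, hidden)) for row in w2]
    exps = [math.exp(l - max(logits)) for l in logits]   # numerically stable softmax
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
x = [random.random() for _ in range(16)]                 # e.g. 4x4 image features
w1 = [[random.uniform(-1, 1) for _ in range(16)] for _ in range(8)]
w2 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(5)]
probs = forward(x, w1, w2)
print(GESTURES[probs.index(max(probs))])                 # predicted gesture class
```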

Hand Gesture Recognition Method based on the MCSVM for Interaction with 3D Objects in Virtual Reality (가상현실 3D 오브젝트와 상호작용을 위한 MCSVM 기반 손 제스처 인식)

  • Kim, Yoon-Je;Koh, Tack-Kyun;Yoon, Min-Ho;Kim, Tae-Young
    • Proceedings of the Korea Information Processing Society Conference, 2017.11a, pp.1088-1091, 2017
  • Recently, as graphics-based virtual reality technology has advanced and interest in it has grown, hand gesture recognition has been actively studied as a method for natural interaction with 3D objects. This paper proposes MCSVM-based hand gesture recognition for interaction with 3D objects in virtual reality. First, various hand gestures are captured through a Leap Motion device and the preprocessed hand data is passed on. After a primary classification with a binary decision tree, the hand data is resampled, a chain code is generated, and feature data is formed from its histogram. Based on this, a secondary classification via MCSVM training recognizes the gesture. Experiments showed an average recognition rate of 99.2% for 16 command gestures for interacting with 3D objects. An affective evaluation against a mouse interface showed that it enables more intuitive and user-friendly interaction than mouse input, so it can serve as an input interface in many virtual reality applications such as games, learning simulations, design, and medicine, and helps increase immersion in virtual reality.
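
The chain-code histogram feature described above can be sketched as follows. The 8-direction numbering (0 = east, counterclockwise) and the normalization are assumptions made for illustration.

```python
import math

# Sketch of the chain-code feature: a resampled point sequence is encoded as
# 8-direction codes and reduced to a histogram that would feed the MCSVM.
def chain_code(points):
    codes = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        angle = math.atan2(y2 - y1, x2 - x1) % (2 * math.pi)
        codes.append(int(round(angle / (math.pi / 4))) % 8)
    return codes

def histogram(codes):
    h = [0] * 8
    for c in codes:
        h[c] += 1
    return [v / len(codes) for v in h]        # normalized feature vector

path = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 2)]
print(chain_code(path))   # [0, 1, 2, 4]
```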

Smart Wrist Band Considering Wrist Skin Curvature Variation for Real-Time Hand Gesture Recognition (실시간 손 제스처 인식을 위하여 손목 피부 표면의 높낮이 변화를 고려한 스마트 손목 밴드)

  • Yun Kang;Joono Cheong
    • The Journal of Korea Robotics Society, v.18 no.1, pp.18-28, 2023
  • This study introduces a smart wristband system that measures pressure changes caused by variations in wrist-skin curvature during finger motion. It is easy to put on and take off, and requires no pre-adaptation or surgery. By analyzing the depth variation of the wrist-skin curvature during each finger motion, we determined, with anatomical consideration, the most suitable location for each Force Sensitive Resistor (FSR) attached in the wristband. A 3D depth camera was used to find the distinctive wrist locations, corresponding to the anatomically decoupled thumb, index, and middle fingers, where the variations of wrist-skin curvature appear independently. Sensors in the wristband were then attached at those points to measure the pressure changes and, ultimately, the finger motions. The practicality of the smart wristband was validated through two demonstrative applications: real-time control of a prosthetic robot hand and natural human-computer interfacing. Other future human-related applications may also benefit from the proposed system.
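
A hypothetical decoding step for the wristband described above: each FSR channel sits over the wrist-skin region coupled to one finger, and a per-channel pressure threshold decides whether that finger is flexed. The channel names and threshold values below are invented for illustration, not taken from the paper.

```python
# Hypothetical per-finger FSR thresholds (normalized 0..1 pressures); the
# paper's actual calibration and decoding are not reproduced here.
THRESHOLDS = {"thumb": 0.30, "index": 0.25, "middle": 0.28}

def decode_fingers(fsr_readings):
    """Map normalized FSR pressures to per-finger flexed/extended states."""
    return {finger: ("flexed" if fsr_readings[finger] >= th else "extended")
            for finger, th in THRESHOLDS.items()}

print(decode_fingers({"thumb": 0.10, "index": 0.60, "middle": 0.05}))
# index flexed, thumb and middle extended
```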

Inexpensive Visual Motion Data Glove for Human-Computer Interface Via Hand Gesture Recognition (손 동작 인식을 통한 인간 - 컴퓨터 인터페이스용 저가형 비주얼 모션 데이터 글러브)

  • Han, Young-Mo
    • The KIPS Transactions: Part B, v.16B no.5, pp.341-346, 2009
  • The motion data glove is a representative human-computer interaction tool that inputs human hand gestures to computers by measuring hand motion. It is essential equipment for new computer technologies including home automation, virtual reality, biometrics, and motion capture. To promote its popular use, this paper develops an inexpensive visual-type motion data glove that can be used without any special equipment. The proposed approach has a special feature: it can be produced at low cost because it does not use the expensive motion-sensing fibers of conventional approaches, which makes easy production and popular use possible. Instead of a mechanical method using motion-sensing fibers, it adopts a visual method obtained by improving conventional optical motion capture technology. Compared to conventional visual methods, the proposed method has the following advantages and original contributions. First, conventional visual methods use many cameras and much equipment to reconstruct 3D pose while eliminating occlusions, whereas the proposed method adopts a mono-vision approach that allows simple, low-cost equipment. Second, conventional mono-vision methods have difficulty reconstructing the 3D pose of occluded parts in images because they are weak against occlusion, but the proposed approach can reconstruct occluded parts by using originally designed thin-bar-shaped optic indicators. Third, many conventional methods use nonlinear numerical image analysis algorithms and are therefore inconvenient in their initialization and computation times; the proposed method removes these inconveniences with a closed-form image analysis algorithm obtained from an original formulation. Fourth, many conventional closed-form algorithms use approximations in their formulation and thus suffer low accuracy and confined applicability due to singularities; the proposed method avoids these disadvantages through an original formulation in which a closed-form algorithm is derived using exponential-form twist coordinates instead of approximations or local parameterizations such as Euler angles.
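
The closed-form formulation above relies on exponential-form twist coordinates rather than Euler angles. As a minimal illustration of that machinery (not the paper's algorithm), the rotation part of the exponential map is the Rodrigues formula R = I + sin(θ)K + (1 − cos θ)K², where K is the skew-symmetric matrix of the unit rotation axis:

```python
import math

def rodrigues(axis, theta):
    """Rotation matrix from a unit axis and angle via the exponential map."""
    kx, ky, kz = axis
    K = [[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]]
    K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    s, c = math.sin(theta), math.cos(theta)
    return [[I[i][j] + s * K[i][j] + (1 - c) * K2[i][j] for j in range(3)]
            for i in range(3)]

R = rodrigues((0, 0, 1), math.pi / 2)       # 90 deg about z maps x-axis to y-axis
print([round(v, 6) for v in (R[0][0], R[1][0], R[2][0])])  # [0.0, 1.0, 0.0]
```

Unlike Euler angles, this parameterization has no gimbal-lock singularity at interior angles, which is the property the closed-form derivation exploits.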

Platform Independent Game Development Using HTML5 Canvas (HTML5 캔버스를 이용한 플랫폼 독립적인 게임의 구현)

  • Jang, Seok-Woo;Huh, Moon-Haeng
    • Journal of the Korea Institute of Information and Communication Engineering, v.18 no.12, pp.3042-3048, 2014
  • Recently, HTML5 has drawn much attention because it is considered a next-generation web standard and can implement many graphics- and multimedia-related techniques in a web browser without separately installed programs. In this paper, we implement a game that is independent of platforms such as iOS and Android using the HTML5 canvas. In the game, the main character moves up, down, left, and right to avoid colliding with neighboring enemies. If the character collides with an enemy, the HP (hit point) gauge bar decreases; if the character obtains heart items, the gauge bar increases. In the future, we will add various items to the game and diversify its user interfaces by applying computer vision techniques such as gesture recognition.
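
The collision and HP-gauge logic described above can be sketched as follows (shown in Python for brevity, although the actual game runs in JavaScript on the HTML5 canvas; the rectangle sizes and damage/heal amounts are assumptions):

```python
def rects_collide(a, b):
    """Axis-aligned bounding-box overlap test for (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def update_hp(hp, character, enemies, hearts, damage=10, heal=5):
    """HP gauge falls on enemy contact and rises on heart pickup, clamped 0..100."""
    for e in enemies:
        if rects_collide(character, e):
            hp -= damage
    for h in hearts:
        if rects_collide(character, h):
            hp += heal
    return max(0, min(100, hp))

char = (50, 50, 16, 16)
print(update_hp(80, char, enemies=[(60, 60, 16, 16)], hearts=[]))  # 70
```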