• Title/Summary/Keyword: Hand Gesture


Design of Image Extraction Hardware for Hand Gesture Vision Recognition

  • Lee, Chang-Yong;Kwon, So-Young;Kim, Young-Hyung;Lee, Yong-Hwan
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.1
    • /
    • pp.71-83
    • /
    • 2020
  • In this paper, we propose a system that can detect the shape of a hand at high speed using an FPGA. Because real-time processing is essential, the hand-shape detection system is designed in Verilog HDL, a hardware description language capable of parallel processing, rather than in sequentially executing C++. Among the several approaches to hand gesture recognition, an image processing method is used. Since the human eye is sensitive to brightness, the YCbCr color model was selected from among various color representations to obtain results that are less affected by lighting. Using constraint conditions on the CbCr components, only the pixels corresponding to skin color are filtered out of the input image. To increase the speed of object recognition, a median filter removes noise from the input image; this filter is designed to compare values and extract the median simultaneously, reducing the amount of computation. For parallel processing, the design locates the center line of the hand while scanning and sorting the stored data: the line with the highest count is selected as the center line of the hand, the size of the hand is determined from that count, and the hand and arm regions are separated. The designed hardware circuit met the target operating frequency and gate count.
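The skin-color filtering and median-filtering steps described in this abstract can be sketched in software as follows. This is a minimal illustration, not the paper's Verilog design: the exact CbCr constraint bounds are not given in the abstract, so the thresholds below are assumed, and the sequential median loop stands in for the parallel compare-and-extract hardware.

```python
import numpy as np

# Hypothetical CbCr skin-color bounds; the paper's actual constraint
# conditions are not published in the abstract, so these are illustrative.
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def skin_mask(ycbcr):
    """Binary mask of pixels whose Cb and Cr components fall in the skin range."""
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]) &
            (cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]))

def median3x3(img):
    """3x3 median filter for impulse-noise removal. The hardware version
    compares values and extracts the median in parallel; here each
    neighborhood is simply reduced with np.median."""
    h, w = img.shape
    out = img.copy()  # border pixels are left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out
```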

An Implementation of a Dynamic Gesture Recognizer Based on WPS and Data Glove

  • Kim, Jung-Hyun;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.561-568
    • /
    • 2006
  • A WPS (Wearable Personal Station) for the next-generation PC can be defined as a core terminal for ubiquitous computing that includes information processing and network functions and overcomes spatial limitations in acquiring new information. As a way of acquiring meaningful dynamic gesture data from haptic devices, a traditional desktop-PC-based gesture recognizer using a wired communication module has several restrictions, such as spatial constraints, complexity of the transmission media (cable elements), limitation of motion, and inconvenience of use. Accordingly, to overcome these problems, we implement a hand gesture recognition system using a fuzzy algorithm and a neural network for the Post PC (an embedded ubiquitous environment using a Bluetooth module and WPS). We also propose the most efficient and reasonable hand gesture recognition interface for the Post PC through evaluation and analysis of the performance of each recognition system. The proposed system consists of three modules: 1) a gesture input module that processes the motion of the dynamic hand into input data; 2) a Relational Database Management System (hereafter, RDBMS) module that segments significant gestures from the input data; and 3) two different recognition modules, a fuzzy max-min module and a neural network module, that recognize significant gestures within continuous dynamic gestures. Experimental results show an average recognition rate of 98.8% for the fuzzy max-min module and 96.7% for the neural network module on significant dynamic gestures.
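The fuzzy max-min module mentioned above can be sketched as a max-min composition between an input membership vector and per-gesture templates. This is an illustrative sketch only: the paper's glove features and membership functions are not described in the abstract, so the vectors here are assumed.

```python
# Minimal fuzzy max-min matcher. Each gesture class is represented by a
# template of membership degrees in [0, 1]; the input is scored against a
# template by taking the element-wise minimum, then the maximum overall.
def fuzzy_max_min(input_vec, templates):
    """Return (label, score) of the best-matching gesture template."""
    best_label, best_score = None, -1.0
    for label, tmpl in templates.items():
        # max-min composition of input and template memberships
        score = max(min(a, b) for a, b in zip(input_vec, tmpl))
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```

For example, an input whose memberships lean toward the "open hand" template would be classified as that gesture even if no template matches exactly.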

A Controlled Study of Interactive Exhibit based on Gesture Image Recognition

  • Cha, Jaesang;Kang, Joonsang;Rho, Jung-Kyu;Choi, Jungwon;Koo, Eunja
    • Journal of Satellite, Information and Communications
    • /
    • v.9 no.1
    • /
    • pp.1-5
    • /
    • 2014
  • Recently, buildings have been developing rapidly and becoming more intelligent along with the development of industry, and people seek comfort, efficiency, and convenience in office and living environments. People are also able to use a variety of devices: smart TVs and smartphones have become widespread, so interest in interaction between humans and devices has grown. Various interaction methods have been studied, but using a physical controller involves some discomfort and limitations. In this paper, a user can easily interact with and control LEDs using a Kinect and hand gestures, without a controller. We designed an interface that controls LEDs using the joint information obtained from the Kinect; with the implemented interface, a user can control individual LEDs through hand movements. We expect the developed interface to be useful for LED control and in various other fields.
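The mapping from Kinect joint information to LED commands could look something like the sketch below. The paper's actual joint set, gesture vocabulary, and thresholds are not given in the abstract, so the hand/shoulder comparison and the command names here are purely hypothetical (Kinect skeleton coordinates, where y increases upward, are assumed).

```python
# Hypothetical gesture-to-command mapping from two tracked joints,
# each given as an (x, y) position in skeleton space.
def gesture_command(hand, shoulder, y_margin=0.2):
    """Classify a simple raise/lower hand gesture relative to the shoulder."""
    if hand[1] > shoulder[1] + y_margin:   # hand raised well above shoulder
        return "LED_ON"
    if hand[1] < shoulder[1] - y_margin:   # hand lowered well below shoulder
        return "LED_OFF"
    return "NO_OP"                         # within the dead zone: do nothing
```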

Gesture Recognition Using Stereo Tracking Initiator and HMM for Tele-Operation

  • Jeong, Ji-Won;Lee, Yong-Beom;Jin, Seong-Il
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.8
    • /
    • pp.2262-2270
    • /
    • 1999
  • In this paper, we describe a gesture recognition algorithm using a computer vision sensor and HMMs. Automatic hand region extraction is proposed to initialize the tracking of tele-operation gestures. For this, distance information (a disparity map) obtained by stereo matching of the initial left and right images is used to isolate the hand region from the scene. The PDOE (positive difference of edges) feature images adopted here were found to be robust against noise and background brightness. The KNU/KAERI (K/K) gesture instruction set is defined for tele-operation in nuclear power plants. A composite recognition model, constructed by concatenating three gesture instruction models covering pre-orders, basic orders, and post-orders, is proposed and identified by discrete HMMs. Our experimental results showed that consecutive orders composed of two or more instructions are correctly recognized at a rate above 97%.
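Discrete-HMM recognition of the kind described above scores a quantized observation sequence against each gesture model and picks the most likely one. The sketch below uses the standard forward algorithm; the K/K instruction models and their trained parameters are not given in the abstract, so the toy two-state models in the usage example are illustrative only.

```python
import math

def forward_likelihood(obs, pi, A, B):
    """Likelihood of a discrete observation sequence under an HMM.
    pi: initial state probabilities, A: state transition matrix,
    B: per-state emission probabilities over the discrete symbol set."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                 for i in range(n)]
    return sum(alpha)

def recognize(obs, models):
    """Return the gesture label whose model gives the highest likelihood."""
    return max(models, key=lambda m: forward_likelihood(obs, *models[m]))
```

In practice each order model would be trained on quantized feature sequences, and the concatenated composite model would score pre-order, basic-order, and post-order segments in turn.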


Study on EMI Elimination and PLN Application in ELF Band for Remote Sensing with Electric Potential Sensor

  • Jang, Jin Soo;Kim, Young Chul
    • Smart Media Journal
    • /
    • v.4 no.1
    • /
    • pp.33-38
    • /
    • 2015
  • In this paper, we propose methods not only to eliminate ELF (Extremely Low Frequency) band EMI (Electro-Magnetic Interference) noise to extend the recognition distance, but also to utilize the PLN to detect the starting instant of a hand gesture using an electric potential sensor. First, we measure the strength of the electric field generated by smart devices such as TVs and phones, and minimize EMI through efficient arrangement of the sensors. Meanwhile, we utilize the 60 Hz PLN to extract the starting point of a hand gesture. Thereafter, we eliminate the PLN generated in the smart device and the sensor circuitry, and shield the sensors from electric noise generated by the devices. Finally, after analyzing the frequency components of the target's gestures, we use a low-pass filter and a Kalman filter to eliminate the remaining electric noise. We analyze and evaluate the proposed ELF-band EMI elimination method for non-contact remote sensing with the EPS (Electric Potential Sensor). Combined with the gesture starting-point detection technique, the recognition distance for gestures was shown to extend to more than 3 m, which is critical for real applications.
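The Kalman filtering stage used to remove the residual electric noise can be sketched, for a scalar sensor signal, as below. The paper's actual filter parameters are not published in the abstract, so the process and measurement noise values here are assumed.

```python
# One-dimensional Kalman filter for smoothing a scalar sensor signal.
# q: process noise variance (how fast the true signal is allowed to drift),
# r: measurement noise variance; both values are illustrative assumptions.
def kalman_1d(measurements, q=1e-4, r=0.1):
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    out = [x]
    for z in measurements[1:]:
        p = p + q                 # predict: state modeled as nearly constant
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the measurement residual
        p = (1 - k) * p
        out.append(x)
    return out
```

In the pipeline described above this would run after the low-pass filter, on the signal that remains once the 60 Hz PLN has been used for start-point detection and then removed.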

A Study on Gesture Interface through User Experience

  • Yoon, Ki Tae;Cho, Eel Hea;Lee, Jooyoup
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.6
    • /
    • pp.839-849
    • /
    • 2017
  • Recently, the role of the kitchen has evolved from a space for mere survival into one that reflects contemporary life and culture. Along with these changes, the use of IoT technology is spreading, and new smart devices for the kitchen are being developed and distributed. The user experience of these smart devices is also becoming important. For natural interaction between a user and a computer, better interaction can be expected based on context awareness. This paper examines a Natural User Interface (NUI) that does not require touching the device, based on the user interface (UI) of a smart device used in the kitchen. In this method, image processing is used to recognize the user's hand gestures with a camera attached to the device, and the recognized hand shapes are applied to the interface. The gestures used in this study are proposed according to the user's context and situation, and five kinds of gestures are classified and used in the interface.

Implementation of a Gesture Recognition Signage Platform for Factory Work Environments

  • Rho, Jungkyu
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.3
    • /
    • pp.171-176
    • /
    • 2020
  • This paper presents an implementation of a gesture recognition platform that can be used in factory workplaces. The platform consists of signages that display workers' job orders and a control center used to manage work orders for factory workers. Each worker does not need to carry work order documents and can browse the assigned work orders on the signage at his or her workplace. The contents of the signage are controlled by the worker's hand and arm gestures: gestures are extracted from body movements tracked by a 3D depth camera and converted into commands that control the displayed content. Using the control center, the factory manager can assign tasks to each worker, upload work order documents to the system, and monitor each worker's progress. The implementation has been applied experimentally to a machining factory workplace. This platform provides convenience for factory workers at their workplaces and improves the security of technical documents, and it can also be used to build smart factories.

Study about Windows System Control Using Gesture and Speech Recognition

  • 김주홍;진성일이남호이용범
    • Proceedings of the IEEK Conference
    • /
    • 1998.10a
    • /
    • pp.1289-1292
    • /
    • 1998
  • HCI (human-computer interface) technologies have often been implemented using a mouse, keyboard, and joystick. Because the mouse and keyboard are useful only in limited situations, more natural HCI methods, such as speech-based and gesture-based methods, have recently attracted wide attention. In this paper, we present a multi-modal input system to control the Windows system for practical use of a multimedia computer. Our multi-modal input system consists of three parts. The first is a virtual hand mouse, which replaces mouse control with a set of gestures. The second is Windows control using speech recognition. The third is Windows control using gesture recognition. We introduce neural network and HMM methods to recognize speech and gestures. The results of the three parts interface directly to the CPU and through Windows.


Dual Autostereoscopic Display Platform for Multi-user Collaboration with Natural Interaction

  • Kim, Hye-Mi;Lee, Gun-A.;Yang, Ung-Yeon;Kwak, Tae-Jin;Kim, Ki-Hong
    • ETRI Journal
    • /
    • v.34 no.3
    • /
    • pp.466-469
    • /
    • 2012
  • In this letter, we propose a dual autostereoscopic display platform employing a natural interaction method, which is useful for sharing visual data among users. To provide 3D visualization of a model to users who collaborate with each other, a beamsplitter is used with a pair of autostereoscopic displays, providing the visual illusion of a floating 3D image. To interact with the virtual object, we track the user's hands with a depth camera. The gesture recognition technique we use operates without any initialization process, such as specific poses or gestures, and supports several commands for controlling virtual objects. Experimental results show that our system performs well in visualizing 3D models in real time and handling them under unconstrained conditions, such as complicated backgrounds or a user wearing short sleeves.

Effect of Input Data Video Interval and Input Data Image Similarity on Learning Accuracy in 3D-CNN

  • Kim, Heeil;Chung, Yeongjee
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.2
    • /
    • pp.208-217
    • /
    • 2021
  • 3D-CNN is one of the deep learning techniques for learning from time-series data. However, such three-dimensional learning can generate many parameters, requiring high-performance hardware or significantly slowing learning. We use a 3D-CNN to learn hand gestures, find the parameters that yield the highest accuracy, and then analyze how the accuracy of the 3D-CNN varies with changes to the input data, without any structural changes to the network. First, we choose the interval of the input data, which adjusts the ratio of the stop interval to the gesture interval. Second, the mean inter-frame similarity is obtained by measuring and normalizing the similarity of images through inter-class 2D cross-correlation analysis. The experiments demonstrate that changes in the input data affect learning accuracy without structural changes to the 3D-CNN. In this paper, we proposed two methods of changing the input data, and the experimental results show that the input data can affect the accuracy of the model.
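The inter-frame similarity measure described above can be sketched as a normalized 2D cross-correlation (at zero lag) between grayscale frames, averaged over consecutive frame pairs. The paper's exact normalization is not specified in the abstract, so the zero-mean form below is an assumption.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size frames,
    ranging from -1 (inverted) to 1 (identical up to scale and offset)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    # undefined for constant frames; report zero similarity in that case
    return float((a * b).sum() / denom) if denom else 0.0

def mean_similarity(frames):
    """Average NCC over consecutive frame pairs in a clip."""
    pairs = zip(frames[:-1], frames[1:])
    return float(np.mean([ncc(a, b) for a, b in pairs]))
```

A clip of near-duplicate frames scores close to 1, so this statistic captures how redundant the selected input interval is.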