• Title/Summary/Keyword: Gesture-based Interaction

MPEG-U-based Advanced User Interaction Interface Using Hand Posture Recognition

  • Han, Gukhee;Choi, Haechul
    • IEIE Transactions on Smart Processing and Computing, v.5 no.4, pp.267-273, 2016
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the human-computer interaction (HCI) field. This paper introduces a hand posture recognition method using a depth camera. Moreover, the method is incorporated with the Moving Picture Experts Group Rich Media User Interface (MPEG-U) Advanced User Interaction (AUI) Interface (MPEG-U part 2), which can provide a natural interface on a variety of devices. The proposed method initially detects the positions and lengths of all open fingers, and then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when a user presents a gesture representing a pattern in the AUI data format specified in MPEG-U part 2. The AUI interface represents the user's hand posture in the compliant MPEG-U schema structure. Experimental results demonstrate the performance of the hand posture recognition system and verify that the AUI interface is compatible with the MPEG-U standard.
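
The abstract does not spell out the finger-detection algorithm itself. As a rough illustration of the kind of depth-camera pipeline it describes, the following hypothetical Python/OpenCV sketch thresholds a hand out of a depth map and counts extended fingers via convexity defects; the depth range and thresholds are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def count_open_fingers(depth_map, near=400, far=600):
    """Count extended fingers in a depth image (mm units assumed).

    Hypothetical sketch: isolate the hand by depth range, then count
    the convexity defects (valleys) between fingers.
    """
    # Keep only pixels within the assumed hand depth range
    mask = cv2.inRange(depth_map, near, far)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)       # largest blob = hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    fingers = 0
    for s, e, f, d in defects[:, 0]:
        # Deep defects correspond to the valleys between extended fingers
        if d / 256.0 > 20.0:                        # depth threshold in px
            fingers += 1
    return fingers + 1 if fingers > 0 else 0
```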

Dynamic Bayesian Network based Two-Hand Gesture Recognition (동적 베이스망 기반의 양손 제스처 인식)

  • Suk, Heung-Il;Sin, Bong-Kee
    • Journal of KIISE:Software and Applications, v.35 no.4, pp.265-279, 2008
  • The idea of using hand gestures for human-computer interaction is not new and has been studied intensively during the last decade, with significant qualitative progress that has nevertheless fallen short of expectations. This paper describes a dynamic Bayesian network (DBN) based approach to both one-hand and two-hand gestures. Unlike wired glove-based approaches, the success of camera-based methods depends greatly on the image processing and feature extraction results. The proposed DBN-based inference is therefore preceded by fail-safe steps of skin extraction and modeling, and motion tracking. A new gesture recognition model for a set of both one-hand and two-hand gestures is then proposed based on the dynamic Bayesian network framework, which makes it easy to represent the relationships among features and to incorporate new information into a model. In an experiment with ten isolated gestures, we obtained a recognition rate of 99.59% with cross-validation. The proposed model and the related approach are believed to have strong potential for successful application to other related problems such as sign language recognition.
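
A dynamic Bayesian network generalizes the hidden Markov model, and gesture classification in this family of models amounts to scoring each candidate gesture model against the observed feature sequence. A minimal sketch of that scoring step, using the scaled forward algorithm over a discrete HMM (hypothetical parameters, not the paper's actual network):

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM.

    obs: sequence of observation symbol indices
    pi:  (S,) initial state distribution
    A:   (S, S) transitions, A[i, j] = P(s_t = j | s_{t-1} = i)
    B:   (S, O) emissions,   B[j, k] = P(o_t = k | s_t = j)
    """
    alpha = pi * B[:, obs[0]]              # initialize with the first frame
    scale = alpha.sum()
    log_prob = np.log(scale)
    alpha /= scale                         # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # propagate state belief and emit
        scale = alpha.sum()
        log_prob += np.log(scale)
        alpha /= scale
    return log_prob

# Classification: pick the gesture model with the highest likelihood, e.g.
# gesture = max(models, key=lambda m: forward_log_likelihood(obs, *m))
```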

Hand Gesture based Manipulation of Meeting Data in Teleconference (핸드제스처를 이용한 원격미팅 자료 인터페이스)

  • Song, Je-Hoon;Choi, Ki-Ho;Kim, Jong-Won;Lee, Yong-Gu
    • Korean Journal of Computational Design and Engineering, v.12 no.2, pp.126-136, 2007
  • Teleconferences have been used in business sectors to reduce traveling costs. Traditionally, specialized telephones that enabled multiparty conversations were used. With the introduction of high-speed networks, we now have high-definition video that adds more realism in the presence of counterparts who could be thousands of miles away. This paper presents a new technology that adds even more realism by telecommunicating with hand gestures. This technology is part of a teleconference system named SMS (Smart Meeting Space). In SMS, a person can use hand gestures to manipulate meeting data that could be in the form of text, audio, video, or 3D shapes. For detecting hand gestures, a machine learning algorithm called SVM (Support Vector Machine) has been used. For the prototype system, a 3D interaction environment has been implemented with OpenGL™, where a 3D human skull model can be grasped and moved in 6-DOF during a remote conversation between distant persons.
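
As a hedged sketch of the SVM stage (the feature files and feature extraction are assumptions; the paper's actual features are not given in the abstract), gesture classification with scikit-learn might look like this:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical: precomputed per-frame hand feature vectors and labels
# (e.g. grasp / release / rotate) saved from a capture session.
X_train = np.load("gesture_features.npy")   # (N, D) feature vectors
y_train = np.load("gesture_labels.npy")     # (N,) gesture class ids

# Standardize features, then fit an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

def classify(frame_features):
    """Classify one tracked-hand feature vector; the predicted gesture
    would then drive the 6-DOF manipulation of the 3D model."""
    return clf.predict(frame_features.reshape(1, -1))[0]
```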

Problem Inference System of Interactive Digital Contents Based on Visitor Facial Expression and Gesture Recognition (관람객 얼굴 표정 및 제스쳐 인식 기반 인터렉티브 디지털콘텐츠의 문제점 추론 시스템)

  • Kwon, Do-Hyung;Yu, Jeong-Min
    • Proceedings of the Korean Society of Computer Information Conference, 2019.07a, pp.375-377, 2019
  • In this paper, we propose a system that infers problems in interactive digital content based on visitor facial expression and gesture recognition. Visitor behavior patterns, from experiencing the content until moving to another location, are classified into four problem categories. For this classification, we distinguished three facial expressions and five gestures that visitors may exhibit while experiencing the content. In the experiments, the AdaBoost algorithm was used to detect faces and hands in the input video, and a detection model was created by retraining MobileNet v1 to recognize facial expressions and gestures. By inferring the problems of interactive digital content, this work enables user-centered design in future content improvement and production and can promote the creation of high-quality content.
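
OpenCV's Haar cascade detector is a stock AdaBoost-based face detector, so a plausible sketch of the detection front end looks as follows; the retrained MobileNet v1 classifier is only stubbed out, and the video file name is assumed.

```python
import cv2

# AdaBoost-trained Haar cascade shipped with OpenCV for face detection
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("visitor_recording.mp4")   # assumed input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    for (x, y, w, h) in faces:
        face_crop = frame[y:y + h, x:x + w]
        # The cropped region would be fed to the retrained classifier:
        # expression = mobilenet_classify(face_crop)   # hypothetical stub
cap.release()
```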

A Study on Machine Learning-Based Real-Time Gesture Classification Using EMG Data (EMG 데이터를 이용한 머신러닝 기반 실시간 제스처 분류 연구)

  • Ha-Je Park;Hee-Young Yang;So-Jin Choi;Dae-Yeon Kim;Choon-Sung Nam
    • Journal of Internet Computing and Services, v.25 no.2, pp.57-67, 2024
  • This paper explores the potential of electromyography (EMG) as a means of gesture recognition for user input in gesture-based interaction. EMG uses small electrodes within muscles to detect and interpret user movements, presenting a viable input method. To classify user gestures based on EMG data, machine learning techniques are employed, which necessitates preprocessing the raw EMG data to extract relevant features. EMG characteristics can be expressed through formulas such as Integrated EMG (IEMG), Mean Absolute Value (MAV), Simple Square Integral (SSI), Variance (VAR), and Root Mean Square (RMS). Additionally, determining a suitable time window for gesture classification is crucial, considering the perceptual, cognitive, and response times required for user input. To address this, segment sizes are varied from a minimum of 100 ms to a maximum of 1,000 ms, and feature extraction is performed to identify the optimal segment size for gesture classification. Notably, training employs overlapped segmentation to reduce the interval between data points and thereby increase the quantity of training data. Using this approach, the paper trains and evaluates four machine learning models (KNN, SVC, RF, XGBoost), achieving accuracy rates exceeding 96% for all models in real-time gesture input scenarios with a maximum segment size of 200 ms.
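
The five time-domain features named above have standard definitions: IEMG = Σ|x_i|, MAV = IEMG/N, SSI = Σx_i², VAR = SSI/(N-1), and RMS = √(SSI/N). A small NumPy sketch of these features plus the overlapped segmentation described in the abstract (the sampling rate and hop size in the usage comment are assumptions):

```python
import numpy as np

def emg_features(x):
    """Standard time-domain EMG features for one segment x (1-D array)."""
    n = len(x)
    iemg = np.sum(np.abs(x))        # Integrated EMG
    mav = iemg / n                  # Mean Absolute Value
    ssi = np.sum(x ** 2)            # Simple Square Integral
    var = ssi / (n - 1)             # Variance (zero-mean signal assumed)
    rms = np.sqrt(ssi / n)          # Root Mean Square
    return np.array([iemg, mav, ssi, var, rms])

def overlapped_segments(signal, seg_len, step):
    """Overlapped segmentation: a window of seg_len samples every step
    samples, which multiplies the amount of training data."""
    return np.stack([signal[i:i + seg_len]
                     for i in range(0, len(signal) - seg_len + 1, step)])

# e.g. 200 ms windows at an assumed 1 kHz sampling rate with a 50 ms hop:
# X = np.apply_along_axis(emg_features, 1, overlapped_segments(emg, 200, 50))
```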

OWC based Smart TV Remote Controller Design Using Flashlight

  • Mariappan, Vinayagam;Lee, Minwoo;Choi, Byunghoon;Kim, Jooseok;Lee, Jisung;Choi, Seongjhin
    • International Journal of Internet, Broadcasting and Communication, v.10 no.1, pp.71-76, 2018
  • The technology convergence of television, communication, and computing devices enables a rich social and entertainment experience through smart TVs in personal living spaces. The powerful smart TV computing platform can provide various user interaction interfaces, such as IR remote control, web-based control, and body-gesture-based control. The user control methods currently used for smart TVs are not efficient or user-friendly for accessing different types of media content and services, so an advanced way to control and access the smart TV through an easy user interface is strongly required. This paper proposes an optical wireless communication (OWC) based remote controller design for smart TVs using the smart device flashlight. In this approach, the user's smart device acts as a remote controller with a touch-based interactive application and transfers the user control data to the smart TV through the flashlight using visible light communication. The smart TV's built-in camera follows the optical camera communication (OCC) principle to decode the data and control the smart TV's user access functions accordingly. Unlike radio frequency (RF) radiation, the proposed method poses no harm to human health, and it is simple to use since the user does not need any gesture movements to control the smart TV.
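
The abstract does not give the link's frame format or timing. As a loosely hedged illustration of the general idea, on-off keying a command byte through the flashlight with a camera-friendly bit period might look like this (all constants and the `set_torch` hook are hypothetical, not the paper's protocol):

```python
import time

PREAMBLE = [1, 0, 1, 0, 1, 0]   # alternating pattern for the camera to lock onto
BIT_MS = 100                    # ~3 camera frames per bit at 30 fps

def frame_command(cmd_byte):
    """Build the pulse sequence for one 8-bit command, MSB first."""
    bits = [(cmd_byte >> i) & 1 for i in range(7, -1, -1)]
    return PREAMBLE + bits

def transmit(pulses, set_torch):
    """Drive the flashlight; set_torch(on: bool) is the platform hook."""
    for bit in pulses:
        set_torch(bool(bit))
        time.sleep(BIT_MS / 1000.0)
    set_torch(False)            # leave the torch off after the frame

# transmit(frame_command(0x2B), set_torch=my_device_torch)  # e.g. "volume up"
```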

Vision- Based Finger Spelling Recognition for Korean Sign Language

  • Park Jun;Lee Dae-hyun
    • Journal of Korea Multimedia Society, v.8 no.6, pp.768-775, 2005
  • Since sign languages are the main means of communication among hearing-impaired people, there are communication difficulties between speaking-oriented people and sign-language-oriented people. Automated sign-language recognition may resolve these communication problems. In sign languages, finger spelling is used to spell names and words that are not listed in the dictionary. There have been research activities for gesture and posture recognition using glove-based devices. However, these devices are often expensive, cumbersome, and inadequate for recognizing elaborate finger spelling. The use of colored patches or gloves also causes discomfort. In this paper, a vision-based finger spelling recognition system is introduced. In our method, captured hand region images are separated from the background using a skin detection algorithm, assuming that there are no skin-colored objects in the background. Hand postures are then recognized using a two-dimensional grid analysis method. Our recognition system is not sensitive to the size or the rotation of the input posture images. By optimizing the weights of the posture features using a genetic algorithm, our system achieved high accuracy that matches other systems using devices or colored gloves. We applied our posture recognition system to Korean Sign Language, achieving better than 93% accuracy.
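
A hedged sketch of the two stages the abstract names, skin-color segmentation followed by a 2-D grid descriptor of the hand region (the YCrCb thresholds and grid size are common defaults, not the paper's values):

```python
import cv2
import numpy as np

def extract_grid_features(bgr_image, grid=8):
    """Segment skin pixels, crop to the hand's bounding box, and return
    per-cell occupancy over a grid x grid layout as the posture feature."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # skin pixels

    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    # Crop to the bounding box so the descriptor is size-invariant
    hand = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    hand = cv2.resize(hand, (grid * 8, grid * 8))

    # Fraction of skin pixels in each grid cell
    cells = hand.reshape(grid, 8, grid, 8).mean(axis=(1, 3)) / 255.0
    return cells.flatten()               # (grid*grid,) feature vector
```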

Human-Computer Natural User Interface Based on Hand Motion Detection and Tracking

  • Xu, Wenkai;Lee, Eung-Joo
    • Journal of Korea Multimedia Society, v.15 no.4, pp.501-507, 2012
  • Human body motion is a non-verbal form of interaction that can bridge the real world and virtual worlds. In this paper, we present a study on a natural user interface (NUI) for hand motion recognition using RGB color and depth information from Microsoft's Kinect camera. To make hand tracking and gesture recognition largely independent of the working environment, lighting, and users' skin color, libraries for natural interaction and the Kinect device, which provides RGB images of the environment along with a depth map of the scene, were used. An improved CamShift tracking algorithm is used to track hand motion; experimental results show that it outperforms the original CamShift algorithm in stability and accuracy.
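
For reference, baseline CamShift hand tracking with OpenCV looks like the sketch below; the paper's improved variant is not described in the abstract, and the initial hand window here is an assumption.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
x, y, w, h = 300, 200, 100, 100            # assumed initial hand window

# Hue histogram of the initial hand region drives the back-projection
roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

track_window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CamShift adapts the window size and orientation each frame
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("hand", frame)
    if cv2.waitKey(30) & 0xFF == 27:       # Esc quits
        break
cap.release()
```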

The Design and Implementation of Virtual Studio

  • Sul, Chang-Whan;Wohn, Kwang-Yoen
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 1996.06b, pp.83-87, 1996
  • A virtual reality system using video images is designed and implemented. A participant having 2½ DOF can interact with computer-generated virtual objects using her/his full body posture and gestures in the 3D virtual environment. The system extracts the necessary participant-related information by video-based sensing and simulates realistic interaction, such as collision detection, in the virtual environment. The resulting scene, obtained by compositing the video image of the participant with the virtual environment, is updated in near real time.
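
As a minimal illustration of the collision-detection step such a system needs (the bounding volumes and poses below are invented for the example), a box-sphere overlap test:

```python
import numpy as np

def box_sphere_collision(box_min, box_max, center, radius):
    """True if an axis-aligned box intersects a sphere."""
    closest = np.clip(center, box_min, box_max)    # nearest box point
    return np.sum((center - closest) ** 2) <= radius ** 2

# Hypothetical: participant approximated by a bounding box, virtual
# object by a sphere; an overlap triggers an interaction event.
participant = (np.array([0.0, 0.0, 0.0]), np.array([0.6, 1.8, 0.4]))
ball = (np.array([0.5, 1.0, 0.2]), 0.3)
if box_sphere_collision(*participant, *ball):
    print("collision: trigger virtual-object response")
```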

Fast Hand-Gesture Recognition Algorithm For Embedded System (임베디드 시스템을 위한 고속의 손동작 인식 알고리즘)

  • Hwang, Dong-Hyun;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering, v.21 no.7, pp.1349-1354, 2017
  • In this paper, we propose a fast hand-gesture recognition algorithm for embedded systems. Existing hand-gesture recognition algorithms are difficult to use in low-performance systems such as embedded systems and mobile devices because of the high computational complexity of contour tracing, which extracts all points of the hand contour. Instead of contour tracing, the proposed algorithm uses a concentric-circle tracing method to estimate the abstracted contour of the fingers, then classifies hand gestures by extracting features. The proposed algorithm has an average recognition rate of 95% and an average execution time of 1.29 ms, a performance improvement of up to 44% compared with the algorithm using the existing contour tracing method. This confirms that the algorithm can be used in low-performance systems such as embedded systems and mobile devices.
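
The abstract describes sampling along concentric circles instead of tracing the full contour. A hypothetical sketch of one such circle test, counting how many fingers a circle around the palm center crosses (the palm center and radius would come from an earlier detection step):

```python
import numpy as np

def fingers_on_circle(mask, center, radius, samples=360):
    """Sample a binary hand mask along a circle around the palm center
    and count the 0->1 transitions, i.e. how many fingers the circle
    crosses. Only the sampled points are touched, which is far cheaper
    than tracing every contour point."""
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    xs = (center[0] + radius * np.cos(angles)).astype(int)
    ys = (center[1] + radius * np.sin(angles)).astype(int)

    h, w = mask.shape
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    ring = np.zeros(samples, dtype=np.uint8)
    ring[valid] = mask[ys[valid], xs[valid]] > 0

    # Count rising edges around the ring (wrap-around included)
    return int(np.sum((ring == 1) & (np.roll(ring, 1) == 0)))
```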