• Title/Summary/Keyword: Gesture based interface

Search Result 168, Processing Time 0.025 seconds

Development of Multi Card Touch based Interactive Arcade Game System (멀티 카드 터치기반 인터랙티브 아케이드 게임 시스템 구현)

  • Lee, Dong-Hoon;Jo, Jae-Ik;Yun, Tae-Soo
    • Journal of Korea Entertainment Industry Association
    • /
    • v.5 no.2
    • /
    • pp.87-95
    • /
    • 2011
  • Recently, tangible game environments have become a major issue thanks to the development of various interactive interfaces. In this paper, we propose a multi-card-touch-based interactive arcade system that uses a marker recognition interface and a multi-touch interaction interface. In our system, each card's location and orientation are recognized through a DI-based recognition algorithm. In addition, the user's hand-gesture tracking information drives various interaction metaphors. The system offers users higher engagement and a new experience, so it can be used in tangible arcade game machines.

Development of Hand Recognition Interface for Interactive Digital Signage (인터렉티브 디지털 사이니지를 위한 손 인식 인터페이스 개발)

  • Lee, Jung-Wun;Cha, Kyung-Ae;Ryu, Jeong-Tak
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.22 no.3
    • /
    • pp.1-11
    • /
    • 2017
  • There is growing interest in motion recognition for recognizing human motion in camera images. As a result, research is being actively conducted on controlling digital devices with gestures at a distance. A gesture-based interface can be used effectively in the digital signage industry, where advertisements are expected to reach the public in various places. Since digital signage content can be controlled easily through non-contact hand motions, it is possible to provide advertising information of interest to a large number of people, thereby creating opportunities that lead to sales. Therefore, we propose a digital signage content control system based on hand movement at a certain distance, which can be used effectively in the development of interactive advertising media.

User-centric Immersible and Interactive Electronic Book based on the Interface of Tabletop Display (테이블탑 디스플레이 기반 사용자 중심의 실감형 상호작용 전자책)

  • Song, Dae-Hyeon;Park, Jae-Wan;Lee, Chil-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.6
    • /
    • pp.117-125
    • /
    • 2009
  • In this paper, we propose a user-centric, immersive, and interactive electronic book based on a tabletop display interface. Electronic books are usually used by readers who want text combined with multimedia content such as video, audio, and animation. Because the system is based on a tabletop display platform, conventional input devices such as a keyboard and mouse are not needed. Users interact with the content through finger-touch gestures defined for the tabletop display interface, which makes using the electronic book effective and engaging. The interface supports multiple users, enabling more diverse effects than conventional electronic content made for a single user. Our method offers a new approach to the conventional electronic book: it can define user-centric gestures and helps users interact with the book easily. We expect it to be useful for many edutainment contents.

Human-Computer Natural User Interface Based on Hand Motion Detection and Tracking

  • Xu, Wenkai;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.4
    • /
    • pp.501-507
    • /
    • 2012
  • Human body motion is a non-verbal form of interaction or movement that can be used to bridge the real world and the virtual world. In this paper, we present a study on a natural user interface (NUI) for human hand motion recognition using RGB color information and depth information from a Microsoft Kinect camera. So that hand tracking and gesture recognition have no major dependency on the work environment, lighting, or the user's skin color, we used libraries designed for natural interaction together with the Kinect device, which provides RGB images of the environment and a depth map of the scene. An improved Camshift algorithm is used to track hand motion; experimental results show that it outperforms the standard Camshift algorithm, with higher stability and accuracy.
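At the core of Camshift is a mean-shift step: the search window is repeatedly moved to the centroid of the target-probability mass it covers. A minimal sketch of that step, on a synthetic probability map rather than a Kinect back-projection (the map, window size, and convergence threshold here are illustrative, not from the paper):

```python
import numpy as np

def mean_shift(prob, window, n_iter=10, eps=1.0):
    """Shift a search window to the centroid of the probability
    mass it covers until it stops moving (the mean-shift step
    that Camshift builds on).
    prob   : 2-D array of per-pixel target probabilities
    window : (x, y, w, h) initial search window
    """
    x, y, w, h = window
    for _ in range(n_iter):
        roi = prob[y:y + h, x:x + w]
        m00 = roi.sum()
        if m00 == 0:                      # no target mass under the window
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (xs * roi).sum() / m00       # centroid inside the window
        cy = (ys * roi).sum() / m00
        nx = int(round(x + cx - w / 2))   # recenter the window on it
        ny = int(round(y + cy - h / 2))
        nx = min(max(nx, 0), prob.shape[1] - w)
        ny = min(max(ny, 0), prob.shape[0] - h)
        if abs(nx - x) < eps and abs(ny - y) < eps:
            break
        x, y = nx, ny
    return (x, y, w, h)

# Synthetic "hand" blob centered near column 40, row 30;
# start the window off to the side and let it converge.
prob = np.zeros((60, 80))
prob[25:36, 35:46] = 1.0
print(mean_shift(prob, (30, 20, 12, 12)))  # → (34, 24, 12, 12)
```

Camshift proper additionally adapts the window size and orientation from the moments at each step, which is what makes it robust to the hand moving toward or away from the camera.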

Design of OpenCV based Finger Recognition System using binary processing and histogram graph

  • Baek, Yeong-Tae;Lee, Se-Hoon;Kim, Ji-Seong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.2
    • /
    • pp.17-23
    • /
    • 2016
  • NUI is a motion interface that controls a device using the user's body rather than HID devices such as a mouse and keyboard. In this paper, we use a Pi Camera and sensors connected to a Raspberry Pi, a small embedded board. Using OpenCV algorithms optimized for image recognition and computer vision, we implement an NUI device with a more human-friendly and intuitive interface than traditional HID equipment. Motion is detected by comparison operations, and we propose a more advanced motion sensing and recognition system built on the Raspberry Pi.
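A much-simplified sketch of the binary-processing-plus-histogram idea in the title: after the frame is binarized (skin pixels set to 1), a column-sum histogram across the finger region shows one run of tall columns per raised finger. The toy mask and threshold below are made up for illustration; the real system would binarize Pi Camera frames with OpenCV thresholding instead.

```python
def column_histogram(mask):
    """Sum each column of a binary image (list of 0/1 rows)."""
    return [sum(col) for col in zip(*mask)]

def count_fingers(hist, min_height=2):
    """Count runs of columns whose histogram value reaches min_height;
    each run corresponds to one raised finger crossing the scan region."""
    fingers, in_run = 0, False
    for v in hist:
        if v >= min_height and not in_run:
            fingers, in_run = fingers + 1, True
        elif v < min_height:
            in_run = False
    return fingers

# Toy binary mask: three "fingers" of different widths above a palm row.
mask = [
    [0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  # palm row
]
print(count_fingers(column_histogram(mask)))  # → 3
```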

A Study on Structuring and Classification of Input Interaction

  • Pan, Young-Hwan
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.493-498
    • /
    • 2012
  • Objective: The purpose of this study is to suggest a hierarchical structure with three layers: input task, input interaction, and input device. Background: Understanding input interaction is very helpful when designing an interface. Method: We built a three-layered input structure model based on an empirical approach and applied it to gesture interaction with a TV. Result: We categorized input tasks into six elementary tasks: select, position, orient, path, text, and quantify. The five interactions described in this paper can accomplish the full range of input interaction, although the criteria for classification were not consistent. We analyzed the Microsoft Kinect with this structure. Conclusion: The input interactions of command, 4-way, cursor, touch, and intelligence form a basic interaction structure for understanding input systems. Application: The model is expected to be useful for designing new input interactions and user interfaces.

MPEG-U based Advanced User Interaction Interface System Using Hand Posture Recognition (손 자세 인식을 이용한 MPEG-U 기반 향상된 사용자 상호작용 인터페이스 시스템)

  • Han, Gukhee;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.19 no.1
    • /
    • pp.83-95
    • /
    • 2014
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the HCI (human-computer interaction) field. In this paper, we introduce a hand posture recognition method using a depth camera. Moreover, the method is incorporated into an MPEG-U based advanced user interaction (AUI) interface system, which can provide a natural interface across a variety of devices. The proposed method first detects the positions and lengths of all open fingers, and then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when the user makes a gesture representing a pattern of the AUI data format specified in MPEG-U Part 2. The AUI interface system represents the user's hand posture in a compliant MPEG-U schema structure. Experimental results show the performance of the hand posture recognition, and the AUI interface system is verified to be compatible with the MPEG-U standard.
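The recognition rule described above keys a posture on the number of hands shown and the number of folded fingers. A toy illustration of that lookup; the posture labels are hypothetical placeholders, not the actual symbolic patterns defined in MPEG-U Part 2:

```python
# Hypothetical posture table: (number of hands, folded fingers) -> label.
# The real system maps recognized postures onto MPEG-U Part 2 AUI
# data-format patterns; these names are invented for illustration only.
POSTURES = {
    (1, 0): "open-hand",
    (1, 5): "fist",
    (2, 0): "both-open",
    (2, 10): "both-fists",
}

def classify(num_hands, folded_fingers):
    """Look up a posture label, falling back to 'unknown'."""
    return POSTURES.get((num_hands, folded_fingers), "unknown")

print(classify(1, 5))  # → fist
```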

An Implementation of Dynamic Gesture Recognizer Based on WPS and Data Glove (WPS와 장갑 장치 기반의 동적 제스처 인식기의 구현)

  • Kim, Jung-Hyun;Roh, Yong-Wan;Hong, Kwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.13B no.5 s.108
    • /
    • pp.561-568
    • /
    • 2006
  • The WPS (Wearable Personal Station) for the next-generation PC can be defined as a core 'ubiquitous computing' terminal that includes information processing and network functions and overcomes spatial limitations in acquiring new information. As a way to acquire significant dynamic gesture data from haptic devices, traditional desktop-PC gesture recognizers using wired communication modules suffer several restrictions: spatial constraints, complexity of the transmission media (cabling), limits on motion, and inconvenience of use. Accordingly, in this paper, we implement a hand gesture recognition system using a fuzzy algorithm and a neural network for the Post PC (an embedded ubiquitous environment using a Bluetooth module and WPS). We also propose the most efficient and reasonable hand gesture recognition interface for the Post PC through evaluation and analysis of each gesture recognition system's performance. The proposed system consists of three modules: 1) a gesture input module that processes dynamic hand motion into input data; 2) a relational database management system (RDBMS) module that segments significant gestures from the input data; and 3) two different recognition modules, fuzzy max-min and neural network, that recognize significant gestures among continuous, dynamic gestures. Experimental results show average recognition rates of 98.8% for the fuzzy max-min module and 96.7% for the neural network module on significant dynamic gestures.
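A minimal sketch of the fuzzy max-min matching used by one of the two recognition modules above: each gesture class holds a fuzzy template of per-feature membership values in [0, 1], an observation is scored against each template by max-min composition, and the best-scoring class wins. The feature values and templates below are made up for illustration; the real system derives them from data-glove finger-flexion measurements.

```python
def max_min_score(observation, template):
    """Fuzzy max-min composition of two membership vectors:
    take the min per feature, then the max over features."""
    return max(min(o, t) for o, t in zip(observation, template))

def recognize(observation, templates):
    """Return the gesture class whose template scores highest."""
    return max(templates, key=lambda name: max_min_score(observation, templates[name]))

# Hypothetical fuzzy templates over three flexion features.
templates = {
    "grasp": [0.9, 0.8, 0.1],
    "point": [0.1, 0.2, 0.9],
}
print(recognize([0.2, 0.1, 0.8], templates))  # → point
```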

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.2 no.4
    • /
    • pp.285-297
    • /
    • 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogue. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera, pointing it at a location specified by the user through spoken words and hand pointing. The system utilizes another, stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to; then it uses the user's voice commands to fine-tune the location and to change the camera's zoom, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person.
The advantage of this approach is the elimination of the traditional input devices that the user must otherwise utilize to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.


Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.290-293
    • /
    • 2009
  • In this paper, we propose a method to estimate the pointed-at region in the real world from camera images. In general, an arm-pointing gesture encodes a direction extending from the user's fingertip to the target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. Therefore, the proposed method extracts two end points for estimating the pointing direction: one from the user's face and another from the user's fingertip region. The pointing direction and its target region are then estimated from the 2D-3D projective mapping between the camera images and the real-world scene. To demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) that employs two CCD cameras and a monitor. The accuracy and robustness of the proposed method are verified by experimental results on several real video sequences.
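The geometric core of the face-to-fingertip model above can be sketched as a ray-plane intersection: extend the line through the face and fingertip (assumed already reconstructed in 3-D from the two cameras) until it hits the target plane. The coordinates are illustrative, and the paper's 2D-3D projective mapping step, which recovers these points from the camera images, is not reproduced here.

```python
def pointing_target(face, fingertip, plane_z=0.0):
    """Intersect the ray face -> fingertip with the plane z = plane_z
    (e.g. a screen or wall) and return the 3-D target point."""
    (fx, fy, fz), (tx, ty, tz) = face, fingertip
    dz = tz - fz
    if dz == 0:
        raise ValueError("ray is parallel to the target plane")
    s = (plane_z - fz) / dz                 # parameter along the ray
    return (fx + s * (tx - fx), fy + s * (ty - fy), plane_z)

# Face 2 m from the screen; fingertip 0.5 m closer, slightly right and down.
print(pointing_target((0.0, 1.6, 2.0), (0.2, 1.4, 1.5)))  # ≈ (0.8, 0.8, 0.0)
```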
