• Title/Summary/Keyword: Gesture Interface

Development of Hand Recognition Interface for Interactive Digital Signage (인터렉티브 디지털 사이니지를 위한 손 인식 인터페이스 개발)

  • Lee, Jung-Wun; Cha, Kyung-Ae; Ryu, Jeong-Tak
    • Journal of Korea Society of Industrial Information Systems / v.22 no.3 / pp.1-11 / 2017
  • There is growing interest in motion recognition for recognizing human motion in camera images, and research on controlling digital devices with gestures at a distance is being actively conducted. A gesture-based interface can be used effectively in the digital signage industry, where advertisements are expected to reach the public in various places. Since digital signage content can be controlled easily through non-contact hand operation, advertisement information of interest can be provided to a large number of people, creating opportunities that lead to sales. We therefore propose a digital signage content control system based on hand movement at a certain distance, which can be used effectively in the development of interactive advertising media.
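
The abstract does not detail the recognition pipeline, so the following is only a minimal illustrative sketch of a common baseline for camera-based hand control: skin-color segmentation in YCrCb followed by centroid tracking of the largest blob. The color thresholds and morphology kernel are assumptions to be tuned per deployment.

```python
# Minimal sketch only; not the paper's published algorithm.
import cv2
import numpy as np

def hand_centroid(frame_bgr):
    """Return the (x, y) centroid of the largest skin-colored blob, or None."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Widely used YCrCb skin range; lighting-dependent, tune per site.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```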

User-Defined Hand Gestures for Small Cylindrical Displays (소형 원통형 디스플레이를 위한 사용자 정의 핸드 제스처)

  • Kim, Hyoyoung; Kim, Heesun; Lee, Dongeon; Park, Ji-hyung
    • The Journal of the Korea Contents Association / v.17 no.3 / pp.74-87 / 2017
  • This paper elicits user-defined hand gestures for small cylindrical displays built with flexible displays, a form factor that has not yet emerged as a product. We first defined the size and functions of a small cylindrical display and derived the tasks for operating those functions. We then implemented an experimental environment close to real usage by developing both a virtual cylindrical display interface and a physical object for operating it. The result of each task was shown on the virtual cylindrical display so that participants could define the hand gestures they considered suitable for that task. We selected the representative gesture for each task by choosing the gesture proposed by the largest group of participants, and we also calculated an agreement score for each task. Finally, by analyzing the gestures and participant interviews, we characterized the mental models the participants applied when defining the gestures.
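
The abstract reports agreement scores per task but does not quote the formula. A commonly used formulation in gesture-elicitation studies is Wobbrock et al.'s agreement score, sketched below under the assumption that each participant proposes exactly one gesture per task.

```python
# Agreement score as commonly defined in elicitation studies:
# A = sum over proposal groups of (group size / total proposals)^2.
from collections import Counter

def agreement(proposals):
    """proposals: one gesture label per participant for a single task."""
    n = len(proposals)
    return sum((count / n) ** 2 for count in Counter(proposals).values())

# Example: 6 of 8 participants proposed 'rotate', 2 proposed 'swipe'.
print(agreement(["rotate"] * 6 + ["swipe"] * 2))  # (6/8)^2 + (2/8)^2 = 0.625
```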

OWC based Smart TV Remote Controller Design Using Flashlight

  • Mariappan, Vinayagam; Lee, Minwoo; Choi, Byunghoon; Kim, Jooseok; Lee, Jisung; Choi, Seongjhin
    • International Journal of Internet, Broadcasting and Communication / v.10 no.1 / pp.71-76 / 2018
  • The technological convergence of television, communication, and computing devices enables rich social and entertainment experiences through Smart TVs in personal living spaces. The powerful Smart TV computing platform supports various user interaction interfaces such as IR remote controls, web-based control, and body-gesture control. However, the current interaction methods are neither efficient nor user-friendly for accessing different types of media content and services, and an easier way to control and access the Smart TV is strongly needed. This paper proposes an optical wireless communication (OWC) based remote controller for Smart TVs that uses a smart device's flashlight. In this approach, the smart device acts as a remote controller through a touch-based interactive application and transfers the user's control data to the Smart TV through the flashlight using visible light communication. The Smart TV's built-in camera decodes the data following the optical camera communication (OCC) principle and controls the corresponding Smart TV functions. The proposed method does not expose the user to radio frequency (RF) radiation and its associated health concerns, and it is simple to use: the user does not need to make any gesture movements to control the Smart TV.
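
The abstract does not specify the optical frame format, so the sketch below only illustrates the general idea: on-off keying (OOK) a command byte through the flashlight slowly enough that a 30 fps receiver camera sees each bit across two frames. The preamble, bit period, and `set_flashlight` callback are all assumptions, not the paper's design.

```python
# Hypothetical OOK transmitter sketch for a flashlight-to-camera link.
import time

PREAMBLE = [1, 0, 1, 0, 1, 1]   # assumed sync pattern, not from the paper
BIT_PERIOD = 0.066              # ~2 camera frames per bit at 30 fps

def send_command(command_byte, set_flashlight):
    """set_flashlight: platform callback taking True (on) / False (off)."""
    bits = PREAMBLE + [(command_byte >> i) & 1 for i in range(7, -1, -1)]
    for bit in bits:                 # MSB-first payload after the preamble
        set_flashlight(bool(bit))
        time.sleep(BIT_PERIOD)
    set_flashlight(False)            # idle state: light off
```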

Real-Time Recognition Method of Counting Fingers for Natural User Interface

  • Lee, Doyeob; Shin, Dongkyoo; Shin, Dongil
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.5 / pp.2363-2374 / 2016
  • Communication occurs through verbal elements, which usually involve language, as well as non-verbal elements such as facial expressions, eye contact, and gestures. Among these non-verbal elements, gestures are symbolic representations of physical, vocal, and emotional behaviors: they can be signals toward a target or expressions of internal psychological processes, rather than simply movements of the body or hands. Gestures with these properties have become the focus of much research on new interfaces in the NUI/NUX field. In this paper, we propose a method for detecting the hand region and recognizing the number of raised fingers, based on depth information and the geometric features of the hand, for application to an NUI/NUX. The hand region is detected using depth information provided by the Kinect system, and its contour is extracted with the Suzuki85 algorithm. The number of fingers is then determined by detecting fingertips as the contour points at locally maximal distance from the center of the hand region, found by comparing the distances of three consecutive contour points to that center. The average recognition rate for the number of fingers is 98.6%, and the execution time of the algorithm is 0.065 ms. Although the method is fast and its complexity is low, it shows a higher recognition rate and faster recognition speed than other methods. As an application example, the paper describes a Secret Door that recognizes a password from the number of fingers held up by the user.
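
A minimal sketch of the counting rule described above, assuming the hand contour and center point have already been extracted (e.g., with OpenCV) and that `min_dist` is an assumed threshold filtering out non-finger bumps:

```python
# Fingertip counting by local distance maxima over three consecutive
# contour points, in the spirit of the abstract's description.
import numpy as np

def count_fingers(contour, center, min_dist):
    """contour: (N, 2) array of hand-contour points; center: (2,) point."""
    d = np.linalg.norm(contour - np.asarray(center, float), axis=1)
    fingers = 0
    for i in range(len(d)):
        prev_d, next_d = d[i - 1], d[(i + 1) % len(d)]  # closed contour wraps
        if d[i] > prev_d and d[i] > next_d and d[i] > min_dist:
            fingers += 1     # local maximum far enough from the palm center
    return fingers
```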

A Study on the Gesture Based Virtual Object Manipulation Method in Multi-Mixed Reality

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information / v.26 no.2 / pp.125-132 / 2021
  • In this paper, we propose a method for building a collaborative environment in mixed reality and for working with wearable IoT devices. Mixed reality (MR) combines virtual reality and augmented reality: objects in the real and virtual worlds can be viewed at the same time, and unlike VR, an MR HMD does not cause motion sickness. Because it is wireless, it is attracting attention as a technology for industrial fields. The Myo wearable device enables arm rotation tracking and hand gesture recognition using a three-axis sensor, an EMG sensor, and an acceleration sensor. Although various MR studies are in progress, there has been little discussion of environments in which multiple people participate in mixed reality and manipulate virtual objects with their own hands. We propose a method of constructing an environment where collaboration is possible, together with an interaction method for smooth manipulation, so that mixed reality can be applied in real industrial fields. As a result, two people could participate in the mixed reality environment at the same time and share a unified view of the same object, and each could interact with it through the Myo wearable interface device.
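
The abstract names the Myo's sensing capabilities rather than an API, so the following is a hypothetical event-handler sketch of the grab-and-rotate interaction it implies: a "fist" pose grabs the shared object, and the forearm's orientation quaternion drives the object's rotation while it is held. All names are placeholders.

```python
# Hypothetical sketch; not the paper's code or the Myo SDK.
class VirtualObject:
    def __init__(self):
        self.orientation = (1.0, 0.0, 0.0, 0.0)  # unit quaternion (w, x, y, z)
        self.held = False

def on_myo_event(obj, pose, arm_quat):
    """pose: recognized hand pose label; arm_quat: forearm orientation."""
    if pose == "fist":
        obj.held = True                 # grab the shared object
    elif pose == "rest":
        obj.held = False                # release it
    if obj.held:
        obj.orientation = arm_quat      # follow forearm rotation while held
```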

Verification Process for Stable Human Detection and Tracking (안정적 사람 검출 및 추적을 위한 검증 프로세스)

  • Ahn, Jung-Ho; Choi, Jong-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.4 no.3 / pp.202-208 / 2011
  • Recently, technologies that control computer systems through human-computer interaction (HCI) have been widely studied. Such applications usually locate the user's position via face detection and then recognize the user's gestures, but face detection performance alone is often not good enough. When an application cannot locate the user's position stably, the performance of the user interface, such as gesture recognition, degrades significantly. In this paper, we propose a new stable face detection algorithm that uses skin color detection and the cumulative distribution of face detection results; its effectiveness was verified by experiments. The proposed algorithm is also applicable to human tracking based on correspondence matrix analysis.
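
The paper's verification process is not spelled out in the abstract; a sketch in the same spirit accepts a face location only after it has been detected in most of the recent frames, i.e., a cumulative count over a sliding window. The window size and threshold are assumptions.

```python
# Temporal verification sketch: smooth noisy per-frame detections.
from collections import deque

class DetectionVerifier:
    def __init__(self, window=10, min_hits=6):
        self.history = deque(maxlen=window)  # last `window` frames
        self.min_hits = min_hits

    def update(self, detected_this_frame):
        """Return True once the detection is considered stable."""
        self.history.append(bool(detected_this_frame))
        return sum(self.history) >= self.min_hits
```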

Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang; Seo, Yung-Ho; Doo, Kyoung-Soo; Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.290-293 / 2009
  • In this paper, we propose a method to estimate the pointed-at region in the real world from camera images. In general, an arm-pointing gesture encodes a direction extending from the user's fingertip to the target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. The proposed method therefore extracts two end points for estimating the pointing direction: one from the user's face and another from the user's fingertip region. The pointing direction and its target region are then estimated from the 2D-3D projective mapping between the camera images and the real-world scene. To demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) that employs two CCD cameras and a monitor. The accuracy and robustness of the proposed method are verified by experimental results on several real video sequences.
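
A single-camera simplification of the geometric step, under stated assumptions: the pointing ray is taken as the 2D line from the face point through the fingertip, a point far along that line is chosen, and a homography obtained from calibration correspondences (e.g., via cv2.findHomography) maps it into target-plane coordinates. The `reach` factor is an assumption; the paper itself uses two cameras.

```python
# Sketch of ray extension plus projective mapping to the target plane.
import numpy as np

def pointed_target(face_xy, finger_xy, H, reach=3.0):
    """Extend face->fingertip by `reach` and map through homography H (3x3)."""
    face = np.asarray(face_xy, float)
    finger = np.asarray(finger_xy, float)
    p = face + reach * (finger - face)        # point far along the pointing line
    q = H @ np.array([p[0], p[1], 1.0])       # homogeneous projective mapping
    return q[:2] / q[2]                       # target-plane coordinates
```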

A Study on the Ubiquitous Home Network Interface System by Application of User's Gesture Recognition Method (사용자 제스처 인식을 활용한 유비쿼터스 홈 네트워크 인터페이스 체계에 대한 연구)

  • Park, In-Chan; Kim, Sun-Chul
    • Science of Emotion and Sensibility / v.8 no.3 / pp.265-276 / 2005
  • Home network products in today's ubiquitous environment are used not by a single user but by multiple users in a networked fashion. Changing usage environments and systems bring requirements different from today's, and research on user-centered design and product interface systems is accordingly active both in Korea and abroad. As various mobile devices and home network products spread rapidly, diverse control methods for operating them easily are being studied. Among these, voice recognition and facial expression recognition technologies are under active development. Gesture control systems using motion-sensing sensors are still at an early stage, but natural interactive interfaces are expected to play a growing role in product control in the near future. In this study, we therefore present a method for developing a natural gesture-based usage language system for effective device control, its results, and findings from user mental-model and metaphor experiments. By analyzing users' existing natural gesture vocabularies, we examined their applicability as a device control method, and through studying the development process of a new device control method using motion-sensing cameras and sensors, we established a development method and process for a natural gesture-based language system.

MPEG-U based Advanced User Interaction Interface System Using Hand Posture Recognition (손 자세 인식을 이용한 MPEG-U 기반 향상된 사용자 상호작용 인터페이스 시스템)

  • Han, Gukhee; Lee, Injae; Choi, Haechul
    • Journal of Broadcast Engineering / v.19 no.1 / pp.83-95 / 2014
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the HCI (human-computer interaction) field. In this paper, we introduce a hand posture recognition method that uses a depth camera, and we incorporate it into an MPEG-U based advanced user interaction (AUI) interface system that can provide a natural interface across a variety of devices. The proposed method first detects the positions and lengths of all open fingers, then recognizes the hand posture from the pose of one or both hands and the number of folded fingers when the user makes a gesture representing a pattern of the AUI data format specified in MPEG-U Part 2. The AUI interface system represents the user's hand posture in a compliant MPEG-U schema structure. Experimental results show the performance of the hand posture recognition, and the AUI interface system is verified to be compatible with the MPEG-U standard.
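
A hypothetical sketch of the recognition rule the abstract describes: a posture label is derived from how many hands are visible and how many fingers are folded, to be wrapped later in an MPEG-U Part 2 AUI description. The pattern names are placeholders, not actual MPEG-U symbols.

```python
# Placeholder mapping; the real MPEG-U Part 2 pattern vocabulary differs.
def classify_posture(hands):
    """hands: list of per-hand lists of booleans, True = finger open."""
    folded = sum(not f for hand in hands for f in hand)
    table = {
        (1, 0): "Hand-Open",
        (1, 5): "Hand-Closed",
        (2, 0): "Both-Hands-Open",
    }
    return table.get((len(hands), folded), "Unrecognized")
```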

Use of a gesture user interface as a touchless image navigation system in dental surgery: Case series report

  • Rosa, Guillermo M.; Elizondo, Maria L.
    • Imaging Science in Dentistry / v.44 no.2 / pp.155-160 / 2014
  • Purpose: The purposes of this study were to develop a workstation computer that allows intraoperative touchless control of diagnostic and surgical images by the surgeon, and to report preliminary experience with the system in a series of dental surgery cases. Materials and Methods: A custom workstation with a new motion-sensing input device (Leap Motion) was set up so that a natural user interface (NUI) could manipulate the imaging software by hand gestures, allowing intraoperative touchless control of the surgical images. Results: For the first time in the literature, an NUI system was used in a pilot study of 11 dental surgery procedures, including tooth extractions, dental implant placements, and guided bone regeneration. No complications were reported; the system performed very well and proved very useful. Conclusion: The proposed system fulfilled the objective of providing touchless access to and control of the images and the three-dimensional surgical plan, thus allowing sterile conditions to be maintained. Interaction between surgical staff under sterile conditions and computer equipment has been a key issue, and an NUI with touchless control of the images seems close to an ideal solution. The cost of the sensor is quite low, which could facilitate its incorporation into routine dental surgery practice. This technology has enormous potential in dental surgery and other healthcare specialties.
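
The clinical workflow above can be sketched as a mapping from sensed gestures to image-viewer actions; the event interface below is a hypothetical stand-in, not the Leap Motion SDK, since the paper does not publish its bindings.

```python
# Hypothetical gesture-to-action dispatch for touchless image navigation.
def on_gesture(viewer, gesture_type, magnitude):
    """viewer: assumed image-viewer object; magnitude: gesture strength."""
    if gesture_type == "swipe_left":
        viewer.next_image()            # browse forward through the series
    elif gesture_type == "swipe_right":
        viewer.previous_image()        # browse backward
    elif gesture_type == "pinch":
        viewer.zoom(1.0 + magnitude)   # magnitude > 0 zooms in
```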