• Title/Summary/Keyword: Motion based interface

Hand Gesture Interface Using Mobile Camera Devices (모바일 카메라 기기를 이용한 손 제스처 인터페이스)

  • Lee, Chan-Su; Chun, Sung-Yong; Sohn, Myoung-Gyu; Lee, Sang-Heon
    • Journal of KIISE: Computing Practices and Letters / v.16 no.5 / pp.621-625 / 2010
  • This paper presents a hand-motion tracking method for a hand gesture interface using the camera in mobile devices such as smartphones and PDAs. When the camera moves according to the user's hand gesture, global optical flows are generated, so robust hand-movement estimation is possible by considering the dominant optical flow based on a histogram analysis of the motion direction. A continuous hand gesture is segmented into unit gestures by motion-state estimation using the motion phase, which is determined by the velocity and acceleration of the estimated hand motion. Feature vectors are extracted during movement states, and hand gestures are recognized at the end state of each gesture. A support vector machine (SVM), a k-nearest neighbor classifier, and a normal Bayes classifier are used for classification; the SVM shows an 82% recognition rate for 14 hand gestures.
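The histogram-based dominant-flow estimate described in this abstract can be sketched as a direction vote; the bin count, function name, and sample flow vectors below are illustrative assumptions, not the paper's implementation:

```python
import math
from collections import Counter

def dominant_motion_direction(flow_vectors, n_bins=8):
    """Vote each optical-flow vector (dx, dy) into a direction histogram
    and return the center angle (radians) of the most populated bin,
    which suppresses outlier vectors from background motion."""
    bin_width = 2 * math.pi / n_bins
    votes = Counter()
    for dx, dy in flow_vectors:
        angle = math.atan2(dy, dx) % (2 * math.pi)
        votes[int(angle // bin_width) % n_bins] += 1
    best_bin, _ = votes.most_common(1)[0]
    return (best_bin + 0.5) * bin_width

# Mostly rightward flow plus one outlier vector from noise:
flows = [(1.0, 0.05), (1.0, 0.10), (1.1, 0.02), (0.0, 1.0), (0.9, 0.20)]
direction = dominant_motion_direction(flows)  # center of the "right" bin
```

A per-pixel flow field from a real tracker would feed the same vote; the histogram simply makes a single noisy vector unable to flip the estimated direction.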

Web-based 3D Virtual Experience using Unity and Leap Motion (Unity와 Leap Motion을 이용한 웹 기반 3D 가상품평)

  • Jung, Ho-Kyun; Park, Hyungjun
    • Korean Journal of Computational Design and Engineering / v.21 no.2 / pp.159-169 / 2016
  • To realize the virtual prototyping (VP) of digital products, it is important to provide the people involved in product development with appropriate visualization of and interaction with the products, and a vivid simulation of user interface (UI) behaviors in an interactive 3D virtual environment. In this paper, we propose an approach to web-based 3D virtual experience using Unity and Leap Motion. We adopt Unity as an implementation platform that easily and rapidly implements the visualization of the products and the design and simulation of their UI behaviors, and that allows remote users to gain easy access to the virtual environment. Additionally, we combine Leap Motion with Unity to embody natural and immersive interaction using the user's hand gestures. Based on the proposed approach, we have developed a testbed system for web-based 3D virtual experience and applied it to the design evaluation of various digital products. A button-selection test was conducted to investigate the quality of the interaction using Leap Motion, and a preliminary user study was also performed to show the usefulness of the proposed approach.

Flowchart Programming Environment for Process Control (PC 기반 제어기를 위한 Flowchart 활용 프로그래밍 환경의 개발)

  • 이희원; 김기원; 민병권; 이상조; 김찬봉
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.1240-1243 / 2004
  • For agile production, manufacturing systems require a motion controller that combines the flexibility of a general-purpose motion controller with the productivity of a special-purpose one. In this study, we developed a flowchart-based programming environment for motion languages and process control. A controller designed in this environment can be used as a general-purpose motion controller for a machine tool. Designing control programs with flowcharts reduces development time and offers an intuitive interface for users. We built the solution on Microsoft Visio as the flowchart platform and OPC for process communication.

Vision based Fast Hand Motion Recognition Method for an Untouchable User Interface of Smart Devices (스마트 기기의 비 접촉 사용자 인터페이스를 위한 비전 기반 고속 손동작 인식 기법)

  • Park, Jae Byung
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.300-306 / 2012
  • In this paper, we propose a vision-based hand-motion recognition method for an untouchable user interface of smart devices. First, the original color image is converted into a gray-scale image and its spatial resolution is reduced, taking the small memory and low computational power of smart devices into consideration. For robust recognition of hand motions through separation of horizontal and vertical motions, a horizontal principal area (HPA) and a vertical principal area (VPA) are defined. From difference images of consecutively captured frames, the center of gravity (CoG) of the pixels significantly changed by hand motion is obtained, and the direction of hand motion is detected by fitting a least-mean-squared line to the CoG over time. To verify the feasibility of the proposed method, experiments are carried out with a vision system.
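The CoG-plus-line-fit pipeline in this abstract can be sketched in a few lines; the frame values, threshold, and function names are illustrative assumptions rather than the paper's implementation:

```python
def center_of_gravity(prev_frame, cur_frame, threshold=30):
    """CoG (x, y) of pixels that changed significantly between two
    gray-scale frames given as row-major lists of lists."""
    xs, ys, n = 0.0, 0.0, 0
    for y, (row_p, row_c) in enumerate(zip(prev_frame, cur_frame)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if abs(c - p) > threshold:
                xs += x
                ys += y
                n += 1
    return (xs / n, ys / n) if n else None

def motion_slope(cogs):
    """Least-squares slope of CoG x-positions over frame index; its sign
    indicates left/right motion (an analogous fit on y gives up/down)."""
    n = len(cogs)
    mean_t = (n - 1) / 2
    mean_x = sum(x for x, _ in cogs) / n
    num = sum((t - mean_t) * (x - mean_x) for t, (x, _) in enumerate(cogs))
    den = sum((t - mean_t) ** 2 for t in range(n))
    return num / den
```

Fitting a line to the CoG track, instead of differencing two CoGs, is what makes the direction estimate tolerant of a single noisy frame.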

Development of Motion Mechanisms for Health-Care Riding Robots (지능형 헬스케어 승마로봇의 모션 메카니즘 개발)

  • Kim, Jin-Soo; Lim, Mee-Seub; Lim, Joon-Hong
    • Proceedings of the KIEE Conference / 2008.07a / pp.1735-1736 / 2008
  • In this research, a riding robot system named "RideBot" is developed for health care and entertainment. The robot can follow the rider's intention and simulate the motion of a horse. Its mechanisms support attitude detection, motion sensing, recognition, a common interface, and motion generation, and the robot can react to the user's health condition, bio-signals, and intention information. One objective of this research is that the riding robot catch the user's motion and produce spontaneous movements. In this paper, we develop a saddle mechanism that can generate three-degree-of-freedom riding motion based on the rider's intention. We also develop rein and spur mechanisms for estimating the rider's intention, and a bio-signal monitoring system for the rider's health care. To evaluate the performance of the riding robot system, we tested several riding motions, including slow and normal step motions and left and right turns.

Kinect-based Motion Recognition Model for the 3D Contents Control (3D 콘텐츠 제어를 위한 키넥트 기반의 동작 인식 모델)

  • Choi, Han Suk
    • The Journal of the Korea Contents Association / v.14 no.1 / pp.24-29 / 2014
  • This paper proposes a Kinect-based human motion recognition model for 3D content control that tracks the user's body gestures through the Kinect's infrared camera. The proposed model computes the variation of the distances from the shoulder to the left and right hand, wrist, arm, and elbow. Motions are classified into movement commands such as left, right, up, down, enlargement, downsizing, and selection. The proposed model is natural and low-cost compared with contact-type gesture recognition technologies and device-based gesture technologies that require expensive hardware.
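A minimal sketch of this kind of command classification, assuming the input is the shoulder-to-hand vector in two consecutive frames; the thresholds and axis conventions (z pointing away from the sensor) are guesses for illustration, not values from the paper:

```python
def classify_gesture(prev_vec, cur_vec, move_eps=0.15, zoom_eps=0.10):
    """Map the change in the shoulder-to-hand vector between two frames
    to a 3D-content control command. Coordinates are (x, y, z) in meters;
    move_eps and zoom_eps are illustrative dead-zone thresholds."""
    dx = cur_vec[0] - prev_vec[0]
    dy = cur_vec[1] - prev_vec[1]
    dz = cur_vec[2] - prev_vec[2]
    if abs(dz) >= max(abs(dx), abs(dy)) and abs(dz) > zoom_eps:
        # Hand moving toward the sensor enlarges the content.
        return "enlargement" if dz < 0 else "downsizing"
    if abs(dx) >= abs(dy) and abs(dx) > move_eps:
        return "right" if dx > 0 else "left"
    if abs(dy) > move_eps:
        return "up" if dy > 0 else "down"
    return "none"
```

The dead zones keep small tracking jitter from triggering commands; picking the dominant axis first avoids firing two commands for one diagonal motion.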

Gesture interface with 3D accelerometer for mobile users (모바일 사용자를 위한 3 차원 가속도기반 제스처 인터페이스)

  • Choe, Bong-Whan; Hong, Jin-Hyuk; Cho, Sung-Bae
    • Proceedings of HCI Korea / 2009.02a / pp.378-383 / 2009
  • Nowadays, many systems observe people to infer their intentions and provide corresponding services. People always carry mobile devices with various sensors, and the accelerometer plays a key role in this environment: it collects motion information, which is useful for developing gesture-based user interfaces. Because recognizing time-series patterns such as gestures requires heavy computation, an effective method is needed for the mobile environment, which offers relatively little computational capability. In this paper, we propose a two-stage motion recognizer composed of low-level and high-level motions based on a motion library. The low-level recognizer applies dynamic time warping to 3D acceleration data, and high-level motions are defined linguistically in terms of the low-level motions.
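The low-level stage described here, dynamic time warping over 3D acceleration samples, can be sketched as follows; the template names and sample sequences are invented for illustration:

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two 3D acceleration
    sequences (lists of (ax, ay, az) samples), allowing the sequences
    to be stretched in time to align."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])  # Euclidean sample cost
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def recognize(query, templates):
    """Return the name of the motion-library template closest to the query."""
    return min(templates, key=lambda name: dtw_distance(query, templates[name]))

# Toy motion library with two low-level motions:
templates = {
    "shake": [(1, 0, 0), (-1, 0, 0), (1, 0, 0)],
    "lift": [(0, 0, 1), (0, 0, 1)],
}
```

DTW's alignment step is what makes this robust to gestures performed faster or slower than the stored templates, at O(nm) cost per comparison, which is why the paper reserves it for the low-level stage.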

  • PDF

Development of UAV Teleoperation Virtual Environment Based-on GSM Networks and Real Weather Effects

  • AbdElHamid, Amr; Zong, Peng
    • International Journal of Aeronautical and Space Sciences / v.16 no.3 / pp.463-474 / 2015
  • Future Ground Control Stations (GCSs) for Unmanned Aerial Vehicle (UAV) teleoperation target better situational awareness by providing extra motion cues to stimulate the vestibular system. This paper proposes a new virtual environment for long-range UAV control via Non-Line-of-Sight (NLoS) communications, based on motion platforms. It generates motion cues for the teleoperator as extra sensory stimulation to enhance guidance performance. The proposed environment employs distributed component simulation over a GSM network as the simulation platform; GSM communication is utilized as a multi-hop communication network, similar to global satellite communications. The simulation considers a UAV mathematical model and wind-turbulence effects to produce realistic UAV dynamics. Moreover, the virtual environment simulates a Multiple Axis Rotating Device (MARD) as the Human Machine Interface (HMI) device to provide a complete delay analysis. The reported measurements cover Graphical User Interface (GUI) capabilities, NLoS GSM communication delay, MARD performance, and different software workloads. The proposed virtual environment succeeded in providing visual and vestibular feedback to teleoperators via GSM networks, and overall system performance is acceptable relative to Line-of-Sight (LoS) systems, which promises good potential for future long-range, medium-altitude UAV teleoperation research.

Analysis of Added Resistance using a Cartesian-Grid-based Computational Method (직교격자 기반 수치기법을 이용한 부가저항 해석)

  • Yang, Kyung-Kyu; Lee, Jae-Hoon; Nam, Bo-Woo; Kim, Yonghwan
    • Journal of the Society of Naval Architects of Korea / v.50 no.2 / pp.79-87 / 2013
  • In this paper, an Euler-equation solver based on a Cartesian-grid method with a non-uniform staggered grid system is applied to predict ship motion responses and added resistance in waves. Water, air, and solid domains are identified by a volume-fraction function for each phase in each cell. To capture the interface between air and water, the tangent of hyperbola for interface capturing (THINC) scheme is used with a weighted line interface calculation (WLIC) method. The volume fraction of a solid body embedded in the Cartesian grid is calculated by a level-set-based algorithm, and the body boundary condition is imposed by a volume-weighted formula. Added resistance is calculated by direct pressure integration on the ship surface. Numerical simulations of a Wigley III hull and an S175 containership in regular waves have been carried out to validate the newly developed code, and the computed motion responses and added resistances are compared with experimental data. For the S175 containership, a grid convergence test was conducted to investigate the sensitivity of grid spacing on the motion responses and added resistance.
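The direct pressure integration mentioned in this abstract amounts to summing panel pressure forces over the hull and comparing the time-mean wave force with the calm-water value; the flat-panel format below is a deliberately simplified assumption, not the paper's body-exact discretization:

```python
def longitudinal_pressure_force(panels):
    """x-direction force from pressure integration over hull panels.
    Each panel is (pressure_Pa, area_m2, nx), with nx the x-component of
    the outward unit normal; pressure acts opposite the outward normal,
    hence the minus sign."""
    return -sum(p * a * nx for p, a, nx in panels)

def added_resistance(mean_force_in_waves, calm_water_force):
    """Added resistance: time-mean longitudinal wave force minus the
    calm-water resistance at the same speed."""
    return mean_force_in_waves - calm_water_force
```

In the actual solver the panel pressures come from the Euler solution at each time step and the mean is taken over several encounter periods; the bookkeeping above is only the final integration step.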

Intuitive Spatial Drawing System based on Hand Interface (손 인터페이스 기반 직관적인 공간 드로잉 시스템)

  • Ko, Ginam; Kim, Serim; Kim, YoungEun; Nam, SangHun
    • Journal of Digital Contents Society / v.18 no.8 / pp.1615-1620 / 2017
  • The development of Virtual Reality (VR) technologies has improved the performance of VR devices and lowered their prices, giving many users easy access to VR. VR drawing applications are uncomplicated for users and highly mature, and are used for education, performances, and more. In controller-based spatial drawing interfaces, the user's drawing is constrained by the controller. This study proposes a hand-interaction-based spatial drawing system in which a user who has never used a controller can intuitively operate the drawing application, with a Leap Motion mounted on the front of the Head Mounted Display (HMD). The system traces the motion of the user's hands in front of the HMD to draw curved surfaces in the virtual environment.