• Title/Summary/Keyword: hand gesture interface


Web-based 3D Virtual Experience using Unity and Leap Motion (Unity와 Leap Motion을 이용한 웹 기반 3D 가상품평)

  • Jung, Ho-Kyun; Park, Hyungjun
    • Korean Journal of Computational Design and Engineering / v.21 no.2 / pp.159-169 / 2016
  • In order to realize virtual prototyping (VP) of digital products, it is important to provide the people involved in product development with appropriate visualization of and interaction with the products, and vivid simulation of user interface (UI) behaviors in an interactive 3D virtual environment. In this paper, we propose an approach to web-based 3D virtual experience using Unity and Leap Motion. We adopt Unity as an implementation platform that easily and rapidly implements the visualization of the products and the design and simulation of their UI behaviors, and allows remote users easy access to the virtual environment. Additionally, we combine Leap Motion with Unity to embody natural and immersive interaction using the user's hand gestures. Based on the proposed approach, we have developed a testbed system for web-based 3D virtual experience and applied it to the design evaluation of various digital products. A button selection test was conducted to investigate the quality of the interaction using Leap Motion, and a preliminary user study was also performed to show the usefulness of the proposed approach.
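
The kind of fingertip-to-button interaction evaluated in the button selection test can be sketched as a simple proximity pick. This is a minimal Python illustration under assumed names and coordinates (the `Button` class, positions, and radii are hypothetical, not the paper's Unity/Leap Motion code):

```python
import math

# Hypothetical sketch of fingertip-based button selection: the button
# whose center lies nearest the tracked fingertip, within its selection
# radius, is considered pressed. All names and values are illustrative.

class Button:
    def __init__(self, name, center, radius):
        self.name = name
        self.center = center      # (x, y, z) in scene coordinates
        self.radius = radius      # selection radius in scene units

def pick_button(fingertip, buttons):
    """Return the nearest button whose selection radius contains the fingertip."""
    best, best_d = None, float("inf")
    for b in buttons:
        d = math.dist(fingertip, b.center)
        if d <= b.radius and d < best_d:
            best, best_d = b, d
    return best

buttons = [Button("play", (0.0, 1.0, 0.3), 0.05),
           Button("stop", (0.2, 1.0, 0.3), 0.05)]
hit = pick_button((0.01, 1.01, 0.31), buttons)
```

A real implementation would run this test per frame against the Leap Motion hand data and debounce with a dwell time before triggering the button.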

Developing Interactive Game Contents using 3D Human Pose Recognition (3차원 인체 포즈 인식을 이용한 상호작용 게임 콘텐츠 개발)

  • Choi, Yoon-Ji; Park, Jae-Wan; Song, Dae-Hyeon; Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.619-628 / 2011
  • Vision-based 3D human pose recognition technology is commonly used to convey human gestures in HCI (Human-Computer Interaction). Recognition methods based on a 2D pose model can recognize only simple 2D poses in particular environments. In contrast, a 3D pose model, which describes the 3D skeletal structure of the human body, can recognize more complex poses than a 2D model because it can use joint angles and the shape information of body parts. In this paper, we describe the development of interactive game contents using a pose recognition interface based on 3D human body joint information. Our system is designed so that users can control the game contents with body motion, without any additional equipment. Poses are recognized by comparing the current input pose with predefined pose templates, each consisting of 3D information for 14 human body joints. We implemented the game contents with our pose recognition system and confirmed the efficiency of the proposed system. In future work, we will improve the system so that poses can be recognized robustly in various environments.
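
The template-matching step described above can be illustrated in a few lines: the current pose (3D positions of 14 joints) is compared against predefined templates, and the nearest one within a threshold is reported. The joint layout, units, and threshold here are assumptions for illustration, not the paper's actual values:

```python
import math

NUM_JOINTS = 14

def pose_distance(pose_a, pose_b):
    """Mean Euclidean distance over corresponding joints."""
    return sum(math.dist(a, b) for a, b in zip(pose_a, pose_b)) / NUM_JOINTS

def recognize(pose, templates, threshold=0.15):
    """Name of the closest template, or None if nothing is close enough."""
    best_name, best_d = None, threshold
    for name, template in templates.items():
        d = pose_distance(pose, template)
        if d < best_d:
            best_name, best_d = name, d
    return best_name

# one hypothetical template and a slightly perturbed observation of it
templates = {"t_pose": [(0.1 * i, 1.0, 0.0) for i in range(NUM_JOINTS)]}
observed = [(0.1 * i, 1.02, 0.0) for i in range(NUM_JOINTS)]
result = recognize(observed, templates)
```

In practice the joint coordinates would first be normalized (e.g. relative to the torso and body scale) so the match is invariant to where the player stands.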

Inexpensive Visual Motion Data Glove for Human-Computer Interface Via Hand Gesture Recognition (손 동작 인식을 통한 인간 - 컴퓨터 인터페이스용 저가형 비주얼 모션 데이터 글러브)

  • Han, Young-Mo
    • The KIPS Transactions: Part B / v.16B no.5 / pp.341-346 / 2009
  • The motion data glove is a representative human-computer interaction tool that inputs human hand gestures to computers by measuring their motions. It is essential equipment for new computer technologies including home automation, virtual reality, biometrics, and motion capture. To promote its popular use, this paper attempts to develop an inexpensive visual-type motion data glove that can be used without any special equipment. The proposed approach has a special feature: it can be built at low cost because it does not use the high-cost motion-sensing fibers of conventional approaches, which makes easy production and widespread use possible. Instead of a mechanical method based on motion-sensing fibers, this approach adopts a visual method obtained by improving conventional optical motion capture technology. Compared to conventional visual methods, the proposed method has the following advantages and original features. First, conventional visual methods use many cameras and devices to reconstruct the 3D pose while eliminating occlusions, whereas the proposed method adopts a mono-vision approach that allows simple, low-cost equipment. Second, conventional mono-vision methods have difficulty reconstructing the 3D pose of occluded parts in images because they are weak against occlusion, but the proposed approach can reconstruct occluded parts using originally designed thin-bar-shaped optic indicators. Third, many conventional methods use nonlinear numerical image analysis algorithms, which are inconvenient in their initialization and computation times; the proposed method removes these inconveniences with a closed-form image analysis algorithm obtained from an original formulation. Fourth, many conventional closed-form algorithms use approximations in their formulation, so they suffer from low accuracy and confined applicability due to singularities. The proposed method avoids these disadvantages through an original formulation in which the closed-form algorithm is derived using exponential-form twist coordinates, instead of approximations or local parameterizations such as Euler angles.
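
The exponential-form twist coordinates mentioned in the fourth point can be sketched as the standard exponential map on SO(3) (Rodrigues' formula), which avoids the Euler-angle singularities the abstract refers to. This is a generic illustration of that parameterization, not the paper's full closed-form glove algorithm:

```python
import math

def exp_so3(w):
    """Rotation matrix exp([w]x) for a rotation vector w = theta * axis."""
    theta = math.sqrt(sum(c * c for c in w))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (c / theta for c in w)          # unit rotation axis
    K = [[0.0, -kz, ky], [kz, 0.0, -kx], [-ky, kx, 0.0]]  # skew matrix [k]x
    s, c = math.sin(theta), math.cos(theta)
    # Rodrigues: R = I + sin(theta) K + (1 - cos(theta)) K^2
    R = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            KK = sum(K[i][k] * K[k][j] for k in range(3))
            R[i][j] = (1.0 if i == j else 0.0) + s * K[i][j] + (1.0 - c) * KK
    return R

# quarter turn about the z-axis: maps the x-axis onto the y-axis
R = exp_so3([0.0, 0.0, math.pi / 2])
```

Because `theta` and the axis vary smoothly with `w`, this representation has no gimbal lock, which is what makes it attractive for closed-form derivations.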

Object Detection and Optical Character Recognition for Mobile-based Air Writing (모바일 기반 Air Writing을 위한 객체 탐지 및 광학 문자 인식 방법)

  • Kim, Tae-Il; Ko, Young-Jin; Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.5 / pp.53-63 / 2019
  • To provide a hand gesture interface through deep learning in mobile environments, research on light-weight networks is essential to achieve high recognition rates without degrading execution speed. This paper proposes a method for real-time recognition of characters written in the air with a finger on mobile devices, based on a light-weight deep-learning model. Using the SSD (Single Shot Detector) object detection model with MobileNet as its feature extractor, the method detects the index finger and generates a result text image by following the fingertip path. The image is then sent to a server, which recognizes the characters with a trained OCR model. To verify the method, 12 users tested 1,000 words on a GALAXY S10+; the written characters were recognized with an average accuracy of 88.6%, and the recognized text was displayed within 124 ms, showing that the method can be used in real time. The results of this research can be applied to sending simple text messages, writing memos, and making air signatures with a finger in mobile environments.
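
The path-following step can be sketched independently of the detector: per-frame fingertip positions (which a real system would obtain from the SSD/MobileNet detector) are accumulated and rasterized into an image for the OCR model. The function names, normalized coordinates, and image size below are illustrative assumptions:

```python
W, H = 64, 64  # assumed size of the image sent to the OCR model

def rasterize_path(points, w=W, h=H):
    """Draw fingertip positions (normalized to 0..1) into a w x h binary grid."""
    img = [[0] * w for _ in range(h)]
    for x, y in points:
        col = min(w - 1, max(0, int(x * w)))
        row = min(h - 1, max(0, int(y * h)))
        img[row][col] = 1
    return img

# a rough diagonal stroke, as if the user wrote "/" in the air
stroke = [(i / 10, 1 - i / 10) for i in range(11)]
img = rasterize_path(stroke)
```

A production version would also interpolate between consecutive detections and thicken the stroke, since fingertip detections arrive at discrete frames while OCR models expect continuous strokes.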

Image Processing Algorithms for DI-method Multi Touch Screen Controllers (DI 방식의 대형 멀티터치스크린을 위한 영상처리 알고리즘 설계)

  • Kang, Min-Gu; Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.3 / pp.1-12 / 2011
  • Large multi-touch screens are usually built using infrared light, because building them with other technologies, such as existing resistive overlays, capacitive overlays, or acoustic waves, faces technical constraints or cost problems. Using infrared light is easy but runs into technical limits in implementation. To make up for these problems, two other methods were suggested in Microsoft's Surface project, a next-generation user-interface concept: Frustrated Total Internal Reflection (FTIR), which uses infrared cameras, and Diffuse Illumination (DI). Both FTIR and DI are easy to implement on large screens and are not influenced by the number of touch points. Although FTIR has an advantage in detecting touch points, it also has many disadvantages, such as limits on screen size, the quality of materials, the infrared LED array module, and high power consumption. DI, on the other hand, has difficulty detecting touch points because of its structural problems, but it can resolve the problems of FTIR. In this thesis, we study algorithms for effectively correcting the distortion of the optical lens and image processing algorithms that solve the touch detection problem of the original DI method. Moreover, we suggest calibration algorithms for improving the accuracy of multi-touch, and a new tracking technique for accurate movement and gestures on the touch device. To verify our approaches, we implemented a table-based multi-touch screen.
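
The tracking step mentioned above can be illustrated with a simple nearest-neighbor matcher: touch blobs detected in the current camera frame are matched to previously tracked touches so that each finger keeps a stable id across frames, which is what gesture recognition builds on. The matching radius and data layout are assumptions for illustration, not the thesis's actual algorithm:

```python
import itertools
import math

ids = itertools.count()  # source of fresh ids for newly appearing touches

def track(prev, detections, max_dist=30.0):
    """Match detections [(x, y)] to tracked touches {id: (x, y)}.

    Each previous touch claims its nearest unclaimed detection within
    max_dist pixels; leftover detections become new touches.
    """
    updated, unused = {}, list(detections)
    for tid, pos in prev.items():
        if not unused:
            break
        nearest = min(unused, key=lambda p: math.dist(p, pos))
        if math.dist(nearest, pos) <= max_dist:
            updated[tid] = nearest
            unused.remove(nearest)
    for p in unused:
        updated[next(ids)] = p
    return updated

touches = track({}, [(100, 100), (200, 50)])    # two fingers appear
touches = track(touches, [(104, 102), (198, 55)])  # both move slightly
```

Greedy nearest-neighbor matching is enough for well-separated fingers; crossing trajectories would call for globally optimal assignment (e.g. the Hungarian algorithm) and velocity prediction.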