• Title/Summary/Keyword: Motion Gesture Recognition


Visual Touchless User Interface for Window Manipulation (윈도우 제어를 위한 시각적 비접촉 사용자 인터페이스)

  • Kim, Jin-Woo;Jung, Kyung-Boo;Jeong, Seung-Do;Choi, Byung-Uk
    • Journal of KIISE: Software and Applications / v.36 no.6 / pp.471-478 / 2009
  • Recently, research on user interfaces has advanced remarkably due to the explosive growth of 3-dimensional contents and applications and the broadening base of computer users. This paper proposes a novel method for manipulating windows efficiently using only intuitive hand motions. Previous methods have drawbacks such as the burden of expensive devices, the high complexity of gesture recognition, and the need for additional information such as markers. To remedy these defects, we propose a novel visual touchless interface. First, to control windows with the hand, we detect the hand region using the hue channel of the HSV color space. The distance transform method is applied to detect the centroid of the hand, and the curvature of the hand contour is used to locate the fingertips. Finally, using the hand motion information, we recognize the hand gesture as one of seven predefined motions. The recognized hand gesture becomes a command to control a window. Because the method adopts a stereo camera, the user can manipulate windows with a sense of depth in the real environment. Intuitive manipulation is also possible because the proposed method supports visual touch of the virtual object the user wants to manipulate, using only simple hand motions. Finally, the efficiency of the proposed method is verified via an application built on the proposed interface.
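The first two steps this abstract describes (hue-channel skin detection, then a distance transform to find the palm centre) can be sketched briefly. This is a minimal NumPy illustration, not the authors' implementation: the hue range and the brute-force distance transform are illustrative assumptions, and a real system would use an optimized routine such as OpenCV's `cv2.distanceTransform`.

```python
import numpy as np

def skin_mask(hsv, h_lo=0, h_hi=20):
    """Threshold the hue channel of an HSV image to isolate skin-colored pixels.
    The hue range is an assumed placeholder; it must be tuned per camera."""
    h = hsv[..., 0]
    return (h >= h_lo) & (h <= h_hi)

def palm_center(mask):
    """Naive distance transform: the palm centre is the foreground pixel
    farthest from any background pixel."""
    ys, xs = np.nonzero(mask)
    bg_ys, bg_xs = np.nonzero(~mask)
    fg = np.stack([ys, xs], axis=1)[:, None, :]          # (n_fg, 1, 2)
    bg = np.stack([bg_ys, bg_xs], axis=1)[None, :, :]    # (1, n_bg, 2)
    # Distance from each foreground pixel to its nearest background pixel
    d = np.sqrt(((fg - bg) ** 2).sum(-1)).min(axis=1)
    i = int(d.argmax())
    return int(ys[i]), int(xs[i])
```

On a toy image with a square "hand" region, `palm_center` returns the centre of the square, which matches the intuition that the palm centroid maximizes distance to the hand boundary.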

A Study on Comparative Experiment of Hand-based Interface in Immersive Virtual Reality (몰입형 가상현실에서 손 기반 인터페이스의 비교 실험에 관한 연구)

  • Kim, Jinmo
    • Journal of the Korea Computer Graphics Society / v.25 no.2 / pp.1-9 / 2019
  • This study compares hand-based interfaces to improve a user's virtual reality (VR) presence by enhancing user immersion in VR interactions. To provide an immersive experience in which users can directly control the virtual environment and its objects with their hands, while simultaneously minimizing the device burden on users of immersive VR systems, we designed two experimental interfaces: hand motion recognition sensor-based interaction and controller-based interaction. Hand motion recognition sensor-based interaction reflects accurate hand movements, direct gestures, and motion representations in the virtual environment, and it requires no device beyond the VR head-mounted display (HMD). Controller-based interaction uses a generalized interface that maps gestures to the keys of the controller bundled with the VR HMD, for easy access. The comparative experiments in this study confirm the convenience and intuitiveness of VR interactions using the user's hands.

Developing Interactive Game Contents using 3D Human Pose Recognition (3차원 인체 포즈 인식을 이용한 상호작용 게임 콘텐츠 개발)

  • Choi, Yoon-Ji;Park, Jae-Wan;Song, Dae-Hyeon;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.619-628 / 2011
  • Vision-based 3D human pose recognition is commonly used to convey human gestures in HCI (Human-Computer Interaction). A 2D pose model recognizes only simple 2D human poses in constrained environments. On the other hand, a 3D pose model, which describes the 3D human skeletal structure, can recognize more complex poses than a 2D model because it can use joint angles and shape information of body parts. In this paper, we describe the development of interactive game contents using a pose recognition interface based on 3D human body joint information. Our system is designed so that users can control the game contents with body motion, without any additional equipment. Poses are recognized by comparing the current input pose with predefined pose templates consisting of 3D information for 14 human body joints. We implemented the game contents with our pose recognition system and confirmed the efficiency of the proposed system. In the future, we will improve the system so that poses can be recognized robustly in various environments.
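The template comparison this abstract describes (matching a current pose of 14 3D joints against predefined templates) reduces to a nearest-neighbour lookup. A minimal sketch, assuming normalized joint coordinates and a Euclidean distance metric (the paper's exact distance measure is not specified here):

```python
import numpy as np

def recognize_pose(joints, templates, names):
    """joints: (14, 3) array of 3D joint positions.
    templates: (k, 14, 3) array of predefined pose templates.
    names: list of k template labels.
    Returns the label of the nearest template by Frobenius distance."""
    d = np.linalg.norm(templates - joints[None], axis=(1, 2))
    return names[int(d.argmin())]
```

In practice a rejection threshold on the minimum distance would be added so that poses matching no template are reported as "unknown".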

Volume Motion Template For View Independent Gesture Recognition (시점에 독립적인 제스처 인식을 위한 볼륨 모션 템플릿)

  • Shin H.-K.;Lee S.-W.
    • Proceedings of the Korean Information Science Society Conference / 2005.11b / pp.844-846 / 2005
  • This paper proposes a volume motion template for view-independent gesture recognition. In existing gesture research, the viewpoint problem and variance in action speed are important yet difficult issues. First, the viewpoint problem arises in single-direction camera environments using one monocular or stereo camera and is hard to solve. To overcome the drawback of previous work, which required training from every viewpoint, we propose a volume motion template that can recognize gestures independently of the input viewpoint. The volume motion template provides an optimal virtual viewpoint using depth information and the directionality of motion, and it also improves the reliability and extensibility of the system. Second is the problem of speed variance that occurs each time a gesture is performed. This can be solved by time-normalizing the input gesture; we solve it by using the amount of motion instead of time information. We conducted experiments with inputs from various viewpoints using the volume motion template and, compared with the conventional motion history image, obtained view-independent results.
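The motion history image (MHI) that this paper uses as its comparison baseline, and extends into a volume template, is built from a simple per-frame update rule: moving pixels are set to a maximum timestamp and all others decay. A minimal 2D sketch of that baseline update (the volume extension adds a depth axis, not shown here):

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, delta=1):
    """One Motion History Image update step.
    Pixels flagged as moving are set to tau; all others decay by delta
    toward zero, so brighter pixels encode more recent motion."""
    return np.where(motion_mask, tau, np.maximum(mhi - delta, 0))
```

Iterating this over a frame sequence yields a single image whose intensity gradient encodes the direction and recency of motion, which is what makes MHIs (and their volumetric extension) usable as gesture templates.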


Hand Motion Gesture Recognition at A Distance with Skin-color Detection and Feature Points Tracking (피부색 검출 및 특징점 추적을 통한 원거리 손 모션 제스처 인식)

  • Yun, Jong-Hyun;Kim, Sung-Young
    • Proceedings of the Korea Information Processing Society Conference / 2012.11a / pp.594-596 / 2012
  • This paper proposes a method that tracks global hand motion based on skin-color detection and generates motion vectors to recognize gestures. For tracking, the Shi-Tomasi feature detection method and the Lucas-Kanade optical flow estimation method are used. Because the shape of the hand varies widely during hand motion, the common approach of continuously tracking the initially detected feature points cannot track the hand properly. Therefore, in this paper, new feature points are detected in every frame, optical flow is estimated, and outliers are removed, so that motion vectors can be generated through tracking even as the hand shape changes. The motion vectors are then classified with an artificial neural network to finally recognize the hand motion gesture.
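In the pipeline above, feature detection and optical flow would come from library routines (in OpenCV, `cv2.goodFeaturesToTrack` and `cv2.calcOpticalFlowPyrLK`). The per-frame step that is specific to this method, rejecting outlier flow vectors and reducing the rest to one global motion vector, can be sketched as follows. The median-deviation threshold is an assumed heuristic, not the paper's stated criterion:

```python
import numpy as np

def inlier_motion(prev_pts, next_pts, k=2.0):
    """Given matched feature points from two frames, discard outlier
    flow vectors (far from the median vector) and return the mean of
    the inliers as the frame's global hand-motion vector."""
    v = next_pts - prev_pts                      # per-feature flow vectors
    med = np.median(v, axis=0)                   # robust central motion
    dev = np.linalg.norm(v - med, axis=1)        # deviation from the median
    thresh = k * (np.median(dev) + 1e-9)         # scale-adaptive cutoff
    return v[dev <= thresh].mean(axis=0)
```

The resulting sequence of per-frame vectors is what would then be fed to the neural-network classifier.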

Control of Ubiquitous Environment using Sensors Module (센서모듈을 이용한 유비쿼터스 환경의 제어)

  • Jung, Tae-Min;Choi, Woo-Kyung;Kim, Seong-Joo;Jeon, Hong-Tae
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.2 / pp.190-195 / 2007
  • As the ubiquitous era arrives, it has become necessary to construct environments that can provide more useful information to people in the spaces where they live, such as homes and offices. For this reason, the network of peripheral ubiquitous devices should be constituted efficiently. To this end, this paper studies human patterns by classifying motion recognition using sensor module data, processed with neural network and fuzzy algorithms. This pattern classification can help control home network system communication. We suggest a system that can control a home network more easily through patterned movements, and that controls the ubiquitous environment by grasping a human's movement and condition.

Comparative Behavior Analysis in Love Model with Same and Different Time Delay (동일 시간 지연과 서로 다른 시간 지연을 갖는 사랑모델에서의 비교 거동 해석)

  • Huang, Linyun;Ba, Young-Chul
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.3 / pp.210-216 / 2015
  • It is well known that the structure of the brain and human consciousness exhibit the phenomena of a complex system. Human emotions are of many kinds; love is one of them, and it has been studied in sociology and psychology as a matter of great interest. In this paper, we consider the same and different time delays in the love equations of Romeo and Juliet. We represent the behavior of love as time series and phase portraits, and analyze the difference in behavior between the same and different time delays.

A Study on the Development of Multi-User Virtual Reality Moving Platform Based on Hybrid Sensing (하이브리드 센싱 기반 다중참여형 가상현실 이동 플랫폼 개발에 관한 연구)

  • Jang, Yong Hun;Chang, Min Hyuk;Jung, Ha Hyoung
    • Journal of Korea Multimedia Society / v.24 no.3 / pp.355-372 / 2021
  • Recently, high-performance HMDs (Head-Mounted Displays) have become wireless due to the growth of virtual reality technology. Accordingly, environmental constraints on hardware usage are reduced, enabling multiple users to experience virtual reality within a single space simultaneously. Existing multi-user virtual reality platforms use location tracking and motion sensing based on vision sensors and active markers. However, immersion decreases due to overlapping markers or frequent matching errors caused by reflected light. The goal of this study is to develop a multi-user virtual reality moving platform, operating in a single space, that can resolve both sensing errors and the decrease in user immersion. To achieve this goal, a hybrid sensing technology was developed that converges vision-sensor position tracking, IMU (Inertial Measurement Unit) motion capture, and gesture recognition based on smart gloves. In addition, an integrated safety operation system was developed that ensures the safety of the users and supports multimodal feedback without decreasing immersion. A 6 m × 6 m × 2.4 m test bed was configured to verify the effectiveness of the multi-user virtual reality moving platform with four users.

NUI/NUX framework based on intuitive hand motion (직관적인 핸드 모션에 기반한 NUI/NUX 프레임워크)

  • Lee, Gwanghyung;Shin, Dongkyoo;Shin, Dongil
    • Journal of Internet Computing and Services / v.15 no.3 / pp.11-19 / 2014
  • The natural user interface/experience (NUI/NUX) provides a natural motion interface without devices or tools such as mice, keyboards, pens, and markers. Until now, typical motion recognition methods have used markers, receiving the coordinates of each marker as relative data and storing each coordinate value in a database. However, to recognize motion accurately, more markers are needed, and much time is spent attaching markers and processing the data. Also, because NUI/NUX frameworks have been developed without their most important ingredient, intuitiveness, usability problems arise and users are forced to learn the conventions of many different frameworks. To address this problem, in this paper we dispense with markers and implement the framework so that anyone can use it. We also design a multi-modal NUI/NUX framework that handles voice, body motion, and facial expression simultaneously, and propose a new algorithm for mouse operation that recognizes intuitive hand gestures and maps them onto the monitor. We implement it so that users can perform the "hand mouse" operation easily and intuitively.
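The "hand mouse" mapping mentioned above, projecting a tracked hand position onto monitor coordinates, can be illustrated with a small sketch. This is not the paper's algorithm; the normalized-coordinate input, screen size, and exponential smoothing (to damp hand jitter) are all illustrative assumptions:

```python
def hand_to_screen(nx, ny, width=1920, height=1080, prev=None, alpha=0.5):
    """Map a normalized hand position (nx, ny in 0..1) to screen pixels.
    If a previous cursor position is given, blend toward the new position
    with factor alpha to smooth out tracking jitter."""
    x, y = nx * (width - 1), ny * (height - 1)
    if prev is not None:
        px, py = prev
        x = alpha * x + (1 - alpha) * px
        y = alpha * y + (1 - alpha) * py
    return (x, y)
```

Calling this once per tracked frame, feeding each result back in as `prev`, yields a cursor that follows the hand while filtering high-frequency tremor.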

A Study on the Windows Application Control Model Based on Leap Motion (립모션 기반의 윈도우즈 애플리케이션 제어 모델에 관한 연구)

  • Kim, Won
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.111-116 / 2019
  • With the recent rapid development of computer capabilities, various technologies that can facilitate interaction between humans and computers are being studied. The paradigm is shifting from GUIs using traditional input devices to NUIs using the body, such as 3D motion, haptics, and multi-touch. Various studies have been conducted on transferring human movements to computers using sensors, and with the development of optical sensors that can capture 3D objects, the range of applications in the industrial, medical, and user interface fields has expanded. In this paper, I provide a model that can launch programs through gestures instead of the mouse, the default input device, and control Windows based on Leap Motion. The proposed model also converges with an Android application and can be controlled through various media and voice instruction functions, using voice recognition and buttons through a connection with a main client. It is expected that Internet media such as video and music can be controlled not only on a client computer but also by an application at a distance, and that convenient media viewing can be achieved through the proposed model.