• Title/Summary/Keyword: 3D gesture recognition technology


Gesture Recognition Using a 3D Skeleton Model (3D Skeleton Model을 이용한 제스처 인식)

  • Ahn, Yang-Keun;Jung, Kwang-Mo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2015.10a
    • /
    • pp.1677-1678
    • /
    • 2015
  • This paper proposes a method for recognizing gestures using joint information obtained from a 3D skeleton model. Based on the observation that the structure of the human body is the same even though body sizes and proportions differ, gestures are recognized using the angles formed between joints. Several gestures were selected, and the recognition rate of the proposed method was measured through experiments. In addition, as groundwork for dynamic gesture recognition, experiments measuring movement direction, movement distance, and movement position were also carried out.
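
The joint-angle idea above can be illustrated with a short sketch: given three 3D joint positions from a skeleton tracker, the angle at the middle joint is computed from the vectors to its neighbours, which makes the measure insensitive to body size and proportions. The function name and example coordinates are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle (degrees) at `joint` formed by the segments to `parent` and `child`.

    Each argument is a 3D position, e.g. from a skeleton tracker.
    """
    v1 = np.asarray(parent, dtype=float) - np.asarray(joint, dtype=float)
    v2 = np.asarray(child, dtype=float) - np.asarray(joint, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example: elbow angle from shoulder, elbow, and wrist positions.
shoulder, elbow, wrist = (0, 0, 0), (0.3, 0, 0), (0.3, 0.25, 0)
print(round(joint_angle(shoulder, elbow, wrist)))  # ~90
```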

Gesture based Input Device: An All Inertial Approach

  • Chang Wook;Bang Won-Chul;Choi Eun-Seok;Yang Jing;Cho Sung-Jung;Cho Joon-Kee;Oh Jong-Koo;Kim Dong-Yoon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.3
    • /
    • pp.230-245
    • /
    • 2005
  • In this paper, we develop a gesture-based input device equipped with accelerometers and gyroscopes. The sensors measure the accelerations and angular velocities produced by the movement of the device while a user inputs gestures on a planar surface or in 3D space. The gyroscope measurements are integrated to give the orientation of the device and are then used to compensate the accelerations. The compensated accelerations are doubly integrated to yield the position of the device. With this approach, a user's gesture input trajectories can be recovered without any external sensors. Three versions of the motion tracking algorithm are provided to cope with a wide spectrum of applications. A Bayesian-network-based recognition system then processes the recovered trajectories to identify the gesture class. Experimental results convincingly show the feasibility and effectiveness of the proposed gesture input device. To demonstrate the practical use of the proposed input method, we implemented a prototype system, a gesture-based remote controller (Magic Wand).
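
The trajectory-recovery pipeline described in the abstract (integrate angular rates into an orientation, gravity-compensate the accelerations, then doubly integrate) can be sketched as a naive strapdown integrator. This is only a simplified illustration of such a pipeline, not the authors' algorithm; a practical tracker also needs bias estimation and drift correction.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, 9.81])   # m/s^2 in the world frame

def reconstruct_trajectory(gyro, accel, dt):
    """Naive strapdown integration: integrate gyro rates (rad/s) into an
    orientation, rotate and gravity-compensate the accelerations, then
    doubly integrate to a position trace. Illustrative only.
    """
    R = np.eye(3)                       # body-to-world rotation
    vel, pos = np.zeros(3), np.zeros(3)
    trajectory = []
    for w, a in zip(gyro, accel):
        wx, wy, wz = np.asarray(w, dtype=float) * dt
        skew = np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])
        R = R @ (np.eye(3) + skew)      # first-order orientation update
        a_world = R @ np.asarray(a, dtype=float) - GRAVITY
        vel = vel + a_world * dt
        pos = pos + vel * dt
        trajectory.append(pos.copy())
    return np.array(trajectory)
```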

Development of a Hand-posture Recognition System Using 3D Hand Model (3차원 손 모델을 이용한 비전 기반 손 모양 인식기의 개발)

  • Jang, Hyo-Young;Bien, Zeung-Nam
    • Proceedings of the KIEE Conference
    • /
    • 2007.04a
    • /
    • pp.219-221
    • /
    • 2007
  • The recent shift toward ubiquitous computing calls for more natural human-computer interaction (HCI) interfaces that provide high information accessibility. Hand gestures, i.e., gestures performed by one or two hands, are emerging as a viable technology to complement or replace conventional HCI technology. This paper deals with hand-posture recognition, in which database construction is important. The human hand is composed of 27 bones, and the movement of its joints is modeled with 23 degrees of freedom. Even for the same hand-posture, captured images may differ depending on the user's characteristics and the relative position between the hand and the cameras. To address the difficulty of defining hand-postures and to construct a database of manageable size, we present a method using a 3D hand model. The database is constructed from the hand joint angles for each hand-posture and the corresponding silhouette images obtained from many viewpoints by projecting the model onto image planes. The proposed method does not require additional equations to define the movement constraints of each joint. The method also makes it easy to obtain images of one hand-posture from many viewpoints and distances, so the database can be constructed more precisely and concretely. The validity of the method is evaluated by applying it to a hand-posture recognition system.

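A minimal sketch of the database-construction step described above: projecting 3D hand-model surface points onto an image plane with a pinhole camera to obtain a crude silhouette for one viewpoint. The camera model, function name, and parameters are assumptions for illustration; the paper's actual rendering pipeline is not specified here.

```python
import numpy as np

def project_silhouette(points_3d, R, t, focal=500.0, size=(128, 128)):
    """Render a crude binary silhouette by projecting 3D surface points
    of a hand model onto an image plane with a pinhole camera.
    `R`, `t` pose the camera; point sampling and meshing are omitted.
    """
    h, w = size
    img = np.zeros((h, w), dtype=np.uint8)
    cam = (np.asarray(points_3d, dtype=float) @ R.T) + t   # world -> camera frame
    in_front = cam[:, 2] > 1e-6
    u = focal * cam[in_front, 0] / cam[in_front, 2] + w / 2
    v = focal * cam[in_front, 1] / cam[in_front, 2] + h / 2
    u, v = u.astype(int), v.astype(int)
    ok = (0 <= u) & (u < w) & (0 <= v) & (v < h)
    img[v[ok], u[ok]] = 255
    return img
```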

Investigating Smart TV Gesture Interaction Based on Gesture Types and Styles

  • Ahn, Junyoung;Kim, Kyungdoh
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.2
    • /
    • pp.109-121
    • /
    • 2017
  • Objective: This study aims to find suitable gesture types and styles for gesture interaction as a remote control on smart TVs. Background: Smart TVs are developing rapidly worldwide, and gesture interaction is a broad research area, especially for vision-based techniques. However, most studies focus on gesture recognition technology, and few previous studies have examined gesture types and styles on smart TVs. It is therefore necessary to determine which gesture types and styles users prefer for each operation command. Method: We conducted an experiment to extract the user manipulation commands required for smart TVs and to select the corresponding gestures. To do this, we examined the gesture styles people use for every operation command and checked whether there are gesture styles they prefer over others. Based on these results, the study proceeded by selecting smart TV operation commands and gestures. Results: Eighteen TV commands were used in this study. Using agreement level as the basis, we compared six gesture types and five gesture styles for each command. As for gesture type, participants generally preferred Path-Moving gestures. The Pan and Scroll commands showed the highest agreement level (1.00) among the 18 commands. As for gesture style, participants preferred a manipulative style for 11 commands (Next, Previous, Volume up, Volume down, Play, Stop, Zoom in, Zoom out, Pan, Rotate, Scroll). Conclusion: Through an analysis of user-preferred gestures, nine gesture commands are proposed for gesture control on smart TVs. Most participants preferred Path-Moving type and Manipulative style gestures based on the actual operations. Application: The results can be applied to more advanced forms of gesture in 3D environments, such as VR studies, and the method used in this study can be utilized in various domains.
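
In gesture-elicitation studies, an "agreement level" of the kind reported above is conventionally the sum of squared proportions of participants who proposed the same gesture for a command; a value of 1.00, as reported for Pan and Scroll, means every participant proposed the same gesture. A minimal sketch under that assumption (the paper may use a variant of the formula):

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement level for one command: sum over identical-gesture groups
    of (group size / total proposals) squared. 1.0 means all participants
    proposed the same gesture.
    """
    total = len(proposals)
    groups = Counter(proposals)
    return sum((n / total) ** 2 for n in groups.values())

# Example: 8 participants proposing gestures for a "Scroll" command.
print(agreement_score(["swipe_up"] * 8))                  # 1.0
print(agreement_score(["swipe_up"] * 5 + ["point"] * 3))  # 0.53
```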

Study on the Hand Gesture Recognition System and Algorithm based on Millimeter Wave Radar (밀리미터파 레이더 기반 손동작 인식 시스템 및 알고리즘에 관한 연구)

  • Lee, Youngseok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.3
    • /
    • pp.251-256
    • /
    • 2019
  • In this paper, we propose a system and algorithm to recognize hand gestures based on millimeter-wave radar operating in the 65 GHz band. The proposed system is composed of a millimeter-wave radar board, an analog-to-digital conversion and data capture board, and a notebook PC that runs the gesture recognition algorithms. As feature vectors in the proposed algorithm, we use global and local Zernike moment descriptors, which are robust to distortion of 2D data by rotation or scaling. In the experiments, the performance of the proposed algorithm is evaluated and compared with that of algorithms using only a global or only a local Zernike descriptor as the feature vector. In the confusion matrix analysis, the proposed algorithm shows better precision, accuracy, and sensitivity; the total performance index of our method is 95.6%, compared with 88.4% and 84% for the other two methods.
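
The confusion-matrix comparison mentioned above can be sketched as follows: per-class precision and sensitivity plus overall accuracy derived from a confusion matrix. The example matrix is made up for illustration and is not the paper's data.

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, sensitivity (recall), and overall accuracy from a
    confusion matrix whose rows are true classes and columns predictions.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)      # TP / (TP + FP), per class
    sensitivity = tp / cm.sum(axis=1)    # TP / (TP + FN), per class
    accuracy = tp.sum() / cm.sum()
    return precision, sensitivity, accuracy

# Example: three hand gestures, 50 trials each (synthetic numbers).
cm = [[48, 1, 1],
      [2, 46, 2],
      [1, 3, 46]]
prec, sens, acc = per_class_metrics(cm)
print(acc)  # ~0.933
```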

Developing Interactive Game Contents using 3D Human Pose Recognition (3차원 인체 포즈 인식을 이용한 상호작용 게임 콘텐츠 개발)

  • Choi, Yoon-Ji;Park, Jae-Wan;Song, Dae-Hyeon;Lee, Chil-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.12
    • /
    • pp.619-628
    • /
    • 2011
  • Vision-based 3D human pose recognition technology is commonly used to convey human gestures in HCI (Human-Computer Interaction). Recognition methods based on a 2D pose model recognize simple 2D human poses in particular environments. In contrast, a 3D pose model, which describes the 3D skeletal structure of the human body, can recognize more complex 3D poses than a 2D pose model because it can use joint angles and shape information of body parts. In this paper, we describe the development of interactive game contents using a pose recognition interface based on 3D human body joint information. The system is designed so that users can control the game contents with body motion without any additional equipment. Poses are recognized by comparing the current input pose with predefined pose templates consisting of the 3D information of 14 human body joints. We implemented the game contents with our pose recognition system and confirmed the efficiency of the proposed system. In the future, we will improve the system so that poses can be recognized robustly in various environments.
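
A minimal sketch of the template-matching step described above, assuming each pose is a set of 14 joints given as (14, 3) coordinate arrays: poses are centred and scale-normalized, then the nearest template by mean joint distance is selected. The normalization, threshold, and names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def match_pose(current, templates, threshold=0.15):
    """Nearest-template pose matching over 14 joints given as (14, 3) arrays.

    Poses are centred on their mean joint and scaled to unit spread so the
    comparison tolerates differences in position and body size. Returns the
    best template name, or None if no template is close enough.
    """
    def normalize(p):
        p = np.asarray(p, dtype=float)
        p = p - p.mean(axis=0)
        return p / (np.linalg.norm(p) + 1e-9)

    cur = normalize(current)
    best_name, best_dist = None, float("inf")
    for name, tmpl in templates.items():
        dist = np.linalg.norm(cur - normalize(tmpl), axis=1).mean()
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None
```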

Study on Virtual Reality (VR) Operating System Prototype (가상환경(VR) 운영체제 프로토타입 연구)

  • Kim, Eunsol;Kim, Jiyeon;Yoo, Eunjin;Park, Taejung
    • Journal of Broadcast Engineering
    • /
    • v.22 no.1
    • /
    • pp.87-94
    • /
    • 2017
  • This paper presents a prototype of a virtual reality operating system (VR OS) concept using a head-mounted display (HMD) and hand gesture recognition technology, built on a game engine (Unity3D). We designed and implemented a simple multitasking thread mechanism on top of the real-time environment provided by the Unity3D game engine. Our virtual reality operating system receives user input from a hand gesture recognition device (Leap Motion) to simulate a mouse and keyboard, and provides output via a head-mounted display (Oculus Rift DK2). As a result, our system provides users with a broader and more immersive work environment by implementing a 360-degree workspace.
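
The idea of simple multitasking layered on a real-time engine loop can be approximated, purely for illustration, by a cooperative scheduler that ticks generator-based tasks once per frame. The paper's implementation runs inside Unity3D; the Python sketch below only mirrors the concept and is not the authors' code.

```python
import time

class CooperativeScheduler:
    """Minimal cooperative multitasking on a frame loop: each task is a
    generator that yields to hand control back, similar in spirit to
    engine coroutines ticked once per rendered frame.
    """
    def __init__(self):
        self.tasks = []

    def spawn(self, generator):
        self.tasks.append(generator)

    def tick(self):
        alive = []
        for task in self.tasks:
            try:
                next(task)          # run the task until its next yield
                alive.append(task)
            except StopIteration:
                pass                # task finished; drop it
        self.tasks = alive

def blink_cursor():
    while True:
        print("cursor on"); yield
        print("cursor off"); yield

sched = CooperativeScheduler()
sched.spawn(blink_cursor())
for _ in range(3):                  # stand-in for the per-frame update loop
    sched.tick()
    time.sleep(1 / 60)
```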

An Experimental Research on the Usability of Indirect Control using Finger Gesture Interaction in Three Dimensional Space (3차원 공간에서 손가락 제스쳐 인터랙션을 이용한 간접제어의 사용성에 관한 실험연구)

  • Ham, Kyung Sun;Lee, Dahye;Hong, Hee Jung;Park, Sungjae;Kim, Jinwoo
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.11
    • /
    • pp.519-532
    • /
    • 2014
  • Emerging technologies for natural computer interaction can give manufacturers new opportunities for product innovation. This paper is a study of finger gesture interaction as a method of human communication. As technological advances have been rapid over the last few decades, products and services using such interaction will soon become popular. The purposes of this experiment are as follows: What is the usefulness of gesture interaction, and what is its cognitive impact on users? The finger gesture interactions consist of poking, picking, and grasping. By measuring the usability of each gesture in 2D and 3D space, this study shows the effect of finger gesture interaction. The 2D and 3D experimental tools were developed using LeapMotion technology. The experiments, involving 48 subjects, show that there is no difference in usability between the gestures in 2D space, whereas a meaningful difference was found in 3D space. In addition, all gestures showed better usability in 2D space than in 3D space. Notably, using a single finger proved better than using multiple fingers.
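
The 2D-versus-3D comparison could, for example, be analyzed with a paired test across participants and a one-way ANOVA across the three gestures. The abstract does not state which tests the authors used, and the data below are synthetic, so this is only an illustrative sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical usability scores (0-100) for one gesture, measured per
# participant in 2D and in 3D space; not the paper's data.
scores_2d = rng.normal(78, 8, size=48)
scores_3d = rng.normal(70, 8, size=48)

# Paired comparison: the same participants performed both conditions.
t, p = stats.ttest_rel(scores_2d, scores_3d)
print(f"t = {t:.2f}, p = {p:.4f}")  # small p suggests a 2D/3D usability gap

# Comparing the three gestures within one condition could use one-way ANOVA.
poking, picking, grasping = (rng.normal(m, 8, 48) for m in (72, 70, 69))
f, p = stats.f_oneway(poking, picking, grasping)
print(f"F = {f:.2f}, p = {p:.4f}")
```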

Recent Technologies for the Acquisition and Processing of 3D Images Based on Deep Learning (딥러닝기반 입체 영상의 획득 및 처리 기술 동향)

  • Yoon, M.S.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.5
    • /
    • pp.112-122
    • /
    • 2020
  • In 3D computer graphics, a depth map is an image that provides information about the distance from the viewpoint to the subject's surface. Stereo sensors, depth cameras, and imaging systems using an active illumination system and a time-resolved detector can perform accurate depth measurements with their own light sources. The 3D image information obtained through the depth map is useful in 3D modeling, autonomous vehicle navigation, object recognition and remote gesture detection, resolution-enhanced medical imaging, aviation and defense technology, and robotics. In addition, depth map information is important data for extracting and restoring multi-view images and for extracting the phase information required for digital hologram synthesis. This study reviews recent research trends in deep learning-based 3D data analysis methods and in depth map extraction using convolutional neural networks. It also focuses on 3D image processing technology related to digital holograms and multi-view image extraction/reconstruction, which are becoming more popular as the computing power of hardware rapidly increases.
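
As a schematic illustration of CNN-based depth-map extraction of the kind discussed in the survey, a toy encoder-decoder that maps an RGB image to a dense depth map is sketched below. It is far smaller than any surveyed network, and the architecture is an assumption chosen only for illustration.

```python
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Schematic encoder-decoder CNN that maps an RGB image to a dense,
    non-negative depth map; for illustration only.
    """
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.Softplus(),                       # depths are non-negative
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))  # (N, 1, H, W) depth map

model = TinyDepthNet()
dummy = torch.randn(1, 3, 128, 128)              # one fake RGB frame
print(model(dummy).shape)                         # torch.Size([1, 1, 128, 128])
```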

Development for Multi-modal Realistic Experience I/O Interaction System (멀티모달 실감 경험 I/O 인터랙션 시스템 개발)

  • Park, Jae-Un;Whang, Min-Cheol;Lee, Jung-Nyun;Heo, Hwan;Jeong, Yong-Mu
    • Science of Emotion and Sensibility
    • /
    • v.14 no.4
    • /
    • pp.627-636
    • /
    • 2011
  • The purpose of this study is to develop a multi-modal interaction system that provides a realistic and immersive experience through multi-modal interaction. The system recognizes user behavior, intention, and attention, which overcomes the limitations of uni-modal interaction. The multi-modal interaction system is based on gesture interaction methods, intuitive gesture interaction, and attention evaluation technology. The gesture interaction methods were based on sensors selected by analyzing, through meta-analysis, the accuracy of 3D gesture recognition technologies. The elements of intuitive gesture interaction were incorporated based on experimental results, and the attention evaluation technology was developed through physiological signal analysis. The system is divided into three modules: a motion cognitive system, an eye gaze detecting system, and a bio-reaction sensing system. The first module, the motion cognitive system, uses an accelerometer and flexible sensors to recognize the user's hand and finger movements. The second module, the eye gaze detecting system, detects pupil movements and reactions. The final module is a bio-reaction sensing (attention evaluation) system that tracks cardiovascular and skin temperature reactions. This study will be used for the development of realistic digital entertainment technology.

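A rough sketch of how the outputs of the three modules described above might be fused into a single attention estimate is given below. The data structure, field names, weights, and fusion rule are hypothetical illustrations, not the system described in the paper.

```python
from dataclasses import dataclass

@dataclass
class ModalFrame:
    """One synchronized sample from the three hypothetical modules."""
    hand_motion: float    # normalized acceleration magnitude, 0..1 (motion module)
    gaze_on_target: bool  # eye tracker reports fixation on the UI target
    heart_rate: float     # beats per minute (bio-reaction module)

def estimate_attention(frame, resting_hr=70.0):
    """Toy fusion rule: attention rises with on-target gaze and arousal
    (heart rate above resting) and falls with large idle hand motion.
    Weights are illustrative, not from the paper.
    """
    arousal = min(max((frame.heart_rate - resting_hr) / 30.0, 0.0), 1.0)
    gaze = 1.0 if frame.gaze_on_target else 0.0
    return 0.5 * gaze + 0.3 * arousal + 0.2 * (1.0 - frame.hand_motion)

frame = ModalFrame(hand_motion=0.2, gaze_on_target=True, heart_rate=82)
print(round(estimate_attention(frame), 2))  # 0.78
```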