• Title/Abstract/Keyword: gesture

Search results: 934 items

Proposal of Camera Gesture Recognition System Using Motion Recognition Algorithm

  • Moon, Yu-Sung;Kim, Jung-Won
    • 전기전자학회논문지 / Vol. 26, No. 1 / pp.133-136 / 2022
  • This paper concerns a motion gesture recognition system and proposes the following improvement to the flaws of current systems: a motion gesture recognition system and algorithm that uses video images of the entire hand and reads its motion gestures to improve recognition accuracy. The system comprises an image capturing unit that captures images of the area designated for gesture reading, a motion extraction unit that extracts the motion region from the image, and a hand gesture recognition unit that reads the motion gestures in the extracted region. The proposed motion gesture algorithm achieves a 20% improvement in recognition over the current system.
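
The abstract describes a three-unit pipeline (image capture, motion extraction, hand gesture recognition) without giving implementation details. Below is a minimal, purely illustrative Python/OpenCV sketch of that structure; the frame-differencing step and the stub classifier are assumptions, not the authors' algorithm (OpenCV 4.x API assumed).

```python
# Illustrative sketch only: capture -> motion extraction -> recognition.
# Frame differencing and the stub classifier are assumptions, not the paper's method.
import cv2

def extract_motion_region(prev_gray, gray, thresh=25):
    """Return the largest moving region of the current frame, or None."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return gray[y:y + h, x:x + w]

def recognize_gesture(region):
    """Placeholder for the hand gesture recognition unit."""
    return "none" if region is None else "hand_motion"

cap = cv2.VideoCapture(0)                                 # image capturing unit
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    region = extract_motion_region(prev_gray, gray)       # motion extraction unit
    print(recognize_gesture(region))                      # hand gesture recognition unit
    prev_gray = gray
cap.release()
```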

동작 인식 게임의 융합 발전 방향 (A Study on Convergence Development Direction of Gesture Recognition Game)

  • 이면재
    • 한국융합학회논문지 / Vol. 5, No. 4 / pp.1-7 / 2014
  • Gesture recognition is a technology that recognizes and processes motion, providing users with convenience and intuitiveness. Because of these advantages, gesture recognition technology has been converged with and applied to many fields, such as the military, medicine, and education. In the game field in particular, gesture recognition is being combined with medical, military, and educational applications because it allows players to play with movements similar to real actions. Against this background, this paper discusses the convergence development direction of gesture recognition games. To this end, it reviews the current state of gesture recognition technology and related games, and describes the problems of gesture recognition games and ways to improve them. This paper can help improve the convergence competitiveness of domestic gesture recognition games.

The Effect of Gesture-Command Pairing Condition on Learnability when Interacting with TV

  • Jo, Chun-Ik;Lim, Ji-Hyoun;Park, Jun
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.525-531 / 2012
  • Objective: The aim of this study is to investigate the learnability of gesture-command pairs when people use gestures to control a device. Background: In a vision-based gesture recognition system, selecting the gesture-command pairing is critical for usability and learning. Subjective preference and its agreement score, used in a previous study (Lim et al., 2012), were used to group four gesture-command pairings. To quantify learnability, two learning models, an average time model and a marginal time model, were used. Method: Two sets of eight gestures, sixteen gestures in total, were listed by agreement score and preference data. Fourteen participants, divided into two groups, memorized one set of gesture-command pairs each and performed the gestures. For a given command, the time to recall the paired gesture was collected. Results: The average recall times in the initial trials differed by preference and agreement score, as did the learning rate R derived from the two learning models. Conclusion: Preference and agreement score influenced the learning of gesture-command pairs. Application: This study could be applied to any device for which a gesture interaction system is considered for device control.
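
The abstract names an average time model and a marginal time model but does not give their equations. Purely as an illustration, the sketch below fits a generic power-law learning curve to per-trial recall times and extracts a learning rate; the functional form and the data are assumptions, not the study's models.

```python
# Illustrative only: fits T_n = T_1 * n**(-r) to hypothetical recall times and
# reports the learning rate r. Not the paper's average/marginal time models.
import numpy as np

def fit_learning_rate(recall_times):
    """Estimate T_1 and learning rate r from recall times of successive trials."""
    n = np.arange(1, len(recall_times) + 1)
    slope, intercept = np.polyfit(np.log(n), np.log(recall_times), 1)
    return np.exp(intercept), -slope   # (T_1, r)

# Hypothetical recall times (seconds) over 8 trials for one gesture-command pair.
times = [4.1, 3.2, 2.8, 2.5, 2.3, 2.2, 2.1, 2.0]
t1, r = fit_learning_rate(times)
print(f"T1 = {t1:.2f} s, learning rate r = {r:.2f}")
```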

바디 제스처 인식을 위한 기초적 신체 모델 인코딩과 선택적 / 비동시적 입력을 갖는 병렬 상태 기계 (Primitive Body Model Encoding and Selective / Asynchronous Input-Parallel State Machine for Body Gesture Recognition)

  • 김주창;박정우;김우현;이원형;정명진
    • 로봇학회논문지 / Vol. 8, No. 1 / pp.1-7 / 2013
  • Body gesture recognition has been one of the research fields of interest for Human-Robot Interaction (HRI). Most conventional body gesture recognition algorithms use the Hidden Markov Model (HMM) to model gestures, which have spatio-temporal variability. However, HMM-based algorithms have difficulty excluding meaningless gestures. Moreover, conventional algorithms must first perform gesture segmentation and then send the extracted gesture to the HMM for recognition. This separated design causes a delay between two consecutive gestures to be recognized, making it inappropriate for continuous gesture recognition. To overcome these two limitations, this paper proposes primitive body model encoding, which performs spatio-temporal quantization of motions from a human body model and encodes them into predefined primitive codes for each link of the body model, and a Selective/Asynchronous Input-Parallel State Machine (SAI-PSM) for multiple simultaneous gesture recognition. The experimental results showed that the proposed system using primitive body model encoding and SAI-PSM excludes meaningless gestures well from continuous body model data while performing multiple simultaneous gesture recognition without losing recognition rates compared to previous HMM-based work.
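
The paper's exact primitive codes are not given in the abstract. The following sketch illustrates the general idea of per-link spatio-temporal quantization under assumed codes (still/right/left/up/down) and an assumed threshold; the resulting codes would feed a state machine such as SAI-PSM.

```python
# Illustrative assumption: quantize each body-link displacement over a short time
# window into one of a few direction codes. Codes and threshold are not the paper's.
import numpy as np

CODES = {0: "still", 1: "right", 2: "left", 3: "up", 4: "down"}

def encode_link(positions, move_thresh=0.05):
    """positions: (T, 2) array of one link's 2D joint positions over a time window."""
    dx, dy = positions[-1] - positions[0]
    if np.hypot(dx, dy) < move_thresh:
        return 0
    if abs(dx) >= abs(dy):
        return 1 if dx > 0 else 2
    return 3 if dy > 0 else 4

# Hypothetical window of right-forearm positions: mostly moving to the right.
window = np.array([[0.00, 0.00], [0.04, 0.01], [0.09, 0.01], [0.15, 0.02]])
code = encode_link(window)
print(CODES[code])   # "right" -> this primitive code would feed the SAI-PSM
```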

시 공간 정규화를 통한 딥 러닝 기반의 3D 제스처 인식 (Deep Learning Based 3D Gesture Recognition Using Spatio-Temporal Normalization)

  • 채지훈;강수명;김해성;이준재
    • 한국멀티미디어학회논문지 / Vol. 21, No. 5 / pp.626-637 / 2018
  • Humans exchange information not only through words but also through body or hand gestures, and these can be used to build effective interfaces in mobile, virtual reality, and augmented reality applications. Past 2D gesture recognition research suffered information loss caused by projecting 3D information into 2D. Although recognizing gestures in 3D space offers a wider recognition range than 2D space, the complexity of gesture recognition increases. In this paper, we propose a real-time deep learning model and application for gesture recognition in 3D space. First, to recognize gestures in 3D space, data collection is performed using the Unity game engine to construct and acquire the data. Second, the input vectors for learning the 3D gesture recognition model are normalized and the model is trained with deep learning. Third, the SELU (Scaled Exponential Linear Unit) function is applied as the neural network's activation function for faster learning and better recognition performance. The proposed system is expected to be applicable to various fields such as rehabilitation care, game applications, and virtual reality.
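
The SELU activation mentioned in the abstract has a standard published form; the sketch below implements it together with a simple per-feature input normalization. The surrounding one-layer network and the sample batch are illustrative assumptions, not the paper's architecture.

```python
# Sketch of the SELU activation (standard published constants) plus input normalization;
# the single layer and random batch below are assumptions, not the paper's model.
import numpy as np

SELU_LAMBDA = 1.0507009873554805
SELU_ALPHA = 1.6732632423543772

def selu(x):
    """Scaled Exponential Linear Unit: self-normalizing activation."""
    return SELU_LAMBDA * np.where(x > 0, x, SELU_ALPHA * (np.exp(x) - 1))

def normalize(x):
    """Per-feature zero-mean, unit-variance normalization of 3D gesture input vectors."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# Hypothetical batch: 4 gesture samples, each a flattened 3D trajectory of 30 values.
batch = normalize(np.random.randn(4, 30))
hidden = selu(batch @ np.random.randn(30, 16))   # one illustrative SELU layer
print(hidden.shape)                              # (4, 16)
```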

CNN-based Gesture Recognition using Motion History Image

  • Koh, Youjin;Kim, Taewon;Hong, Min;Choi, Yoo-Joo
    • 인터넷정보학회논문지 / Vol. 21, No. 5 / pp.67-73 / 2020
  • In this paper, we present a CNN-based gesture recognition approach that reduces the memory burden of the input data. Most neural network-based gesture recognition methods use a sequence of frame images as input, which causes a memory burden problem. We instead use a motion history image to define a meaningful gesture. The motion history image is a grayscale image into which the temporal motion information is collapsed by synthesizing silhouette images of the user during the period of one meaningful gesture. We first summarize previous traditional and neural network-based approaches for gesture recognition. Then we explain the data preprocessing procedure for building the motion history image and the neural network architecture with three convolution layers for recognizing the meaningful gestures. In the experiments, we trained five types of gestures, namely charging power, shooting left, shooting right, kicking left, and kicking right. The accuracy of gesture recognition was measured while adjusting the number of filters in each layer of the proposed network. Using a grayscale image with 240 × 320 resolution to define one meaningful gesture, we achieved a gesture recognition accuracy of 98.24%.
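
A standard way to build a motion history image is to set pixels covered by the current silhouette to a maximum value tau and decay all other pixels each frame. The sketch below follows that textbook formulation with the 240 × 320 size from the abstract; the silhouette source and the decay value tau are assumptions.

```python
# Sketch of a standard motion history image (MHI); silhouettes and tau are assumed.
import numpy as np

def update_mhi(mhi, silhouette, tau=30):
    """silhouette: boolean (H, W) mask of the user in the current frame."""
    return np.where(silhouette, tau, np.maximum(mhi - 1, 0))

def mhi_to_grayscale(mhi, tau=30):
    """Collapse the temporal history into a single 8-bit grayscale image for the CNN."""
    return (mhi.astype(np.float32) / tau * 255).astype(np.uint8)

# Hypothetical sequence of 30 silhouette frames at 240 x 320.
frames = np.random.rand(30, 240, 320) > 0.7
mhi = np.zeros((240, 320), dtype=np.int32)
for sil in frames:
    mhi = update_mhi(mhi, sil)
gesture_image = mhi_to_grayscale(mhi)            # single input image for the 3-conv CNN
print(gesture_image.shape, gesture_image.dtype)  # (240, 320) uint8
```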

체감형 콘텐츠 개발을 위한 연속동작 매칭 방법에 관한 연구 (A Study on the Gesture Matching Method for the Development of Gesture Contents)

  • 이형구
    • 한국게임학회 논문지 / Vol. 13, No. 6 / pp.75-84 / 2013
  • This study introduces the development of a continuous-motion recording and matching method for the Windows PC platform using Xtion, a motion-sensing camera for Windows PCs. To develop the method, an API was first built that processes and compares the depth information, RGB image information, and skeleton information obtained from the camera. A pose comparison method that selectively compares only valid joints was developed, and for continuous-motion comparison, a method was developed that can still recognize a gesture even when other incorrect poses are mixed in between the target poses. A tool for recording and testing sample data for detecting specific poses or continuous motions was also developed. After recording and testing six different poses and continuous motions, poses were recognized with 100% success and continuous motions with 99% success, verifying the usefulness of the developed method.
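
As a rough illustration of pose comparison restricted to valid joints, the sketch below compares only a chosen subset of joints against a reference pose; the joint names, distance measure, and tolerance are assumptions, not the paper's API.

```python
# Illustrative sketch of pose matching over "valid joints" only; names and
# threshold are assumptions, not the developed API.
import numpy as np

def pose_matches(sample, reference, valid_joints, tol=0.10):
    """sample/reference: dict joint_name -> (x, y, z); compare only the valid joints."""
    errors = [np.linalg.norm(np.subtract(sample[j], reference[j])) for j in valid_joints]
    return max(errors) < tol

reference = {"right_hand": (0.40, 1.20, 0.10), "right_elbow": (0.30, 1.00, 0.10),
             "left_hand": (-0.40, 0.60, 0.00)}
sample = {"right_hand": (0.42, 1.18, 0.12), "right_elbow": (0.31, 1.01, 0.09),
          "left_hand": (-0.10, 0.90, 0.30)}   # left hand differs, but is ignored

# Only the right arm matters for this pose, so the stray left hand does not break the match.
print(pose_matches(sample, reference, valid_joints=["right_hand", "right_elbow"]))  # True
```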

삼차원 핸드 제스쳐 디자인 및 모델링 프레임워크 (A Framework for 3D Hand Gesture Design and Modeling)

  • 권두영
    • 한국산학기술학회논문지 / Vol. 14, No. 10 / pp.5169-5175 / 2013
  • This paper describes a framework for 3D hand gesture design and modeling. Dynamic Time Warping (DTW) and the Hidden Markov Model (HMM) are used to support gesture recognition, evaluation, and registration. The HMM is used for gesture recognition and also in the gesture design and registration processes. DTW is used for gesture recognition when HMM training data is insufficient, and for evaluating the difference between a performed motion and the reference motion. To capture both the positional and inertial information of a motion, gestures were sensed with a combination of body sensors and vision sensors. Eighteen example hand gestures were designed and the proposed techniques were tested in various situations. The paper also discusses the inter-user variability observed when performing gestures.
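
DTW, used here to compare a performed gesture with a registered reference when HMM training data is scarce, has a well-known dynamic-programming form. The sketch below shows the classic formulation on hypothetical 1D trajectories; the framework's actual feature vectors and sensors are not reproduced.

```python
# Classic DTW distance on hypothetical 1D gesture trajectories (illustrative only).
import numpy as np

def dtw_distance(a, b):
    """O(len(a)*len(b)) dynamic time warping distance between two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

reference = [0.0, 0.2, 0.8, 1.0, 0.6, 0.1]        # registered gesture trajectory
performed = [0.0, 0.1, 0.3, 0.9, 1.0, 0.5, 0.1]   # same gesture, performed more slowly
print(dtw_distance(reference, performed))          # small value -> close to the reference
```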

주거 공간에서의 3차원 핸드 제스처 인터페이스에 대한 사용자 요구사항 (User Needs of Three Dimensional Hand Gesture Interfaces in Residential Environment Based on Diary Method)

  • 정동영;김희진;한성호;이동훈
    • 대한산업공학회지 / Vol. 41, No. 5 / pp.461-469 / 2015
  • The aim of this study is to find out users' needs for a 3D hand gesture interface in the smart home environment. To do so, we investigated which objects users want to control with a 3D hand gesture interface and why they want to use one. 3D hand gesture interfaces are being studied for application to various devices in smart environments; they enable users to control the environment with natural and intuitive hand gestures. Given these advantages, identifying users' needs for a 3D hand gesture interface can improve the user experience of a product. The study was conducted with 20 participants using a diary method. For one week, they recorded their needs for a 3D hand gesture interface in diary forms. Each diary entry recorded who, when, where, what, and how they would use a 3D hand gesture interface, together with a usefulness score. A total of 322 entries (209 normal and 113 error entries) were collected. There were common objects the users wanted to control with a 3D hand gesture interface and common reasons for wanting to use one. Among them, the users most often wanted to control the lights, and the most common reason was to overcome situations in which their hands were restricted. The results of this study can support effective and efficient research on 3D hand gesture interfaces, providing valuable insights for researchers and designers. In addition, they could be used to create guidelines for 3D hand gesture interfaces.

A Hand Gesture Recognition Method using Inertial Sensor for Rapid Operation on Embedded Device

  • Lee, Sangyub;Lee, Jaekyu;Cho, Hyeonjoong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 2 / pp.757-770 / 2020
  • We propose a hand gesture recognition method that is compatible with a head-up display (HUD) with limited processing resources. For fast link adaptation with the HUD, gesture recognition must be processed rapidly and the wearable device must send the minimum amount of driver hand gesture data. Therefore, we use a method that recognizes each hand gesture with an inertial measurement unit (IMU) sensor based on revised correlation matching. Gesture recognition is executed by calculating the correlation between every axis of the acquired data set. By classifying pre-defined gesture values and actions, the proposed method enables rapid recognition. Furthermore, we evaluate the performance of the algorithm, which can be implemented within wearable bands and requires a minimal processing load. The experimental results showed the feasibility and effectiveness of our decomposed correlation matching method. We also tested the proposed algorithm with pre-defined gestures of specific motions on a wearable platform device to confirm the effectiveness of the system, and the results validated the feasibility and effectiveness of the proposed hand gesture recognition system. Despite being based on a very simple concept, the proposed algorithm showed good recognition accuracy.
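
The abstract describes calculating correlations between every axis of the acquired data set. As a rough sketch of axis-wise correlation matching, the code below compares an IMU segment against stored templates using per-axis Pearson correlation; the template set and the averaging rule are assumptions, not the paper's revised correlation matching.

```python
# Illustrative axis-wise correlation matching on IMU data; templates are hypothetical.
import numpy as np

def match_gesture(segment, templates):
    """segment: (T, 3) accelerometer window; templates: dict name -> (T, 3) references."""
    best_name, best_score = None, -np.inf
    for name, ref in templates.items():
        # Average the Pearson correlation coefficient over the x, y, z axes.
        score = np.mean([np.corrcoef(segment[:, k], ref[:, k])[0, 1] for k in range(3)])
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

t = np.linspace(0.0, 1.0, 50)
templates = {
    "swipe": np.stack([np.sin(2 * np.pi * t), 0.3 * t, np.cos(2 * np.pi * t)], axis=1),
    "tap":   np.stack([0.3 * t, np.exp(-(t - 0.5) ** 2 / 0.01), np.sin(np.pi * t)], axis=1),
}
observed = templates["swipe"] + 0.05 * np.random.randn(50, 3)   # a noisy swipe
print(match_gesture(observed, templates))   # expected: ("swipe", score close to 1)
```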