• Title/Summary/Keyword: Gesture application

Conditions of Applications, Situations and Functions Applicable to Gesture Interface

  • Ryu, Tae-Beum; Lee, Jae-Hong; Song, Joo-Bong; Yun, Myung-Hwan
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.507-513 / 2012
  • Objective: This study developed a hierarchy of conditions of applications (devices), situations and functions which are applicable to gesture interface. Background: Gesture interface is one of the promising interfaces for natural and intuitive interaction with intelligent machines and environments. Although there have been many studies on developing new gesture-based devices and gesture interfaces, little is known about which applications, situations and functions are applicable to gesture interface. Method: This study searched about 120 papers relevant to designing and applying gesture interfaces and vocabularies to find the gesture-applicable conditions of applications, situations and functions. The conditions extracted from 16 closely related papers were rearranged, and a hierarchy of them was developed to evaluate the applicability of applications, situations and functions to gesture interface. Results: This study summarized 10, 10 and 6 conditions of applications, situations and functions, respectively. In addition, the gesture-applicable condition hierarchies of applications, situations and functions were developed based on the semantic similarity, ordering, and serial or parallel relationships among them. Conclusion: This study collected gesture-applicable conditions of applications, situations and functions, and a hierarchy of them was developed to evaluate the applicability of gesture interface. Application: The gesture-applicable conditions and hierarchy can be used in developing a framework and detailed criteria to evaluate the applicability of applications, situations and functions. Moreover, they can enable designers of gesture interfaces and vocabularies to determine which applications, situations and functions are applicable to gesture interface.
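
The paper reports a condition hierarchy but no code; purely as a hypothetical illustration of how such a hierarchy could be represented and used for applicability scoring (the condition names below are invented, not the paper's):

```python
from dataclasses import dataclass, field

@dataclass
class Condition:
    """A node in a gesture-applicability condition hierarchy: leaves are
    individual conditions, internal nodes group related conditions."""
    name: str
    children: list["Condition"] = field(default_factory=list)

    def applicability(self, satisfied: set) -> float:
        """Fraction of leaf conditions under this node that hold."""
        if not self.children:
            return 1.0 if self.name in satisfied else 0.0
        return sum(c.applicability(satisfied) for c in self.children) / len(self.children)

# Invented fragment of an *application* condition hierarchy:
app = Condition("application conditions", [
    Condition("hands-free operation is desirable"),
    Condition("touch or remote control is inconvenient"),
    Condition("a small command set suffices"),
])
print(app.applicability({"hands-free operation is desirable"}))  # ~0.33
```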

Deep Learning Based 3D Gesture Recognition Using Spatio-Temporal Normalization (시 공간 정규화를 통한 딥 러닝 기반의 3D 제스처 인식)

  • Chae, Ji Hun; Gang, Su Myung; Kim, Hae Sung; Lee, Joon Jae
    • Journal of Korea Multimedia Society / v.21 no.5 / pp.626-637 / 2018
  • Humans exchange information not only through words but also through body and hand gestures, which can be used to build effective interfaces in mobile, virtual reality, and augmented reality applications. Past 2D gesture recognition research suffered information loss caused by projecting 3D information into 2D. Recognizing gestures in 3D space offers a wider recognition range than 2D, but it also increases the complexity of recognition. In this paper, we propose a real-time gesture recognition deep learning model and application in 3D space. First, to recognize gestures in 3D space, data are constructed and acquired using the Unity game engine. Second, input vectors are normalized for training the deep learning based 3D gesture recognition model. Third, the SELU (Scaled Exponential Linear Unit) function is applied as the neural network's activation function for faster learning and better recognition performance. The proposed system is expected to be applicable to various fields such as rehabilitation care, game applications, and virtual reality.
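
The abstract does not name a framework; a minimal PyTorch sketch of the two pieces it describes, spatio-temporal input normalization and an SELU-activated network (layer sizes, joint counts, and the normalization scheme are assumptions):

```python
import torch
import torch.nn as nn

class Gesture3DNet(nn.Module):
    """Minimal SELU-based classifier for normalized 3D gesture sequences.
    Assumes each gesture is resampled to `frames` frames of `joints`
    joints with (x, y, z) coordinates, then flattened."""
    def __init__(self, frames=32, joints=20, classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frames * joints * 3, 256),
            nn.SELU(),                        # self-normalizing activation
            nn.Linear(256, 128),
            nn.SELU(),
            nn.Linear(128, classes),
        )

    def forward(self, x):
        return self.net(x.flatten(1))

def normalize(seq):
    """One plausible spatio-temporal normalization: translate to the
    first frame's root joint, then scale to unit variance."""
    seq = seq - seq[0, 0]
    return seq / (seq.std() + 1e-8)

model = Gesture3DNet()
batch = torch.randn(4, 32, 20, 3)             # 4 gestures, 32 frames, 20 joints
logits = model(torch.stack([normalize(s) for s in batch]))
print(logits.shape)                           # torch.Size([4, 10])
```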

A Notation Method for Three Dimensional Hand Gesture

  • Choi, Eun-Jung; Kim, Hee-Jin; Chung, Min-K.
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.541-550 / 2012
  • Objective: The aim of this study is to suggest a notation method for three-dimensional hand gestures. Background: To match intuitive gestures with product commands, various studies have tried to derive gestures from users. In this case, many different gestures for a single command are derived because of users' varied experience. Thus, organizing the gestures systematically and identifying similar patterns among them have become important issues. Method: Related studies on gesture taxonomy and sign language notation were investigated. Results: Through the literature review, a total of five elements of static gesture were selected, and a total of three forms of dynamic gesture were identified. Temporal variability (repetition) was additionally selected. Conclusion: A notation method which follows a combination sequence of the gesture elements was suggested. Application: The notation method for three-dimensional hand gestures might be used to describe and organize user-defined gestures systematically.
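
The abstract gives the structure (five static elements, three dynamic forms, repetition) but not the element names; a hypothetical encoding of such a notation as a fixed combination sequence (field names and value vocabularies are invented placeholders):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HandGestureNotation:
    """One 3D hand gesture as a combination sequence: five static
    elements, one of three dynamic forms, and a repetition flag."""
    hand_shape: str       # static element 1 (hypothetical name)
    palm_direction: str   # static element 2
    finger_pose: str      # static element 3
    location: str         # static element 4
    orientation: str      # static element 5
    dynamic_form: str     # one of three dynamic forms, e.g. "path"
    repeated: bool        # temporal variability (repetition)

swipe_right = HandGestureNotation(
    hand_shape="open", palm_direction="down", finger_pose="extended",
    location="chest", orientation="forward", dynamic_form="path",
    repeated=False,
)
print(swipe_right)
```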

Three Dimensional Hand Gesture Taxonomy for Commands

  • Choi, Eun-Jung; Lee, Dong-Hun; Chung, Min-K.
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.483-492 / 2012
  • Objective: The aim of this study is to suggest a three-dimensional (3D) hand gesture taxonomy for systematically organizing users' intentions behind deriving a certain gesture. Background: With advances in gesture recognition technology, various researchers have focused on deriving intuitive gestures for commands from users. In most previous studies, the users' reasons for deriving a certain gesture for a command were used only as a reference for grouping the various gestures. Method: A total of eleven studies which categorized gestures accompanied by speech were investigated. Also, a case study with thirty participants was conducted to understand the gesture-features derived from the users. Results: Through the literature review, a total of nine gesture-features were extracted. After the case study, the nine gesture-features were narrowed down to seven. Conclusion: A three-dimensional hand gesture taxonomy comprising seven gesture-features was developed. Application: The taxonomy might be used as a checklist for understanding users' reasons.

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / v.17 no.2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built with shared weights, avoiding vanishing or exploding gradients as the network deepens and thereby simplifying model optimisation. Additional convolutional layers are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier completes the gesture recognition. Compared with a network that densely connects and multiplexes feature information, the proposed algorithm optimises feature multiplexing to avoid performance fluctuations caused by feature redundancy. Experimental results on the IsoGD gesture dataset and the Gesture dataset show that the proposed algorithm affords fast convergence and high accuracy.
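
The abstract names the building blocks but not their configuration; a minimal PyTorch sketch of a shallow residual block with small kernels feeding a fully connected classifier (channel counts and depth are assumptions, not the paper's settings):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Shallow residual block with small (3x3) kernels and an identity
    skip connection, which keeps gradients flowing as depth grows."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)    # skip connection

class GestureResNet(nn.Module):
    """Tiny residual CNN ending in a fully connected classifier."""
    def __init__(self, classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Linear(32, classes)

    def forward(self, x):
        x = self.blocks(torch.relu(self.stem(x)))
        x = x.mean(dim=(2, 3))                  # global average pooling
        return self.head(x)                     # softmax applied in the loss

print(GestureResNet()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 10])
```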

Proposal of Camera Gesture Recognition System Using Motion Recognition Algorithm

  • Moon, Yu-Sung; Kim, Jung-Won
    • Journal of IKEEE / v.26 no.1 / pp.133-136 / 2022
  • This paper concerns motion gesture recognition and proposes a system and algorithm that address the flaws of the current system by using video images of the entire hand and reading its motion gestures, improving recognition accuracy. The motion gesture recognition system comprises an image capturing unit that captures and obtains images of the area relevant to gesture reading, a motion extraction unit that extracts the moving region of the image, and a hand gesture recognition unit that reads the motion gestures in the extracted region. The proposed motion gesture algorithm achieves a 20% improvement in accuracy over the current system.
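
The abstract specifies the three units but no algorithmic detail; a minimal OpenCV sketch of such a pipeline using frame differencing for the motion extraction unit (threshold values and the webcam source are assumptions):

```python
import cv2

cap = cv2.VideoCapture(0)                      # image capturing unit (webcam)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Motion extraction unit: difference against the previous frame.
    diff = cv2.absdiff(gray, prev_gray)
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    prev_gray = gray

    # The hand gesture recognition unit would read gestures from the
    # largest moving region; here we only locate it as a placeholder.
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("motion", frame)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```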

Dynamic gesture recognition using a model-based temporal self-similarity and its application to taebo gesture recognition

  • Lee, Kyoung-Mi; Won, Hey-Min
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.11 / pp.2824-2838 / 2013
  • A lot of attention has recently been paid to analyzing dynamic human gestures that vary over time. Most work on dynamic gestures concerns spatio-temporal features, as opposed to analyzing each frame of a gesture separately. For accurate dynamic gesture recognition, motion feature extraction algorithms need to find representative features that uniquely identify time-varying gestures. This paper proposes a new feature-extraction algorithm using temporal self-similarity based on a hierarchical human model. Because the conventional temporal self-similarity method computes whole-body movement across consecutive frames, it cannot distinguish different gestures that involve the same amount of movement. The proposed model-based temporal self-similarity method groups the body parts of a hierarchical model into several sets and calculates movement for each set. While recognition results can depend on how the sets are formed, the best way to find optimal sets is to separate frequently used body parts from less-used ones. We then apply a multiclass support vector machine whose optimization algorithm is based on structural support vector machines. The effectiveness of the proposed feature extraction algorithm is demonstrated in an application for taebo gesture recognition. We show that the model-based temporal self-similarity method overcomes the shortcomings of the conventional method and that its recognition results are superior to those of the conventional method.
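
The paper's exact descriptor is not reproduced here; a minimal numpy sketch of the core idea, one self-similarity matrix per body-part set rather than one for the whole body (the joint grouping below is invented):

```python
import numpy as np

def self_similarity(traj):
    """Temporal self-similarity matrix: pairwise distance between the
    pose at frame i and the pose at frame j. traj has shape (T, D)."""
    diff = traj[:, None, :] - traj[None, :, :]
    return np.linalg.norm(diff, axis=-1)        # (T, T)

def model_based_ssm(joints, part_sets):
    """One SSM per body-part set, so two gestures with equal total
    movement but different active parts get different descriptors."""
    T = joints.shape[0]
    return [self_similarity(joints[:, parts, :].reshape(T, -1))
            for parts in part_sets]

T, J = 60, 15                                   # frames, joints
joints = np.random.rand(T, J, 3)
# Hypothetical grouping: frequently used arms vs. less-used legs/torso.
arms, rest = [2, 3, 4, 5], [0, 1] + list(range(6, 15))
descriptors = model_based_ssm(joints, [arms, rest])
print([d.shape for d in descriptors])           # [(60, 60), (60, 60)]
```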

Dynamic Gesture Recognition using SVM and its Application to an Interactive Storybook (SVM을 이용한 동적 동작인식: 체감형 동화에 적용)

  • Lee, Kyoung-Mi
    • The Journal of the Korea Contents Association / v.13 no.4 / pp.64-72 / 2013
  • This paper proposes a dynamic gesture recognition algorithm using SVM (Support Vector Machine), which is suitable for multi-dimensional classification. First, the proposed algorithm locates the beginning and end of each gesture in the video frames from the Kinect camera, spots the meaningful gesture frames, and normalizes the number of frames. Then, for gesture recognition, the algorithm extracts gesture features from the normalized frames using body parts' positions and the relations among the parts, based on the human model. A C-SVM for each dynamic gesture is trained on positive and negative examples, and the final gesture is the one whose C-SVM yields the largest value. The proposed gesture recognition algorithm can be applied to an interactive storybook as a gesture interface.
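
A minimal scikit-learn sketch of the per-gesture C-SVM scheme under stated assumptions (the feature vectors here are random stand-ins for the paper's body-part position/relation features):

```python
import numpy as np
from sklearn.svm import SVC

# Toy features standing in for per-clip gesture descriptors:
# one fixed-length vector per normalized gesture clip.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))
y = rng.integers(0, 3, size=120)                # 3 gesture classes

# One C-SVM per gesture, trained on positive vs. negative examples.
svms = {g: SVC(kernel="rbf", C=1.0).fit(X, (y == g).astype(int))
        for g in np.unique(y)}

def classify(x):
    """Pick the gesture whose C-SVM returns the largest decision value."""
    scores = {g: svm.decision_function(x[None])[0] for g, svm in svms.items()}
    return max(scores, key=scores.get)

print(classify(X[0]))
```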

The Effect of Gesture-Command Pairing Condition on Learnability when Interacting with TV

  • Jo, Chun-Ik; Lim, Ji-Hyoun; Park, Jun
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.525-531 / 2012
  • Objective: The aim of this study is to investigate the learnability of gesture-command pairs when people use gestures to control a device. Background: In a vision-based gesture recognition system, the choice of gesture-command pairing is critical to learnability and usability. Subjective preference and agreement scores from a previous study (Lim et al., 2012) were used to group four gesture-command pairings. To quantify learnability, two learning models, an average time model and a marginal time model, were used. Method: Two sets of eight gestures, sixteen in total, were selected by agreement score and preference data. Fourteen participants, divided into two groups, memorized a set of gesture-command pairs and performed the gestures. For a given command, the time to recall the paired gesture was collected. Results: The average recall time in initial trials differed by preference and agreement score, as did the learning rate R derived from the two learning models. Conclusion: Preference and agreement scores influenced the learning of gesture-command pairs. Application: This study can be applied to any device considering the adoption of a gesture interaction system for device control.
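
The abstract does not define the average time and marginal time models; assuming the familiar power-law-of-practice form for a recall-time learning curve, a sketch of estimating a learning rate R (the data below are invented):

```python
import numpy as np

def fit_learning_rate(times):
    """Fit T_n = T_1 * n**(-R) by linear regression in log-log space and
    return (T_1, R). This power-law form is an assumption; the paper's
    two learning models are not specified in the abstract."""
    n = np.arange(1, len(times) + 1)
    slope, intercept = np.polyfit(np.log(n), np.log(times), 1)
    return np.exp(intercept), -slope

# Hypothetical recall times (seconds) over successive trials.
recall_times = [4.1, 3.0, 2.5, 2.2, 2.0, 1.9]
t1, r = fit_learning_rate(recall_times)
print(f"T1 = {t1:.2f}s, learning rate R = {r:.2f}")
```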

Design of Contactless Gesture-based Rhythm Action Game Interface for Smart Mobile Devices

  • Ju, Da-Young
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.585-591 / 2012
  • Objective: The aim of this study is to propose a contactless gesture-based interface on smart mobile devices, especially for rhythm action games. Background: Most existing interactions in smart mobile games rely on tapping the touch screen. However, this is undesirable for some users and situations, for example for users with disabilities or when touching the device is inconvenient. More importantly, a new interaction style can open new possibilities for a stagnant game genre. Method: This paper presents a smart mobile game with contactless gesture-based interaction and interfaces built on computer vision technology. As preliminary studies, gestures that are easy to recognize were identified, and an interaction system suited to games on smart mobile devices was investigated. A combination of augmented reality techniques and contactless gesture interaction was also tried. Results: The rhythm game allows a user to interact with smart mobile devices using hand gestures, without touching or tapping the screen, while remaining as enjoyable as other games. Conclusion: Evaluation results show that users make few errors and that the game recognizes gestures with high precision in real time. Therefore, contactless gesture-based interaction has potential for smart mobile games. Application: The results can be applied to commercial game applications.
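
Neither the game nor its recognizer is described in implementable detail; as a small illustrative piece, a beat-window judge of the kind any gesture-driven rhythm game needs, scoring whether a recognized gesture lands near a beat (window sizes and the event format are assumptions):

```python
from bisect import bisect_left

def judge(beat_times, gesture_time, perfect=0.05, good=0.15):
    """Score a contactless-gesture hit against the nearest beat.
    beat_times must be sorted (seconds); windows are hypothetical."""
    i = bisect_left(beat_times, gesture_time)
    candidates = beat_times[max(0, i - 1):i + 1]   # neighbors of the hit
    error = min(abs(b - gesture_time) for b in candidates)
    if error <= perfect:
        return "PERFECT"
    if error <= good:
        return "GOOD"
    return "MISS"

beats = [1.0, 1.5, 2.0, 2.5]          # beat schedule from the song
print(judge(beats, 1.52))             # PERFECT (0.02s off the beat)
print(judge(beats, 2.30))             # MISS (0.20s from the nearest beat)
```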