• Title/Summary/Keyword: gestures

Improvement of Gesture Recognition using 2-stage HMM (2단계 히든마코프 모델을 이용한 제스쳐의 성능향상 연구)

  • Jung, Hwon-Jae;Park, Hyeonjun;Kim, Donghan
    • Journal of Institute of Control, Robotics and Systems / v.21 no.11 / pp.1034-1037 / 2015
  • In recent years in the field of robotics, various methods have been developed to create an intimate relationship between people and robots. These methods include speech, vision, and biometric recognition as well as gesture-based interaction. These recognition technologies are used in various wearable devices, smartphones, and other electronic devices for convenience. Among them, gesture recognition is the most commonly used and the most appropriate technology for wearable devices. Gesture recognition can be classified as contact or non-contact. This paper proposes contact gesture recognition with IMU and EMG sensors that applies the hidden Markov model (HMM) twice. In the first stage, several simple motions are combined into main gestures through a standard HMM, the model well known for pattern recognition. The sequence of main gestures produced by the first-stage HMM is then passed to a second-stage HMM, which recognizes higher-order gestures. In this way, more natural and intelligent gestures can be implemented from simple ones. This two-stage process can play a larger role in gesture-recognition-based UX for many wearable and smart devices.
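
A minimal sketch in Python of the two-stage idea described above, not the authors' implementation: per-class HMMs score windows of IMU/EMG features into primitive gestures, and the resulting label sequence is scored by a second, discrete-emission HMM per higher-order gesture. The use of the hmmlearn library, the class names, state counts, and feature shapes are all illustrative assumptions.

    import numpy as np
    from hmmlearn import hmm  # assumed available; any HMM toolkit would do

    def train_stage1(windows_by_class, n_states=4):
        """Fit one Gaussian-emission HMM per primitive gesture class."""
        models = {}
        for label, windows in windows_by_class.items():
            X = np.vstack(windows)                   # (total_frames, n_features)
            lengths = [w.shape[0] for w in windows]  # frames per training window
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            m.fit(X, lengths)
            models[label] = m
        return models

    def classify_stage1(models, window):
        """Pick the primitive-gesture label with the highest log-likelihood."""
        return max(models, key=lambda lbl: models[lbl].score(window))

    def log_forward(pi, A, B, obs):
        """Log-domain forward algorithm for a discrete-emission HMM."""
        alpha = np.log(pi) + np.log(B[:, obs[0]])
        for o in obs[1:]:
            alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
        return np.logaddexp.reduce(alpha)

    def classify_stage2(stage2_models, primitive_labels, label_index):
        """Score the sequence of primitive labels with each higher-order gesture HMM."""
        obs = np.array([label_index[l] for l in primitive_labels])
        return max(stage2_models, key=lambda g: log_forward(*stage2_models[g], obs))

Here stage2_models would map each higher-order gesture name to a (start probabilities, transition matrix, emission matrix) triple whose emission symbols are the primitive-gesture labels.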

Formant Trajectories of English Vowels Produced by American Children (미국인 아동이 발음한 영어모음의 포먼트 궤적)

  • Yang, Byung-Gon
    • Phonetics and Speech Sciences / v.3 no.1 / pp.23-34 / 2011
  • Many Korean children have difficulty learning English vowels. The gestures inside the oral and pharyngeal cavities are hard to control when learners cannot see them and the target vowel system is quite different from that of Korean. This study collected online the children's acoustic data for twelve English vowels published by Hillenbrand et al. (1995) and examined the acoustic features of those vowels for phoneticians and English teachers. The author used Praat to obtain the data systematically at six equidistant timepoints over the vowel segment while avoiding any obvious errors. First, the children's distributions of vowel duration, f0, and intensity values show inherent acoustic properties of the vowels. Second, the children's gestures for each vowel coincide with the regression analysis of all formant values at the different timepoints, regardless of differences in the vocal folds and tract. Third, locus points appear higher than those of American males and females, and the gestures along the timepoints display almost identical patterns. From these results the author concludes that vowel formant trajectories provide useful and important information on dynamic articulatory gestures, which may be applicable to teaching and correcting Korean children's English vowels. Further developmental studies of vowel formants and pitch values are desirable.
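
As a rough illustration of the measurement procedure described above (formant values at six equidistant timepoints over a vowel segment), the following Python sketch uses the parselmouth bindings to Praat. The file path, vowel interval times, and the raised formant ceiling for child speech are assumptions, not values from the study.

    import numpy as np
    import parselmouth  # Python interface to Praat (assumed installed)

    def formant_trajectory(wav_path, vowel_start, vowel_end,
                           n_points=6, n_formants=3, max_formant=8000.0):
        """Return an (n_points, n_formants) array of formant frequencies in Hz.

        max_formant is set high because children's shorter vocal tracts push
        formants up; Praat's default of 5500 Hz is tuned to adult females.
        """
        snd = parselmouth.Sound(wav_path)
        formant = snd.to_formant_burg(maximum_formant=max_formant)
        times = np.linspace(vowel_start, vowel_end, n_points)
        return np.array([[formant.get_value_at_time(i + 1, t)
                          for i in range(n_formants)]
                         for t in times])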

A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young;Bae, Ki-Tae
    • International Journal of Contents / v.7 no.1 / pp.29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface that uses spatial context information. The proposed gesture interface recognizes a system action (e.g., a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed for a smart-environment scenario in which a user interacts with digital information embedded in physical objects by means of gestures.
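
A minimal sketch, not the authors' exact probabilistic framework, of how gesture evidence can be fused with the two spatial contexts named above (gesture volume and gesture target) under a naive-Bayes-style independence assumption. All actions, context values, and probability tables are illustrative placeholders.

    ACTIONS = ["turn_on_lamp", "open_door", "no_action"]

    # Illustrative conditional probability tables (placeholders, not learned values).
    P_ACTION = {"turn_on_lamp": 0.3, "open_door": 0.3, "no_action": 0.4}
    P_GESTURE = {  # P(recognized gesture | action)
        "turn_on_lamp": {"point": 0.7, "wave": 0.2, "other": 0.1},
        "open_door":    {"point": 0.5, "wave": 0.4, "other": 0.1},
        "no_action":    {"point": 0.2, "wave": 0.2, "other": 0.6},
    }
    P_VOLUME = {   # P(gesture volume, e.g. small vs. large movement space | action)
        "turn_on_lamp": {"small": 0.8, "large": 0.2},
        "open_door":    {"small": 0.3, "large": 0.7},
        "no_action":    {"small": 0.5, "large": 0.5},
    }
    P_TARGET = {   # P(object the gesture is directed at | action)
        "turn_on_lamp": {"lamp": 0.8, "door": 0.1, "none": 0.1},
        "open_door":    {"lamp": 0.1, "door": 0.8, "none": 0.1},
        "no_action":    {"lamp": 0.2, "door": 0.2, "none": 0.6},
    }

    def infer_action(gesture, volume, target):
        """Posterior over system actions given the gesture label and spatial context."""
        scores = {a: P_ACTION[a] * P_GESTURE[a][gesture]
                     * P_VOLUME[a][volume] * P_TARGET[a][target]
                  for a in ACTIONS}
        z = sum(scores.values())
        return {a: s / z for a, s in scores.items()}

    print(infer_action("point", "small", "lamp"))  # strongly favors turn_on_lamp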

How Well Did We Know About Our Communication? "Origins of Human Communication"

  • Jung-Woo Son
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.34 no.1 / pp.57-58 / 2023
  • Through accurate observation and the results of experimental studies with great apes, the author tells us exactly what we have not known about human communication. The author persuasively conveys to the reader the grand history of the development from great apes' gestures to human gestures and then to human speech. Given that the gestures of great apes and humans were the origin of human spoken language, we realize once again that our language is, after all, an "embodied language."

HOG-HOD Algorithm for Recognition of Multi-cultural Hand Gestures (다문화 손동작 인식을 위한 HOG-HOD 알고리즘)

  • Kim, Jiye;Park, Jong-Il
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1187-1199 / 2017
  • In recent years, research on Natural User Interfaces (NUI) has drawn attention because NUI systems can give users a natural feeling in virtual reality. The most important issue in an NUI system is how users communicate with the computer. There are many modalities for interacting with users, such as speech, hand gestures, and body actions. Among them, hand gestures suit the purpose of NUI because people use them frequently in daily life and a hand gesture can carry meaning by itself. Such gestures are called multi-cultural hand gestures, and we propose a method to recognize them. The proposed method combines the Histogram of Oriented Gradients (HOG), used for hand shape recognition, with the Histogram of Oriented Displacements (HOD), used for recognizing the trajectory of the hand center point.
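
A minimal sketch in Python of the two descriptor ingredients named above: HOG computed on a cropped hand image for shape, and a simplified histogram of oriented displacements over the hand-center trajectory for motion. The scikit-image hog function, the binning, and the lack of any pyramid or temporal-segment structure are assumptions; the paper's exact HOD construction is not reproduced here.

    import numpy as np
    from skimage.feature import hog  # scikit-image, assumed available

    def hand_shape_descriptor(hand_patch):
        """HOG descriptor of a grayscale hand crop (2-D numpy array)."""
        return hog(hand_patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    def hod_descriptor(trajectory, n_bins=8):
        """Magnitude-weighted histogram of displacement directions for an (N, 2) hand-center trajectory."""
        d = np.diff(np.asarray(trajectory, dtype=float), axis=0)   # frame-to-frame displacements
        angles = np.arctan2(d[:, 1], d[:, 0])                      # displacement directions in [-pi, pi]
        magnitudes = np.hypot(d[:, 0], d[:, 1])
        hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi), weights=magnitudes)
        return hist / (hist.sum() + 1e-9)

The two descriptors would then be concatenated, or classified separately, so that a gesture is recognized from both its hand shape and its motion.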

Recognition of 3D hand gestures using partially tuned composite hidden Markov models

  • Kim, In Cheol
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.236-240 / 2004
  • Stroke-based composite HMMs with articulation states are proposed to deal with 3D spatio-temporal trajectory gestures. The direct use of 3D data provides more naturalness in generating gestures, thereby avoiding some of the constraints usually imposed to prevent performance degradation when trajectory data are projected onto a specific 2D plane. The decomposition of gestures into more primitive strokes is also quite attractive, since, conversely, concatenating stroke-based HMMs makes it possible to construct a new set of gesture HMMs without retraining their parameters. Any deterioration in performance arising from the decomposition can be remedied by a partial tuning process for such composite HMMs.
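
A minimal sketch of the concatenation idea under stated assumptions: given left-to-right HMMs trained on primitive strokes, a composite gesture HMM is built by chaining their state spaces and redirecting part of each stroke's final self-loop to the next stroke's entry state, without retraining the emission parameters. The block layout, the fixed link probability, and the (pi, A, means) parameterization are illustrative; they are not the paper's exact articulation-state scheme.

    import numpy as np

    def concatenate_strokes(stroke_models, link_prob=0.5):
        """stroke_models: list of (pi, A, means) tuples for left-to-right stroke HMMs."""
        sizes = [A.shape[0] for _, A, _ in stroke_models]
        n = sum(sizes)
        A_comp = np.zeros((n, n))
        pi_comp = np.zeros(n)
        means_comp = np.vstack([m for _, _, m in stroke_models])  # emission parameters reused as-is

        offset = 0
        for k, (pi, A, _) in enumerate(stroke_models):
            s = A.shape[0]
            A_comp[offset:offset + s, offset:offset + s] = A
            if k == 0:
                pi_comp[:s] = pi                       # composite starts in the first stroke
            if k + 1 < len(stroke_models):
                last = offset + s - 1                  # exit state of this stroke
                A_comp[last, last] *= (1.0 - link_prob)
                A_comp[last, offset + s] = link_prob   # link to next stroke's entry state
            offset += s

        # Keep every row a valid transition distribution after the redirects.
        A_comp /= np.maximum(A_comp.sum(axis=1, keepdims=True), 1e-12)
        return pi_comp, A_comp, means_comp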

Context-sensitive lingual gestures in the Korean tap /r/

  • Kim, Dae-Won
    • Speech Sciences / v.7 no.3 / pp.11-19 / 2000
  • The present electropalatographic study reports the production of the allophones of the Korean tap /r/, i.e., [l] and [r], and their coarticulatory characteristics in /Cár#g/ and /Cár#i/ sequences. The finding that the tap /r/ involves a complete oral closure with less lingual contact, i.e., apico-frontalveolar coupling, than the lateralized /r/, which involves apico-bladealveolar coupling and tongue dorsum lowering for adequate airflow along either or both sides of the tongue body, suggests that the two allophones of the tap /r/ have different lingual gestures. Moreover, in comparison with the tap, the lateral shows longer lingual contacts; the mean ratio between them is 3.7. In the /Cár#g/ sequences, the two adjacent antagonistic segments (i.e., /r/ and /g/) show mutual coarticulation effects, each taking on features of the adjacent segment, but each is precisely constrained so as not to block the formation of the major lingual gestures involved in the other segment. In the /Cár#i/ sequences, anticipatory V-to-C coarticulation occurs, but vocalic carryover effects do not. In both sequences, the allophones reveal insignificant word-initial consonantal carryover coarticulatory effects and insignificant speaker-specific lingual contacts.
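
One of the quantities behind such comparisons, the duration of lingual contact per token, can be sketched from binary electropalatographic frames as below. The 8x8 electrode layout, the alveolar-region rows, and the frame rate are assumptions for illustration, not the study's recording setup.

    import numpy as np

    FRAME_RATE_HZ = 100.0  # assumed EPG sampling rate

    def contact_duration(epg_frames, region_rows=slice(0, 3)):
        """Seconds during which any electrode in the chosen rows is contacted; epg_frames is (n_frames, 8, 8) and binary."""
        contacted = epg_frames[:, region_rows, :].any(axis=(1, 2))
        return contacted.sum() / FRAME_RATE_HZ

    def contact_ratio(lateral_frames, tap_frames):
        """Lateral-to-tap contact duration ratio (the study reports a mean of about 3.7)."""
        return contact_duration(lateral_frames) / contact_duration(tap_frames)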

Recognizing Hand Digit Gestures Using Stochastic Models

  • Sin, Bong-Kee
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.807-815 / 2008
  • A simple, efficient method of spotting and recognizing hand gestures in video is presented, using a network of hidden Markov models and a dynamic programming search algorithm. The description starts from designing a set of isolated trajectory models that are stochastic and robust enough to characterize highly variable patterns like human motion, handwriting, and speech. Those models are interconnected to form a single large network, termed a spotting network or a spotter, that models a continuous stream of gestures as well as non-gestures. Inference over the model is based on dynamic programming. The proposed model is highly efficient and can readily be extended to a variety of recurrent pattern recognition tasks. Test results obtained without any task-specific engineering show the potential for practical application. At the end of the paper we add related experimental results obtained using a different model, the dynamic Bayesian network, which is also a type of stochastic model.
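
A minimal sketch of the spotting idea under stated assumptions: gesture models and a filler (non-gesture) model are merged into one network, the whole observation stream is decoded with Viterbi dynamic programming, and runs of states belonging to a gesture model are reported as spotted segments. Discrete emissions and the flat state-to-label mapping are simplifications of the paper's network.

    import numpy as np

    def viterbi(pi, A, B, obs):
        """Most likely state path for a discrete-emission HMM over the whole stream."""
        n, T = len(pi), len(obs)
        delta = np.log(pi) + np.log(B[:, obs[0]])
        psi = np.zeros((T, n), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + np.log(A)   # scores[i, j]: come from state i into state j
            psi[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + np.log(B[:, obs[t]])
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return path[::-1]

    def spot_segments(state_path, state_labels):
        """Collapse a decoded state path into (gesture, start_frame, end_frame) segments."""
        segments, start = [], 0
        for t in range(1, len(state_path) + 1):
            if (t == len(state_path)
                    or state_labels[state_path[t]] != state_labels[state_path[start]]):
                label = state_labels[state_path[start]]
                if label != "non-gesture":
                    segments.append((label, start, t - 1))
                start = t
        return segments

Here state_labels maps each state of the merged network either to the gesture model it belongs to or to "non-gesture" for the filler states.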

Hybrid HMM for Transitional Gesture Classification in Thai Sign Language Translation

  • Jaruwanawat, Arunee;Chotikakamthorn, Nopporn;Werapan, Worawit
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.1106-1110 / 2004
  • A human sign language is generally composed of both static and dynamic gestures. Each gesture is represented by a hand shape, its position, and a hand movement (for a dynamic gesture). One of the problems found in automated sign language translation is segmenting a hand movement that is part of a transitional movement from one hand gesture to another. A transitional gesture conveys no meaning but serves as a connecting period between two consecutive gestures. Based on the observation that many dynamic gestures appearing in the Thai sign language dictionary are quasi-periodic in nature, a method was developed to differentiate between a (meaningful) dynamic gesture and a transitional movement. However, some meaningful dynamic gestures are non-periodic in nature; those gestures cannot be distinguished from a transitional movement by using signal quasi-periodicity alone. This paper proposes a hybrid method combining the periodicity-based gesture segmentation method with an HMM-based gesture classifier. The HMM classifier is used here to detect dynamic signs of a non-periodic nature. Combined with the periodicity-based gesture segmentation method, this hybrid scheme can identify segments of a transitional movement. In addition, because many dynamic sign gestures are quasi-periodic, the dimensionality of the HMM part of the proposed method is significantly reduced, resulting in computational savings compared with a standard HMM-based method. The proposed method's recognition performance is reported through experiments with real measurements.
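
A minimal sketch of the hybrid decision described above: a segment whose hand-trajectory signal is strongly quasi-periodic is accepted as a meaningful dynamic gesture; otherwise an HMM classifier decides whether it is a non-periodic sign or merely a transitional movement. The autocorrelation test, both thresholds, and the scoring interface are assumptions rather than the paper's exact procedure.

    import numpy as np

    def periodicity_score(signal_1d, min_lag=2):
        """Peak of the normalized autocorrelation, excluding lag zero."""
        x = np.asarray(signal_1d, dtype=float)
        x = x - x.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        ac = ac / (ac[0] + 1e-12)
        return float(ac[min_lag:].max()) if len(ac) > min_lag else 0.0

    def classify_segment(signal_1d, hmm_scores, period_thresh=0.5, loglik_thresh=-50.0):
        """hmm_scores: dict mapping sign label -> log-likelihood of the segment under that sign's HMM."""
        if periodicity_score(signal_1d) >= period_thresh:
            return "dynamic_gesture"            # quasi-periodic, hence meaningful
        best = max(hmm_scores, key=hmm_scores.get)
        if hmm_scores[best] >= loglik_thresh:
            return best                         # non-periodic but well explained by a sign model
        return "transitional_movement"          # no model explains it well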

Augmented Reality Game Interface Using Hand Gestures Tracking (사용자 손동작 추적에 기반한 증강현실 게임 인터페이스)

  • Yoon, Jong-Hyun;Park, Jong-Seung
    • Journal of Korea Game Society / v.6 no.2 / pp.3-12 / 2006
  • Recently, many 3D augmented reality games that provide a stronger sense of immersion have appeared in the 3D game environment. In this article, we describe a bare-handed interaction method based on human hand gestures for augmented reality games. First, feature points are extracted from the input video streams. The point features are tracked and the motion of moving objects is computed. The shape of each motion trajectory is used to determine whether the motion is an intended gesture: a long, smooth trajectory toward one of the virtual objects or menus is classified as an intended gesture and the corresponding action is invoked. To prove the validity of the proposed method, we implemented two simple augmented reality applications: a gesture-based music player and a virtual basketball game. In the music player, several menu icons are displayed at the top of the screen and a user can activate a menu by hand gestures. In the virtual basketball game, a virtual ball bounces in a virtual cube space while the real video stream is shown in the background, and a user can hit the virtual ball with hand gestures. In experiments with three untrained users, the accuracy of menu activation according to the intended gestures was 94% for normal-speed gestures and 84% for fast and abrupt gestures.
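
A minimal sketch of the trajectory test described above: a tracked hand trajectory counts as an intended gesture when it is long enough, smooth enough, and heading toward a given menu icon or virtual object. All thresholds and the particular smoothness and heading measures are illustrative assumptions.

    import numpy as np

    def is_intended_gesture(trajectory, target, min_length=80.0,
                            max_mean_turn=0.4, max_heading_error=0.5):
        """trajectory: (N, 2) image points of the tracked hand; target: (2,) position of a menu or virtual object."""
        pts = np.asarray(trajectory, dtype=float)
        if len(pts) < 3:
            return False
        steps = np.diff(pts, axis=0)
        length = np.hypot(steps[:, 0], steps[:, 1]).sum()          # total path length in pixels
        headings = np.unwrap(np.arctan2(steps[:, 1], steps[:, 0]))
        mean_turn = np.abs(np.diff(headings)).mean()               # smoothness: average change of direction
        to_target = np.asarray(target, dtype=float) - pts[0]
        desired = np.arctan2(to_target[1], to_target[0])
        heading_error = abs((desired - headings.mean() + np.pi) % (2 * np.pi) - np.pi)
        return (length >= min_length and mean_turn <= max_mean_turn
                and heading_error <= max_heading_error)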
