• Title/Summary/Keyword: Gestures


Formant Trajectories of English Vowels Produced by American Children (미국인 아동이 발음한 영어모음의 포먼트 궤적)

  • Yang, Byung-Gon
    • Phonetics and Speech Sciences / v.3 no.1 / pp.23-34 / 2011
  • Many Korean children have difficulty learning English vowels: the gestures inside the oral and pharyngeal cavities are hard to control when learners cannot see them, and the target vowel system differs considerably from that of Korean. This study collects children's acoustic data for twelve English vowels published online by Hillenbrand et al. (1995) and examines the acoustic features of English vowels for phoneticians and English teachers. The author used Praat to obtain the data systematically at six equidistant timepoints over each vowel segment, avoiding any obvious measurement errors. Results show, first, inherent acoustic properties of the vowels in the children's distributions of vowel duration, f0, and intensity values. Second, children's gestures for each vowel coincide with the regression analysis of all formant values at the different timepoints, regardless of differences in vocal folds and tracts. Third, locus points appear higher than those of American males and females, while gesture patterns along the timepoints are almost identical. From these results the author concludes that vowel formant trajectories provide useful and important information on dynamic articulatory gestures, which may be applicable to teaching and correcting English vowels for Korean children. Further developmental studies of vowel formants and pitch values are desirable.
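
The six-point measurement scheme described in this abstract can be sketched as follows. This is a minimal illustration, assuming the six equidistant timepoints include both segment endpoints and that a formant-measurement function (in the study, Praat) is supplied by the caller; neither detail is taken from the paper itself:

```python
def equidistant_timepoints(t_start, t_end, n=6):
    """Return n equidistant timepoints spanning a vowel segment.

    Assumes both endpoints are included (0%, 20%, ..., 100% of the
    segment); the abstract only states six equidistant points.
    """
    step = (t_end - t_start) / (n - 1)
    return [t_start + i * step for i in range(n)]


def formant_trajectory(measure, t_start, t_end):
    """Sample a hypothetical formant-measurement function at six points."""
    return [measure(t) for t in equidistant_timepoints(t_start, t_end)]
```

In practice `measure` would wrap an acoustic toolkit's formant query at a given time, as done with Praat in the study.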


A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young;Bae, Ki-Tae
    • International Journal of Contents / v.7 no.1 / pp.29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface that uses spatial context information. The proposed interface recognizes a system action (e.g., a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed for a smart-environment scenario in which a user interacts, through gestures, with digital information embedded in physical objects.
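
The probabilistic integration the abstract describes can be sketched as a naive-Bayes-style fusion of gesture evidence with the two spatial-context cues (gesture volume and gesture target). The factorization and all likelihood values below are illustrative assumptions, not the paper's actual framework:

```python
def score_actions(p_gesture, p_volume, p_target, prior):
    """Combine gesture and spatial-context likelihoods per action.

    p_gesture, p_volume, p_target: dicts mapping an action name to the
    likelihood of the observed gesture, gesture volume, and gesture
    target under that action. Returns a normalized posterior over
    actions (naive-Bayes fusion; an assumed, simplified model).
    """
    scores = {a: p_gesture[a] * p_volume[a] * p_target[a] * prior[a]
              for a in prior}
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}
```

With this factorization, a gesture that is ambiguous on its own can still yield a confident action once the volume and target cues agree.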

How Well Did We Know About Our Communication? "Origins of Human Communication"

  • Jung-Woo Son
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.34 no.1 / pp.57-58 / 2023
  • Through accurate observation and the results of experimental studies using great apes, the author tells us exactly what we have not known about human communication. The author persuasively conveys the grand history of development from great apes' gestures to human gestures, and then to human speech. Given that great-ape and human gestures were the origin of human spoken language, we realize once again that our language is, after all, an "embodied language."

HOG-HOD Algorithm for Recognition of Multi-cultural Hand Gestures (다문화 손동작 인식을 위한 HOG-HOD 알고리즘)

  • Kim, Jiye;Park, Jong-Il
    • Journal of Korea Multimedia Society / v.20 no.8 / pp.1187-1199 / 2017
  • In recent years, research on Natural User Interfaces (NUIs) has drawn attention because NUI systems can give users a natural feeling in virtual reality. The most important issue in an NUI system is how the user communicates with the computer. Users can interact in many ways, such as speech, hand gestures, and body actions. Among these, hand gestures suit the purpose of an NUI because people use them relatively frequently in daily life and a hand gesture can carry meaning by itself. These hand gestures are called multi-cultural hand gestures, and we propose a method to recognize them. The proposed method combines Histograms of Oriented Gradients (HOG) for hand-shape recognition with Histograms of Oriented Displacements (HOD) for recognizing the trajectory of the hand's center point.
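
A minimal sketch of the HOD part of the method: displacements between consecutive hand-center points vote into orientation bins, producing a descriptor of the trajectory. The bin count and normalization are assumptions; the paper's exact descriptor is not reproduced here:

```python
import math


def hod(trajectory, bins=8):
    """Histogram of Oriented Displacements for a hand-center trajectory.

    Each displacement between consecutive (x, y) points votes into one
    of `bins` orientation bins; the histogram is then normalized.
    Bin layout and normalization are illustrative assumptions.
    """
    hist = [0.0] * bins
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        # guard against float rounding pushing the index to `bins`
        hist[min(int(angle / (2 * math.pi) * bins), bins - 1)] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

Two trajectories tracing the same shape at different speeds yield similar histograms, which is why a displacement-orientation descriptor suits trajectory recognition.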

Recognition of 3D hand gestures using partially tuned composite hidden Markov models

  • Kim, In Cheol
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.236-240 / 2004
  • Stroke-based composite HMMs with articulation states are proposed to handle 3D spatio-temporal trajectory gestures. Using 3D data directly allows more natural gesture production, avoiding some of the constraints usually imposed to prevent the performance degradation that occurs when trajectory data are projected onto a specific 2D plane. Decomposing gestures into more primitive strokes is also attractive, since, conversely, concatenating stroke-based HMMs makes it possible to construct a new set of gesture HMMs without retraining their parameters. Any deterioration in performance arising from the decomposition can be remedied by partially tuning the composite HMMs.
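
The idea of building gesture HMMs by concatenating stroke HMMs without retraining can be sketched on transition matrices alone. The linking rule below is an assumed simplification of the paper's articulation states:

```python
def concatenate_strokes(A1, A2, link_prob=0.5):
    """Concatenate two left-to-right stroke-HMM transition matrices.

    Builds a composite gesture HMM without retraining: the final state
    of the first stroke is linked to the entry state of the second with
    probability `link_prob`. This direct link is an assumed shortcut;
    the paper inserts dedicated articulation states between strokes.
    """
    n1, n2 = len(A1), len(A2)
    n = n1 + n2
    A = [[0.0] * n for _ in range(n)]
    for i in range(n1):
        for j in range(n1):
            A[i][j] = A1[i][j]
    for i in range(n2):
        for j in range(n2):
            A[n1 + i][n1 + j] = A2[i][j]
    # redistribute the first stroke's final self-loop mass to the next stroke
    A[n1 - 1][n1 - 1] = 1.0 - link_prob
    A[n1 - 1][n1] = link_prob
    return A
```

Because only the link transitions are new, a fresh gesture model can be assembled from already-trained strokes, and only those links (and any articulation states) need tuning.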

Context-sensitive lingual gestures in the Korean tap /r/

  • Kim, Dae-Won
    • Speech Sciences / v.7 no.3 / pp.11-19 / 2000
  • The present electropalatographic study reports the production of the allophones [l] and [r] of the Korean tap /r/ and their coarticulatory characteristics in /Cár#g/ and /Cár#i/ sequences. The tap /r/ involves a complete oral closure with less lingual contact (apico-front-alveolar coupling) than the lateralized /r/, which involves apico-blade-alveolar coupling and tongue-dorsum lowering for adequate airflow along one or both sides of the tongue body; this suggests that the two allophones of the tap /r/ have different lingual gestures. Moreover, compared with the tap, the lateral shows longer lingual contacts, with a mean ratio of 3.7 between them. In /Cár#g/ sequences, the two adjacent antagonistic segments (/r/ and /g/) show mutual coarticulation effects, each taking on features of the adjacent segment, yet each is precisely constrained so as not to block the formation of the major lingual gestures of the other. In /Cár#i/ sequences, anticipatory V-to-C coarticulation occurs but vocalic carryover effects do not. In both sequences, the allophones reveal insignificant word-initial consonantal carryover coarticulatory effects and insignificant speaker-specific lingual contacts.


Recognizing Hand Digit Gestures Using Stochastic Models

  • Sin, Bong-Kee
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.807-815 / 2008
  • A simple, efficient method for spotting and recognizing hand gestures in video is presented, using a network of hidden Markov models and a dynamic-programming search algorithm. The description starts from designing a set of isolated trajectory models that are stochastic and robust enough to characterize highly variable patterns such as human motion, handwriting, and speech. These models are interconnected to form a single large network, termed a spotting network or spotter, that models a continuous stream of gestures as well as non-gestures. Inference over the model is based on dynamic programming. The proposed model is highly efficient and can readily be extended to a variety of recurrent pattern-recognition tasks. Test results obtained without any task-specific engineering show its potential for practical application. At the end of the paper we add related experimental results obtained using a different stochastic model, a dynamic Bayesian network.
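
A spotting network decoded by dynamic programming can be sketched with a textbook Viterbi pass over a tiny state set containing a gesture model and a "filler" non-gesture model. The state set and all probabilities below are illustrative, not the paper's:

```python
def viterbi(states, trans, emit, obs, init):
    """Dynamic-programming (Viterbi) decoding over a spotting network.

    states: state names (gesture models plus a filler non-gesture
    state, mirroring the spotter described above).
    trans[s][t]: transition prob; emit[s][o]: emission prob;
    init[s]: initial prob. Returns the most likely state sequence,
    which labels each frame as gesture or non-gesture.
    """
    V = [{s: (init[s] * emit[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            p, path = max((V[-1][r][0] * trans[r][s], V[-1][r][1])
                          for r in states)
            row[s] = (p * emit[s][o], path + [s])
        V.append(row)
    return max(V[-1].values())[1]
```

Spotting falls out of decoding: frames assigned to the filler state are discarded, and maximal runs of gesture states are the spotted gestures.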


Hybrid HMM for Transitional Gesture Classification in Thai Sign Language Translation

  • Jaruwanawat, Arunee;Chotikakamthorn, Nopporn;Werapan, Worawit
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2004.08a / pp.1106-1110 / 2004
  • A human sign language is generally composed of both static and dynamic gestures. Each gesture is represented by a hand shape, its position, and hand movement (for a dynamic gesture). One of the problems in automated sign-language translation is segmenting a hand movement that is part of a transition from one hand gesture to another. Such a transitional gesture conveys no meaning but serves as a connecting period between two consecutive gestures. Based on the observation that many dynamic gestures in the Thai sign language dictionary are quasi-periodic in nature, a method was developed to differentiate between a (meaningful) dynamic gesture and a transitional movement. However, some meaningful dynamic gestures are non-periodic and cannot be distinguished from a transitional movement by signal quasi-periodicity alone. This paper therefore proposes a hybrid method combining the periodicity-based gesture segmentation method with an HMM-based gesture classifier, where the HMM classifier detects dynamic signs of non-periodic nature. Combined with the periodicity-based segmentation method, this hybrid scheme can identify segments of a transitional movement. In addition, because it exploits the quasi-periodic nature of many dynamic sign gestures, the dimensionality of the HMM part of the proposed method is significantly reduced, yielding computational savings compared with a standard HMM-based method. The proposed method's recognition performance is reported through experiments with real measurements.


Augmented Reality Game Interface Using Hand Gestures Tracking (사용자 손동작 추적에 기반한 증강현실 게임 인터페이스)

  • Yoon, Jong-Hyun;Park, Jong-Seung
    • Journal of Korea Game Society / v.6 no.2 / pp.3-12 / 2006
  • Recently, many 3D augmented reality games that provide enhanced immersion have appeared in 3D game environments. In this article, we describe a barehanded interaction method based on human hand gestures for augmented reality games. First, feature points are extracted from input video streams. The point features are tracked and the motion of moving objects is computed. The shape of a motion trajectory is used to determine whether the motion is an intended gesture: a long, smooth trajectory toward one of the virtual objects or menus is classified as intended, and the corresponding action is invoked. To demonstrate the validity of the proposed method, we implemented two simple augmented reality applications: a gesture-based music player and a virtual basketball game. In the music player, several menu icons are displayed at the top of the screen, and a user can activate a menu by hand gestures. In the virtual basketball game, a virtual ball bounces in a virtual cube space, with the real video stream shown in the background; a user can hit the virtual ball with hand gestures. Experiments with three untrained users show that the accuracy of menu activation according to the intended gestures is 94% for normal-speed gestures and 84% for fast, abrupt gestures.
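
The trajectory rule described above (a long, smooth trajectory toward a virtual object counts as an intended gesture) can be sketched with a straightness ratio and a target-proximity check. All thresholds are illustrative assumptions, not the paper's values:

```python
import math


def is_intended_gesture(trajectory, target,
                        straightness=0.8, min_length=50.0, radius=20.0):
    """Classify a 2D motion trajectory as an intended gesture.

    Straightness (net displacement / path length) stands in for the
    smoothness criterion, and ending within `radius` of `target`
    stands in for "toward a virtual object or menu". Thresholds are
    illustrative assumptions.
    """
    path = sum(math.dist(p, q) for p, q in zip(trajectory, trajectory[1:]))
    if path < min_length:
        return False  # too short to be a deliberate gesture
    net = math.dist(trajectory[0], trajectory[-1])
    near_target = math.dist(trajectory[-1], target) < radius
    return (net / path) >= straightness and near_target
```

Jittery hand motion has a low straightness ratio even when it drifts near a menu icon, which is what lets this kind of rule reject unintended movement.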


Hand Gesture Segmentation Method using a Wrist-Worn Wearable Device

  • Lee, Dong-Woo;Son, Yong-Ki;Kim, Bae-Sun;Kim, Minkyu;Jeong, Hyun-Tae;Cho, Il-Yeon
    • Journal of the Ergonomics Society of Korea / v.34 no.5 / pp.541-548 / 2015
  • Objective: We introduce a hand-gesture segmentation method using a wrist-worn wearable device that can recognize the simple gestures of clenching and unclenching one's fist. Background: There are many smart watches and fitness bands on the market, and most of them already adopt gesture interaction to provide ease of use. However, malfunctions often occur because it is difficult to distinguish a user's gesture commands from the motions of daily life. A simple and clear gesture-segmentation method is needed to improve gesture-interaction performance. Method: First, we defined the gestures of making a fist (start of a gesture command) and opening the fist (end of a gesture command) as segmentation gestures; clenching and unclenching one's fist are simple and intuitive. We also designed a single gesture consisting of making a fist, a command gesture, and opening the fist, in that order. To detect the segmentation gestures at the bottom of the wrist, we used a wrist strap on which an array of infrared sensors (emitters and receivers) was mounted. When a user makes or opens a fist, the shape of the bottom of the wrist changes, and with it the amount of reflected infrared light detected by the receiver sensors. Results: An experiment was conducted to evaluate gesture-segmentation performance with 12 participants (10 males and 2 females, average age 38). The recognition rates of the segmentation gestures, clenching and unclenching one's fist, were 99.58% and 100%, respectively. Conclusion: The experiment evaluated gesture-segmentation performance and usability, and the results show the potential of the suggested segmentation method.
Application: The results of this study can be used to develop guidelines to prevent injury in auto workers at mission assembly plants.
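
The segmentation logic the abstract describes can be sketched as a two-threshold state machine over normalized reflected-IR readings: a clench opens a gesture segment and an unclench closes it. The thresholds and the single scalar reading (the actual device uses a sensor array) are illustrative assumptions:

```python
def segment_gestures(ir_readings, clench_threshold=0.7, open_threshold=0.3):
    """Segment command gestures from a stream of reflected-IR readings.

    A reading at or above clench_threshold marks the fist-clench
    (gesture start); a later reading at or below open_threshold marks
    the fist-open (gesture end). Returns (start, end) index pairs.
    Thresholds and the normalized 0..1 reading scale are assumptions.
    """
    segments = []
    start = None
    for i, r in enumerate(ir_readings):
        if start is None and r >= clench_threshold:
            start = i  # fist clenched: command gesture begins
        elif start is not None and r <= open_threshold:
            segments.append((start, i))  # fist opened: command ends
            start = None
    return segments
```

Everything between a matched clench/unclench pair would then be forwarded to the gesture recognizer, while motion outside those segments is ignored as daily-life movement.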