• Title/Summary/Keyword: Gestures

A Structure and Framework for Sign Language Interaction

  • Kim, Soyoung; Pan, Younghwan
    • Journal of the Ergonomics Society of Korea, v.34 no.5, pp.411-426, 2015
  • Objective: The goal of this study is to design an interaction structure and framework for a sign language recognition system. Background: In sign language, meaningful individual gestures are combined into sentences, so a system has difficulty interpreting and recognizing the meaning of a hand gesture within a sequence of continuous gestures. To interpret individual gestures correctly, an interaction structure and framework are needed that can segment each individual gesture. Method: We analyzed 700 sign language words to structure sign language gesture interaction. First, we analyzed the transformational patterns of hand shape. Second, we analyzed the movement of those patterns. Third, we analyzed the types of gestures other than hand gestures. Based on this analysis, we designed a framework for sign language interaction. Results: We elicited eight hand-shape patterns based on whether the hand shape changes from the starting point to the ending point. We then analyzed hand movement in terms of three elements: pattern of movement, direction, and whether the movement repeats. We also defined 11 movements of gestures other than hand gestures and classified eight types of interaction. The resulting framework applies to more than 700 individual sign language gestures and can isolate an individual gesture even within a continuous sequence. Conclusion: This study structured sign language interaction in three aspects: the transformational patterns of hand shape between starting and ending points, hand movement, and gestures other than hand gestures. On this basis, we designed a framework that can recognize individual gestures and interpret their meaning more accurately when a meaningful individual gesture arrives within a continuous sequence. Application: The framework can be applied when developing sign language recognition systems. The structured gestures can also be used for building sign language databases, developing automatic recognition systems, and studying action gestures in other areas.
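
A minimal Python sketch of how the three-part structure described above (start-to-end hand-shape pattern, the three movement elements, and non-hand gestures) could be encoded, together with a toy segmentation rule that splits a continuous stream whenever the hand-shape pattern changes; all field values and the splitting rule are illustrative assumptions, not the paper's implementation:

```python
from dataclasses import dataclass

@dataclass
class HandMovement:
    pattern: str        # movement pattern, e.g. "straight" or "circular" (assumed values)
    direction: str      # e.g. "up", "forward" (assumed values)
    repeated: bool      # whether the movement repeats or not

@dataclass
class SignGesture:
    hand_pattern: int         # one of the 8 start-to-end hand-shape patterns
    movement: HandMovement    # the 3 movement elements from the paper
    non_hand: tuple           # any of the 11 non-hand movements (head, mouth, ...)
    interaction_type: int     # one of the 8 interaction types

def segment(stream):
    """Split a continuous gesture stream at boundaries where the
    hand-shape pattern changes -- a simplified stand-in for the
    paper's segmentation framework."""
    segments, current = [], []
    for g in stream:
        if current and g.hand_pattern != current[-1].hand_pattern:
            segments.append(current)
            current = []
        current.append(g)
    if current:
        segments.append(current)
    return segments
```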

Emotion Recognition Method using Gestures and EEG Signals (제스처와 EEG 신호를 이용한 감정인식 방법)

  • Kim, Ho-Duck; Jung, Tae-Min; Yang, Hyun-Chang; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.13 no.9, pp.832-837, 2007
  • Electroencephalography (EEG) has been used in psychology for many years to record the activity of the human brain. As technology has developed, the neural basis of the functional areas of emotion processing has gradually been revealed, so we use EEG to measure the fundamental areas of the human brain that control emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language in human communication, and their recognition is important because gesture is a useful communication medium between humans and computers. Gesture recognition is commonly studied with computer vision methods. Most existing research uses either EEG signals or gestures alone for emotion recognition. In this paper, we use EEG signals and gestures together for human emotion recognition, with driver emotion as the specific target. The experimental results show that using both EEG signals and gestures yields higher recognition rates than using either alone. For both EEG signals and gestures, features are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.
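
The abstract names Interactive Feature Selection (IFS) but gives no algorithmic detail. A rough stand-in is greedy forward selection over the concatenated EEG and gesture features, sketched below; the classifier choice, feature shapes, and three emotion classes are all assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def greedy_select(X, y, n_keep=10):
    """Greedy forward selection: repeatedly add the single feature that
    most improves cross-validated accuracy. This is only a simple
    approximation of the paper's RL-based Interactive Feature Selection."""
    selected = []
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < n_keep:
        scores = [(cross_val_score(SVC(), X[:, selected + [f]], y, cv=3).mean(), f)
                  for f in remaining]
        best_score, best_f = max(scores)
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Fused input: EEG features and gesture features concatenated per sample
# (shapes and the dummy data are illustrative assumptions).
X_eeg = np.random.rand(60, 16)       # e.g., band powers per EEG channel
X_gesture = np.random.rand(60, 8)    # e.g., motion statistics per gesture
X = np.hstack([X_eeg, X_gesture])
y = np.random.randint(0, 3, 60)      # three driver-emotion classes (assumed)
print(greedy_select(X, y, n_keep=5))
```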

The Natural Way of Gestures for Interacting with Smart TV

  • Choi, Jin-Hae; Hong, Ji-Young
    • Journal of the Ergonomics Society of Korea, v.31 no.4, pp.567-575, 2012
  • Objective: The aim of this study is to derive an optimal mental model by investigating users' natural behavior when controlling a smart TV with mid-air gestures, and to identify which factor matters most for control behavior. Background: Many TV manufacturers are looking for a simple way to control increasingly complex smart TVs. Although plenty of gesture studies propose possible alternatives to resolve this pain point, no existing gesture work is fitted to the smart TV market, so optimal gestures still need to be found. Method: (1) Elicit core control scenes through an in-house study. (2) Observe and analyze 20 users' natural behavior across types of hand-held devices and control scenes. We also built taxonomies for the gestures. Results: Users attempt more manipulative gestures than symbolic gestures when performing continuous control. Conclusion: The most natural way to control a smart TV remotely with gestures is to give users a mental model of grabbing and manipulating virtual objects in mid-air. Application: The results of this work can help establish gesture interaction guidelines for smart TVs.

The Difference of Gestures between Scientists and Middle School Students in Scientific Discourse: Focus on Molecular Movement and the Change in State of Material (과학담화에서 과학자와 중학생의 제스처 비교 -분자운동과 물질의 상태변화를 중심으로-)

  • Kim, Ji Hyeon; Cho, Hae Ree; Cho, Young Hoan; Jeong, Dae Hong
    • Journal of The Korean Association For Science Education, v.38 no.2, pp.273-291, 2018
  • Gestures accompanying scientific discourse play an important role in constructing mental models and making model-based inferences. According to the embodied cognition literature, gestures can reveal students' mental models and help them change naive beliefs about science. This study compares the gestures of scientists with those of middle school students when explaining scientific phenomena and explores the relationship between gestures and scientific discourse. Ten scientists and ten middle school students participated in clinical interviews and tests of knowledge and self-efficacy. Participants engaged in one-on-one clinical interviews with semi-structured questions about three tasks regarding molecular movement and the change of state of matter. Four researchers carried out open coding and applied a constant comparison method to analyze the video-recorded gestures. The study found four themes (feature of gesture, use of gesture, content of gesture, function of gesture) in the differences between scientists' and students' gestures. Scientists used more diverse and elaborate gestures, systematically and frequently, in the interviews. Although students used gestures in their scientific talk and reasoning, their gestures were not well grounded in scientific knowledge and served different functions from those of scientists. The findings reveal that gestures can represent underlying cognition and strengthen scientific thinking. Students should be encouraged to use gestures as a tool for understanding scientific concepts and making inferences.

Implementation of new gestures on the Multi-touch table

  • Park, Sang Bong; Kim, Beom jin
    • International Journal of Advanced Culture Technology, v.1 no.1, pp.15-18, 2013
  • This paper describes new gestures on a multi-touch table. Two new three-finger gestures are used to minimize all open windows and to switch Aero mode. We also implement an FTIR (Frustrated Total Internal Reflection) multi-touch table consisting of a sheet of acrylic, infrared LEDs, a camera, and a rear projector. The operation of the proposed gestures is verified on the implemented multi-touch table.
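
The paper does not spell out how the three-finger gestures are detected, but a blob tracker on the FTIR camera image yields one contact path per finger, and a gesture can be classified from the joint motion of the three paths. A minimal sketch, where the motion-to-command mapping and the thresholds are assumptions:

```python
import math

def classify_three_finger_swipe(touches, min_dist=50.0):
    """Classify a three-finger gesture from tracked contact paths.
    Each element of `touches` is a list of (x, y) samples for one
    contact, as produced by the camera blob tracker. Screen y grows
    downward, so dy > 0 means a downward swipe."""
    if len(touches) != 3:
        return None
    # Average displacement of the three contacts over the gesture.
    dx = sum(t[-1][0] - t[0][0] for t in touches) / 3
    dy = sum(t[-1][1] - t[0][1] for t in touches) / 3
    if math.hypot(dx, dy) < min_dist:       # too small to be a swipe
        return None
    if abs(dy) > abs(dx):                   # predominantly vertical motion
        return "minimize_all" if dy > 0 else "aero_switch"
    return None

# Three contacts moving down about 80 px: interpreted as "minimize all".
paths = [[(100, 100), (102, 180)], [(160, 98), (161, 182)], [(220, 101), (223, 179)]]
print(classify_three_finger_swipe(paths))
```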

Study on Forearm Muscles and Electrode Placements for CNN based Korean Finger Number Gesture Recognition using sEMG Signals (표면근전도 신호를 활용한 CNN 기반 한국 지화숫자 인식을 위한 아래팔 근육과 전극 위치에 관한 연구)

  • Park, Jong-Jun; Kwon, Chun-Ki
    • Journal of the Korea Academia-Industrial cooperation Society, v.19 no.8, pp.260-267, 2018
  • Surface electromyography (sEMG) was mainly used as an on/off switch in early studies and was later extended to navigational control of powered wheelchairs and recognition of sign language or finger gestures. Communication between people who do and do not know sign language is difficult, so many efforts have been made to recognize sign language and finger gestures. Recently, the use of sEMG signals to recognize sign language has been investigated, but most studies to date have focused on Chinese finger number gestures. Since sign language and finger gestures vary by region, Korean and Chinese finger number gestures differ from each other. Accordingly, the recognition performance for Korean finger number gestures based on sEMG signals can be severely degraded if the same muscles are specified as for the Chinese gestures. However, few studies of Korean finger number gestures based on sEMG signals have been conducted. This study was therefore conducted to identify forearm muscles from which to collect sEMG signals for Korean finger number gestures. Six Korean finger number gestures, from zero to five, were investigated, and the usefulness of the proposed muscles and electrode placements was demonstrated by showing that a sufficiently trained CNN operating on the sEMG signals recognizes the six gestures with an accuracy of 100%.
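
The abstract does not describe the network itself, so the following is only a generic 1-D CNN over windowed multi-channel sEMG, written in PyTorch; the channel count, window length, and layer sizes are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class SEMGNet(nn.Module):
    """Small 1-D CNN that maps a window of multi-channel sEMG to one of
    six Korean finger number gestures (digits 0-5)."""
    def __init__(self, n_channels=4, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over time to a fixed size
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

# One 200-sample window (e.g., 100 ms at 2 kHz, an assumed rate)
# from 4 forearm electrodes.
window = torch.randn(1, 4, 200)
logits = SEMGNet()(window)             # scores for gestures 0..5
```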

Investigating Smart TV Gesture Interaction Based on Gesture Types and Styles

  • Ahn, Junyoung; Kim, Kyungdoh
    • Journal of the Ergonomics Society of Korea, v.36 no.2, pp.109-121, 2017
  • Objective: This study aims to find suitable gesture types and styles for remote control of smart TVs. Background: Smart TV is developing rapidly worldwide, and gesture interaction is a wide research area, especially for vision-based techniques. However, most studies focus on gesture recognition technology, and few previous studies have examined gesture types and styles on smart TVs. It is therefore necessary to check which gesture types and styles users prefer for each operation command. Method: We conducted an experiment to extract the user manipulation commands required for smart TVs and to select the corresponding gestures. We observed the gesture styles people use for each operation command and checked whether they prefer some styles over others. On this basis, the study selected smart TV operation commands and gestures. Results: Eighteen TV commands were used in this study. Taking the agreement level as a basis, we compared six gesture types and five gesture styles for each command. As for gesture type, participants generally preferred Path-Moving gestures; the Pan and Scroll commands showed the highest agreement level (1.00) of the 18 commands. As for gesture style, participants preferred a manipulative style for 11 commands (Next, Previous, Volume up, Volume down, Play, Stop, Zoom in, Zoom out, Pan, Rotate, Scroll). Conclusion: Based on an analysis of user-preferred gestures, nine gesture commands are proposed for gesture control on smart TVs. Most participants preferred Path-Moving, manipulative-style gestures grounded in the actual operations. Application: The results can be applied to more advanced gestures in 3D environments, such as VR studies, and the method used here can be applied in various domains.
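
The abstract does not state how the agreement level is computed, but the score commonly used in gesture-elicitation studies (Wobbrock et al., 2009) matches the reported behavior: it reaches 1.00 only when every participant proposes the identical gesture, as for Pan and Scroll here. A short sketch:

```python
from collections import Counter

def agreement(proposals):
    """Agreement score for one command: sum over groups of identical
    proposed gestures of (group size / total proposals) squared.
    Equals 1.0 when all participants propose the same gesture."""
    n = len(proposals)
    return sum((c / n) ** 2 for c in Counter(proposals).values())

print(agreement(["swipe"] * 20))                   # 1.00, like Pan/Scroll
print(agreement(["swipe"] * 10 + ["point"] * 10))  # 0.50, a split group
```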

Ability of children to perform touchscreen gestures and follow prompting techniques when using mobile apps

  • Yadav, Savita; Chakraborty, Pinaki; Kaul, Arshia; Pooja, Pooja; Gupta, Bhavya; Garg, Anchal
    • Clinical and Experimental Pediatrics, v.63 no.6, pp.232-236, 2020
  • Background: Children today get access to smartphones at an early age. However, their ability to use mobile apps has not yet been studied in detail. Purpose: This study aimed to assess the ability of children aged 2-8 years to perform touchscreen gestures and follow prompting techniques, i.e., ways apps provide instructions on how to use them. Methods: We developed one mobile app to test the ability of children to perform various touchscreen gestures and another mobile app to test their ability to follow various prompting techniques. We used these apps in this study of 90 children in a kindergarten and a primary school in New Delhi in July 2019. We noted the touchscreen gestures that the children could perform and the most sophisticated prompting technique that they could follow. Results: Two- and 3-year-old children could not follow any prompting technique and only a minority (27%) could tap the touchscreen at an intended place. Four- to 6-year-old children could perform simple gestures like a tap and slide (57%) and follow instructions provided through animation (63%). Seven- and 8-year-old children could perform more sophisticated gestures like dragging and dropping (30%) and follow instructions provided in audio and video formats (34%). We observed a significant difference between the number of touchscreen gestures that the children could perform and the number of prompting techniques that they could follow (F=544.0407, P<0.05). No significant difference was observed in the performance of female versus male children (P>0.05). Conclusion: Children gradually learn to use mobile apps beginning at 2 years of age. They become comfortable performing single-finger gestures and following nontextual prompting techniques by 8 years of age. We recommend that these results be considered in the development of mobile apps for children.

The Development of Gesture in the Early Communication of Korean Infants (한국 영아의 초기 의사소통 : 몸짓의 발달)

  • Chang-Song, You-Kyung; Choi, Yun-Young; Kim, So-Yeun
    • Korean Journal of Child Studies, v.26 no.1, pp.155-167, 2005
  • Korean infants' use of gesture was examined with 45 infants aged 10 to 17 months. Mothers were asked to check each word on the MacArthur Communicative Development Inventory-Korean (MCDI-K) vocabulary checklist if their infant had a gesture for that word, and to indicate what kind of early communicative behavior the infant showed in five different situations. The results show that the infants in this study had 11 gestures, many of which are learned within the context of routines or games. Referential gestures were rarely reported. There was no positive correlation between the number of gestures and the number of expressive words. However, more qualitative measures of early communicative behavior showed a positive correlation between "frequent use of gestures" and "trying to communicate by verbal means".

A Notation Method for Three Dimensional Hand Gesture

  • Choi, Eun-Jung; Kim, Hee-Jin; Chung, Min-K.
    • Journal of the Ergonomics Society of Korea, v.31 no.4, pp.541-550, 2012
  • Objective: The aim of this study is to suggest a notation method for three-dimensional hand gestures. Background: To match intuitive gestures with product commands, various studies have tried to elicit gestures from users. Because users' experiences vary, many different gestures are derived for a single command, so organizing the gestures systematically and identifying similar patterns among them has become an important issue. Method: Related studies on gesture taxonomy and sign language notation were investigated. Results: Through the literature review, five elements of static gestures were selected and three forms of dynamic gestures were identified. Temporal variability (repetition) was also selected. Conclusion: A notation method that follows a combination sequence of these gesture elements is suggested. Application: The notation method for three-dimensional hand gestures can be used to describe and organize user-defined gestures systematically.
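
A small sketch of what such a combination-sequence notation could look like in code. The abstract says five static elements and three dynamic forms were identified but does not name them, so the slot names and encoding symbols below are purely illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HandGestureNotation:
    """Encode a gesture as static elements + dynamic form + repetition.
    The five static slots (hand shape, palm orientation, finger flexion,
    location, laterality) are hypothetical stand-ins for the paper's
    unnamed five elements."""
    hand_shape: str
    palm_orientation: str
    finger_flexion: str
    location: str
    laterality: str
    dynamic_form: Optional[str] = None   # one of the three dynamic forms
    repeated: bool = False               # temporal variability (repetition)

    def encode(self) -> str:
        static = "/".join([self.hand_shape, self.palm_orientation,
                           self.finger_flexion, self.location, self.laterality])
        dyn = f">{self.dynamic_form}" if self.dynamic_form else ""
        return static + dyn + ("*" if self.repeated else "")

# Example: a repeated rightward path gesture with a flat hand.
print(HandGestureNotation("flat", "down", "extended", "chest", "right",
                          dynamic_form="path-right", repeated=True).encode())
```

Because every gesture collapses to one string, similar user-defined gestures can be grouped by simple string comparison, which is the kind of systematic organization the paper's notation is meant to support.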