• Title/Abstract/Keyword: Gestures

Search results: 472

A Structure and Framework for Sign Language Interaction

  • Kim, Soyoung; Pan, Younghwan
    • 대한인간공학회지 / Vol. 34, No. 5 / pp.411-426 / 2015
  • Objective: The goal of this study is to design an interaction structure and framework for a system that recognizes sign language. Background: In sign language, meaningful individual gestures are combined to construct a sentence, so it is difficult for a system to interpret and recognize the meaning of a hand gesture within a sequence of continuous gestures. To interpret the meaning of each individual gesture correctly, an interaction structure and framework are needed that can segment the indication of each gesture. Method: We analyzed 700 sign language words to structure sign language gesture interaction. First, we analyzed the transformational patterns of the hand gestures. Second, we analyzed the movement of those transformational patterns. Third, we analyzed the types of gestures other than hand gestures. Based on this, we designed a framework for sign language interaction. Results: We elicited 8 patterns of hand gesture based on whether the gesture changes from its starting point to its ending point. We then analyzed hand movement in terms of 3 elements: the pattern of movement, its direction, and whether the movement repeats. Moreover, we defined 11 movements of gestures other than hand gestures and classified 8 types of interaction. The resulting framework for sign language interaction applies to more than 700 individual sign language gestures, each of which can be classified as an individual gesture even within a sequence of continuous gestures. Conclusion: This study structured sign language interaction in 3 aspects, defined to analyze the transformational patterns of the starting and ending points of the hand shape, the hand movement, and gestures other than hand gestures. Based on this, we designed a framework that can recognize individual gestures and interpret their meaning more accurately when a meaningful individual gesture occurs within a sequence of continuous gestures. Application: The interaction framework can be applied when developing a sign language recognition system. The structured gestures can be used for building sign language databases, developing automatic recognition systems, and studying action gestures in other areas.

제스처와 EEG 신호를 이용한 감정인식 방법 (Emotion Recognition Method using Gestures and EEG Signals)

  • 김호덕; 정태민; 양현창; 심귀보
    • 제어로봇시스템학회논문지 / Vol. 13, No. 9 / pp.832-837 / 2007
  • Electroencephalography (EEG) has been used for many years in psychology to record the activity of the human brain. As technology has developed, the neural basis of the functional areas involved in emotion processing has gradually been revealed, so we use EEG to measure the fundamental areas of the human brain that control emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language for communication, and their recognition is important because gesture is a useful communication medium between humans and computers. Gesture recognition research commonly relies on computer vision methods. In existing research, many studies address emotion recognition using either EEG signals or gestures alone. In this paper, we use EEG signals and gestures together for human emotion recognition, and we select driver emotion as a specific target. The experimental results show that using both EEG signals and gestures yields higher recognition rates than using EEG signals or gestures alone. For both EEG signals and gestures, feature selection is performed with Interactive Feature Selection (IFS), a method based on reinforcement learning.
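The combination of the two modalities can be illustrated with a minimal feature-level fusion sketch. This is an assumption-laden toy, not the paper's method: the paper uses IFS-selected features and its own classifier, while the centroids, feature values, and nearest-centroid rule below are invented for illustration.

```python
# Illustrative feature-level fusion: concatenate EEG-derived and
# gesture-derived feature vectors, then classify with a nearest-centroid
# rule. All numbers and labels below are made up; the paper itself uses
# Interactive Feature Selection (IFS) with a different classifier.

def fuse(eeg_features, gesture_features):
    """Concatenate the two modality vectors into one fused feature vector."""
    return list(eeg_features) + list(gesture_features)

def nearest_centroid(sample, centroids):
    """Return the label whose centroid is closest to the fused sample."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical per-emotion centroids over fused (EEG + gesture) features.
centroids = {
    "calm":  [0.2, 0.1, 0.0, 0.1],
    "angry": [0.9, 0.8, 1.0, 0.7],
}
sample = fuse([0.85, 0.75], [0.9, 0.8])
print(nearest_centroid(sample, centroids))  # angry
```

The intuition matches the abstract's finding: a fused vector carries evidence from both modalities, so a sample ambiguous in one modality can still be separated using the other.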

The Natural Way of Gestures for Interacting with Smart TV

  • Choi, Jin-Hae; Hong, Ji-Young
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.567-575 / 2012
  • Objective: The aim of this study is to derive an optimal mental model by investigating users' natural behavior when controlling a smart TV with mid-air gestures, and to identify which factor is most important for controlling behavior. Background: Many TV companies are trying to find a simple control method for complex smart TVs. Although plenty of gesture studies propose possible alternatives to resolve this pain point, no existing gesture work is fitted to the smart TV market, so optimal gestures for it need to be found. Method: (1) Elicit core control scenes through an in-house study. (2) Observe and analyze 20 users' natural behavior by type of hand-held device and control scene. We also constructed taxonomies for the gestures. Results: Users attempt more manipulative gestures than symbolic gestures when performing continuous control. Conclusion: The most natural way to control a smart TV remotely with gestures is to give users a mental model of grabbing and manipulating virtual objects in mid-air. Application: The results of this work may help in creating gesture interaction guidelines for smart TVs.

과학담화에서 과학자와 중학생의 제스처 비교 -분자운동과 물질의 상태변화를 중심으로- (The Difference of Gestures between Scientists and Middle School Students in Scientific Discourse: Focus on Molecular Movement and the Change in State of Material)

  • 김지현; 조해리; 조영환; 정대홍
    • 한국과학교육학회지 / Vol. 38, No. 2 / pp.273-291 / 2018
  • Gestures accompanying scientific discourse play an important role in constructing mental models and in model-based reasoning. From the perspective of embodied cognition, gestures serve both as evidence for inferring a student's underlying mental model and as an aid in changing the student's incomplete scientific thinking. To explore the role of gestures in the context of science education, this study examined the characteristics of gestures in scientific discourse and compared the gestures of scientists and middle school students. Ten scientists and ten middle school students participated. In one-on-one interviews, three interview tasks on "molecular movement and change of state" were presented, and the interviews were conducted as semi-structured clinical interviews. The gestures of both scientists and students were video-recorded, and four researchers iteratively compared and analyzed them based on grounded theory. The results showed that the gestures of scientists and students differed in four respects: the characteristics, use, content, and function of the gestures. Scientists used diverse and elaborate gestures more frequently and systematically. The students also used gestures in scientific discourse as tools for scientific thinking and communication, but their gestures lacked scientific grounding compared with the scientists' gestures and differed considerably in function. These results show that gestures can help strengthen scientific thinking and can serve as a means of probing internal cognitive activity. Students should be supported in using gestures as tools that aid the understanding of scientific concepts and reasoning about them.

Implementation of new gestures on the Multi-touch table

  • Park, Sang Bong; Kim, Beom jin
    • International Journal of Advanced Culture Technology / Vol. 1, No. 1 / pp.15-18 / 2013
  • This paper describes new gestures on a multi-touch table. The two new gestures, performed with three fingers, are used to minimize all open windows and to switch Aero mode. We also implement an FTIR (Frustrated Total Internal Reflection) multi-touch table consisting of a sheet of acrylic, infrared LEDs, a camera, and a rear projector. The operation of the proposed gestures is verified on the implemented multi-touch table.


표면근전도 신호를 활용한 CNN 기반 한국 지화숫자 인식을 위한 아래팔 근육과 전극 위치에 관한 연구 (Study on Forearm Muscles and Electrode Placements for CNN based Korean Finger Number Gesture Recognition using sEMG Signals)

  • 박종준; 권춘기
    • 한국산학기술학회논문지 / Vol. 19, No. 8 / pp.260-267 / 2018
  • Applications of surface electromyography (sEMG) signals were initially limited to simple on/off switching based on whether a muscle was active, but advances in sEMG signal processing and algorithms have extended them to wheelchair direction control and even sign language recognition. Sign language and finger spelling, important means of communication for people with hearing impairments, pose communication difficulties with those who have not learned them, and research on technologies for recognizing them has been carried out continuously to address this. Recently, methods that recognize sign language or finger spelling from the signals of the muscles activated during their performance have been applied, mainly to Chinese finger number gestures. However, just as spoken languages differ, Chinese and Korean finger number gestures differ, so muscles involved in performing Chinese finger number gestures may not be involved in performing Korean ones, and the recognition rate can drop markedly. The selection of the muscles activated during Korean finger number gestures is therefore critical to the recognition rate of sEMG-based Korean finger number gesture recognition, yet studies on this topic are rare in the literature. As an initial study on recognizing Korean sign language and finger spelling using sEMG signals, this study proposes the forearm muscles involved in performing Korean finger number gestures and verifies them experimentally, targeting the six Korean finger number gestures for zero (0) through five (5). By applying a CNN-based finger spelling recognition method using sEMG signals and confirming a 100% recognition rate for the six gestures, the validity of the proposed forearm muscles and electrode placements was verified.
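A common preprocessing step before feeding sEMG signals to a CNN is to segment the raw signal into fixed-length windows and compute per-channel amplitude features such as RMS. The sketch below illustrates only that generic step; the window length, step, and toy signal are assumptions, not the paper's actual configuration.

```python
import math

# Sketch of sEMG preprocessing often used ahead of CNN classification:
# slice the raw signal into fixed-length windows and compute the RMS
# amplitude of each window. Window length/step and the toy signal are
# illustrative assumptions, not the paper's settings.

def windows(signal, length, step):
    """Yield fixed-length windows over the signal, advancing by `step`."""
    for start in range(0, len(signal) - length + 1, step):
        yield signal[start:start + length]

def rms(window):
    """Root-mean-square amplitude of one window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

# Toy single-channel signal: quiet, a burst of muscle activity, quiet.
signal = [0.0] * 4 + [1.0, -1.0, 1.0, -1.0] + [0.0] * 4
features = [rms(w) for w in windows(signal, length=4, step=4)]
print(features)  # [0.0, 1.0, 0.0]
```

The resulting feature sequence localizes the activity burst, which is the kind of structure a downstream classifier can exploit.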

Investigating Smart TV Gesture Interaction Based on Gesture Types and Styles

  • Ahn, Junyoung; Kim, Kyungdoh
    • 대한인간공학회지 / Vol. 36, No. 2 / pp.109-121 / 2017
  • Objective: This study aims to find suitable gesture types and styles for remote-control gesture interaction on smart TVs. Background: Smart TVs are being developed rapidly worldwide, and gesture interaction has a wide range of research areas, especially those based on vision techniques. However, most studies focus on gesture recognition technology, and few previous studies have examined gesture types and styles on smart TVs. It is therefore necessary to check which gesture types and styles users prefer for each operation command. Method: We conducted an experiment to extract the target user manipulation commands required for smart TVs and to select the corresponding gestures. To do this, we observed the gesture styles people use for every operation command and checked whether there are gesture styles they prefer over others. Based on these results, the study followed a process of selecting smart TV operation commands and gestures. Results: Eighteen TV commands were used in this study. Using agreement level as the basis, we compared six types and five styles of gestures for each command. As for gesture type, participants generally preferred Path-Moving gestures; the Pan and Scroll commands showed the highest agreement level (1.00) of the 18 commands. As for gesture style, the participants preferred a manipulative style for 11 commands (Next, Previous, Volume up, Volume down, Play, Stop, Zoom in, Zoom out, Pan, Rotate, Scroll). Conclusion: Based on an analysis of user-preferred gestures, nine gesture commands are proposed for gesture control on smart TVs. Most participants preferred Path-Moving type and Manipulative style gestures grounded in the actual operations. Application: The results can be applied to more advanced forms of gestures in 3D environments, such as VR studies, and the method used in this study can be utilized in various domains.
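The "agreement level" used above to compare gestures per command is commonly computed with the agreement score of Wobbrock et al.'s gesture elicitation method: the sum, over groups of identical proposals, of the squared fraction of participants in each group. Whether this paper uses exactly that formula is an assumption; the sketch below shows the standard computation with invented proposals.

```python
from collections import Counter

# Agreement score for one command, in the style of Wobbrock et al.:
# A = sum over identical-proposal groups of (group size / total)^2.
# A score of 1.0 means every participant proposed the same gesture.
# The proposal strings below are invented for illustration.

def agreement(proposals):
    total = len(proposals)
    return sum((count / total) ** 2 for count in Counter(proposals).values())

# Unanimous proposals give the maximal agreement level (1.00), as
# reported above for the Pan and Scroll commands.
print(agreement(["grab-drag"] * 20))                   # 1.0
print(agreement(["swipe", "swipe", "point", "wave"]))  # 0.375
```

Splitting proposals across more groups drives the score toward 1/n, which is why commands with many competing gestures rank low in elicitation studies.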

Ability of children to perform touchscreen gestures and follow prompting techniques when using mobile apps

  • Yadav, Savita; Chakraborty, Pinaki; Kaul, Arshia; Pooja, Pooja; Gupta, Bhavya; Garg, Anchal
    • Clinical and Experimental Pediatrics / Vol. 63, No. 6 / pp.232-236 / 2020
  • Background: Children today get access to smartphones at an early age. However, their ability to use mobile apps has not yet been studied in detail. Purpose: This study aimed to assess the ability of children aged 2-8 years to perform touchscreen gestures and follow prompting techniques, i.e., ways apps provide instructions on how to use them. Methods: We developed one mobile app to test the ability of children to perform various touchscreen gestures and another mobile app to test their ability to follow various prompting techniques. We used these apps in this study of 90 children in a kindergarten and a primary school in New Delhi in July 2019. We noted the touchscreen gestures that the children could perform and the most sophisticated prompting technique that they could follow. Results: Two- and 3-year-old children could not follow any prompting technique and only a minority (27%) could tap the touchscreen at an intended place. Four- to 6-year-old children could perform simple gestures like a tap and slide (57%) and follow instructions provided through animation (63%). Seven- and 8-year-old children could perform more sophisticated gestures like dragging and dropping (30%) and follow instructions provided in audio and video formats (34%). We observed a significant difference between the number of touchscreen gestures that the children could perform and the number of prompting techniques that they could follow (F=544.0407, P<0.05). No significant difference was observed in the performance of female versus male children (P>0.05). Conclusion: Children gradually learn to use mobile apps beginning at 2 years of age. They become comfortable performing single-finger gestures and following nontextual prompting techniques by 8 years of age. We recommend that these results be considered in the development of mobile apps for children.
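The F statistic reported above (F=544.0407, P<0.05) comes from a one-way ANOVA comparing the counts of gestures performed and prompting techniques followed. The computation can be sketched as follows; the two groups of numbers are invented toy data, not the study's measurements.

```python
# One-way ANOVA F statistic, the test used for the gestures-vs-prompting
# comparison above. The two groups below are invented toy data.

def f_statistic(groups):
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    # Between-group sum of squares and its degrees of freedom.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    df_between = len(groups) - 1
    # Within-group sum of squares and its degrees of freedom.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_within = len(all_values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

gestures_performed = [1, 2, 3]   # hypothetical per-child counts
prompts_followed = [4, 5, 6]     # hypothetical per-child counts
print(f_statistic([gestures_performed, prompts_followed]))  # 13.5
```

A large F means the variation between the two measures dwarfs the variation within each, which is what the study's very large F value indicates.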

한국 영아의 초기 의사소통 : 몸짓의 발달 (The Development of Gesture in the Early Communication of Korean Infants)

  • 장유경; 최윤영; 김소연
    • 아동학회지 / Vol. 26, No. 1 / pp.155-167 / 2005
  • Korean infants' use of gesture was examined with 45 infants aged 10 to 17 months. The mothers were asked to check each word in the MacArthur Communicative Development Inventory-Korean (MCDI-K) vocabulary checklist if their infant had a gesture for that word, and to indicate what kind of early communicative behavior the infant showed in 5 different situations. The results show that the infants in this study had 11 gestures, many of which were learned within the context of routines or games. Referential gestures were rarely reported. There was no positive correlation between the number of gestures and the number of expressive words. However, more qualitative measures of early communicative behaviors showed a positive correlation between "frequent use of gestures" and "trying to communicate by verbal means".


A Notation Method for Three Dimensional Hand Gesture

  • Choi, Eun-Jung; Kim, Hee-Jin; Chung, Min-K.
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.541-550 / 2012
  • Objective: The aim of this study is to suggest a notation method for three-dimensional hand gestures. Background: To match intuitive gestures with product commands, various studies have tried to elicit gestures from users. In such cases, many different gestures are derived for a single command because of users' varied experience. Thus, organizing the gestures systematically and identifying similar patterns among them has become an important issue. Method: Related studies on gesture taxonomy and sign language notation were investigated. Results: Through the literature review, a total of five elements of static gestures were selected, and a total of three forms of dynamic gestures were identified. Temporal variability (repetition) was additionally selected. Conclusion: A notation method that follows a combination sequence of the gesture elements is suggested. Application: The notation method for three-dimensional hand gestures can be used to describe and organize user-defined gestures systematically.
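A notation that concatenates gesture elements in a fixed sequence, as the conclusion describes, can be sketched as below. The element names, their order, and the "/" separator are illustrative assumptions, not the paper's actual scheme; only the overall structure (five static elements, a dynamic form, and a repetition marker) follows the abstract.

```python
# Sketch of a sequence-based gesture notation: five static elements
# first, then the dynamic form, then repetition. Element names, order,
# and the "/" separator are assumptions, not the paper's actual scheme.

STATIC_ELEMENTS = ["hand_shape", "palm_direction", "finger_direction",
                   "position", "orientation"]

def notate(gesture):
    """Serialize a gesture dict into a fixed-order notation string."""
    parts = [str(gesture.get(e, "-")) for e in STATIC_ELEMENTS]
    parts.append(str(gesture.get("movement", "-")))      # dynamic form
    parts.append("rep" if gesture.get("repeated") else "once")
    return "/".join(parts)

# Hypothetical user-elicited gesture for a "next" command.
swipe_right = {"hand_shape": "flat", "palm_direction": "down",
               "position": "chest", "movement": "linear-right",
               "repeated": False}
print(notate(swipe_right))  # flat/down/-/chest/-/linear-right/once
```

A fixed serialization order makes user-defined gestures directly comparable as strings, which is the organizing benefit the abstract points to.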