• Title/Abstract/Keyword: Interaction Gesture


A Structure and Framework for Sign Language Interaction

  • Kim, Soyoung;Pan, Younghwan
    • 대한인간공학회지 / Vol.34 No.5 / pp.411-426 / 2015
  • Objective: The goal of this thesis is to design an interaction structure and framework for a system that recognizes sign language. Background: In sign language, meaningful individual gestures are combined to construct a sentence, so it is difficult for a system to interpret and recognize the meaning of a hand gesture within a sequence of continuous gestures. To interpret the meaning of each individual gesture correctly, an interaction structure and framework are needed that can segment the boundaries of individual gestures. Method: We analyzed 700 sign language words to structure sign language gesture interaction. First, we analyzed the transformational patterns of the hand gestures. Second, we analyzed the movement of those transformational patterns. Third, we analyzed the types of gestures other than hand gestures. Based on this, we designed a framework for sign language interaction. Results: We elicited 8 patterns of hand gesture based on whether the gesture changes from its starting point to its ending point. We then analyzed hand movement in terms of 3 elements: pattern of movement, direction, and whether the movement repeats. Moreover, we defined 11 movements of gestures other than hand gestures and classified 8 types of interaction. The framework for sign language interaction designed on this basis applies to more than 700 individual gestures of the sign language and can isolate an individual gesture even within a sequence of continuous gestures. Conclusion: For sign language interaction, this study structured gestures in 3 defined aspects: the transformational patterns of hand shape between starting point and ending point, hand movement, and gestures other than hand gestures. Based on this, we designed a framework that can recognize individual gestures and interpret their meaning more accurately when a meaningful individual gesture is input within a sequence of continuous gestures. Application: The interaction framework can be applied when developing a sign language recognition system. The structured gestures can be used to build sign language databases, to develop automatic recognition systems, and to study action gestures in other areas.

A Study on Developmental Direction of Interface Design for Gesture Recognition Technology

  • Lee, Dong-Min;Lee, Jeong-Ju
    • 대한인간공학회지 / Vol.31 No.4 / pp.499-505 / 2012
  • Objective: To study the transformation of interaction between mobile devices and users through an analysis of current trends in gesture interface technology. Background: For smooth interaction between machines and users, interface technology has evolved from the command line to the mouse, and now touch and gesture recognition are being researched and used. In the future, the technology is expected to evolve into multi-modal interfaces that fuse the visual and auditory senses, and into 3D multi-modal interfaces that use three-dimensional virtual worlds and brain waves. Method: Within the development of computer interfaces, which follows the evolution of mobile devices, the trends and development of actively researched gesture interfaces and related technologies are studied comprehensively. Based on how gesture information is gathered, the interfaces are separated into four categories: sensor-based, touch-based, vision-based, and multi-modal gesture interfaces. Each category is examined through technology trends and existing examples. Through this method, the transformation of the interaction between mobile devices and humans is studied. Conclusion: Gesture-based interface technology brings intelligent communication to the interaction between existing static machines and users. Thus, it is an important element technology that will make the interaction between humans and machines more dynamic. Application: The results of this study may help develop gesture interface designs currently in use.

Investigating Smart TV Gesture Interaction Based on Gesture Types and Styles

  • Ahn, Junyoung;Kim, Kyungdoh
    • 대한인간공학회지 / Vol.36 No.2 / pp.109-121 / 2017
  • Objective: This study aims to find suitable gesture types and styles for gesture interaction as a remote control for smart TVs. Background: Smart TVs are developing rapidly worldwide, and gesture interaction has a wide range of research areas, especially based on vision techniques. However, most studies focus on gesture recognition technology, and few previous studies have examined gesture types and styles for smart TVs. Therefore, it is necessary to check which gesture types and styles users prefer for each operation command. Method: We conducted an experiment to extract the user manipulation commands required for smart TVs and to select the corresponding gestures. To do this, we looked at the gesture styles people use for every operation command and checked whether there are gesture styles they prefer over others. Through these results, this study selected smart TV operation commands and gestures. Results: Eighteen TV commands were used in this study. Using the agreement level as a basis, we compared six types and five styles of gestures for each command. As for gesture type, participants generally preferred Path-Moving gestures. The Pan and Scroll commands showed the highest agreement level (1.00) among the 18 commands. As for gesture style, participants preferred a manipulative style for 11 commands (Next, Previous, Volume up, Volume down, Play, Stop, Zoom in, Zoom out, Pan, Rotate, Scroll). Conclusion: By analyzing user-preferred gestures, nine gesture commands are proposed for gesture control on smart TVs. Most participants preferred Path-Moving type and Manipulative style gestures based on the actual operations. Application: The results can be applied to more advanced forms of gestures in 3D environments, such as VR studies. The method used in this study can be utilized in various domains.
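The abstract cites an agreement level without giving its formula; gesture-elicitation studies commonly use Wobbrock et al.'s agreement score, and the sketch below, written under that assumption, shows how a perfect score of 1.00 (as reported for Pan and Scroll) arises when every participant proposes the same gesture:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one command: the sum, over groups of
    identical gesture proposals, of (group size / total proposals)^2.
    Equals 1.0 only when all participants propose the same gesture."""
    total = len(proposals)
    counts = Counter(proposals)
    return sum((n / total) ** 2 for n in counts.values())

# All 8 hypothetical participants agree -> score 1.0
print(agreement_score(["swipe"] * 8))                     # 1.0
# Split proposals lower the score
print(agreement_score(["swipe", "swipe", "point", "grab"]))
```

The gesture names here are illustrative, not the study's actual proposal set.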

Design of Contactless Gesture-based Rhythm Action Game Interface for Smart Mobile Devices

  • Ju, Da-Young
    • 대한인간공학회지 / Vol.31 No.4 / pp.585-591 / 2012
  • Objective: The aim of this study is to propose a contactless gesture-based interface for smart mobile devices, especially for rhythm action games. Background: Most existing interactions in smart mobile games rely on tapping the touch screen. However, that approach is undesirable for some users and in some situations, for example for users with disabilities, or when touching or tapping the device is inconvenient. More importantly, a new interaction can open new possibilities for a stagnant game genre. Method: This paper presents a smart mobile game with contactless gesture-based interaction and interfaces using computer vision technology. As preliminary studies, gestures that are easy to recognize were identified, and an interaction system suited to games on smart mobile devices was investigated. A combination of augmented reality techniques and contactless gesture interaction was also tried. Results: The rhythm game allows a user to interact with smart mobile devices using hand gestures, without touching or tapping the screen. Moreover, users found the game as enjoyable as other games. Conclusion: Evaluation results show that users make few errors and that the game can recognize gestures with high precision in real time. Therefore, contactless gesture-based interaction has potential for smart mobile games. Application: The results were applied to a commercial game application.

Interaction Analysis Between Visitors and Gesture-based Exhibits in Science Centers from Embodied Cognition Perspectives

  • 소효정;이지향;오승재
    • 한국과학예술포럼 / Vol.25 / pp.227-240 / 2016
  • From the perspective of embodied cognition, this study analyzed the patterns in which science center visitors interact with various types of gesture-based interactive exhibits, in order to examine the effectiveness of such exhibits and draw implications for future exhibit design. We measured the interaction patterns between visitors and four gesture-based exhibits installed in a science center. In addition, semi-structured interviews were conducted with a total of 14 visitor groups to understand how visitors perceive gesture-based exhibits. Finally, interviews with four experts were conducted to analyze the practice, advantages, and disadvantages of gesture-based exhibit design. The results showed that, on average, visitors' total viewing time with the gesture-based exhibits was short and that few visitors were deeply engaged. Both experts and visitors perceived that current gesture-based exhibits in science centers tend to focus on novelty, and no clear awareness of a link between the hands-on experience and learning was found. Based on these findings, this study suggests the following considerations for designing gesture-based exhibits. First, to induce visitors' initial participation, the usability and purpose of the exhibit should be considered from the early design stage. Second, sustaining initial engagement is important for meaningful interaction, and gesture-based exhibits should evolve from simple interactions for amusement into forms that stimulate intellectual curiosity. Third, from an embodied cognition perspective, the metaphor carried by a specific gesture and its relevance to learning should be considered in the design. Finally, this study suggests that future gesture-based exhibits should encourage interaction among visitors through conversation and inquiry, and should pursue adaptive design.

Gesture based Input Device: An All Inertial Approach

  • Chang Wook;Bang Won-Chul;Choi Eun-Seok;Yang Jing;Cho Sung-Jung;Cho Joon-Kee;Oh Jong-Koo;Kim Dong-Yoon
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol.5 No.3 / pp.230-245 / 2005
  • In this paper, we develop a gesture-based input device equipped with accelerometers and gyroscopes. The sensors take inertial measurements, i.e., the accelerations and angular velocities produced by the movement of the system while a user inputs gestures on a plane surface or in 3D space. The gyroscope measurements are integrated to give the orientation of the device and are consequently used to compensate the accelerations. The compensated accelerations are doubly integrated to yield the position of the device. With this approach, a user's gesture input trajectories can be recovered without any external sensors. Three versions of the motion tracking algorithm are provided to cope with a wide spectrum of applications. A Bayesian network based recognition system then processes the recovered trajectories to identify the gesture class. Experimental results convincingly show the feasibility and effectiveness of the proposed gesture input device. To demonstrate practical use of the proposed input method, we implemented a prototype system, a gesture-based remote controller (Magic Wand).
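The integrate-compensate-integrate pipeline the abstract describes can be sketched for the planar case; this is a minimal illustration, not the paper's algorithm, and it omits gravity compensation and sensor-bias handling that a real implementation needs:

```python
import numpy as np

def reconstruct_trajectory(acc_body, gyro_z, dt):
    """Planar strapdown sketch: integrate the gyroscope once to get
    heading, rotate body-frame accelerations into the world frame
    (the 'compensation' step), then doubly integrate to get position."""
    heading = np.cumsum(gyro_z) * dt                 # orientation from gyro
    c, s = np.cos(heading), np.sin(heading)
    acc_world = np.stack([c * acc_body[:, 0] - s * acc_body[:, 1],
                          s * acc_body[:, 0] + c * acc_body[:, 1]], axis=1)
    vel = np.cumsum(acc_world, axis=0) * dt          # first integration
    pos = np.cumsum(vel, axis=0) * dt                # second integration
    return pos

# 1 s of constant 1 m/s^2 acceleration along x, no rotation:
# the recovered endpoint approaches 0.5 * a * t^2 = 0.5 m.
n, dt = 100, 0.01
acc = np.zeros((n, 2)); acc[:, 0] = 1.0
print(reconstruct_trajectory(acc, np.zeros(n), dt)[-1])
```

Because each rectangle-rule integration accumulates error, real devices pair this with drift correction, which is why the paper provides several tracking-algorithm variants.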

The Impact of Gesture and Facial Expression on Learning Comprehension and Persona Effect of Pedagogical Agent

  • 류지헌;유지희
    • 감성과학 / Vol.16 No.3 / pp.281-292 / 2013
  • The purpose of this study was to examine the effect of a pedagogical agent's nonverbal communication on the persona effect. An experiment was conducted with 56 undergraduate students; nonverbal communication was implemented through gesture (deictic gesture vs. conversational gesture) and facial expression (present vs. absent). The gestures applied to the pedagogical agent were deictic and conversational gestures. The deictic gesture is grounded in the attention-guiding hypothesis, which assumes that the agent's gesture serves as a visual cue. The conversational gesture follows the sociality hypothesis and is intended to facilitate the agent's social interaction. Facial expression was regarded as a design principle mainly supporting the sociality hypothesis. In measuring the persona effect, the interaction effect on learning engagement was significant: within the conversational gesture condition, the persona effect on learning engagement differed significantly depending on the presence of facial expression. Applying conversational gestures together with facial expressions facilitated learning engagement. This study offers two implications. First, facial expression plays an important role in learning engagement. Second, gestures and facial expressions should be applied together.


Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / Vol.17 No.2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, avoiding vanishing or exploding gradients as the network deepens; consequently, the difficulty of model optimisation is reduced. Additional convolutional layers are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier is used to complete the gesture recognition. Compared with densely connected networks that multiplex feature information, the proposed algorithm optimises feature reuse to avoid performance fluctuations caused by feature redundancy. Experimental results on the ISOGD gesture dataset and the Gesture dataset show that the proposed algorithm affords fast convergence and high accuracy.
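The key idea the abstract relies on, a skip connection that lets gradients bypass the learned transform, can be shown in a few lines. This is a generic dense-layer sketch of a residual block, not the paper's convolutional architecture, and all array shapes are illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Shallow residual block: output = ReLU(F(x) + x), where the
    identity skip path lets the block default to a near-identity map."""
    out = relu(x @ w1)      # first learned transform
    out = out @ w2          # second learned transform
    return relu(out + x)    # skip connection added before activation

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = rng.standard_normal((8, 8)) * 0.01   # near-zero weights
w2 = rng.standard_normal((8, 8)) * 0.01
y = residual_block(x, w1, w2)
# With near-zero weights the block approximates ReLU(x): deep stacks of
# such blocks start close to the identity, which eases optimisation.
```

The same structure applies when the transforms are the paper's small-kernel convolutions instead of matrix multiplications.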

Exploring the Effects of Gesture Interaction on Co-presence of a Virtual Human in a Hologram-like System

  • Kim, Daewhan;Jo, Dongsik
    • 한국정보통신학회논문지 / Vol.24 No.10 / pp.1390-1393 / 2020
  • Recently, hologram-like systems with virtual humans that provide realistic experiences have been deployed in various settings such as musical performances and museum exhibitions. For a realistic response, the virtual human in a hologram-like system needs to behave in a way that matches the user's interaction. In this paper, to improve the feeling of being in the same space as a virtual human in a hologram-like system, interactive content driven by the user's gestures was presented, and the effectiveness of the interaction was evaluated. We found that the gesture-based interaction provided a higher sense of co-presence and immersion with the virtual human.

A Memory-efficient Hand Segmentation Architecture for Hand Gesture Recognition in Low-power Mobile Devices

  • Choi, Sungpill;Park, Seongwook;Yoo, Hoi-Jun
    • JSTS:Journal of Semiconductor Technology and Science / Vol.17 No.3 / pp.473-482 / 2017
  • Hand gesture recognition is regarded as a new Human Computer Interaction (HCI) technology for the next generation of mobile devices. Previous hand gesture implementations require large memory and computation power for hand segmentation, which prevents real-time interaction with mobile devices. Therefore, in this paper, we present a low-latency, memory-efficient hand segmentation architecture for natural hand gesture recognition. To obtain both high memory efficiency and low latency, we propose a streaming hand contour tracing unit and a fast contour filling unit. As a result, the architecture achieves 7.14 ms latency with only 34.8 KB of on-chip memory, which is 1.65 times less latency and 1.68 times less on-chip memory, respectively, than the best-in-class design.