• Title/Summary/Keyword: Gesture application


The Natural Way of Gestures for Interacting with Smart TV

  • Choi, Jin-Hae;Hong, Ji-Young
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.567-575
    • /
    • 2012
  • Objective: The aim of this study is to derive an optimal mental model by investigating users' natural behavior when controlling a smart TV with mid-air gestures, and to identify which factor most strongly shapes controlling behavior. Background: Many TV companies are trying to find a simple control method for increasingly complex smart TVs. Although plenty of gesture studies have proposed possible alternatives to resolve this pain point, no gesture work is yet fitted to the smart TV market, so optimal gestures for it still need to be found. Method: (1) Elicit core control scenes through an in-house study. (2) Observe and analyze 20 users' natural behavior across types of hand-held devices and control scenes. We also built taxonomies for the gestures. Results: Users attempted more manipulative gestures than symbolic gestures when performing continuous control. Conclusion: The most natural way to control a smart TV remotely with gestures is to give the user a mental model of grabbing and manipulating virtual objects in mid-air. Application: The results of this work may help establish gesture interaction guidelines for smart TV.

A Framework for Designing Closed-loop Hand Gesture Interface Incorporating Compatibility between Human and Monocular Device

  • Lee, Hyun-Soo;Kim, Sang-Ho
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.533-540
    • /
    • 2012
  • Objective: This paper targets a framework for hand gesture-based interface design. Background: While modeling of contact-based interfaces has focused on ergonomic interface design and real-time technologies, implementing a contactless interface requires error-free classification as an essential precondition. These trends have led many studies to concentrate on the design of feature vectors and learning models and on their tests. Despite remarkable advances in this field, ignoring ergonomics and users' cognition results in several problems, including uneasy user behaviors. Method: To incorporate compatibility, considering users' comfortable behaviors and the device's classification abilities simultaneously, classification-oriented gestures are extracted using the suggested human-hand model and closed-loop classification procedures. From the extracted gestures, compatibility-oriented gestures are acquired through ergonomic and cognitive experiments. The obtained hand gestures are then converted into a series of hand behaviors - Handycon - which is mapped onto several functions in a mobile device. Results: The Handycon model guarantees easy user behavior and supports fast understanding as well as a high classification rate. Conclusion and Application: The suggested framework contributes to developing a hand gesture-based contactless interface model that considers compatibility between human and device. The suggested procedures can be applied effectively to other contactless interface designs.

Study on Gesture and Voice-based Interaction in Perspective of a Presentation Support Tool

  • Ha, Sang-Ho;Park, So-Young;Hong, Hye-Soo;Kim, Nam-Hun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.593-599
    • /
    • 2012
  • Objective: This study aims to implement a non-contact gesture-based interface for presentations and to analyze its effect as an information-transfer support device. Background: Recently, research on control devices using gesture or speech recognition has grown with rapid technological progress in the UI/UX area and the appearance of smart service products that require new human-machine interfaces. However, relatively few quantitative studies on the practical effects of the new interface type have been done, while work on system implementation is very popular. Method: The system presented in this study is implemented with the KINECT® sensor offered by Microsoft Corporation. To investigate whether the proposed system is effective as a presentation support tool, we conducted experiments by giving several lectures to 40 participants in both a traditional lecture room (keyboard-based presentation control) and a non-contact gesture-based lecture room (KINECT-based presentation control), evaluating their interest and immersion with respect to the lecture contents and lecturing methods, and analyzing their understanding of the lecture contents. Result: Using ANOVA, we examined whether the gesture-based presentation system can play an effective role as a presentation support tool depending on the difficulty of the contents. Conclusion: A non-contact gesture-based interface is a meaningful supportive device when delivering easy and simple information; however, the effect can vary with the contents and the difficulty of the information provided. Application: The results presented in this paper might help in designing new human-machine (computer) interfaces for communication support tools.

A Unit Touch Gesture Model of Performance Time Prediction for Mobile Devices

  • Kim, Damee;Myung, Rohae
    • Journal of the Ergonomics Society of Korea
    • /
    • v.35 no.4
    • /
    • pp.277-291
    • /
    • 2016
  • Objective: The aim of this study is to propose a unit touch gesture model for predicting performance time on mobile devices. Background: When estimating usability through Model-based Evaluation (MBE) of interfaces, the GOMS model measures 'operators' to predict execution time in the desktop environment. This study applies the GOMS operator concept to touch gestures: since touch gestures are composed of unit touch gestures, these units can be used to predict performance time on mobile devices. Method: To extract unit touch gestures, subjects' manual movements were recorded at 120 fps with pixel coordinates. Touch gestures were classified by 'out of range', 'registration', 'continuation' and 'termination' of the gesture. Results: Six unit touch gestures were extracted: Hold down (H), Release (R), Slip (S), Curved-stroke (Cs), Path-stroke (Ps) and Out of range (Or). The movement time predicted by the unit touch gesture model is not significantly different from the participants' execution time, and the six unit touch gestures can predict the movement time of undefined touch gestures, such as user-defined gestures. Conclusion: Touch gestures can be subdivided into six unit touch gestures, which explain almost all current touch gestures, including user-defined ones; the model therefore has high predictive power and can be used to predict the performance time of touch gestures. Application: The unit touch gestures can simply be added up to predict the performance time of a new gesture without measuring it.
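The additive prediction idea above works like the Keystroke-Level Model: decompose a gesture into unit operators and sum their times. A minimal sketch, assuming hypothetical per-operator times (the paper's measured values are not given here):

```python
# KLM/GOMS-style additive prediction of touch-gesture execution time.
# The per-operator times below are hypothetical placeholders, not the
# paper's measured values.
UNIT_TIMES = {
    "H": 0.10,   # Hold down
    "R": 0.08,   # Release
    "S": 0.20,   # Slip
    "Cs": 0.35,  # Curved-stroke
    "Ps": 0.50,  # Path-stroke
    "Or": 0.15,  # Out of range
}

def predict_time(sequence):
    """Predict execution time by summing the unit-gesture times."""
    return sum(UNIT_TIMES[u] for u in sequence)

# A drag-like gesture decomposed as hold down, slip, release:
print(round(predict_time(["H", "S", "R"]), 2))  # 0.38
```

A new gesture's time is predicted without measuring it, exactly as the Application sentence describes: decompose it into units and add the times up.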

Towards Establishing a Touchless Gesture Dictionary based on User Participatory Design

  • Song, Hae-Won;Kim, Huhn
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.515-523
    • /
    • 2012
  • Objective: The aim of this study is to investigate users' intuitive stereotypes of non-touch gestures and to establish a gesture dictionary that can be applied to gesture-based interaction designs. Background: Recently, interaction based on non-touch gestures has been emerging as an alternative for natural interaction between humans and systems. However, for non-touch gestures to become a universal interaction method, studies of which gestures are intuitive and effective are a prerequisite. Method: In this study, four devices (TV, audio, computer, car navigation) and sixteen basic operations (power on/off, previous/next page, volume up/down, list up/down, zoom in/out, play, cancel, delete, search, mute, save) were drawn from a focus group interview and a survey as applicable domains for non-touch gestures. A user participatory design was then performed: participants were asked to design three gestures suitable for each operation on each device, and they evaluated the intuitiveness, memorability, convenience, and satisfaction of their derived gestures. Through the participatory design, agreement scores, frequencies, and planning times of each distinguished gesture were measured. Results: The derived gestures did not differ across the four devices; however, diverse but common gestures were derived across kinds of operations. In particular, manipulative gestures were suitable for all kinds of operations, whereas semantic or descriptive gestures were proper for one-shot operations such as power on/off, play, cancel, or search. Conclusion: The touchless gesture dictionary was established by mapping intuitive and valuable gestures onto each operation. Application: The dictionary can be applied to interaction designs based on non-touch gestures. Moreover, it will serve as a basic reference for standardizing non-touch gestures.
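The agreement scores measured in such elicitation studies are commonly computed (e.g., following Wobbrock et al.'s agreement measure; the paper's exact formula is not stated here) by grouping identical gesture proposals for one operation and summing the squared proportion of each group:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement for one operation: sum over groups of identical
    gesture proposals of (group size / total proposals) squared."""
    total = len(proposals)
    return sum((n / total) ** 2 for n in Counter(proposals).values())

# Hypothetical example: of 4 participants asked for "next page",
# three propose a swipe-left and one proposes pointing.
print(agreement_score(["swipe-left", "swipe-left", "swipe-left", "point"]))
# 0.625
```

A score of 1.0 means every participant proposed the same gesture; scores near 1/n mean no consensus, which is how an operation's gestures are ranked for inclusion in the dictionary.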

Automatic Generation of Script-Based Robot Gesture and its Application to Steward Robot (스크립트 기반의 로봇 제스처 자동생성 방법 및 집사로봇에의 적용)

  • Kim, Heon-Hui;Lee, Hyong-Euk;Kim, Yong-Hwi;Park, Kwang-Hyun;Bien, Zeung-Nam
    • 한국HCI학회:학술대회논문집
    • /
    • 2007.02a
    • /
    • pp.688-693
    • /
    • 2007
  • This paper addresses automatic generation of robot gestures for effective human-robot interaction. The technique automatically generates specific gesture patterns corresponding to meaningful words from text-only input; as a preliminary investigation, words had to be collected at the utterance points where gestures appear. For this analysis, we propose a gesture model that can effectively represent two or more consecutive gesture patterns. We also propose a method for automatically generating robot gestures using a gesture database built with the proposed model together with a scripting technique. The gesture generation system consists of a rule-based gesture selection part and a script-based motion planning part, and its effectiveness is verified through simulation of a steward robot's guidance function.
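The rule-based gesture selection part can be pictured as a lookup from meaningful words in the input text to stored gesture patterns. A minimal sketch, with an entirely hypothetical word-to-gesture table (the paper's gesture DB is not reproduced here):

```python
# Rule-based gesture selection from a text script: scan the words and
# attach a gesture pattern to each word that has a rule in the DB.
# The table below is hypothetical, for illustration only.
GESTURE_DB = {
    "hello": "wave",
    "this": "point",
    "big": "spread-arms",
}

def plan_gestures(script):
    """Return (word, gesture) pairs for words that have a gesture rule."""
    plan = []
    for raw in script.lower().split():
        word = raw.strip(",.!?")          # ignore trailing punctuation
        if word in GESTURE_DB:
            plan.append((word, GESTURE_DB[word]))
    return plan

print(plan_gestures("Hello, I will show this big room"))
# [('hello', 'wave'), ('this', 'point'), ('big', 'spread-arms')]
```

The motion planning part would then sequence these selected patterns, which is where the paper's model for representing two or more consecutive gestures comes in.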

Gesture Recognition using Global and Partial Feature Information (전역 및 부분 특징 정보를 이용한 제스처 인식)

  • Lee, Yong-Jae;Lee, Chil-Woo
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.8
    • /
    • pp.759-768
    • /
    • 2005
  • This paper describes an algorithm that recognizes gestures by constructing subspace gesture symbols from hybrid feature information. Previous popular methods based on geometric features and appearance produce ambiguous output when distinguishing similar gestures, because they use only the position information of the hands and feet or body-shape features. The proposed method, however, can classify not only motion but also similar gestures, using partial feature information indicating which parts of the body move together with global feature information covering two-dimensional body motion. This simple and robust recognition algorithm can be applied in various applications such as surveillance systems and intelligent interface systems.

Implementation of a DI Multi-Touch Display Using an Improved Touch-Points Detection and Gesture Recognition (개선된 터치점 검출과 제스쳐 인식에 의한 DI 멀티터치 디스플레이 구현)

  • Lee, Woo-Beom
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.11 no.1
    • /
    • pp.13-18
    • /
    • 2010
  • Most research in the multi-touch area is based on FTIR (Frustrated Total Internal Reflection) and simply reimplements the previous approach; moreover, there are no software solutions that improve performance in multi-touch blob detection or user gesture recognition. Therefore, we implement a multi-touch table-top display based on DI (Diffused Illumination) with improved touch-point detection and user gesture recognition. The proposed method supports simultaneous multi-touch transformation commands for objects in the running application, and system latency is reduced by the proposed pre-testing method in multi-touch blob detection. The implemented device is driven by a Flash AS3 application in the TUIO (Tangible User Interface Object) environment, which is based on the OSC (Open Sound Control) protocol. As a result, our system shows a 37% reduction in system latency and succeeds in multi-touch gesture recognition.

Study on EMI Elimination and PLN Application in ELF Band for Remote Sensing with Electric Potentiometer (전위계차 센서를 이용한 원격센싱을 위한 ELF 대역 EMI 제거 및 PLN 응용 연구)

  • Jang, Jin Soo;Kim, Young Chul
    • Smart Media Journal
    • /
    • v.4 no.1
    • /
    • pp.33-38
    • /
    • 2015
  • In this paper, we propose methods both to eliminate ELF (Extremely Low Frequency) EMI (Electro-Magnetic Interference) noise, extending the recognition distance, and to utilize the PLN to detect the starting instant of a hand gesture with an electric potential sensor. First, we measure the strength of the electric field generated by smart devices such as TVs and phones and minimize EMI through efficient arrangement of the sensors. We then utilize the 60Hz PLN to extract the starting point of a hand gesture, eliminate the PLN generated in the smart device and the sensor circuitry, and shield the sensors from electric noise generated by the devices. Finally, by analyzing the frequency components produced by the target's gesture, we apply a low-pass filter and a Kalman filter to eliminate the remaining electric noise. We analyze and evaluate the proposed ELF-band EMI elimination method for non-contact remote sensing with the EPS (Electric Potential Sensor). Combined with the gesture starting-point detection technique, the recognition distance for gestures is shown to extend beyond 3m, which is critical for real applications.
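The final filtering stage described above, a low-pass filter followed by a Kalman filter on a one-dimensional sensor signal, can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the coefficients (alpha, q, r) and the sample values are hypothetical:

```python
# Cascade of a first-order low-pass filter and a 1-D Kalman filter
# for residual noise on a scalar sensor signal.

def low_pass(samples, alpha=0.2):
    """Exponential smoothing: y += alpha * (x - y)."""
    out, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

def kalman_1d(samples, q=1e-4, r=1e-2):
    """Scalar Kalman filter: constant-state model, process noise q,
    measurement noise r."""
    x, p = samples[0], 1.0
    out = []
    for z in samples:
        p += q                  # predict: state unchanged, variance grows
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with measurement z
        p *= (1 - k)
        out.append(x)
    return out

noisy = [0.0, 1.0, 0.9, 1.1, 1.0, 0.95]  # hypothetical EPS samples
print(kalman_1d(low_pass(noisy))[-1])
```

In practice the low-pass stage would be tuned to the gesture band identified in the frequency analysis, with the Kalman stage cleaning up what remains.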

Remote Control System using Face and Gesture Recognition based on Deep Learning (딥러닝 기반의 얼굴과 제스처 인식을 활용한 원격 제어)

  • Hwang, Kitae;Lee, Jae-Moon;Jung, Inhwan
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.6
    • /
    • pp.115-121
    • /
    • 2020
  • With the spread of IoT technology, various IoT applications using facial recognition are emerging. This paper describes the design and implementation of a remote control system using deep learning-based face recognition and hand gesture recognition. In general, an application system using face recognition consists of a part that captures images in real time from a camera, a part that recognizes faces in the images, and a part that uses the recognition results. A Raspberry Pi, a single-board computer that can be mounted anywhere, was used to capture images in real time; face recognition software was developed for the server computer using TensorFlow's FaceNet model, along with hand gesture recognition software using OpenCV. We classified users into three groups - Known, Danger, and Unknown - and designed and implemented an application that opens an automatic door lock only for Known users who pass both face recognition and hand gesture checks.
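The gating logic described in the last sentence can be sketched as a nearest-neighbor decision over face embeddings. This is an illustrative sketch only: the embeddings, names, and distance threshold below are hypothetical, and real FaceNet embeddings are 128-dimensional rather than 2-dimensional:

```python
# Classify a face embedding as Known / Danger / Unknown by distance to
# enrolled galleries, and open the lock only for Known users who also
# pass the hand-gesture check. All values here are hypothetical.
import math

KNOWN = {"alice": [0.1, 0.2], "bob": [0.8, 0.1]}
DANGER = {"mallory": [0.5, 0.9]}
THRESHOLD = 0.3  # hypothetical embedding-distance cutoff

def nearest(embedding, gallery):
    """Distance and name of the closest enrolled embedding."""
    return min((math.dist(embedding, v), name) for name, v in gallery.items())

def classify(embedding):
    d_known, _ = nearest(embedding, KNOWN)
    d_danger, _ = nearest(embedding, DANGER)
    if d_known <= THRESHOLD and d_known <= d_danger:
        return "Known"
    if d_danger <= THRESHOLD:
        return "Danger"
    return "Unknown"

def open_door(embedding, gesture_ok):
    """Unlock only when the face is Known AND the gesture check passed."""
    return classify(embedding) == "Known" and gesture_ok

print(open_door([0.12, 0.22], gesture_ok=True))   # True  (near alice)
print(open_door([0.5, 0.88], gesture_ok=True))    # False (Danger user)
```

Requiring both factors (face and gesture) before unlocking is the design choice the paper makes: a recognized face alone is not sufficient.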