• Title/Abstract/Keywords: user gestures

156 search results

터치스크린 기반 웹브라우저 조작을 위한 손가락 제스처 개발 (Development of Finger Gestures for Touchscreen-based Web Browser Operation)

  • 남종용; 최재호; 정의승
    • 대한인간공학회지 / Vol. 27, No. 4 / pp. 109-117 / 2008
  • Compared to the existing PC, which uses a mouse and a keyboard, the touchscreen-based portable PC lets the user operate with fingers, requiring new operation methods. However, current touchscreen-based web browser operations in many cases merely have the fingers move and click like a mouse, or do not correspond well to the user's sensibility and the structure of the index finger, making them difficult to use while walking. Therefore, the goal of this study is to develop finger gestures that facilitate the interaction between the interface and the user and make operation easier. First, the top eight functions were extracted based on frequency of use in the web browser and preference. Then, the users' structural knowledge was visualized through sketch maps, and finger gestures applicable to touchscreens were derived through the Meaning in Mediated Action method. For the forward/back page and up/down scroll functions, directional gestures were derived, and for the window close, refresh, home, and print functions, letter-type and icon-type gestures were drawn. A validation experiment compared the existing operation methods and the proposed one in terms of execution time, error rate, and preference; the directional and letter-type gestures outperformed the existing methods. These results suggest that the new gestures can make operation easier and faster not only for touchscreen-based web browsers on portable PCs but also for telematics-related functions in automobiles, PDAs, and so on.

A Unit Touch Gesture Model of Performance Time Prediction for Mobile Devices

  • Kim, Damee; Myung, Rohae
    • 대한인간공학회지 / Vol. 35, No. 4 / pp. 277-291 / 2016
  • Objective: The aim of this study is to propose a unit touch gesture model that predicts performance time on mobile devices. Background: When estimating usability through Model-based Evaluation (MBE) of interfaces, the GOMS model measured 'operators' to predict execution time in the desktop environment. This study therefore applies the GOMS operator concept to touch gestures: since touch gestures are composed of unit touch gestures, those unit gestures can be used to predict performance time on mobile devices. Method: In order to extract unit touch gestures, subjects' manual movements were recorded at 120 fps with pixel coordinates. Touch gestures were classified by the 'out of range', 'registration', 'continuation', and 'termination' phases of a gesture. Results: As a result, six unit touch gestures were extracted: Hold down (H), Release (R), Slip (S), Curved-stroke (Cs), Path-stroke (Ps), and Out of range (Or). The movement time predicted by the unit touch gesture model is not significantly different from the participants' execution time. The six measured unit touch gestures can predict the movement time of undefined touch gestures, such as user-defined gestures. Conclusion: Touch gestures can be subdivided into six unit touch gestures, which explain almost all current touch gestures, including user-defined ones; the model therefore has high predictive power and can be used to predict the performance time of touch gestures. Application: The unit touch gestures can simply be added up to predict the performance time of a new gesture without measuring it.
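The Application note, that unit gesture times can simply be added up, translates directly into code. Below is a minimal Python sketch of the additive model; the six unit names come from the abstract, but the per-unit durations are hypothetical placeholders, not values reported in the paper:

```python
# Hypothetical per-unit times (seconds); the paper's measured values differ.
UNIT_TIMES_SEC = {
    "H": 0.10,   # Hold down
    "R": 0.08,   # Release
    "S": 0.15,   # Slip
    "Cs": 0.30,  # Curved-stroke
    "Ps": 0.40,  # Path-stroke
    "Or": 0.20,  # Out of range
}

def predict_time(units: list[str]) -> float:
    """Predict gesture performance time by summing its unit gesture times."""
    return sum(UNIT_TIMES_SEC[u] for u in units)

# Example: a drag-like gesture decomposed as Hold down -> Slip -> Release.
print(predict_time(["H", "S", "R"]))  # 0.33 under these placeholder times
```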

Towards Establishing a Touchless Gesture Dictionary based on User Participatory Design

  • Song, Hae-Won; Kim, Huhn
    • 대한인간공학회지 / Vol. 31, No. 4 / pp. 515-523 / 2012
  • Objective: The aim of this study is to investigate users' intuitive stereotypes of touchless gestures and to establish a gesture dictionary that can be applied to gesture-based interaction designs. Background: Recently, interaction based on touchless gestures has been emerging as an alternative for natural interaction between humans and systems. However, for touchless gestures to become a universal interaction method, studies on which kinds of gestures are intuitive and effective are a prerequisite. Method: In this study, four devices (i.e., TV, audio, computer, car navigation) and sixteen basic operations (i.e., power on/off, previous/next page, volume up/down, list up/down, zoom in/out, play, cancel, delete, search, mute, save) were drawn from a focus group interview and a survey as applicable domains for touchless gestures. Then, a user participatory design was performed: participants were asked to design three gestures suitable for each operation on each device, and they evaluated the intuitiveness, memorability, convenience, and satisfaction of the gestures they derived. Through the participatory design, agreement scores, frequencies, and planning times of each distinguished gesture were measured. Results: The derived gestures did not differ across the four devices, but diverse yet common gestures emerged across the kinds of operations. In particular, manipulative gestures were suitable for all kinds of operations, whereas semantic or descriptive gestures were appropriate for one-shot operations such as power on/off, play, cancel, or search. Conclusion: The touchless gesture dictionary was established by mapping intuitive and valuable gestures onto each operation. Application: The dictionary can be applied to interaction designs based on touchless gestures and can serve as a basic reference for standardizing them.
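The agreement scores mentioned in the Method are typically computed with the formula popularized by Wobbrock et al. for gesture elicitation: for each operation, group identical proposals and sum the squared group proportions. A minimal sketch under that assumption (the paper may use a variant), with hypothetical proposal data:

```python
from collections import Counter

def agreement_score(proposals: list[str]) -> float:
    """Agreement for one operation: sum over gesture groups of
    (group size / total proposals) squared."""
    total = len(proposals)
    return sum((n / total) ** 2 for n in Counter(proposals).values())

# Hypothetical elicitation data for a "volume up" operation.
proposals = ["raise palm"] * 6 + ["thumb up"] * 3 + ["draw circle"]
print(round(agreement_score(proposals), 2))  # 0.6^2 + 0.3^2 + 0.1^2 = 0.46
```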

소형 원통형 디스플레이를 위한 사용자 정의 핸드 제스처 (User-Defined Hand Gestures for Small Cylindrical Displays)

  • 김효영; 김희선; 이동언; 박지형
    • 한국콘텐츠학회논문지 / Vol. 17, No. 3 / pp. 74-87 / 2017
  • This study aims to derive user-defined hand gestures for a small cylindrical display based on flexible display technology, a form factor that has not yet appeared as a commercial product. To this end, we first defined the size and functions of the small cylindrical display and derived the tasks needed to perform those functions. We then implemented a virtual cylindrical display interface and a physical object for manipulating it, creating an environment similar to operating a real cylindrical display; when a gesture-elicitation task was performed, its result was presented on the virtual cylindrical display so that users could define the gestures they judged appropriate for that operation. From the gesture groups derived for each task, representative gestures were selected based on frequency, and an agreement score was computed for each gesture. Finally, based on gesture analysis and user interviews, we observed the mental models users employed when deriving gestures.
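The frequency-based selection of representative gestures described above reduces to picking the modal proposal per task. A minimal sketch with hypothetical proposal data (the task name and proposals are illustrative, not from the paper):

```python
from collections import Counter

def representative_gesture(proposals: list[str]) -> tuple[str, int]:
    """Return the most frequently proposed gesture and its count."""
    return Counter(proposals).most_common(1)[0]

# Hypothetical proposals for a "rotate content" task on a cylindrical display.
rotate = ["twist the cylinder", "twist the cylinder", "swipe around",
          "twist the cylinder", "tilt"]
print(representative_gesture(rotate))  # ('twist the cylinder', 3)
```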

The Natural Way of Gestures for Interacting with Smart TV

  • Choi, Jin-Hae; Hong, Ji-Young
    • 대한인간공학회지 / Vol. 31, No. 4 / pp. 567-575 / 2012
  • Objective: The aim of this study is to derive an optimal mental model by investigating users' natural behavior when controlling a smart TV with mid-air gestures, and to identify which factors most influence controlling behavior. Background: Many TV companies are searching for a simple way to control increasingly complex smart TVs. Although plenty of gesture studies propose possible alternatives to resolve this pain point, no gesture work is fitted to the smart TV market, so optimal gestures for it still need to be found. Method: (1) Elicit the core control scenes through an in-house study. (2) Observe and analyze 20 users' natural behavior across types of hand-held devices and control scenes; we also built taxonomies for the gestures. Results: Users perform more manipulative than symbolic gestures when they attempt continuous control. Conclusion: The most natural way to control a smart TV remotely with gestures is to give the user a mental model of grabbing and manipulating virtual objects in mid-air. Application: The results of this work may help establish gesture interaction guidelines for smart TVs.

손 동작을 통한 인간과 컴퓨터간의 상호 작용 (Recognition of Hand gesture to Human-Computer Interaction)

  • 이래경; 김성신
    • 대한전기학회:학술대회논문집 / 대한전기학회 2000년도 하계학술대회 논문집 D / pp. 2930-2932 / 2000
  • In this paper, a robust gesture recognition system is designed and implemented to explore communication methods between human and computer. Hand gestures in the proposed approach are used to communicate with a computer for actions with a high degree of freedom. The user does not need to wear any cumbersome devices like cyber-gloves, and no assumption is made about whether the user is wearing ornaments or gesturing with the left or right hand. Image segmentation based on skin color is combined with shape analysis based on invariant moments. The extracted features are used as input vectors to a radial basis function network (RBFN). Our "Puppy" robot is employed as a testbed. Preliminary results on a set of gestures show recognition rates of about 87% in a real-time implementation.
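The pipeline the abstract describes (skin-color segmentation, invariant moments as features, an RBF network classifier) can be sketched briefly. This is a minimal Python/OpenCV sketch under assumed parameters; the HSV skin range and the RBF centers, widths, and weights are illustrative placeholders, and the paper's actual training procedure is not reproduced:

```python
import cv2
import numpy as np

def hand_features(bgr: np.ndarray) -> np.ndarray:
    """Segment skin-colored pixels, then return the 7 Hu invariant moments."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], np.uint8)    # rough skin range (assumed)
    upper = np.array([25, 255, 255], np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    # Log-scale the moments, which span many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def rbfn_predict(x: np.ndarray, centers: np.ndarray,
                 widths: np.ndarray, weights: np.ndarray) -> int:
    """One-hidden-layer RBF network: Gaussian activations, linear readout.
    Shapes: centers (K, 7), widths (K,), weights (K, n_gestures)."""
    acts = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * widths ** 2))
    return int(np.argmax(weights.T @ acts))  # index of the predicted gesture
```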


Investigating Smart TV Gesture Interaction Based on Gesture Types and Styles

  • Ahn, Junyoung; Kim, Kyungdoh
    • 대한인간공학회지 / Vol. 36, No. 2 / pp. 109-121 / 2017
  • Objective: This study aims to find suitable gesture types and styles for remote-control gesture interaction on smart TVs. Background: Smart TVs are developing rapidly worldwide, and gesture interaction covers a wide range of research areas, especially those based on vision techniques. However, most studies focus on gesture recognition technology, and few previous studies have examined gesture types and styles on smart TVs. It is therefore necessary to check which gesture types and styles users prefer for each operation command. Method: We conducted an experiment to extract the user manipulation commands required for smart TVs and to select the corresponding gestures. To do this, we observed the gesture styles people use for each operation command and checked whether there are gesture styles they prefer over others. From these results, a process of selecting smart TV operation commands and gestures was carried out. Results: Eighteen TV commands were used in this study. With agreement level as the basis, we compared six gesture types and five gesture styles for each command. As for gesture type, participants generally preferred Path-Moving gestures; the Pan and Scroll commands showed the highest agreement level (1.00) among the 18 commands. As for gesture style, participants preferred a manipulative style for 11 commands (Next, Previous, Volume up, Volume down, Play, Stop, Zoom in, Zoom out, Pan, Rotate, Scroll). Conclusion: Based on the analysis of user-preferred gestures, nine gesture commands are proposed for gesture control on smart TVs; most participants preferred Path-Moving-type and manipulative-style gestures grounded in the actual operations. Application: The results can be applied to more advanced forms of gestures in 3D environments, such as VR studies, and the method used here can be utilized in various domains.
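If the reported preferences were to be encoded as a lookup table for a design tool, it could look like the sketch below. Only the 11 manipulative-style commands come from the abstract; the styles of the remaining commands are not enumerated there, so they are left unreported:

```python
# The 11 commands for which participants preferred a manipulative style,
# as listed in the abstract; the preferred gesture type overall was Path-Moving.
MANIPULATIVE_COMMANDS = {
    "Next", "Previous", "Volume up", "Volume down", "Play", "Stop",
    "Zoom in", "Zoom out", "Pan", "Rotate", "Scroll",
}

def preferred_style(command: str) -> str:
    return "manipulative" if command in MANIPULATIVE_COMMANDS else "unreported"

print(preferred_style("Pan"))    # manipulative
print(preferred_style("Mute"))   # unreported (not listed in the abstract)
```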

비디오 게임 인터페이스를 위한 인식 기반 제스처 분할 (Recognition-Based Gesture Spotting for Video Game Interface)

  • 한은정; 강현; 정기철
    • 한국멀티미디어학회논문지 / Vol. 8, No. 9 / pp. 1177-1186 / 2005
  • For a vision-based video game interface that uses the user's gestures captured by a camera instead of a keyboard or joystick, allowing natural motion requires recognizing continuous gestures while tolerating the user's meaningless movements. This paper proposes a gesture recognition method for video game interfaces that combines recognition with spotting: it recognizes meaningful motions in a given continuous image sequence while simultaneously distinguishing meaningless ones. Applying the proposed method to Quake II, a first-person action game in which the user's upper-body gestures serve as game commands, yielded an average spotting result of 93.36% on continuous gestures, a level of performance useful for video game interfaces.
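Recognition-based spotting, in contrast to segmenting first and recognizing afterwards, runs the recognizer directly over the continuous stream and rejects spans that look like meaningless motion. A minimal sketch of that control flow; the classifier, window length, and rejection threshold are assumptions, not details from the paper:

```python
from typing import Callable, Iterator, Sequence

def spot_gestures(frames: Sequence, classify: Callable[[Sequence], dict],
                  window: int = 30, threshold: float = 0.8) -> Iterator[tuple]:
    """Slide a window over the stream; keep windows whose best class is a
    real gesture with high confidence, reject the rest as meaningless."""
    for start in range(len(frames) - window + 1):
        probs = classify(frames[start:start + window])  # label -> probability
        label, p = max(probs.items(), key=lambda kv: kv[1])
        if label != "non-gesture" and p >= threshold:
            yield start, label
```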


A Comparison of the Characteristics between Single and Double Finger Gestures for Web Browsers

  • Park, Jae-Kyu; Lim, Young-Jae; Jung, Eui-S.
    • 대한인간공학회지 / Vol. 31, No. 5 / pp. 629-636 / 2012
  • Objective: The purpose of this study is to compare the characteristics of single and double finger gestures for the web browser and to extract appropriate finger gestures. Background: As electronic equipment emphasizes miniaturization to improve portability, various interfaces are being developed as input devices. As electronic devices become smaller, gesture recognition technology using touch-based interfaces is favored for easy editing. In addition, users focus primarily on the simplicity of intuitive interfaces, which propels further research on gesture-based interfaces. In particular, finger gestures in these intuitive interfaces are simple, fast, and user friendly. Recently, single and double finger gestures have become more popular, so more applications for them are being developed. However, systems and software that employ such finger gestures lack consistency, and clear standards and guidelines are missing. Method: In order to learn the application of these gestures, we performed the sketch map method, a method for memory elicitation. In addition, we used the MIMA (Meaning in Mediated Action) method to evaluate the gesture interface. Results: This study created gestures appropriate for intuitive judgment. We conducted a usability test covering single and double finger gestures. The results showed that double finger gestures had shorter performance times than single finger gestures. Single finger gestures showed a wide satisfaction gap between similar and different types: they can be judged intuitively for a similar type, but it is difficult to associate functions for a different type. Conclusion: This study found that double finger gestures were effective for associating functions for web navigation. In particular, double finger gestures could be effective in associating complex forms such as curve-shaped gestures. Application: This study aimed to facilitate the design of products that utilize finger and hand gestures.

A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young; Bae, Ki-Tae
    • International Journal of Contents / Vol. 7, No. 1 / pp. 29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface using spatial context information. The proposed gesture interface recognizes a system action (e.g., a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications were developed using a smart environment scenario in which a user interacts with digital information embedded in physical objects using gestures.
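One common way to realize "integrating gesture information with spatial context information within a probabilistic framework" is naive-Bayes-style evidence fusion. The sketch below assumes that formulation (the paper's actual model is not specified here), with illustrative probabilities for the two spatial-context ontologies, gesture volume and gesture target:

```python
import numpy as np

ACTIONS = ["turn_on_lamp", "open_document", "play_music"]  # hypothetical actions

def posterior(p_gesture, p_volume, p_target, prior):
    """P(action | evidence) is proportional to
    P(gesture|a) * P(volume|a) * P(target|a) * P(a), then normalized."""
    scores = p_gesture * p_volume * p_target * prior
    return scores / scores.sum()

# Illustrative likelihoods of the observed evidence under each action.
p_gesture = np.array([0.6, 0.3, 0.1])   # gesture shape
p_volume  = np.array([0.5, 0.2, 0.3])   # gesture volume (size of motion)
p_target  = np.array([0.7, 0.2, 0.1])   # gesture target (pointed-at object)
prior     = np.array([1/3, 1/3, 1/3])

post = posterior(p_gesture, p_volume, p_target, prior)
print(ACTIONS[int(np.argmax(post))])  # turn_on_lamp
```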