• Title/Summary/Keyword: Gesture UX

Search Results: 16

Multimodal Interface Based on Novel HMI UI/UX for In-Vehicle Infotainment System

  • Kim, Jinwoo; Ryu, Jae Hong; Han, Tae Man
    • ETRI Journal, v.37 no.4, pp.793-803, 2015
  • We propose a novel HMI UI/UX for an in-vehicle infotainment system. Our proposed HMI UI comprises multimodal interfaces that allow a driver to safely and intuitively manipulate an infotainment system while driving. Our analysis of a touchscreen interface-based HMI UI/UX reveals that a driver's use of such an interface while driving can cause the driver to be seriously distracted. Our proposed HMI UI/UX is a novel manipulation mechanism for a vehicle infotainment service. It consists of several interfaces that incorporate a variety of modalities, such as speech recognition, a manipulating device, and hand gesture recognition. In addition, we provide an HMI UI framework designed to be manipulated using a simple method based on four directions and one selection motion. Extensive quantitative and qualitative in-vehicle experiments demonstrate that the proposed HMI UI/UX is an efficient mechanism through which to manipulate an infotainment system while driving.
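
The "four directions and one selection motion" framework described in this abstract can be sketched as a simple menu controller. The class, item names, and grid layout below are hypothetical illustrations of the idea, not the paper's actual implementation:

```python
from enum import Enum, auto

class Key(Enum):
    """The five inputs of a four-directions-plus-select manipulation scheme."""
    UP = auto()
    DOWN = auto()
    LEFT = auto()
    RIGHT = auto()
    SELECT = auto()

class GridMenu:
    """Cursor over a grid of menu items, driven only by the five inputs above."""
    def __init__(self, items, cols):
        assert len(items) % cols == 0, "sketch assumes a fully populated grid"
        self.items, self.cols = items, cols
        self.rows = len(items) // cols
        self.row = self.col = 0
        self.chosen = None

    def handle(self, key):
        if key is Key.UP:
            self.row = (self.row - 1) % self.rows     # wrap around vertically
        elif key is Key.DOWN:
            self.row = (self.row + 1) % self.rows
        elif key is Key.LEFT:
            self.col = (self.col - 1) % self.cols     # wrap around horizontally
        elif key is Key.RIGHT:
            self.col = (self.col + 1) % self.cols
        elif key is Key.SELECT:
            self.chosen = self.items[self.row * self.cols + self.col]
        return self.row, self.col
```

Because every modality (speech, controller, hand gesture) only needs to produce these five events, the same menu logic can sit behind all of them.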

The Natural Way of Gestures for Interacting with Smart TV

  • Choi, Jin-Hae; Hong, Ji-Young
    • Journal of the Ergonomics Society of Korea, v.31 no.4, pp.567-575, 2012
  • Objective: The aim of this study is to derive an optimal mental model by investigating users' natural behavior when controlling a smart TV with mid-air gestures, and to identify which factor most influences controlling behavior. Background: Many TV companies are trying to find a simple control method for increasingly complex smart TVs. Although plenty of gesture studies propose possible alternatives to resolve this pain point, no gesture set has yet been fitted to the smart TV market, so optimal gestures need to be identified. Method: (1) Elicit core control scenes through an in-house study. (2) Observe and analyze 20 users' natural behavior across types of hand-held devices and control scenes; we also built taxonomies for the gestures. Results: Users attempt more manipulative gestures than symbolic gestures when performing continuous control. Conclusion: The most natural way to control a smart TV remotely with gestures is to give users a mental model of grabbing and manipulating virtual objects in mid-air. Application: The results of this work may help establish gesture-interaction guidelines for smart TV.

Improvement of Gesture Recognition using 2-stage HMM (2단계 히든마코프 모델을 이용한 제스쳐의 성능향상 연구)

  • Jung, Hwon-Jae; Park, Hyeonjun; Kim, Donghan
    • Journal of Institute of Control, Robotics and Systems, v.21 no.11, pp.1034-1037, 2015
  • In recent years, various methods have been developed in the field of robotics to create an intimate relationship between people and robots. These methods include speech, vision, and biometric recognition as well as gesture-based interaction, and they are used in various wearable devices, smartphones, and other electronic devices for convenience. Among them, gesture recognition is the most commonly used and the most appropriate technology for wearable devices. Gesture recognition can be classified as contact or non-contact. This paper proposes contact gesture recognition with IMU and EMG sensors by applying the hidden Markov model (HMM) twice. Several simple movements form main gestures through the first-stage HMM, which is the standard HMM process well known in pattern recognition. The sequence of main gestures produced by the first-stage HMM then forms higher-order gestures through the second-stage HMM. In this way, more natural and intelligent gestures can be implemented from simple ones. This layered process can play a larger role in gesture-recognition-based UX for many wearable and smart devices.
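
The two-stage idea above — primitives decoded from sensor symbols, then higher-order gestures decoded from the primitive sequence — can be sketched by running a standard Viterbi decoder twice. All probabilities, symbol names, and gesture labels below are toy assumptions for illustration, not values from the paper:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence (standard Viterbi)."""
    V = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        V = {
            s: max(
                ((V[ps][0] * trans_p[ps][s] * emit_p[s][o], V[ps][1] + [s])
                 for ps in states),
                key=lambda t: t[0],
            )
            for s in states
        }
    return max(V.values(), key=lambda t: t[0])[1]

# Stage 1: decode quantized IMU/EMG symbols into primitive gestures.
U1 = {"up": 0.5, "down": 0.5}                      # uniform start/transition probs
primitives = viterbi(
    ["acc_up", "acc_up", "acc_down"],              # toy quantized sensor readings
    ["up", "down"],
    start_p=U1,
    trans_p={"up": U1, "down": U1},
    emit_p={"up":   {"acc_up": 0.9, "acc_down": 0.1},
            "down": {"acc_up": 0.1, "acc_down": 0.9}},
)

# Stage 2: the primitive sequence itself becomes the observation sequence
# for a second HMM whose hidden states are higher-order gestures.
U2 = {"raise": 0.5, "lower": 0.5}
compound = viterbi(
    primitives,
    ["raise", "lower"],
    start_p=U2,
    trans_p={"raise": U2, "lower": U2},
    emit_p={"raise": {"up": 0.9, "down": 0.1},
            "lower": {"up": 0.1, "down": 0.9}},
)
```

With near-deterministic emissions, stage 1 yields `["up", "up", "down"]` and stage 2 lifts it to `["raise", "raise", "lower"]`, showing how compound gestures are built from decoded simple ones.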

Investigating Smart TV Gesture Interaction Based on Gesture Types and Styles

  • Ahn, Junyoung; Kim, Kyungdoh
    • Journal of the Ergonomics Society of Korea, v.36 no.2, pp.109-121, 2017
  • Objective: This study aims to find suitable gesture types and styles for remote control of smart TVs. Background: Smart TVs are developing rapidly worldwide, and gesture interaction is a broad research area, especially for vision-based techniques. However, most studies focus on gesture recognition technology, and few previous studies of gesture types and styles on smart TVs have been carried out. It is therefore necessary to determine which gesture types and styles users prefer for each operation command. Method: We conducted an experiment to extract the user manipulation commands required for smart TVs and to select the corresponding gestures. To do this, we examined the gesture styles people use for every operation command and checked whether there are gesture styles they prefer over others. From these results, the study selected smart TV operation commands and gestures. Results: Eighteen TV commands were used in this study. Using agreement level as a basis, we compared six gesture types and five gesture styles for each command. As for gesture type, participants generally preferred gestures of the Path-Moving type. The Pan and Scroll commands showed the highest agreement level (1.00) of the 18 commands. As for gesture style, participants preferred a manipulative style for 11 commands (Next, Previous, Volume up, Volume down, Play, Stop, Zoom in, Zoom out, Pan, Rotate, Scroll). Conclusion: Through an analysis of user-preferred gestures, nine gesture commands are proposed for gesture control on smart TVs. Most participants preferred Path-Moving, manipulative-style gestures based on the actual operations. Application: The results can be applied to more advanced forms of gestures in 3D environments, such as VR studies, and the method used here can be utilized in various domains.
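
The "agreement level" reported above (with 1.00 as the maximum, reached when every participant proposes the same gesture, as for Pan and Scroll) is commonly computed in gesture-elicitation studies as the sum of squared proportions of identical proposals. A minimal sketch with hypothetical proposal data:

```python
from collections import Counter

def agreement(proposals):
    """Agreement score for one command: sum over groups of identical
    proposals of (group_size / total_participants) ** 2."""
    n = len(proposals)
    return sum((c / n) ** 2 for c in Counter(proposals).values())

# Hypothetical elicitation data for two commands:
pan_proposals = ["drag_open_palm"] * 20                 # everyone agrees
play_proposals = ["push_palm"] * 10 + ["tap_air"] * 10  # an even split
```

Unanimous proposals give the maximum score of 1.00; an even two-way split gives 0.5, so the score directly reflects how consistent users' spontaneous gestures are.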

Mobile Browser UX Based on Mobile User Behavior (모바일 사용 행태에 따른 모바일 브라우저 UX)

  • Lee, Kate T.S.
    • Journal of the Ergonomics Society of Korea, v.29 no.4, pp.547-551, 2010
  • In mobile browsers, two mental models coexist: one for mobile users and the other for PC users. This research shows that users apply both mental models simultaneously while using mobile browsers. However, in cases where the two mental models conflict, the usability of UX based on the mobile user's mental model deteriorated rapidly. Usability of mobile user interfaces for use cases such as "View Mode" or "Copy and Send Mode" was also poor, and the research shows that these modes could be replaced by gesture interactions with which users are already familiar.

Mouse Gesture Design Based on Mental Model (심성모형 기반의 마우스 제스처 개발)

  • Seo, Hye Kyung
    • Journal of Korean Institute of Industrial Engineers, v.39 no.3, pp.163-171, 2013
  • Various web browsers offer mouse gesture functions because they are convenient input methods. Mouse gestures enable users to move to the previous page or tab without clicking the relevant icon or menu of the web browser. To maximize the efficiency of mouse gestures, they should be designed to match users' mental models. Humans use mental models to make accurate predictions and reactions when information is recognized, so providing users with information appropriate to their mental models leads to fast understanding and response. A cognitive response test was performed to evaluate whether the mouse gestures are easily associated with their respective functional meanings. After extracting the mouse gestures that needed improvement, they were redesigned via sketch maps to reduce cognitive load. The methods presented in this study will help in evaluating and designing mouse gestures.
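
A common way such browser mouse gestures are implemented — not necessarily the mechanism this paper evaluates — is to quantize the pointer path into left/right/up/down strokes and match the stroke string against a gesture table. A sketch with a hypothetical command mapping:

```python
def strokes(points, min_dist=20):
    """Quantize a pointer path into L/R/U/D strokes, collapsing repeats.
    Screen coordinates: y grows downward, so dy > 0 is a downward stroke."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < min_dist and abs(dy) < min_dist:
            continue  # ignore jitter below the movement threshold
        d = ("R" if dx > 0 else "L") if abs(dx) >= abs(dy) else ("D" if dy > 0 else "U")
        if not out or out[-1] != d:  # collapse consecutive identical strokes
            out.append(d)
    return "".join(out)

# Hypothetical gesture table: stroke string -> browser command.
GESTURES = {"L": "back", "R": "forward", "DR": "close_tab"}

def command(points):
    """Resolve a recorded pointer path to a command, or None if unmatched."""
    return GESTURES.get(strokes(points))
```

Designing the stroke strings themselves is exactly where the paper's mental-model matching applies: "L" for back works because users already associate leftward motion with going back.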

Multi - Modal Interface Design for Non - Touch Gesture Based 3D Sculpting Task (비접촉식 제스처 기반 3D 조형 태스크를 위한 다중 모달리티 인터페이스 디자인 연구)

  • Son, Minji; Yoo, Seung Hun
    • Design Convergence Study, v.16 no.5, pp.177-190, 2017
  • This research aims to suggest a multimodal non-touch gesture interface design to improve the usability of 3D sculpting tasks. Users' sculpting tasks and procedures were analyzed across multiple settings, from physical sculpting to computer software. The optimal body posture, design process, work environment, gesture-task relationships, and combinations of designers' natural hand gestures and arm movements were defined. Existing non-touch 3D software was also reviewed, and natural gesture interactions, visual UI metaphors, and affordances for behavior guidance were designed. A prototype of the gesture-based 3D sculpting system was developed to validate its intuitiveness and learnability against current software. The suggested gestures showed higher performance in terms of understandability, memorability, and error rate. The results show that gesture interface design for productivity systems should reflect users' natural experience in the previous work domain and provide appropriate visual-behavioral metaphors.

An Outlook for Interaction Experience in Next-generation Television

  • Kim, Sung-Woo
    • Journal of the Ergonomics Society of Korea, v.31 no.4, pp.557-565, 2012
  • Objective: This paper focuses on the new trend of applying NUI (natural user interface) techniques such as gesture interaction to television, and investigates the design improvements needed in their application. The intention is to find a better design direction for NUI in the television context, helping the new features and behavioral changes of next-generation television become practically usable and meaningful experience elements. Background: Traditional television is rapidly evolving into next-generation television under the influence of "smartness" from the mobile domain. A number of new features and behavioral changes arising from this evolution are on their way to becoming the new experience elements of next-generation television. Method: A series of expert reviews by television UX professionals, based on AHP (Analytic Hierarchy Process), was conducted to check the relative appropriateness of applying gesture interaction to a number of selected television user-experience scenarios. Conclusion: It is critical not to apply new interaction techniques such as gesture to television indiscriminately. Doing so may be effective for demonstrating new technology but generally results in poor user experience; practical appropriateness must be consistently validated in real contexts. Application: The research will be helpful in applying gesture interaction in next-generation television to bring about an optimal user experience.
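
AHP, the method used in this expert review, derives priority weights for alternatives from a pairwise comparison matrix. A minimal sketch using the row geometric-mean approximation of the principal eigenvector; the matrix values and the three TV scenarios are hypothetical:

```python
import math

def ahp_weights(M):
    """Approximate AHP priority weights from a pairwise comparison matrix
    via the row geometric-mean method (weights are normalized to sum to 1)."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]  # geometric mean per row
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgments: how appropriate is gesture interaction for three
# TV scenarios (channel zapping, text entry, content browsing)?
# M[i][j] > 1 means scenario i is judged more appropriate than scenario j.
M = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = ahp_weights(M)
```

The resulting ranking (here: zapping > browsing-style entries > text entry) is the "relative appropriateness" ordering the reviewers aggregate across scenarios.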

Study on Gesture and Voice-based Interaction in Perspective of a Presentation Support Tool

  • Ha, Sang-Ho; Park, So-Young; Hong, Hye-Soo; Kim, Nam-Hun
    • Journal of the Ergonomics Society of Korea, v.31 no.4, pp.593-599, 2012
  • Objective: This study aims to implement a non-contact gesture-based interface for presentation purposes and to analyze its effect as an information-transfer support device. Background: With rapid technological growth in the UI/UX area and the appearance of smart service products that require new human-machine interfaces, research on control devices using gesture or speech recognition has recently been very active. However, relatively few quantitative studies on the practical effects of this new interface type have been done, while work on system implementation is very popular. Method: The system presented in this study is implemented with the Kinect® sensor offered by Microsoft Corporation. To investigate whether the proposed system is effective as a presentation support tool, we conducted experiments by giving several lectures to 40 participants in both a traditional lecture room (keyboard-based presentation control) and a non-contact gesture-based lecture room (Kinect-based presentation control), evaluating their interest and immersion with respect to the lecture contents and lecturing methods, and analyzing their understanding of the lecture contents. Result: Using ANOVA, we checked whether the gesture-based presentation system can play an effective role as a presentation support tool depending on the difficulty of the contents. Conclusion: A non-contact gesture-based interface is a meaningful supportive device when delivering easy and simple information; however, the effect can vary with the contents and the difficulty of the information provided. Application: The results presented in this paper may help in designing new human-machine (computer) interfaces for communication support tools.

User Interface Design Platform based on Usage Log Analysis (사용성 로그 분석 기반의 사용자 인터페이스 설계 플랫폼)

  • Kim, Ahyoung; Lee, Junwoo; Kim, Mucheol
    • The Journal of Society for e-Business Studies, v.21 no.4, pp.151-159, 2016
  • The user interface is an important factor in providing efficient services to application users. In particular, mobile applications, which can be executed anytime and anywhere, place a higher priority on usability than applications in other domains. Previous studies have used prototype and storyboard methods to improve application usability. However, this approach has limitations in continuously identifying and improving the usability problems of a particular application. Therefore, in this paper, we propose a usability analysis method using touch gesture data. It can identify and improve an application's UI/UX problems continuously by grasping the user's intention after the application has been distributed.
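
As a sketch of what usability analysis over post-deployment touch logs might look like — the log schema and metric below are illustrative assumptions, not the paper's actual method:

```python
from collections import defaultdict

# Hypothetical log schema: one record per completed touch gesture,
# as (screen_id, gesture_type, hit_an_interactive_target).
def miss_rate_by_screen(log):
    """Per-screen fraction of gestures that hit no interactive target —
    a simple proxy for targets that are too small or badly placed."""
    totals = defaultdict(int)
    misses = defaultdict(int)
    for screen, gesture, hit in log:
        totals[screen] += 1
        if not hit:
            misses[screen] += 1
    return {s: misses[s] / totals[s] for s in totals}
```

Screens with a persistently high miss rate after release are candidates for UI rework, which is the kind of continuous identify-and-improve loop the abstract describes.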