• Title/Summary/Keyword: Hand-based User Interface


Hand Gesture based Manipulation of Meeting Data in Teleconference (핸드제스처를 이용한 원격미팅 자료 인터페이스)

  • Song, Je-Hoon;Choi, Ki-Ho;Kim, Jong-Won;Lee, Yong-Gu
    • Korean Journal of Computational Design and Engineering / v.12 no.2 / pp.126-136 / 2007
  • Teleconferences have been used in business to reduce traveling costs. Traditionally, specialized telephones that enabled multiparty conversations were used. With the introduction of high-speed networks, high-definition video now adds realism to the presence of counterparts who may be thousands of miles away. This paper presents a new technology that adds even more realism by telecommunicating hand gestures. The technology is part of a teleconference system named SMS (Smart Meeting Space), in which a person can use hand gestures to manipulate meeting data in the form of text, audio, video, or 3D shapes. For detecting hand gestures, a machine learning algorithm called SVM (Support Vector Machine) is used. For the prototype system, a 3D interaction environment was implemented with OpenGL™, in which a 3D human skull model can be grasped and moved in 6-DOF during a remote conversation between distant persons.
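The abstract does not give the SVM's features or kernel. As an illustrative sketch only, a linear SVM for two gesture classes can be trained with sub-gradient descent on the hinge loss; the 2-D "gesture features" below are synthetic and all names are hypothetical:

```python
import random

def train_linear_svm(samples, labels, lr=0.01, lam=0.01, epochs=200):
    """Train a linear SVM (hinge loss + L2 penalty) by sub-gradient descent.

    samples: list of feature vectors; labels: +1 or -1 per sample.
    """
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            # The hinge term contributes a gradient only when the
            # margin constraint y(w.x + b) >= 1 is violated.
            if margin < 1:
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Synthetic features: class +1 clusters near (2, 2), class -1 near (-2, -2).
random.seed(0)
xs = [[2 + random.random(), 2 + random.random()] for _ in range(20)] + \
     [[-2 - random.random(), -2 - random.random()] for _ in range(20)]
ys = [1] * 20 + [-1] * 20
w, b = train_linear_svm(xs, ys)
print(predict(w, b, [2.5, 2.5]), predict(w, b, [-2.5, -2.5]))
```

In a real gesture system the feature vectors would come from tracked hand data rather than random clusters, and a kernel SVM or a library implementation would normally be preferred.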

The User Interface of Button Type for Stereo Video-See-Through (Stereo Video-See-Through를 위한 버튼형 인터페이스)

  • Choi, Young-Ju;Seo, Young-Duek
    • Journal of the Korea Computer Graphics Society / v.13 no.2 / pp.47-54 / 2007
  • This paper proposes a user interface for a video see-through environment that shows images from stereo cameras, so that the user can easily control computer systems and other processes. AR technology is used to synthesize virtual buttons: the graphic images are overlaid in real time on the frames captured by the camera. The hand position in each frame is searched to judge whether the user has selected a button, and the result of the judgment is visualized by changing the button's color. The user can easily interact with the system by selecting a virtual button on the screen, watching the screen while moving his or her fingers in the air.
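The judgment step described above, deciding whether the detected hand position selects a virtual button and visualizing the result by changing the button's color, reduces to a rectangle hit-test. A minimal sketch, with hypothetical button coordinates and colors:

```python
class VirtualButton:
    """An AR button overlaid on the camera frame; it is highlighted
    while the detected hand position lies inside its rectangle."""

    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height
        self.color = "gray"       # idle color

    def update(self, hand_x, hand_y):
        selected = (self.x <= hand_x < self.x + self.width and
                    self.y <= hand_y < self.y + self.height)
        # Visualize the judgment result by changing the button color.
        self.color = "green" if selected else "gray"
        return selected

button = VirtualButton(x=100, y=50, width=80, height=40)
print(button.update(120, 60))   # hand inside the button -> True
print(button.update(10, 10))    # hand outside -> False
```

The actual system would run this test per frame against the hand position found in the stereo images.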


Implementation of DID interface using gesture recognition (제스쳐 인식을 이용한 DID 인터페이스 구현)

  • Lee, Sang-Hun;Kim, Dae-Jin;Choi, Hong-Sub
    • Journal of Digital Contents Society / v.13 no.3 / pp.343-352 / 2012
  • In this paper, we implemented a touchless interface for a DID (Digital Information Display) system using a gesture recognition technique that includes both hand motion and hand shape recognition. This touchless interface, requiring no extra attachments, gives the user both easier usage and spatial convenience. For hand motion recognition, two motion parameters, slope and velocity, were measured for direction-based recognition. For hand shape recognition, the hand area was extracted using the YCbCr color model and several image processing methods. These recognition methods are combined to generate commands such as next-page, previous-page, screen-up, screen-down, and mouse-click in order to control the DID system. Finally, experimental results showed a command recognition rate of 93%, which is sufficient to confirm the possibility of application to commercial products.
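Hand-area extraction in the YCbCr color model typically means converting each RGB pixel and thresholding the chrominance channels, since skin tones cluster in Cb/Cr regardless of brightness. A per-pixel sketch, assuming the common BT.601 conversion and widely cited threshold ranges rather than the paper's actual values:

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 RGB -> YCbCr conversion (8-bit, full range)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin by thresholding Cb/Cr.

    The ranges are commonly used defaults, not the paper's values.
    Luminance Y is ignored, which gives some robustness to lighting.
    """
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return (cb_range[0] <= cb <= cb_range[1] and
            cr_range[0] <= cr <= cr_range[1])

print(is_skin(220, 170, 140))   # a typical skin tone -> True
print(is_skin(0, 255, 0))       # pure green background -> False
```

Applying `is_skin` over a frame yields a binary mask, which the paper then refines with further image-processing steps before shape recognition.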

A User Study on Information Searching Behaviors for Designing User-centered Query Interface of Content-Based Music Information Retrieval System (내용기반 음악정보 검색시스템을 위한 이용자 중심의 질의 인터페이스 설계에 관한 연구)

  • Lee, Yoon-Joo;Moon, Sung-Been
    • Journal of the Korean Society for Information Management / v.23 no.2 / pp.5-19 / 2006
  • The purpose of this study is to observe and analyze the information searching behaviors of various user groups in different access modes, for designing a user-centered query interface for a content-based Music Information Retrieval System (MIRS). Two expert groups and two non-expert groups were recruited for this research. The data gathering techniques employed were in-depth interviews, participant observation, searching task experiments, think-aloud protocols, and post-search surveys. Expert users, especially those majoring in music theory, preferred to input exact notes one by one using devices such as a keyboard and musical score. Non-expert users, on the other hand, preferred to input melodic contours by humming.

The Study of Usability Evaluation in the GUI of Mobile Computing - Based on Benchmark Testing in the interface design of WIPI (Mobile Computing의 GUI 개발에 있어 사용성 평가 연구 - WIPI 인터페이스 디자인을 위한 Benchmark Testing을 중심으로 -)

  • 정봉금;송연승
    • Archives of Design Research / v.17 no.1 / pp.49-62 / 2004
  • With the recent surge of the wireless Internet and the concurrent development of end-user terminal devices, a standardized graphical user interface (GUI) and a unified operation mechanism are needed for better interactivity and ease of use, and improvement of the GUI is widely recognized as one of the key factors that will usher in the next stages of the wireless Internet. Improved usability along with unique visual effects are considered key elements of the GUI, given the rapid improvement of resolution and color on end-user handsets; study and research on the GUI is therefore expected to increase along with wireless Internet use on smart phones. The user interface of wireless Internet handsets has a definite and significant effect on user interaction as well as productivity. Domestically, wireless Internet service providers and GUI design companies are making various efforts to produce a common GUI model for a standardized operation scheme and improved graphical display capabilities on hand phones, PDAs, and smart phones. In this study, the Nokia 3650 and the Microsoft Orange SPV were chosen as test devices for usability comparison and data collection, in order to gather directional benchmark data for developing a next-generation smart phone user interface integrating PDAs and phones. The main purpose of this study is to achieve the most efficient user access to the WAP menu through an intensive focus on developing a WIPI WAP menu with the most effective usability for users in their twenties and thirties. The results can also serve as base research material for WAP service development, VM browser development, and PDA browser development. The result of this study, along with the evaluation model, is expected to provide effective analysis material on the subject of user interfaces to developers of wireless Internet devices, GUI designers, and service planners, while shortlisting key factors to consider in developing smart phones, thereby serving as a GUI guideline for WIPI phones.


Arm Orientation Estimation Method with Multiple Devices for NUI/NUX

  • Sung, Yunsick;Choi, Ryong;Jeong, Young-Sik
    • Journal of Information Processing Systems / v.14 no.4 / pp.980-988 / 2018
  • Motion estimation is a key Natural User Interface/Natural User Experience (NUI/NUX) technology to utilize motions as commands. HTC VIVE is an excellent device for estimating motions but only considers the positions of hands, not the orientations of arms. Even if the positions of the hands are the same, the meaning of motions can differ according to the orientations of the arms. Therefore, when the positions of arms are measured and utilized, their orientations should be estimated as well. This paper proposes a method for estimating the arm orientations based on the Bayesian probability of the hand positions measured in advance. In experiments, the proposed method was used to measure the hand positions with HTC VIVE. The results showed that the proposed method estimated orientations with an error rate of about 19%, but the possibility of estimating the orientation of any body part without additional devices was demonstrated.
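The abstract describes estimating arm orientation from the Bayesian probability of hand positions measured in advance. The paper's exact model is not given; one plausible minimal sketch is a discrete posterior over candidate orientations given a quantized hand position, computed from pre-measured counts with Laplace smoothing. All position cells, orientation labels, and data below are hypothetical:

```python
from collections import Counter

def orientation_posterior(position, observations, orientations):
    """Posterior P(orientation | hand position) from pre-measured pairs.

    observations: list of (hand_position, arm_orientation) pairs
    collected in advance. Laplace smoothing avoids zero probabilities.
    """
    prior = Counter(o for _, o in observations)
    joint = Counter((p, o) for p, o in observations)
    total = len(observations)
    k = len(orientations)
    scores = {}
    for o in orientations:
        p_o = (prior[o] + 1) / (total + k)                     # prior
        p_pos = (joint[(position, o)] + 1) / (prior[o] + k)    # likelihood
        scores[o] = p_o * p_pos
    norm = sum(scores.values())
    return {o: s / norm for o, s in scores.items()}

# Hypothetical pre-measured data: quantized hand cells -> arm orientation.
data = [("cell_high", "raised")] * 8 + [("cell_high", "forward")] * 2 + \
       [("cell_low", "lowered")] * 9 + [("cell_low", "forward")] * 1
post = orientation_posterior("cell_high", data, ["raised", "forward", "lowered"])
print(max(post, key=post.get))   # -> "raised"
```

In the actual system the observations would come from HTC VIVE hand-position measurements, and the position space would be quantized much more finely.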

Design of Contactless Gesture-based Rhythm Action Game Interface for Smart Mobile Devices

  • Ju, Da-Young
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.585-591 / 2012
  • Objective: The aim of this study is to propose a contactless gesture-based interface on smart mobile devices, especially for rhythm action games. Background: Most existing interactions in smart mobile games are taps on the touch screen. However, that approach is undesirable for some users and in some situations, for example for users with disabilities or when touching the device is inconvenient. More importantly, a new interaction style can open new possibilities for an established game genre. Method: This paper presents a smart mobile game with contactless gesture-based interaction, and its interfaces, using computer vision technology. As preliminary studies, gestures that are easy to recognize were identified, and an interaction system suited to games on smart mobile devices was investigated. A combination of augmented reality techniques and contactless gesture interaction was also tried. Results: The rhythm game allows a user to interact with a smart mobile device using hand gestures, without touching or tapping the screen, and users find the game as enjoyable as other games. Conclusion: Evaluation results show that users make few errors and that the game recognizes gestures with quite high precision in real time; contactless gesture-based interaction therefore has potential for smart mobile games. Application: The results are applied to a commercial game application.

Gesture Interaction Design based on User Preference for the Elastic Handheld Device

  • Yoo, Hoon Sik;Ju, Da Young
    • Journal of the Ergonomics Society of Korea / v.35 no.6 / pp.519-533 / 2016
  • Objective: This study aims to define relevant operation methods and functions by researching the value of a handheld smart device made of a soft, flexible, jelly-like material. Background: New technologies and materials transform interface types and operation methods. Recently, the study of Organic User Interfaces (OUI), which investigates the value of new input and output methods adopting soft and flexible materials for various instruments, has grown in importance. Method: Based on existing studies, 27 gestures usable on a handheld device were defined. A quantitative survey was conducted with adult men and women in their 20s and 30s, and the functions that could be linked to the gestures with the highest satisfaction were analyzed. To analyze users' needs and hurdles for the defined gestures, a focus group interview was conducted with groups of early adopters and ordinary users. Results: Users place high value on the usability and fun of an elastic device, and the preferred gestures and their linkable functions were analyzed. Conclusion: The most significant contribution of this study is that it sheds new light on the values of a device made of elastic material. Beyond finding and defining the gestures and functions that can be applied to a handheld elastic device, the study identified the value elements, usability and fun, that users fundamentally desire from such a device. Application: The preference and satisfaction data on the gestures and associated functions will help commercialize an elastic device in the future.

A Structure of Personalized e-Learning System Using On/Off-line Mixed Estimations Based on Multiple-Choice Items

  • Oh, Yong-Sun
    • International Journal of Contents / v.5 no.1 / pp.51-55 / 2009
  • In this paper, we present the structure of a personalized e-Learning system for studying toward a test in a uniform multiple-choice format, using mixed on/off-line estimation, as is the case for the Driver's License Test in Korea. Using the system, a candidate can study toward the license through the Internet and/or mobile devices in a personalized manner based on IRT (item response theory). The system accurately estimates the user's ability parameter and dynamically offers optimal evaluation problems and learning contents according to the estimated ability, so that the user can obtain the license in a shorter time. To establish the personalized e-Learning concept, the system comprises three databases and two agents. The Content DB maintains learning contents for studying toward the license, as objects separated by concept unit. The Item-bank DB manages items with their parameters, such as difficulty, discrimination, and guessing factors, which are related to the learning contents in the Content DB through the concept of object parameters. The User-profile DB maintains users' status information, item responses, and ability parameters. With these databases, the Interface agent processes user IDs, passwords, status information, and the various queries generated by learners; in addition, it passes the user's item responses to the Selection & Feedback agent. The Selection & Feedback agent, in turn, offers problems and content objects according to the user's ability parameter, and re-estimates the ability parameter to drive a dynamic, personalized learning situation.
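The ability estimation described above rests on the standard IRT machinery: each item has difficulty, discrimination, and guessing parameters, and the learner's ability is the value that best explains the observed right/wrong responses. A minimal sketch using the 3-parameter logistic model with a grid-search maximum-likelihood estimate; the item parameters are hypothetical, not from the paper:

```python
import math

def p_correct(theta, a, b, c):
    """3-parameter logistic IRT model: probability of a correct
    response given ability theta, discrimination a, difficulty b,
    and guessing factor c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def estimate_ability(responses, items, grid=None):
    """Maximum-likelihood ability estimate over a theta grid.

    responses: list of 0/1 item scores; items: (a, b, c) per item.
    """
    if grid is None:
        grid = [x / 10 for x in range(-40, 41)]   # theta in [-4, 4]
    best_theta, best_ll = None, float("-inf")
    for theta in grid:
        ll = 0.0
        for u, (a, b, c) in zip(responses, items):
            p = p_correct(theta, a, b, c)
            ll += math.log(p) if u == 1 else math.log(1 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

# Hypothetical item bank: (discrimination, difficulty, guessing).
items = [(1.2, -1.0, 0.2), (1.0, 0.0, 0.2), (1.5, 1.0, 0.2)]
print(estimate_ability([1, 1, 1], items) > estimate_ability([0, 0, 0], items))
```

Production systems would use Newton-Raphson or an EAP estimate rather than a grid search, but the selection logic is the same: re-estimate theta after each response and pick the next item near the current estimate.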

A Deep Learning-based Hand Gesture Recognition Robust to External Environments (외부 환경에 강인한 딥러닝 기반 손 제스처 인식)

  • Oh, Dong-Han;Lee, Byeong-Hee;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.31-39 / 2018
  • Recently, there have been active studies on providing a user-friendly interface in virtual reality environments by recognizing user hand gestures with deep learning. However, most studies use separate sensors to obtain hand information or require pre-processing for efficient learning, and they fail to account for changes in the external environment, such as changes in lighting or the hand being partially obscured. This paper proposes a deep learning-based hand gesture recognition method that is robust to external environments and needs no pre-processing of the RGB images obtained from an ordinary webcam. We improve the VGGNet and GoogLeNet structures and compare the performance of each. On data containing dim, partially obscured, or partially out-of-sight hand images, the improved VGGNet and GoogLeNet structures showed recognition rates of 93.88% and 93.75%, respectively. In terms of memory and speed, the GoogLeNet used about 3 times less memory than the VGGNet, and its processing speed was 10 times better. The method runs in real time and can be used as a hand gesture interface in areas such as games, education, and medical services in virtual reality environments.
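Both VGGNet and GoogLeNet are built from stacked 2-D convolution layers; their memory and speed differences come from how those layers are arranged. As a minimal illustration of the single operation both architectures share, here is a valid-padding, stride-1 convolution (cross-correlation, as CNN frameworks implement it) on a tiny synthetic image; the edge-detector kernel is hypothetical:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in CNN layers),
    stride 1, no padding."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responding to the left edge of a bright region:
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))   # -> [[0.0, 2.0, 0.0], [0.0, 2.0, 0.0]]
```

VGGNet stacks many uniform 3x3 convolutions, while GoogLeNet mixes kernel sizes inside Inception modules and uses 1x1 convolutions to shrink channel counts, which is the main reason for its smaller memory footprint noted in the abstract.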