• Title/Summary/Keyword: hand interface

A Study on Force-Reflecting Interface using Ultrasonic Motors (초음파모터를 이용한 역감장치에 관한 연구)

  • 강원찬;김대현;김영동
    • Proceedings of the KIPE Conference / 1998.07a / pp.123-128 / 1998
  • This paper describes the evaluation of a force-reflecting interface built with ultrasonic motors (USMs). A force-reflecting interface allows a human to feel objects within a virtual environment. To effectively display mechanical impedance to the human hand, a haptic device needs specific characteristics such as low inertia, almost zero friction, and very high stiffness. USMs have attracted considerable attention as actuators that satisfy these conditions, combining high driving torque at low rotational speed, high holding torque, and fast response. We therefore studied a two-degree-of-freedom force-reflecting haptic system.
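
The actuator requirements above exist to serve a simple high-rate loop: read the hand position, compute a reaction force from a virtual-object model, and command it back. Below is a minimal Python sketch of such an impedance-rendering loop for a one-axis virtual wall; read_position and command_force are hypothetical device callbacks, and the gains are assumed values rather than the paper's.

    # Minimal impedance-display sketch: render a virtual wall as a
    # spring-damper and command the resulting force to the actuator.
    # read_position/command_force are hypothetical device callbacks.
    import time

    K_WALL = 500.0    # virtual stiffness [N/m] (assumed value)
    B_WALL = 2.0      # virtual damping [N*s/m] (assumed value)
    WALL_POS = 0.05   # wall location along one axis [m]

    def render_wall(read_position, command_force, dt=0.001, steps=10000):
        x_prev = read_position()
        for _ in range(steps):
            x = read_position()
            v = (x - x_prev) / dt           # finite-difference velocity
            penetration = x - WALL_POS
            if penetration > 0.0:           # inside the wall: push back
                command_force(-K_WALL * penetration - B_WALL * v)
            else:
                command_force(0.0)          # free space: no force
            x_prev = x
            time.sleep(dt)                  # ~1 kHz haptic update rate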

Development of a Hand-Posture Recognition System Using 3D Hand Model (3차원 손 모델을 이용한 비전 기반 손 모양 인식기의 개발)

  • Jang, Hyo-Young;Bien, Zeung-Nam
    • Proceedings of the KIEE Conference / 2007.04a / pp.219-221 / 2007
  • Recent moves toward ubiquitous computing require more natural human-computer interaction (HCI) interfaces that provide high information accessibility. Hand gestures, i.e., gestures performed by one or two hands, are emerging as a viable technology to complement or replace conventional HCI technology. This paper deals with hand-posture recognition, in which database construction is a key step. The human hand is composed of 27 bones, and the movement of its joints is modeled with 23 degrees of freedom. Even for the same hand posture, the grabbed images may differ depending on the user's characteristics and the relative position between the hand and the cameras. To address the difficulty of defining hand postures and to construct a database of effective size, we present a method using a 3D hand model. The database is built from the hand joint angles for each posture together with the corresponding silhouette images obtained from many viewpoints by projecting the model onto image planes. The proposed method does not require additional equations to define the movement constraints of each joint, and it makes it easy to obtain images of one hand posture from many viewpoints and distances, so the database can be constructed more precisely and concretely. The validity of the method is evaluated by applying it to a hand-posture recognition system.
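
The viewpoint-projection step the abstract relies on can be illustrated compactly. The Python sketch below projects stand-in 3D joint positions through a pinhole camera placed at several positions on a circle around the hand; the intrinsics and joint data are assumed placeholders, not the paper's 23-DOF hand model or silhouette renderer.

    # Sketch: project a 3D hand model onto image planes from many
    # viewpoints to collect one set of 2D views per posture.
    import numpy as np

    def look_at_extrinsics(cam_pos, target=np.zeros(3)):
        """Build a [R|t] extrinsics matrix for a camera looking at target."""
        z = target - cam_pos; z /= np.linalg.norm(z)
        x = np.cross(np.array([0.0, 1.0, 0.0]), z); x /= np.linalg.norm(x)
        y = np.cross(z, x)
        R = np.stack([x, y, z])                 # world -> camera rotation
        t = -R @ cam_pos
        return np.hstack([R, t[:, None]])

    def project(points3d, K, Rt):
        """Pinhole projection of Nx3 world points to Nx2 pixel coordinates."""
        P = K @ Rt                              # 3x4 projection matrix
        homo = np.hstack([points3d, np.ones((len(points3d), 1))])
        uvw = (P @ homo.T).T
        return uvw[:, :2] / uvw[:, 2:3]

    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # assumed intrinsics
    posture_joints = np.random.rand(21, 3) * 0.1  # stand-in joint positions [m]

    # One projected view per camera placed on a circle around the hand.
    views = []
    for theta in np.linspace(0, 2 * np.pi, 12, endpoint=False):
        cam = np.array([0.5 * np.cos(theta), 0.2, 0.5 * np.sin(theta)])
        views.append(project(posture_joints, K, look_at_extrinsics(cam)))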

Hand Haptic Interface for Intuitive 3D Interaction (직관적인 3D 인터랙션을 위한 핸드 햅틱 인터페이스)

  • Jang, Yong-Seok;Kim, Yong-Wan;Son, Wook-Ho;Kim, Kyung-Hwan
    • Journal of the HCI Society of Korea / v.2 no.2 / pp.53-59 / 2007
  • Research on 3D interaction has identified and extensively studied four basic interaction tasks for 3D/VE applications: navigation, selection, manipulation, and system control. These interaction schemes, in the real world or in a VE, are generally suitable for interacting with small graspable objects. In some applications it is important to duplicate real-world behavior; for example, a training system for manual assembly tasks or a usability-verification system benefits from realistic object grasping and manipulation. However, existing interaction technologies cannot simply be applied to such applications, because the quality of simulated grasping and manipulation has been limited. We therefore introduce an intuitive and natural 3D-interaction haptic interface that supports high-precision hand operations and realistic haptic feedback.

RealBook: A Tangible Electronic Book Based on the Interface of TouchFace-V (RealBook: TouchFace-V 인터페이스 기반 실감형 전자책)

  • Song, Dae-Hyeon;Bae, Ki-Tae;Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.551-559 / 2013
  • In this paper, we propose RealBook, a tangible electronic book based on the TouchFace-V interface, which recognizes multi-touch input and hand gestures. TouchFace-V applies projection technology to a flat surface, such as a table, without spatial constraints, and its configuration addresses the installation, calibration, and portability issues of most existing front-projected, vision-based tabletop displays. Using computer-vision tracking, it supports hand touch and gestures without sensors or traditional input devices. The RealBook combines the analog sensibility of printed text with the multimedia effects of an e-book, and it presents digitally created stories whose experience and environment differ according to the choices users make on the book's interface. RealBook is thus a new concept of electronic book that, together with the TouchFace-V interface, provides more direct viewing and natural, intuitive interaction through hand touch and gestures.

Vision based 3D Hand Interface Using Virtual Two-View Method (가상 양시점화 방법을 이용한 비전기반 3차원 손 인터페이스)

  • Bae, Dong-Hee;Kim, Jin-Mo
    • Journal of Korea Game Society / v.13 no.5 / pp.43-54 / 2013
  • With the steady development of 3D application techniques, visuals of more realistic quality have become available and are used in many applications such as games. In particular, in interaction with 3D objects in virtual environments, 3D graphics have driven substantial progress in augmented reality. This study proposes a 3D user interface for controlling objects in 3D space through a virtual two-view method that uses only one camera. A homography matrix encoding the transformation between two arbitrary camera positions is calculated, and 3D coordinates are reconstructed from the 2D hand coordinates observed by the single camera, the homography matrix, and the camera's projection matrix. This yields accurate 3D information quickly. Because it needs one camera rather than two, the approach reduces the amount of calculation, which makes it effective for real-time processing as well as economical.
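
A minimal sketch of the reconstruction described above, assuming the homography H between the real and virtual camera positions, the intrinsics K, and the virtual view's pose (R, t) are known from an offline calibration, as the abstract implies; the variable names are illustrative, not the paper's.

    # Virtual two-view sketch: map the observed 2D hand point into a
    # virtual second view via H, then triangulate with the two
    # projection matrices.
    import numpy as np
    import cv2

    def to_virtual_view(pt, H):
        """Apply homography H to a 2D pixel coordinate (valid for points
        on the calibrated plane)."""
        q = H @ np.array([pt[0], pt[1], 1.0])
        return q[:2] / q[2]

    def reconstruct_3d(pt1, H, K, R, t):
        pt2 = to_virtual_view(pt1, H)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # real view
        P2 = K @ np.hstack([R, t.reshape(3, 1)])            # virtual view
        X = cv2.triangulatePoints(P1, P2,
                                  np.asarray(pt1, float).reshape(2, 1),
                                  pt2.reshape(2, 1))
        return (X[:3] / X[3]).ravel()                       # Euclidean 3D point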

A Study on the Eye-Hand Coordination for Korean Text Entry Interface Development (한글 문자 입력 인터페이스 개발을 위한 눈-손 Coordination에 대한 연구)

  • Kim, Jung-Hwan;Hong, Seung-Kweon;Myung, Ro-Hae
    • Journal of the Ergonomics Society of Korea / v.26 no.2 / pp.149-155 / 2007
  • Recently, various devices requiring text input, such as mobile phones, IPTV, PDAs, and UMPCs, have been emerging, and the frequency of text entry on them is increasing. This study focuses on the evaluation of Korean text-entry interfaces. Various models for evaluating text-entry interfaces have been proposed, most of them based on the human cognitive process for text input. That process is divided into two components: a visual-scanning process and a finger-movement process. The time spent on visual scanning is modeled by the Hick-Hyman law, while the finger-movement time is determined by Fitts' law. Three questions arise in model-based evaluation of text-entry interfaces. First, do the cognitive processes (visual scanning and finger movement) during text entry occur sequentially, as the models assume? Second, can real text-input time be predicted by the previous models? Third, does the cognitive process for text input vary with users' text-entry speed? A gap was found between the measured and the predicted text-input times, and it was larger for participants who entered text quickly. The reason emerged from investigating eye-hand coordination during text input: contrary to the assumption that a visual scan of the keyboard is followed by a finger movement, the experienced group performed visual scanning and finger movement simultaneously. Arrival lead time, the interval between the eye fixation on the target button and the button click, was measured to quantify the overlap between the two processes. In addition, the experienced group used fewer fixations during text entry than the novice group. These results will contribute to improving evaluation models for text-entry interfaces.
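
The two laws the model combines can be made concrete with a short worked example. The sketch below predicts a single keystroke time as the strictly sequential sum of a Hick-Hyman scanning term and a Fitts movement term; the coefficients (in seconds) and key geometry are illustrative placeholders, not values fitted in the paper.

    # Sequential prediction: keystroke time = scanning time + movement time.
    import math

    def hick_hyman(n_keys, a=0.2, b=0.15):
        """Scanning/choice time grows with log2 of the number of alternatives."""
        return a + b * math.log2(n_keys)

    def fitts(distance, width, a=0.1, b=0.12):
        """Movement time grows with the index of difficulty log2(D/W + 1)."""
        return a + b * math.log2(distance / width + 1.0)

    # One keystroke on a 12-key layout with assumed geometry (millimetres):
    t_scan = hick_hyman(n_keys=12)
    t_move = fitts(distance=40.0, width=8.0)
    print(t_scan + t_move)   # the study finds experts overlap the two stages,
                             # so this strictly sequential sum over-predicts
                             # their real entry time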

Robust Control of a Haptic Interface Using LQG/LTR (LQG/LTR을 이용한 Haptic Interface의 강인제어)

  • Lee, Sang-Cheol;Park, Heon;Lee, Su-Sung;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.8 no.9 / pp.757-763 / 2002
  • A newly designed haptic interface enables an operator to control a remote robot precisely: it transmits position information to the remote robot and feeds back the interaction force from it. A control algorithm for the haptic interface was studied to improve robustness and stability against uncertain dynamic environments, using a proposed contact-dynamics model that incorporates human hand dynamics. A simplified hybrid parallel-robot dynamic model for a 6-DOF haptic device, excluding nonlinear components, was proposed to form a real-time control system. The LQG/LTR scheme was adopted to compensate for unmodeled dynamics. Recovery of the force from the remote robot at the haptic interface was demonstrated through experiments.
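
The LQG/LTR recipe can be sketched in a few lines: solve one Riccati equation for the LQR state-feedback gain and a dual one for the Kalman gain, inflating the process-noise weight to recover the target loop shape at the plant input. The second-order plant below is a stand-in, not the paper's 6-DOF haptic-device model.

    # LQG design with loop transfer recovery (LTR) on an assumed plant.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-20.0, -1.5]])   # assumed linear plant
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])

    # LQR target loop: K = R^-1 B' P
    Q, R = np.eye(2), np.array([[0.1]])
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)             # state-feedback gain

    # Kalman gain with LTR: inflate the process noise along B as the
    # recovery parameter q grows, recovering the LQR loop shape.
    q = 1e4                                     # recovery parameter
    W = np.eye(2) * 0.01 + q * (B @ B.T)        # inflated process noise
    V = np.array([[0.01]])                      # measurement noise
    S = solve_continuous_are(A.T, C.T, W, V)    # dual Riccati equation
    L = S @ C.T @ np.linalg.inv(V)              # observer (Kalman) gain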

Multimodal Interface Based on Novel HMI UI/UX for In-Vehicle Infotainment System

  • Kim, Jinwoo;Ryu, Jae Hong;Han, Tae Man
    • ETRI Journal / v.37 no.4 / pp.793-803 / 2015
  • We propose a novel HMI UI/UX for an in-vehicle infotainment system. The proposed HMI UI comprises multimodal interfaces that allow a driver to manipulate an infotainment system safely and intuitively while driving. Our analysis of a touchscreen-based HMI UI/UX reveals that using such an interface while driving can seriously distract the driver. The proposed HMI UI/UX is a novel manipulation mechanism for vehicle infotainment services, consisting of several interfaces that incorporate a variety of modalities, such as speech recognition, a manipulating device, and hand-gesture recognition. In addition, we provide an HMI UI framework designed to be manipulated through a simple method based on four directions and one selection motion. Extensive quantitative and qualitative in-vehicle experiments demonstrate that the proposed HMI UI/UX is an efficient mechanism for manipulating an infotainment system while driving.
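
The "four directions and one selection" abstraction suggests a thin normalization layer between the modalities and the UI framework. The sketch below is one plausible shape for such a layer; the event names and mappings are illustrative assumptions, not the authors' actual implementation.

    # Normalize events from any modality into five shared commands.
    from enum import Enum

    class Command(Enum):
        UP = 1
        DOWN = 2
        LEFT = 3
        RIGHT = 4
        SELECT = 5

    # Each modality supplies its own mapping into the shared command set.
    SPEECH_MAP = {"up": Command.UP, "down": Command.DOWN,
                  "left": Command.LEFT, "right": Command.RIGHT,
                  "ok": Command.SELECT}
    GESTURE_MAP = {"swipe_up": Command.UP, "swipe_down": Command.DOWN,
                   "swipe_left": Command.LEFT, "swipe_right": Command.RIGHT,
                   "grab": Command.SELECT}

    def dispatch(modality, event):
        table = {"speech": SPEECH_MAP, "gesture": GESTURE_MAP}[modality]
        return table.get(event)        # None for unrecognized events

    assert dispatch("speech", "ok") is Command.SELECT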

EEG Analysis Following Change in Hand Grip Force Level for BCI Based Robot Arm Force Control (BCI 기반 로봇 손 제어를 위한 악력 변화에 따른 EEG 분석)

  • Kim, Dong-Eun;Lee, Tae-Ju;Park, Seung-Min;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.2 / pp.172-177 / 2013
  • With a brain-computer interface (BCI) system, a person with a disabled limb could use direct brain signals such as electroencephalography (EEG) to control a device such as an artificial arm, and precise force control is necessary for such an artificial-limb system. To understand the relationship between the controlling EEG signal and hand grip force, we measured EEG changes at three grades (25%, 50%, 75%) of maximal voluntary contraction (MVC) of hand grip. The acquired EEG signal was filtered with the fast Fourier transform (FFT) to obtain the power of three bands (alpha, beta, gamma), and the power spectrum was computed. The power spectra of the three bands for the three classes (MVC 25%, 50%, 75%) were then classified using principal component analysis (PCA) and linear discriminant analysis (LDA). The results showed that the EEG power spectrum is higher at MVC 75% than at MVC 25%, and the correct classification rate was 52.03% for the left hand and 77.7% for the right hand.
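
The processing chain (FFT band power, then PCA, then LDA) can be sketched end to end. Random data stands in for the recorded EEG below, and the sampling rate and band edges are assumed; only the pipeline structure follows the abstract.

    # Band-power features + PCA + LDA for 3-class (MVC 25/50/75%) EEG.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    FS = 256                                   # assumed sampling rate [Hz]
    BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 50)}

    def band_powers(trial):
        """Mean FFT power per band for one (n_channels, n_samples) trial."""
        spec = np.abs(np.fft.rfft(trial, axis=1)) ** 2
        freqs = np.fft.rfftfreq(trial.shape[1], d=1.0 / FS)
        return np.hstack([spec[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                          for lo, hi in BANDS.values()])

    rng = np.random.default_rng(0)
    trials = rng.standard_normal((90, 8, FS * 2))  # 90 trials, 8 ch, 2 s
    labels = np.repeat([25, 50, 75], 30)           # MVC class per trial

    X = np.array([band_powers(t) for t in trials]) # 24 features per trial
    clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    print(cross_val_score(clf, X, labels, cv=5).mean())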

Intuitive Spatial Drawing System based on Hand Interface (손 인터페이스 기반 직관적인 공간 드로잉 시스템)

  • Ko, Ginam;Kim, Serim;Kim, YoungEun;Nam, SangHun
    • Journal of Digital Contents Society / v.18 no.8 / pp.1615-1620 / 2017
  • The development of virtual reality (VR) technologies has improved the performance of VR devices and lowered their prices, giving many users easy access to VR. VR drawing applications are not complicated for users and are mature enough to be used in education, performances, and more. With controller-based spatial drawing interfaces, however, the user's drawing is constrained by the controller. This study proposes a hand-interaction-based spatial drawing system in which a user who has never used a controller can operate the drawing application intuitively: a Leap Motion sensor mounted on the front of a head-mounted display (HMD) traces the motion of the user's hands in front of the HMD to draw curved surfaces in the virtual environment.
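
The drawing loop implied above reduces to polling the tracked fingertip each frame and appending points to a stroke while a gesture is held. In the sketch below, get_fingertip and get_pinch_strength are hypothetical stand-ins for the Leap Motion SDK calls, and the thresholds are assumed.

    # Hand-drawing loop: collect fingertip samples into strokes while
    # the hand is "pinching"; a released pinch closes the stroke.
    import numpy as np

    PINCH_ON = 0.8            # assumed threshold to start/continue a stroke
    MIN_STEP = 0.005          # metres; skip near-duplicate samples

    def drawing_loop(get_fingertip, get_pinch_strength, frames):
        strokes, current = [], []
        for _ in range(frames):
            if get_pinch_strength() > PINCH_ON:
                p = np.asarray(get_fingertip())       # 3D point in HMD space
                if not current or np.linalg.norm(p - current[-1]) > MIN_STEP:
                    current.append(p)                 # extend current stroke
            elif current:
                strokes.append(np.array(current))     # pinch released
                current = []
        if current:
            strokes.append(np.array(current))
        return strokes                                # list of (N,3) polylines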