Title/Summary/Keyword: Pointing Interface

Performance estimation model of the three-dimensional pointing tasks in virtual environment systems (가상환경에서의 3차원 포인팅작업 성능평가 모형)

  • 박재희;박경수
    • Proceedings of the ESK Conference / 1996.04a / pp.253-258 / 1996
  • Virtual reality environment systems are expected to serve as a new user-interface tool owing to their high immersiveness and interactivity. To use a VR interface effectively, we should identify the characteristics of three-dimensional control tasks, as was done for two-dimensional graphical user interface environments. As a first step, we validated Fitts' law for three-dimensional pointing tasks with two input devices, the Spaceball and the Spacemouse. Unlike two-dimensional control tasks, VR pointing tasks required adding a new variable, the size of the moving object, to Fitts' law (a minimal sketch of the baseline Fitts' law fit appears below). The modified
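
A minimal sketch of the baseline computation referenced above, assuming the standard Shannon formulation of Fitts' law. The paper's exact modified model (with the moving-object-size term) is not given in the abstract, so the trial data below is hypothetical:

```python
# Fit Fitts' law MT = a + b * ID to pointing-trial data.
import numpy as np

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(D / W + 1), in bits."""
    return np.log2(distance / width + 1.0)

# Hypothetical trials: (target distance m, target width m, movement time s)
trials = np.array([
    [0.30, 0.05, 0.62],
    [0.30, 0.02, 0.81],
    [0.60, 0.05, 0.77],
    [0.60, 0.02, 0.98],
])

ids = index_of_difficulty(trials[:, 0], trials[:, 1])
mts = trials[:, 2]

# Least-squares fit of MT = a + b * ID; 1/b is the index of performance.
b, a = np.polyfit(ids, mts, 1)
print(f"MT = {a:.3f} + {b:.3f} * ID  (IP = {1.0 / b:.2f} bits/s)")
```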

Evaluating Performance of Pointing Interface of Dynamic Gain Control in Wearable Computing Environment (웨어러블 컴퓨터 환경에서 포인팅 인터페이스의 동적 이득 방법의 효율성 평가)

  • Hong, Ji-Young;Chae, Haeng-Suk;Han, Kwang-Hee
    • Journal of the Ergonomics Society of Korea / v.26 no.4 / pp.9-16 / 2007
  • Input devices for wearable computers are difficult to use, so many alternative pointing devices have been proposed in recent years. To address this problem, this paper proposes a dynamic gain control method that improves the performance of wearable pointing devices, and reports an experiment comparing it with the conventional method. The effects of the methods were also compared across devices (wearable and desktop). Throughput (index of performance) calculated by Fitts' law showed that pointing performance in the dynamic gain condition was 1.4 times higher than with normal gain. (A sketch of such a velocity-dependent gain function appears below.)
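
A minimal sketch of a velocity-dependent gain function of the kind evaluated above: slow device motion gets low gain for precision, fast motion gets high gain for coverage. The transfer-function shape and thresholds are assumptions, not the authors' parameters:

```python
def dynamic_gain(velocity, g_min=1.0, g_max=4.0, v_low=0.02, v_high=0.20):
    """Map device velocity (m/s) to a cursor gain between g_min and g_max."""
    if velocity <= v_low:
        return g_min
    if velocity >= v_high:
        return g_max
    t = (velocity - v_low) / (v_high - v_low)   # linear interpolation
    return g_min + t * (g_max - g_min)

def cursor_delta(device_delta, dt):
    """Scale a raw device displacement by the velocity-dependent gain."""
    velocity = abs(device_delta) / dt
    return device_delta * dynamic_gain(velocity)
```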

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / v.2 no.4 / pp.285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and of images (hand pointing gestures). The purpose of the interface is to control a pan/tilt camera to point it at a location specified by the user through spoken words and hand pointing. The system utilizes a stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera: the interface first uses hand pointing to locate the general position the user is referring to, then uses the user's voice commands to fine-tune the location and change the camera zoom, if requested. The image of the location is captured by the pan/tilt camera and displayed on a color TV monitor. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices the user must otherwise employ to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations. (A sketch of the coarse-point-then-refine control loop appears below.)
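
A minimal sketch of the two-stage control loop described above: a pointing gesture sets a coarse pan/tilt target, then single-word voice commands refine position and zoom. The command vocabulary, step sizes, and class name are hypothetical:

```python
class PanTiltController:
    def __init__(self):
        self.pan, self.tilt, self.zoom = 0.0, 0.0, 1.0

    def coarse_point(self, pan_deg, tilt_deg):
        """Stage 1: jump to the region indicated by the pointing gesture."""
        self.pan, self.tilt = pan_deg, tilt_deg

    def voice_command(self, word, step=2.0):
        """Stage 2: refine the view with single-word voice commands."""
        actions = {
            "left":  lambda: setattr(self, "pan",  self.pan - step),
            "right": lambda: setattr(self, "pan",  self.pan + step),
            "up":    lambda: setattr(self, "tilt", self.tilt + step),
            "down":  lambda: setattr(self, "tilt", self.tilt - step),
            "in":    lambda: setattr(self, "zoom", self.zoom * 1.25),
            "out":   lambda: setattr(self, "zoom", self.zoom / 1.25),
        }
        if word in actions:
            actions[word]()

# Example: point at a region, then refine by voice.
cam = PanTiltController()
cam.coarse_point(32.0, -5.0)
cam.voice_command("left")
cam.voice_command("in")
```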

Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.290-293 / 2009
  • In this paper, we propose a method to estimate the pointing region in the real world from camera images. In general, an arm-pointing gesture encodes a direction extending from the user's fingertip to the target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. The proposed method therefore extracts two end points for estimating the pointing direction: one from the user's face region and another from the fingertip region. The pointing direction and its target region are then estimated from the 2D-3D projective mapping between the camera images and the real-world scene. To demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) employing two CCD cameras and a monitor. The accuracy and robustness of the method are verified by experiments on several real video sequences. (A sketch of the face-fingertip ray intersection appears below.)
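
A minimal sketch of the pointing model above: intersect the ray through the face and fingertip with the target plane. It assumes the two points have already been triangulated to 3D from the calibrated cameras:

```python
import numpy as np

def pointing_target(face, fingertip, plane_point, plane_normal):
    """Intersect the face->fingertip ray with a plane; returns a 3D point."""
    face, fingertip = np.asarray(face, float), np.asarray(fingertip, float)
    direction = fingertip - face
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the plane
    t = np.dot(plane_normal, np.asarray(plane_point, float) - face) / denom
    if t < 0:
        return None                      # target is behind the user
    return face + t * direction

# Example: target surface lying in the z = 0 plane.
hit = pointing_target(face=[0.0, 1.6, 2.0], fingertip=[0.1, 1.4, 1.6],
                      plane_point=[0, 0, 0], plane_normal=np.array([0, 0, 1.0]))
print(hit)   # -> [0.5, 0.6, 0.0]
```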

A study for the development of Blind-Pointing Method to extract drivers' cognitive map on Instrument Panel (자동차 Instrument Panel 의 운전자 인지지도 추출을 위한 Blind-Pointing Method 개발에 관한 연구)

  • Yu, Seung-Dong;Park, Peom
    • Journal of Korean Institute of Industrial Engineers / v.26 no.1 / pp.9-16 / 2000
  • Interior interface design for vehicle drivers has recently been recognized as an important issue, and many studies address it. These studies emphasize physical factors and usability, but work on cognitive factors remains insufficient, even though cognitive factors strongly determine drivers' performance. This study investigated a method to extract a driver's cognitive map of the IP (Instrument Panel) in a dynamic situation, and proposed the BPM (Blind-Pointing Method) for this purpose. The BPM extracts a cognitive map from the subject's pointing actions under a blinded condition. An experiment was conducted to validate the BPM as a method for extracting cognitive maps: subjects were divided into two groups, one owning a vehicle and holding a driver's license, the other holding a license but not owning a vehicle. The results show that the cognitive-map form of the IP does not differ between the two groups, and that the BPM is a suitable method for extracting a cognitive map. (A sketch of scoring blind-pointing responses appears below.)
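
A minimal sketch of how blind-pointing responses might be scored against the true control positions on the panel; the abstract does not detail the scoring procedure, so the controls and coordinates below are hypothetical:

```python
import math

# Hypothetical normalized IP coordinates: true layout vs. blind-pointed.
actual  = {"audio": (0.42, 0.31), "hazard": (0.50, 0.28), "vent": (0.35, 0.40)}
pointed = {"audio": (0.45, 0.33), "hazard": (0.48, 0.30), "vent": (0.30, 0.38)}

for control, (ax, ay) in actual.items():
    px, py = pointed[control]
    error = math.hypot(px - ax, py - ay)   # Euclidean error on the IP plane
    print(f"{control}: pointing error = {error:.3f} (normalized IP units)")
```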

User Performance Measures for iTV Remote Control of Pointing Devices (iTV 리모컨 포인팅 디바이스 수행도 측정)

  • Cheong, Kyeong-Kyun;Kim, In-Soo;Park, Sung-Kwon;Park, Yong-Jin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.3 / pp.145-150 / 2011
  • In this study, we assessed the performance of remote pointing devices applicable to interactive TV (iTV). An empirical test was carried out with pointing devices using three different interface methods recently tried in the industry. For the measurement, a directional task was performed on the linear track described in ISO 9241-Part 9. The measured variables included movement time, accuracy, throughput, and subjective satisfaction. The results showed that movement time was better with the GyroPoint device than with the other devices, but its accuracy was lower than that of the Hall Mouse and OFN devices. Throughput and subjective satisfaction depended on the individual input device. Since this study examined the major pointing devices currently being tried and analyzed their problems and characteristics, it should provide useful data for designing remote pointing devices better suited to the iTV environment. (A sketch of the ISO 9241-9 throughput computation appears below.)
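
A minimal sketch of the ISO 9241-9 throughput measure used above: effective width from the spread of selection endpoints, effective index of difficulty, and throughput = IDe / MT. The endpoint and timing data are hypothetical:

```python
import numpy as np

def throughput(endpoints, distance, movement_times):
    """ISO 9241-9 throughput in bits/s for one distance condition."""
    we = 4.133 * np.std(endpoints, ddof=1)   # effective target width
    ide = np.log2(distance / we + 1.0)       # effective ID (bits)
    return ide / np.mean(movement_times)

# Hypothetical selection coordinates (m) and movement times (s).
endpoints = np.array([0.298, 0.305, 0.301, 0.296, 0.304])
times = [0.62, 0.59, 0.65, 0.61, 0.60]
print(f"TP = {throughput(endpoints, 0.30, times):.2f} bits/s")
```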

3D Physical User Interface System using a Dominant Eye and an Index Fingertip (주시안과 검지 끝 점을 이용한 3차원 물리 사용자 인터페이스 시스템)

  • Kim, Kyung-Ho;Ahn, Jeeyun;Lee, Jongbae;Kwon, Heeyong
    • Journal of Korea Multimedia Society / v.16 no.2 / pp.138-146 / 2013
  • In this paper, we propose a new 3D PUI (Physical User Interface) system in which the index fingertip points to and moves a mouse position on a monitor screen. There are two 3D PUI schemes for controlling smart devices such as smart TVs remotely: relative pointing and absolute pointing. The former does not match the human perception process, and the latter requires excessive body movement. We combined the two and developed a new, intuitive, user-friendly pointing method. It requires establishing a pyramid-shaped visible area (view volume), anchored at the dominant eye, for pointing at a mouse position on the screen. Maintaining this view volume in real time, however, demands heavy computation as the dominant eye moves, so we optimized the view-volume computation that determines whether a point lies inside or outside the screen. In addition, a Kalman filter is applied to the traced mouse-pointer position to stabilize pointer trembling and improve ease of use. (A sketch of such per-axis Kalman smoothing appears below.)
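
A minimal sketch of per-axis Kalman smoothing for the pointer position, using a common constant-position formulation; the noise parameters are assumptions to be tuned, not the authors' values:

```python
class Kalman1D:
    def __init__(self, q=1e-4, r=4e-2):
        self.x, self.p = None, 1.0     # state estimate and its variance
        self.q, self.r = q, r          # process and measurement noise

    def update(self, z):
        if self.x is None:             # initialize on the first measurement
            self.x = z
            return self.x
        self.p += self.q               # predict (position assumed constant)
        k = self.p / (self.p + self.r) # Kalman gain
        self.x += k * (z - self.x)     # correct with the new measurement
        self.p *= (1.0 - k)
        return self.x

# Example: smooth a jittery fingertip-derived cursor trace per axis.
kx, ky = Kalman1D(), Kalman1D()
for raw_x, raw_y in [(100, 200), (103, 198), (99, 202), (101, 199)]:
    print(kx.update(raw_x), ky.update(raw_y))
```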

Implement of Hand Gesture Interface using Ratio and Size Variation of Gesture Clipping Region (제스쳐 클리핑 영역 비율과 크기 변화를 이용한 손-동작 인터페이스 구현)

  • Choi, Chang-Yur;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.1 / pp.121-127 / 2013
  • A vision-based hand-gesture interface method for substituting for a pointing device is proposed in this paper; it uses the ratio and size variation of the gesture region. The proposed method uses the skin hue and saturation of the hand region in the HSI color model to extract the hand region effectively; this removes non-hand regions and reduces noise caused by the light source. Moreover, because the method detects the ratio and size variation of hand movement from the clipped hand region in real time, rather than recognizing static hand shapes, computation is reduced and faster response is guaranteed. To evaluate its performance, we applied the proposed method as a pointing device in a computerized self visual acuity testing system. The method achieved an average gesture recognition ratio of 86% and a coordinate-movement recognition ratio of 87%. (A sketch of the segmentation and region-feature steps appears below.)
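
A minimal sketch of the segmentation and region-feature steps described above, using OpenCV's HSV space as a stand-in for the paper's HSI model; the threshold values are assumptions:

```python
import cv2
import numpy as np

def hand_region(frame_bgr):
    """Return the bounding box (x, y, w, h) of the largest skin blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))   # skin hue/sat band
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

def region_features(box):
    """Aspect ratio and area of the clipped hand region; tracking how these
    change between frames replaces static hand-shape recognition."""
    x, y, w, h = box
    return w / h, w * h
```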

Object Magnification and Voice Command in Gaze Interface for the Upper Limb Disabled (상지장애인을 위한 시선 인터페이스에서의 객체 확대 및 음성 명령 인터페이스 개발)

  • Park, Joo Hyun;Jo, Se-Ran;Lim, Soon-Bum
    • Journal of Korea Multimedia Society / v.24 no.7 / pp.903-912 / 2021
  • Eye-tracking research for people with upper-limb disabilities has proven effective for device control, but eye tracking alone is not sufficient for web interaction. A previous study, the Eye-Voice interface, proposed a gaze-tracking interface supplemented with voice commands to address the pointer-execution malfunctions of existing gaze-tracking interfaces, and a comparison experiment with an existing interface confirmed that it reduced the pointer malfunction rate. That work also identified another important cause of malfunction: the difficulty of pointing at small execution objects in the web environment. In this study, we propose an interface that automatically magnifies objects, so that people with upper-limb disabilities can freely click web content despite the high density and tight arrangement of execution objects on web pages. (A sketch of crowding-triggered magnification appears below.)
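
A minimal sketch of crowding-triggered magnification around the gaze point; the radius, zoom factor, and target data are hypothetical, not the authors' design values:

```python
def should_magnify(gaze, targets, radius=40.0, min_crowding=2):
    """Magnify when 2+ clickable targets fall within `radius` px of gaze."""
    near = [t for t in targets
            if (t[0] - gaze[0]) ** 2 + (t[1] - gaze[1]) ** 2 <= radius ** 2]
    return len(near) >= min_crowding

def magnified_view(gaze, zoom=2.5, size=200):
    """Source rectangle (x, y, w, h) around gaze and the zoom to render it at."""
    half = size / 2
    return (gaze[0] - half, gaze[1] - half, size, size), zoom

# Example: three links clustered near the gaze point trigger magnification.
targets = [(105, 210), (118, 215), (130, 212), (400, 80)]
if should_magnify((110, 208), targets):
    rect, zoom = magnified_view((110, 208))
    print("render", rect, "at", zoom, "x")
```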

The Development of a Haptic Interface for Interacting with BIM Elements in Mixed Reality

  • Cho, Jaehong;Kim, Sehun;Kim, Namyoung;Kim, Sungpyo;Park, Chaehyeon;Lim, Jiseon;Kang, Sanghyeok
    • International conference on construction engineering and project management / 2022.06a / pp.1179-1186 / 2022
  • Building Information Modeling (BIM) is widely used to efficiently share, utilize, and manage information generated in every phase of a construction project. Recently, mixed reality (MR) technologies have been introduced to utilize BIM elements more effectively. This study deals with haptic interactions between humans and BIM elements in MR to improve BIM usability. As a first step toward interacting with virtual objects in mixed reality, we attempted to move a virtual object to a desired location using finger pointing. This paper presents the development of a haptic interface system in which users can interact with a BIM object to move it to a desired location in MR. The system consists of an MR head-mounted display (HMD) and a mobile application developed with Unity 3D. The study defined two segments from which to compute the scale factor and rotation angle of the virtual object to be moved. In a test with a cuboid, users could successfully move it to the desired location. The developed MR-based haptic interface can be used for aligning BIM elements overlaid on the real world at construction sites. (A sketch of the two-segment scale/rotation computation appears below.)
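
A minimal sketch of deriving a scale factor and rotation angle from two segments, as the abstract mentions; the segment definitions are an assumption based on the abstract, and the Unity/HMD plumbing is omitted:

```python
import numpy as np

def segment_transform(seg_a, seg_b):
    """Scale factor and rotation angle (deg) taking segment a to segment b.
    Each segment is a (start_point, end_point) pair in 3D."""
    a = np.asarray(seg_a[1], float) - np.asarray(seg_a[0], float)
    b = np.asarray(seg_b[1], float) - np.asarray(seg_b[0], float)
    scale = np.linalg.norm(b) / np.linalg.norm(a)
    cos_t = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    angle = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return scale, angle

# Example: the pointing segment doubles in length and swings 90 degrees.
print(segment_transform(([0, 0, 0], [1, 0, 0]),
                        ([0, 0, 0], [0, 2, 0])))   # -> (2.0, 90.0)
```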
