• Title/Abstract/Keyword: Pointing Interface


가상환경에서의 3차원 포인팅작업 성능평가 모형 (Performance estimation model of the three-dimensional pointing tasks in virtual environment systems)

  • 박재희;박경수
    • 대한인간공학회:학술대회논문집 / 대한인간공학회 1996년도 춘계학술대회논문집 / pp.253-258 / 1996
  • Virtual reality systems are expected to serve as a new user interface owing to their high immersiveness and interactivity. To use a VR interface effectively, we should identify the characteristics of three-dimensional control tasks, as has been done for two-dimensional graphical user interface environments. As a first step, we validated Fitts' law for three-dimensional pointing tasks with two input devices, the Spaceball and the Spacemouse. Unlike two-dimensional control tasks, VR pointing tasks required the inclusion of a new variable, the size of the moving object, in Fitts' law. The modified

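
The Fitts' law modeling described above can be sketched briefly. The abstract does not state the paper's modified model, so the moving-object-size term below (and its coefficient `c`) is purely a hypothetical illustration:

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def movement_time_2d(a, b, distance, width):
    """Classic Fitts' law: MT = a + b * ID."""
    return a + b * fitts_id(distance, width)

def movement_time_3d(a, b, c, distance, width, object_size):
    """Hypothetical 3D extension adding a moving-object-size term;
    the paper's actual modified model is not given in the abstract."""
    return a + b * fitts_id(distance, width) + c * math.log2(1 / object_size + 1)
```

With `a` and `b` fitted per device (Spaceball vs. Spacemouse), a larger intercept or slope indicates a slower device; the extra term penalizes small moving objects.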

웨어러블 컴퓨터 환경에서 포인팅 인터페이스의 동적 이득 방법의 효율성 평가 (Evaluating Performance of Pointing Interface of Dynamic Gain Control in Wearable Computing Environment)

  • 홍지영;채행석;한광희
    • 대한인간공학회지 / Vol. 26, No. 4 / pp.9-16 / 2007
  • Input devices for wearable computers are difficult to use, so many alternative pointing devices have been considered in recent years. To address this problem, this paper proposes a dynamic gain control method that improves the performance of wearable pointing devices and presents an experiment comparing this method with the conventional one. The effects of the two methods were also compared across device types (wearable and desktop). Throughput (index of performance) calculated by Fitts' law showed that pointing performance in the dynamic gain condition was 1.4 times higher than with normal gain.

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / Vol. 2, No. 4 / pp.285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera so that it points to a location specified by the user through spoken words and hand pointing. The system utilizes another, stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to; it then uses voice commands provided by the user to fine-tune the location and to change the camera's zoom, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user must otherwise utilize to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.

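
The coarse-to-fine control loop described above can be sketched as follows; the command vocabulary, step size, and zoom factor are assumptions for illustration, not values from the paper:

```python
# Hand pointing supplies an initial pan/tilt target; spoken commands then
# refine it. Command names, step size, and zoom factor are illustrative.
FINE_STEP = 2.0  # degrees per voice command (assumed)

def refine(pan, tilt, zoom, command):
    """Apply one spoken refinement command to the camera state."""
    if command == "left":
        pan -= FINE_STEP
    elif command == "right":
        pan += FINE_STEP
    elif command == "up":
        tilt += FINE_STEP
    elif command == "down":
        tilt -= FINE_STEP
    elif command == "zoom in":
        zoom *= 1.5
    elif command == "zoom out":
        zoom /= 1.5
    return pan, tilt, zoom

def point_camera(coarse_pan, coarse_tilt, commands):
    """Start from the hand-pointing estimate, then apply voice commands."""
    pan, tilt, zoom = coarse_pan, coarse_tilt, 1.0
    for cmd in commands:
        pan, tilt, zoom = refine(pan, tilt, zoom, cmd)
    return pan, tilt, zoom
```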

Design and Implementation of a Real-time Region Pointing System using Arm-Pointing Gesture Interface in a 3D Environment

  • Han, Yun-Sang;Seo, Yung-Ho;Doo, Kyoung-Soo;Choi, Jong-Soo
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송공학회 2009년도 IWAIT / pp.290-293 / 2009
  • In this paper, we propose a method to estimate the pointed-at region in the real world from camera images. In general, an arm-pointing gesture encodes a direction extending from the user's fingertip to the target point. In the proposed work, we assume that the pointing ray can be approximated by a straight line passing through the user's face and fingertip. The proposed method therefore extracts two end points for estimating the pointing direction: one from the user's face and the other from the user's fingertip region. The pointing direction and its target region are then estimated based on the 2D-3D projective mapping between the camera images and the real-world scene. To demonstrate an application of the proposed method, we constructed an ICGS (interactive cinema guiding system) that employs two CCD cameras and a monitor. The accuracy and robustness of the proposed method are verified by experimental results on several real video sequences.

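
The face-to-fingertip pointing ray can be intersected with a known plane (for example, a screen or wall) to recover the target point. A minimal sketch, assuming both 3D points are already available in a common world frame (in the paper they come from the calibrated 2D-3D mapping):

```python
def pointing_target(face, fingertip, plane_point, plane_normal):
    """Intersect the face->fingertip pointing ray with a plane.
    All arguments are (x, y, z) tuples in one world frame."""
    d = tuple(f - p for f, p in zip(fingertip, face))       # ray direction
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    denom = dot(d, plane_normal)
    if abs(denom) < 1e-9:
        return None                                         # ray parallel to plane
    w = tuple(q - p for q, p in zip(plane_point, face))
    t = dot(w, plane_normal) / denom
    if t < 0:
        return None                                         # plane behind the user
    return tuple(p + t * di for p, di in zip(face, d))
```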

자동차 Instrument Panel 의 운전자 인지지도 추출을 위한 Blind-Pointing Method 개발에 관한 연구 (A study for the development of Blind-Pointing Method to extract drivers' cognitive map on Instrument Panel)

  • 유승동;박범
    • 대한산업공학회지 / Vol. 26, No. 1 / pp.9-16 / 2000
  • Interior interface design for vehicle drivers has recently come to be recognized as an important concern, and many studies have addressed it. These studies emphasize physical factors and usability, but studies of cognitive factors remain insufficient, even though cognitive factors are key determinants of driver performance. This study investigated a method to extract a driver's cognitive map of the IP (Instrument Panel) in a dynamic situation and proposed the BPM (Blind-Pointing Method) for this purpose. The BPM extracts a cognitive map from the subject's pointing actions under a blindfolded condition. An experiment was conducted to validate the BPM as a method for extracting cognitive maps. Subjects were divided into two groups: those who owned a vehicle and held a driver's license, and those who held a license but did not own a vehicle. The results show that the IP form of the cognitive map does not differ between the two groups and that the BPM is a suitable method for extracting a cognitive map.


iTV 리모컨 포인팅 디바이스 수행도 측정 (User Performance Measures for iTV Remote Control of Pointing Devices)

  • 정경균;김인수;박승권;박용진
    • 한국인터넷방송통신학회논문지 / Vol. 11, No. 3 / pp.145-150 / 2011
  • This study evaluated the performance of prototype remote pointing devices applicable to Interactive TV (iTV). An empirical experiment was conducted on three pointing devices with different interface mechanisms recently attempted in industry. The directional task along a straight path specified in ISO 9241-Part 9 was used as the measurement method, and movement time, accuracy, efficiency, and subjective satisfaction were evaluated as dependent variables. The results showed that the GyroPoint type outperformed the other devices in movement time but was less accurate than the Hall Mouse and OFN types. Efficiency and subjective satisfaction also differed across the input devices. By analyzing the problems and characteristics of these recent pointing devices, this study is expected to provide useful data for designing remote controls better suited to the iTV environment.
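
ISO 9241-9 style throughput is typically computed from the effective width of the observed endpoint scatter. A minimal sketch (endpoint coordinates are measured along the task axis; the exact aggregation used in the paper is not given):

```python
import math
import statistics

def effective_throughput(distance, endpoint_coords, movement_times):
    """Effective throughput in bits/s: We = 4.133 * SD of the endpoint
    coordinates along the task axis, IDe = log2(D / We + 1),
    TP = IDe / mean movement time."""
    we = 4.133 * statistics.stdev(endpoint_coords)
    ide = math.log2(distance / we + 1)
    return ide / statistics.mean(movement_times)
```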

주시안과 검지 끝 점을 이용한 3차원 물리 사용자 인터페이스 시스템 (3D Physical User Interface System using a Dominant Eye and an Index Fingertip)

  • 김경호;안지윤;이종배;권희용
    • 한국멀티미디어학회논문지 / Vol. 16, No. 2 / pp.138-146 / 2013
  • This paper proposes a new three-dimensional physical user interface (PUI) in which the index fingertip points to and moves the mouse position on the monitor screen. Existing 3D PUI methods for remotely controlling smart devices such as smart TVs fall into two types: relative pointing and absolute pointing. The former is inconsistent with human perception, while the latter requires excessive body movement. This study therefore combines relative and absolute pointing to develop an intuitive, user-centered, human-friendly 3D PUI pointing method. The proposed method requires setting a pyramid-shaped viewing volume based on the dominant eye for pointing within the screen area, and maintaining this viewing volume in real time demands heavy computation as the dominant eye moves. This paper optimizes the viewing-volume computation by testing whether the fingertip's projected coordinates on the screen fall inside or outside the screen area. In addition, a Kalman filter is applied to the tracked pointer coordinates to stabilize cursor jitter and improve usability.
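
The cursor stabilization step can be illustrated with a minimal scalar Kalman filter under a constant-position model; the process and measurement noise values below are illustrative, not the paper's:

```python
class CursorKalman1D:
    """Minimal scalar Kalman filter for smoothing one cursor coordinate,
    in the spirit of the jitter stabilization described above. Run one
    instance per axis (x and y)."""

    def __init__(self, q=1e-3, r=1e-1, x0=0.0):
        self.q, self.r = q, r        # process / measurement noise (assumed)
        self.x, self.p = x0, 1.0     # state estimate and its variance

    def update(self, z):
        self.p += self.q                   # predict (constant-position model)
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct with measurement z
        self.p *= 1 - k
        return self.x
```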

제스쳐 클리핑 영역 비율과 크기 변화를 이용한 손-동작 인터페이스 구현 (Implement of Hand Gesture Interface using Ratio and Size Variation of Gesture Clipping Region)

  • 최창열;이우범
    • 한국인터넷방송통신학회논문지 / Vol. 13, No. 1 / pp.121-127 / 2013
  • This paper proposes a computer-vision-based hand-gesture interface that can replace pointing devices in a UI system, using the area ratio and size variation of the gesture shape. For effective hand-region detection, the proposed method combines the skin hue and saturation values of the hand region in the HSI color model, which removes skin-colored regions other than the hand during gesture recognition and is effective in reducing noise caused by lighting. Moreover, instead of recognizing static gesture poses, the method detects the hand-pixel ratio and size variation within the gesture clipping region as it changes in real time, which reduces computation and guarantees faster response. When applied as a standalone pointing interface to the self vision-testing system implemented in our previous work, the proposed computer-vision-based pointing interface achieved an average gesture recognition rate of 86% and a cursor-movement recognition rate of 87%, demonstrating its usefulness as a pointing interface.
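
The hue-saturation skin test and the clipping-region pixel ratio described above can be sketched as follows; the threshold ranges are illustrative assumptions, not the paper's values:

```python
def skin_mask(hue, sat, h_range=(0.0, 0.11), s_range=(0.2, 0.7)):
    """Per-pixel skin test on hue and saturation planes (values in [0, 1]).
    The threshold ranges are illustrative, not the paper's."""
    return [[h_range[0] <= h <= h_range[1] and s_range[0] <= s <= s_range[1]
             for h, s in zip(hrow, srow)]
            for hrow, srow in zip(hue, sat)]

def hand_ratio(mask, clip):
    """Fraction of skin pixels inside the clipping region (y0, y1, x0, x1);
    tracking this ratio and its change over time drives the interface."""
    y0, y1, x0, x1 = clip
    pixels = [p for row in mask[y0:y1] for p in row[x0:x1]]
    return sum(pixels) / len(pixels)
```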

상지장애인을 위한 시선 인터페이스에서의 객체 확대 및 음성 명령 인터페이스 개발 (Object Magnification and Voice Command in Gaze Interface for the Upper Limb Disabled)

  • 박주현;조세란;임순범
    • 한국멀티미디어학회논문지 / Vol. 24, No. 7 / pp.903-912 / 2021
  • Eye-tracking research for people with upper-limb disabilities has shown benefits for device control. However, eye-tracking technology alone is not sufficient for web interaction. In our previous study on the Eye-Voice interface, a gaze-tracking interface supplemented with voice commands was proposed to address the pointer-execution malfunctions of existing gaze-tracking interfaces, and a reduction in the pointer malfunction rate was confirmed through a comparison experiment with an existing interface. In that process, the difficulty of pointing at small execution objects in the web environment was identified as another important cause of malfunction. In this study, we propose an interface that automatically magnifies objects so that people with upper-limb disabilities can freely click web content, mitigating the pointing and execution difficulties caused by the high density and tight arrangement of execution objects on web pages.
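
A minimal sketch of the auto-magnification decision: magnify around the gaze point when a nearby clickable target is too small to hit reliably. The radius and size thresholds are illustrative assumptions; the paper's actual trigger conditions are not given in the abstract:

```python
def needs_magnification(gaze, targets, radius=60, min_size=24):
    """Return True when any clickable target within `radius` px of the
    gaze point is smaller than `min_size` px on its shorter side.
    Each target is an (x, y, w, h) bounding box; thresholds are assumed."""
    gx, gy = gaze
    for x, y, w, h in targets:
        cx, cy = x + w / 2, y + h / 2                  # target center
        if (cx - gx) ** 2 + (cy - gy) ** 2 <= radius ** 2:
            if min(w, h) < min_size:
                return True
    return False
```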

The Development of a Haptic Interface for Interacting with BIM Elements in Mixed Reality

  • Cho, Jaehong;Kim, Sehun;Kim, Namyoung;Kim, Sungpyo;Park, Chaehyeon;Lim, Jiseon;Kang, Sanghyeok
    • 국제학술발표논문집 / The 9th International Conference on Construction Engineering and Project Management / pp.1179-1186 / 2022
  • Building Information Modeling (BIM) is widely used to efficiently share, utilize, and manage information generated in every phase of a construction project. Recently, mixed reality (MR) technologies have been introduced to utilize BIM elements more effectively. This study deals with haptic interactions between humans and BIM elements in MR to improve BIM usability. As a first step toward interacting with virtual objects in mixed reality, we tackled moving a virtual object to a desired location using finger-pointing. This paper presents the development of a haptic interface system in which users can interact with a BIM object to move it to a desired location in MR. The interface system consists of an MR-based head-mounted display (HMD) and a mobile application developed using Unity 3D. The study defines two segments used to compute the scale factor and rotation angle of the virtual object to be moved. In a test with a cuboid, users could successfully move it to the desired location. The developed MR-based haptic interface can be used for aligning BIM elements overlaid on the real world at the construction site.

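
The two-segment computation mentioned above, deriving a scale factor and rotation angle for the object to be moved, can be sketched in 2D as follows; the paper does not define the two segments in the abstract, so this is an illustrative planar version:

```python
import math

def segment_transform(seg_a, seg_b):
    """Scale factor and planar rotation angle (radians) that map segment A
    onto segment B. Each segment is a pair of points ((x0, y0), (x1, y1));
    treating the segments as 2D is an assumption for illustration."""
    def vec(seg):
        (x0, y0), (x1, y1) = seg
        return (x1 - x0, y1 - y0)

    ax, ay = vec(seg_a)
    bx, by = vec(seg_b)
    scale = math.hypot(bx, by) / math.hypot(ax, ay)        # length ratio
    angle = math.atan2(by, bx) - math.atan2(ay, ax)        # direction change
    return scale, angle
```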