• Title/Summary/Keyword: Gaze-based User Interface


Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • Park, Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP, v.41 no.2, pp.79-88, 2004
  • Gaze detection uses computer vision to locate the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application: they are applicable to man-machine interfaces that help the handicapped use computers, and to view control in three-dimensional simulation programs. In our work, we implement gaze detection with a computer vision system using a single IR-LED based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. Finally, the gaze position due to facial movement is computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position from eye movement. Experimental results show that we can obtain the facial and eye gaze position on a monitor, with an RMS error of about 4.8 cm between the computed and real positions.
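
As a rough illustration of the plane-normal computation this abstract describes, the sketch below intersects the facial-plane normal with the monitor plane. The camera-centered coordinate convention (monitor at z = 0) and the choice of three feature points are illustrative assumptions, not the paper's actual setup.

```python
# Sketch: gaze-by-facial-pose as a plane-normal intersection.
# Assumes the 3D positions of three facial features (e.g., both eye
# corners and the nose tip) have already been estimated, and that the
# monitor is the plane z = 0 in the same camera-centered coordinates.
import numpy as np

def facial_gaze_point(p1, p2, p3):
    """Intersect the facial-plane normal with the monitor plane z = 0."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)   # normal of the facial plane
    normal /= np.linalg.norm(normal)
    origin = (p1 + p2 + p3) / 3.0         # feature centroid as ray origin
    if abs(normal[2]) < 1e-9:             # gaze ray parallel to the screen
        return None
    t = -origin[2] / normal[2]            # solve origin_z + t * n_z = 0
    return (origin + t * normal)[:2]      # (x, y) hit point on the monitor

# Example: eye corners and nose tip roughly 60 cm in front of the screen
print(facial_gaze_point([-3, 0, 60], [3, 0, 60], [0, -4, 58]))
```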

The Virtual Model House System using Modeling-based Eyetracking (모델링 기반 시선추적을 이용한 가상모델하우스 시스템)

  • Lee, Dong-Jin;Lee, Ki-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.10 no.5, pp.223-227, 2010
  • Since most existing virtual model houses use non-immersive virtual reality technology, they are controlled through direct input devices such as a keyboard and mouse. This paper realizes, for a modeling-based virtual model house, not only direct data entry but also an indirect entry method that uses eye-tracking through a general-purpose webcam. The pupil region is extracted from the user's video data received from the webcam by comparing gray values; the pointer position is then controlled according to the relative movement of the pupils, and equipment-related functions are invoked in the same way. This method provides the user with convenient navigation and a convenient interface.
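
The gray-value pupil extraction this abstract mentions can be sketched with OpenCV: threshold the image so only dark pupil pixels remain, then take the centroid of the largest blob as the pointer reference. The threshold value and the use of the whole frame rather than a detected eye region are illustrative assumptions.

```python
# Sketch: pupil extraction by gray-value thresholding from a webcam.
import cv2

def pupil_center(gray, thresh=40):
    """Return the centroid of the darkest blob (assumed pupil), or None."""
    # The pupil is darker than the iris and sclera, so keep pixels
    # whose gray value falls below `thresh` (an assumed constant).
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)   # largest dark region
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)                       # default webcam
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(pupil_center(gray))                   # whole frame, for brevity
cap.release()
```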

3D View Controlling by Using Eye Gaze Tracking in First Person Shooting Game (1 인칭 슈팅 게임에서 눈동자 시선 추적에 의한 3차원 화면 조정)

  • Lee, Eui-Chul;Cho, Yong-Joo;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society, v.8 no.10, pp.1293-1305, 2005
  • In this paper, we propose a method of manipulating the gaze direction of a 3D FPS game character by eye gaze detection from successive images captured by a USB camera attached beneath an HMD. The proposed method is composed of three parts. In the first part, we detect the user's pupil center with a real-time image processing algorithm applied to the successive input images. In the second part, calibration, the geometric relationship is determined between a gazed position on the monitor and the detected eye position when gazing at that position. In the last part, the final gaze position on the HMD monitor is tracked and the 3D view in the game is controlled by the gaze position based on the calibration information. Experimental results show that our method can be used by handicapped game players who cannot use their hands. It can also increase interest and immersion by synchronizing the gaze direction of the game player with that of the game character.
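
The calibration step in the second part can be sketched as a least-squares fit from detected pupil centers to known on-screen targets. The affine form of the mapping and the four-corner target layout are assumptions; the paper's exact geometric model may differ.

```python
# Sketch: eye-to-screen calibration as a least-squares affine fit.
import numpy as np

def fit_calibration(pupil_pts, screen_pts):
    """pupil_pts, screen_pts: (N, 2) arrays from N calibration targets."""
    P = np.hstack([pupil_pts, np.ones((len(pupil_pts), 1))])  # homogeneous
    A, *_ = np.linalg.lstsq(P, screen_pts, rcond=None)        # (3, 2) map
    return A

def gaze_on_screen(A, pupil_xy):
    """Map a detected pupil center to monitor coordinates."""
    return np.array([pupil_xy[0], pupil_xy[1], 1.0]) @ A

# Example: four corner targets of an assumed 1280x1024 view
pupil = np.array([[210, 140], [430, 145], [215, 330], [435, 335]], float)
screen = np.array([[0, 0], [1280, 0], [0, 1024], [1280, 1024]], float)
A = fit_calibration(pupil, screen)
print(gaze_on_screen(A, (320, 240)))            # interpolated gaze point
```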


W3C based Interoperable Multimodal Communicator (W3C 기반 상호연동 가능한 멀티모달 커뮤니케이터)

  • Park, Daemin;Gwon, Daehyeok;Choi, Jinhuyck;Lee, Injae;Choi, Haechul
    • Journal of Broadcast Engineering, v.20 no.1, pp.140-152, 2015
  • HCI (Human-Computer Interaction) enables interaction between people and computers through human-familiar interfaces called modalities. Recently, to provide an optimal interface across various devices and service environments, advanced HCI methods using multiple modalities have been studied intensively. However, multimodal interfaces face the difficulty that modalities have different data formats and are hard to coordinate efficiently. To solve this problem, a multimodal communicator is introduced, based on the EMMA (Extensible MultiModal Annotation markup language) and MMI (Multimodal Interaction Framework) standards of the W3C (World Wide Web Consortium). This standards-based framework, consisting of modality components, an interaction manager, and a presentation component, makes multiple modalities interoperable and can be widely extended to other modalities. Experimental results show the multimodal communicator operating with the eye-tracking and gesture-recognition modalities in a map-browsing scenario.
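
In the MMI architecture this abstract describes, a modality component reports its recognition result to the interaction manager as an EMMA document. The sketch below wraps a gaze-tracking result that way; the top-level element and attribute names follow EMMA 1.0, while the point payload and the mode value "gaze" are assumptions made for the map-browsing scenario.

```python
# Sketch: wrapping a gaze result in an EMMA 1.0 document for the
# interaction manager. The <point> payload is an assumed application
# format, not something mandated by the EMMA standard.
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"
ET.register_namespace("emma", EMMA_NS)

def gaze_to_emma(x, y, confidence):
    root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
    interp = ET.SubElement(root, f"{{{EMMA_NS}}}interpretation", {
        "id": "gaze1",
        f"{{{EMMA_NS}}}medium": "visual",
        f"{{{EMMA_NS}}}mode": "gaze",          # assumed mode value
        f"{{{EMMA_NS}}}confidence": str(confidence),
    })
    point = ET.SubElement(interp, "point")     # assumed payload element
    point.set("x", str(x))
    point.set("y", str(y))
    return ET.tostring(root, encoding="unicode")

print(gaze_to_emma(512, 384, 0.93))
```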

A Study on Virtual Reality Techniques for Immersive Traditional Fairy Tale Contents Production (몰입형 전래동화 콘텐츠 제작을 위한 가상현실 기술에 대한 연구)

  • Jeong, Kisung;Han, Seunghun;Lee, Dongkyu;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society, v.22 no.3, pp.43-52, 2016
  • This paper studies virtual reality techniques to maximize users' immersion through differentiated interactive content based on Korean traditional fairy tales. To raise interest in Korean traditional fairy tales, we produce interactive 3D content and propose a new approach to system design that applies virtual reality devices such as an HMD and Leap Motion. First, using a Korean traditional fairy tale, we generate interactive content consisting of scenes that heighten the user's tension during game interaction. Based on the generated content, we design scene generation using an Oculus HMD, a gaze-based input process, and a hand interface using Leap Motion, to provide multidimensional scene delivery and an input method that intensifies the sense of reality. Through diverse tests, we verify whether the proposed virtual reality content and input-processing techniques actually intensify immersion while minimizing users' motion sickness.
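
One common realization of the gaze-based input process mentioned above is dwell-time selection: a choice fires once the HMD gaze ray has rested on the same object long enough. The sketch below shows that pattern; the class name, the 1.5-second threshold, and the simulated frame loop are illustrative assumptions, not the paper's actual design.

```python
# Sketch: dwell-time selection for gaze-based input in a VR scene.
import time

class DwellSelector:
    def __init__(self, dwell_seconds=1.5):    # assumed dwell threshold
        self.dwell = dwell_seconds
        self.target = None                    # object currently gazed at
        self.since = 0.0                      # when the gaze settled on it

    def update(self, gazed_object, now=None):
        """Call every frame with the object under the gaze ray (or None)."""
        now = time.monotonic() if now is None else now
        if gazed_object != self.target:       # gaze moved: restart the timer
            self.target, self.since = gazed_object, now
            return None
        if self.target is not None and now - self.since >= self.dwell:
            chosen, self.target = self.target, None  # fire once, then reset
            return chosen
        return None

# Simulated frames: the selection fires on the 1.6 s frame
sel = DwellSelector()
for t in (0.0, 0.5, 1.0, 1.6):
    hit = sel.update("door_handle", now=t)
    if hit:
        print("selected:", hit)
```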