• Title/Summary/Keyword: 3D gaze


Evaluation of Gaze Depth Estimation using a Wearable Binocular Eye tracker and Machine Learning (착용형 양안 시선추적기와 기계학습을 이용한 시선 초점 거리 추정방법 평가)

  • Shin, Choonsung;Lee, Gun;Kim, Youngmin;Hong, Jisoo;Hong, Sung-Hee;Kang, Hoonjong;Lee, Youngho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.1
    • /
    • pp.19-26
    • /
    • 2018
  • In this paper, we propose a gaze depth estimation method based on a binocular eye tracker for virtual reality and augmented reality applications. The proposed method collects a wide range of information about each eye from the eye tracker, such as the pupil center, gaze direction, and inter-pupil distance. It then builds gaze estimation models using a multilayer perceptron, which infers gaze depth from the eye tracking information. Finally, we evaluated the method with 13 participants in two ways: performance based on their individual models and performance based on a generalized model. Through the evaluation, we found that the proposed method recognized gaze depth with 90.1% accuracy using the 13 individual participant models and with 89.7% accuracy using the generalized model including all participants.
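
A minimal sketch of the kind of model the abstract describes: a multilayer perceptron regressing gaze depth from binocular eye-tracker features. The feature layout, network size, and data below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: gaze depth regression with an MLP, assuming a
# per-sample feature vector of left/right pupil centers (x, y), left/right
# gaze direction vectors (x, y, z), and inter-pupil distance (11 values).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(1000, 11)            # placeholder eye-tracker features
y = np.random.uniform(0.2, 5.0, 1000)   # placeholder gaze depths in meters

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:1]))             # estimated gaze depth for one sample
```

The individual-versus-generalized comparison in the paper would then amount to fitting such a model on one participant's recordings versus the pooled recordings of all participants.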

A Study on Gaze Tracking Based on Pupil Movement, Corneal Specular Reflections and Kalman Filter (동공 움직임, 각막 반사광 및 Kalman Filter 기반 시선 추적에 관한 연구)

  • Park, Kang-Ryoung;Ko, You-Jin;Lee, Eui-Chul
    • The KIPS Transactions: Part B
    • /
    • v.16B no.3
    • /
    • pp.203-214
    • /
    • 2009
  • In this paper, we compute the user's gaze position simply from the 2D relations between the pupil center and four corneal specular reflections formed by four IR illuminators attached to the corners of a monitor, without considering the complex 3D relations among the camera, monitor, and pupil coordinates. The objectives of our paper are therefore to detect the pupil center and the four corneal specular reflections accurately, and to compensate for the error factors that affect gaze accuracy. In our method, we compensated for the kappa error between the gaze position calculated from the pupil center and the actual gaze vector, using a one-time user calibration performed when the system starts. Also, based on a Kalman filter, we robustly detected the four corneal specular reflections that are essential for calculating the gaze position, irrespective of abrupt changes in eye movement. Experimental results showed that the gaze detection error was about 1.0 degree even under abrupt eye movements.
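
The abstract does not give the exact filter design, but a constant-velocity Kalman filter over each reflection's image position is a common choice for this kind of tracking; the sketch below makes that assumption, with placeholder noise settings.

```python
# Hypothetical sketch: constant-velocity Kalman filter for one corneal
# specular reflection tracked in image coordinates.
import numpy as np

class ReflectionKF:
    def __init__(self, dt=1.0):
        self.F = np.array([[1, 0, dt, 0],   # state: (x, y, vx, vy)
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0],    # only (x, y) is measured
                           [0, 1, 0, 0]], float)
        self.Q = np.eye(4) * 1e-2           # process noise (assumed)
        self.R = np.eye(2) * 2.0            # pixel noise (assumed)
        self.x = np.zeros(4)
        self.P = np.eye(4) * 100.0

    def step(self, z):
        """Predict, then correct with the measured reflection position z."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                   # filtered (x, y)
```

Running one such filter per reflection keeps the four points usable even when an abrupt saccade or a blink briefly corrupts the raw detections.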

A Simple Eye Gaze Correction Scheme Using 3D Affine Transformation and Image In-painting Technique

  • Ko, Eunsang;Ho, Yo-Sung
    • Journal of Multimedia Information System
    • /
    • v.5 no.2
    • /
    • pp.83-86
    • /
    • 2018
  • Owing to high-speed Internet technology, video conferencing systems are used at home as well as in workplaces with a laptop or a webcam. Although eye contact is significant in video conferencing, most systems do not support good eye contact because of the improper location of the camera. Several ideas have been proposed to solve the eye contact problem; however, some of them require complicated configurations and expensive customized hardware. In this paper, we propose a simple eye gaze correction method using a three-dimensional (3D) affine transformation. We also apply an image in-painting method to fill the empty holes caused by round-off errors in the coordinate transformation. Experiments show visually improved results.
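
A rough sketch of the described pipeline under stated assumptions: each pixel is re-projected with a 3D affine transform (using a per-pixel depth value), and the holes left by coordinate rounding are filled with OpenCV's in-painting. The transform (A, t) and the depth map are placeholders, not the paper's calibration.

```python
# Hypothetical sketch: 3D-affine gaze correction followed by hole filling.
import cv2
import numpy as np

def gaze_correct(img, depth, A, t):
    """img: HxWx3 uint8; depth: HxW; A: 3x3; t: 3-vector."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    hole = np.full((h, w), 255, np.uint8)        # 255 = unwritten pixel
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, depth], -1).reshape(-1, 3).astype(np.float32)
    warped = pts @ A.T + t                       # 3D affine: p' = A p + t
    u = np.round(warped[:, 0]).astype(int)       # rounding is what creates
    v = np.round(warped[:, 1]).astype(int)       # the holes to in-paint
    ok = (0 <= u) & (u < w) & (0 <= v) & (v < h)
    out[v[ok], u[ok]] = img[ys.ravel()[ok], xs.ravel()[ok]]
    hole[v[ok], u[ok]] = 0
    return cv2.inpaint(out, hole, 3, cv2.INPAINT_TELEA)
```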

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.10
    • /
    • pp.11-19
    • /
    • 2016
  • Displays are becoming larger and more varied in form, which previous gaze tracking methods do not handle; placing the gaze tracking camera above the display avoids the problems caused by display size and height. However, this setup cannot use the infrared corneal reflection information that previous methods rely on. In this paper, we propose a pupil detection method that is robust to eye occlusion, and a simple method for calculating the gaze position on the display from the inner eye corner, the pupil center, and face pose information. In the proposed method, the camera capturing frames for gaze tracking switches between wide-angle and narrow-angle modes according to the person's position: if a face is detected in the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode after calculating the face position. The frame captured in narrow-angle mode contains the gaze direction information of a person at a long distance. Calculating the gaze direction consists of a face pose estimation step and a gaze direction calculation step. Face pose is estimated by mapping the feature points of the detected face onto a 3D model. To calculate the gaze direction, we first fit an ellipse to edge segments of the iris boundary; if the pupil is occluded, we estimate its position with a deformable template. Then, using the pupil center, the inner eye corner, and the face pose information, we calculate the gaze position on the display. In experiments over varying distances, the proposed gaze tracking algorithm relaxed the constraints imposed by display form and effectively calculated the gaze direction of a person at a long distance using a single camera.
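
One step of this pipeline, fitting an ellipse to iris edge information, might look like the OpenCV sketch below; the thresholds and contour-selection rule are assumptions, and the paper's edge-splitting scheme and deformable template for occluded pupils are not reproduced here.

```python
# Hypothetical sketch: ellipse fit to iris/pupil boundary edges.
import cv2

def fit_pupil_ellipse(eye_gray):
    """Return ((cx, cy), (axis1, axis2), angle) or None if fitting fails."""
    blur = cv2.GaussianBlur(eye_gray, (5, 5), 0)
    edges = cv2.Canny(blur, 40, 80)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    best = None
    for c in contours:
        if len(c) < 20:            # fitEllipse needs >= 5 points; longer
            continue               # contours give a more stable fit
        ellipse = cv2.fitEllipse(c)
        area = ellipse[1][0] * ellipse[1][1]
        if best is None or area > best[1][0] * best[1][1]:
            best = ellipse         # keep the largest candidate ellipse
    return best
```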

Analysis of User's Eye Gaze Distribution while Interacting with a Robotic Character (로봇 캐릭터와의 상호작용에서 사용자의 시선 배분 분석)

  • Jang, Seyun;Cho, Hye-Kyung
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.1
    • /
    • pp.74-79
    • /
    • 2019
  • In this paper, we develop a virtual experimental environment to investigate users' eye gaze in human-robot social interaction and verify its potential for further studies. The system consists of a 3D robot character capable of hosting simple interactions with a user, and a gaze processing module that records which body part of the robot character, such as the eyes, mouth, or arms, the user is looking at, regardless of whether the robot is stationary or moving. To verify that the results acquired in this virtual environment align with those of physically existing robots, we performed robot-guided quiz sessions with 120 participants and compared the participants' gaze patterns with those in previous works. The results were as follows. First, when interacting with the robot character, the users' gaze patterns showed statistics similar to those of conversations between humans. Second, the robot character's animated mouth received longer attention than the stationary one. Third, nonverbal interactions such as leakage cues were also effective with the robot character, and the correct answer ratios of the cued groups were higher. Finally, gender differences in the users' gaze were observed, especially in the frequency of mutual gaze.
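
A gaze processing module of the kind described reduces to per-AOI bookkeeping over a timestamped gaze log. Below is a minimal sketch assuming rectangular AOIs and 2D gaze samples; in the actual system the AOI regions would be updated every frame so that they follow the moving character.

```python
# Hypothetical sketch: accumulate dwell time per body-part AOI.
from collections import defaultdict

# AOI name -> (x, y, width, height); placeholder screen-space rectangles.
AOIS = {"eyes": (100, 80, 60, 30), "mouth": (110, 130, 40, 20),
        "arms": (60, 180, 140, 60)}

def dwell_times(samples):
    """samples: list of (timestamp_sec, gaze_x, gaze_y), in time order."""
    totals = defaultdict(float)
    for (t0, x, y), (t1, _, _) in zip(samples, samples[1:]):
        for name, (rx, ry, rw, rh) in AOIS.items():
            if rx <= x < rx + rw and ry <= y < ry + rh:
                totals[name] += t1 - t0   # charge the interval to this AOI
                break
    return dict(totals)
```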

Pilot Gaze Tracking and ILS Landing Result Analysis using VR HMD based Flight Simulators (VR HMD 시뮬레이터를 활용한 조종사 시선 추적 및 착륙 절차 결과 분석)

  • Jeong, Gu Moon;Lee, Youngjae;Kwag, TaeHo;Lee, Jae-Woo
    • Journal of the Korean Society for Aviation and Aeronautics
    • /
    • v.30 no.1
    • /
    • pp.44-49
    • /
    • 2022
  • In this study, pilots holding a commercial pilot license performed precision instrument landing procedures in VR HMD flight simulators. Assuming that the center of the pilot's gaze lies straight ahead, gaze tracking was performed by combining the 3-DOF head tracking data and the 2D eye tracking of the VR HMD worn by the pilot. Areas of interest (AOIs) were then defined over the cockpit instrument panel and the external field of view to analyze how the pilot's gaze was distributed before and after the decision altitude. At the same time, the landing results were analyzed using the localizer and G/S data as the pilot's precision instrument landing flight data. As a result, the pilots were quantitatively evaluated with a VR HMD simulator by relating the gaze tracking data to the resulting landing outcomes.
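
A sketch of the gaze reconstruction the study outlines, under its stated "gaze centered in front" assumption: the 2D eye tracking output is treated as a small angular offset added to the 3-DOF head orientation, and the resulting ray is tested against an AOI. The angle conventions and the sphere-based AOI test are illustrative assumptions.

```python
# Hypothetical sketch: HMD head pose + eye offsets -> gaze ray -> AOI test.
import numpy as np

def yaw_pitch_to_dir(yaw, pitch):
    """Unit direction from yaw/pitch in radians (x right, y up, z forward)."""
    return np.array([np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch),
                     np.cos(pitch) * np.cos(yaw)])

def gaze_dir(head_yaw, head_pitch, eye_dx, eye_dy):
    # Eye tracker output interpreted as angular offsets from head forward.
    return yaw_pitch_to_dir(head_yaw + eye_dx, head_pitch + eye_dy)

def hits_aoi(origin, direction, center, radius):
    """Does the gaze ray pass within `radius` of the AOI center?"""
    t = max(np.dot(center - origin, direction), 0.0)   # closest approach
    return np.linalg.norm(origin + t * direction - center) <= radius
```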

Wearable Robot System Enabling Gaze Tracking and 3D Position Acquisition for Assisting a Disabled Person with Disabled Limbs (시선위치 추적기법 및 3차원 위치정보 획득이 가능한 사지장애인 보조용 웨어러블 로봇 시스템)

  • Seo, Hyoung Kyu;Kim, Jun Cheol;Jung, Jin Hyung;Kim, Dong Hwan
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.37 no.10
    • /
    • pp.1219-1227
    • /
    • 2013
  • A new type of wearable robot is developed for a person with disabled limbs, that is, a person who cannot intentionally move his or her legs and arms. This robot enables the disabled person to grip an object using eye movements. A gaze tracking algorithm detects the pupil movements with which the person looks at the object to be gripped. Using this 2D gaze information, the object is identified and the distance to it is measured with a Kinect device installed on the robot's shoulder. Through several coordinate transformations and a matching scheme, the final 3D position of the object in the base frame is identified, and this position data is transmitted to the DSP-controlled robot controller, which enables the target object to be gripped successfully.
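
The coordinate transformations mentioned amount to chaining homogeneous transforms so that a point measured by the Kinect can be expressed in the robot's base frame. A minimal sketch with purely illustrative frame poses:

```python
# Hypothetical sketch: Kinect measurement -> robot base frame.
import numpy as np

def make_T(R, t):
    """4x4 homogeneous transform from rotation R (3x3) and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed kinematic chain: base -> shoulder -> Kinect (placeholder values).
T_base_shoulder = make_T(np.eye(3), [0.0, 0.15, 0.45])
T_shoulder_kinect = make_T(np.eye(3), [0.05, 0.0, 0.10])

p_kinect = np.array([0.2, -0.1, 0.8, 1.0])   # object seen by Kinect (m)
p_base = T_base_shoulder @ T_shoulder_kinect @ p_kinect
print(p_base[:3])   # position handed to the DSP-controlled robot controller
```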

Gaze Detection System by Wide and Narrow View Camera (광각 및 협각 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Kang-Ryoung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.12C
    • /
    • pp.1239-1249
    • /
    • 2003
  • Gaze detection uses computer vision to locate the position on a monitor screen where a user is looking. Previous gaze detection systems use a wide-view camera that can capture the user's whole face. However, the image resolution of such a camera is too low to accurately detect the fine movements of the user's eyes. We therefore implement a gaze detection system with both a wide-view camera and a narrow-view camera. To follow the position of the user's eye as the face moves, the narrow-view camera provides auto focusing and auto pan/tilt based on the detected 3D facial feature positions. Experimental results show that we can obtain the facial and eye gaze position on a monitor with an RMS error between the computed and actual positions of about 3.1 cm when facial movements are permitted and 3.57 cm when both facial and eye movements are permitted. The processing time is short enough for a real-time system (below 30 ms on a Pentium IV 1.8 GHz).
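
The auto pan/tilt step reduces to pointing the narrow-view camera at the detected 3D eye position. A small sketch under an assumed frame convention (x right, y up, z forward, angles relative to the pan/tilt home pose):

```python
# Hypothetical sketch: pan/tilt angles toward a 3D eye position given in
# the pan/tilt unit's own coordinate frame.
import numpy as np

def pan_tilt_to(eye_xyz):
    x, y, z = eye_xyz
    pan = np.degrees(np.arctan2(x, z))                 # rotate left/right
    tilt = np.degrees(np.arctan2(y, np.hypot(x, z)))   # rotate up/down
    return pan, tilt

print(pan_tilt_to([0.10, 0.05, 0.60]))   # eye 60 cm ahead, slightly offset
```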

Extraction of gaze point on display based on EOG for general paralysis patient (전신마비 환자를 위한 EOG 기반 디스플레이 상의 응시 좌표 산출)

  • Lee, D.H.;Yu, J.H.;Kim, D.H.
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.5 no.1
    • /
    • pp.87-93
    • /
    • 2011
  • This paper proposes a method for extracting the gaze point on a display using the EOG (electrooculography) signal. Based on the linear property of the EOG signal, the proposed method corrects for the scaling, rotation, and origin differences between the EOG coordinate system and the display coordinate system, without adjustment using head movement. The performance of the proposed method was evaluated by measuring the difference between the extracted gaze point and a circle displayed on a monitor with 1680×1050 resolution. Experimental results show that the average distance errors at the gaze points are 3% (56 pixels) on the x-axis and 4% (47 pixels) on the y-axis. This method can be used as a pointing-device human-computer interface for general paralysis patients, or as an HCI for VR game applications.
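
Correcting scaling, rotation, and origin differences between two planar coordinate systems is an affine fit, solvable by least squares from a few calibration fixations. A sketch with placeholder calibration points (the paper's actual calibration procedure may differ):

```python
# Hypothetical sketch: least-squares affine map from EOG coordinates to
# display pixels, absorbing scale, rotation, and origin in one matrix.
import numpy as np

def fit_affine(eog_pts, screen_pts):
    """Solve screen ~= [eog, 1] @ M for M (3x2) in the least-squares sense."""
    X = np.hstack([eog_pts, np.ones((len(eog_pts), 1))])
    M, *_ = np.linalg.lstsq(X, screen_pts, rcond=None)
    return M

def to_screen(M, eog_xy):
    return np.append(eog_xy, 1.0) @ M

# Example: the user fixates known targets during calibration (values assumed).
eog = np.array([[0.1, 0.2], [0.8, 0.2], [0.8, 0.9], [0.1, 0.9]])
scr = np.array([[100, 100], [1580, 100], [1580, 950], [100, 950]])
M = fit_affine(eog, scr)
print(to_screen(M, [0.45, 0.55]))   # estimated gaze point in pixels
```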

Gaze Direction Estimation Method Using Support Vector Machines (SVMs) (Support Vector Machines을 이용한 시선 방향 추정방법)

  • Liu, Jing;Woo, Kyung-Haeng;Choi, Won-Ho
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.4
    • /
    • pp.379-384
    • /
    • 2009
  • A method for detecting and tracking human gaze is in strong demand for HMI (Human-Machine Interface) applications such as human-serving robots. This paper proposes a novel three-dimensional (3D) human gaze estimation method that combines face recognition, orientation estimation, and SVMs (Support Vector Machines). 2,400 images were used, covering a pan orientation range of -90° to 90° and a tilt range of -40° to 70° at 10° intervals. A stereo camera was used to obtain the global coordinates of the center point between the eyes, and Gabor filter banks of horizontal and vertical orientation at 4 scales were used to extract facial features. Experimental results show that the error rate of the proposed method is much lower than that of Liddell's method.
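
In the spirit of the abstract, the sketch below classifies a face image into a (pan, tilt) orientation bin with an SVM over mean responses of horizontal and vertical Gabor filters at 4 scales. The filter parameters, image size, and labels are assumptions, not the paper's settings.

```python
# Hypothetical sketch: Gabor-feature SVM for head/gaze orientation bins.
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(gray):
    """Mean response of horizontal/vertical Gabor filters at 4 scales."""
    feats = []
    for lam in (4, 8, 16, 32):             # 4 scales (wavelengths)
        for theta in (0, np.pi / 2):       # two orthogonal orientations
            k = cv2.getGaborKernel((21, 21), 4.0, theta, lam, 0.5)
            feats.append(cv2.filter2D(gray, cv2.CV_32F, k).mean())
    return np.array(feats)

# Placeholder data: each label indexes a bin in the 19 x 12 grid implied by
# the 10-degree pan/tilt steps over the ranges given in the abstract.
X = np.stack([gabor_features(np.random.rand(64, 64).astype(np.float32))
              for _ in range(50)])
y = np.random.randint(0, 19 * 12, 50)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:1]))
```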