• Title/Summary/Keyword: Real-Time Eye Tracking

Real Time Gaze Discrimination for Human Computer Interaction (휴먼 컴퓨터 인터페이스를 위한 실시간 시선 식별)

  • Park Ho sik;Bae Cheol soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.3C
    • /
    • pp.125-132
    • /
    • 2005
  • This paper describes a computer vision system based on active IR illumination for real-time gaze discrimination. Unlike most existing gaze discrimination techniques, which often assume a static head and require a cumbersome calibration process for each person, our system performs robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to individuals not used in the training. To further improve the gaze estimation accuracy, we employ a reclassification scheme that deals with the classes that tend to be misclassified, which leads to a 10% improvement in classification error. The angular gaze accuracy is about 5° horizontally and 8° vertically. The effectiveness of our gaze tracker is demonstrated by experiments involving a gaze-contingent interactive graphic display.
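The GRNN mapping described in this abstract is, at its core, Nadaraya-Watson kernel regression: each calibration sample votes for its screen coordinate with a Gaussian weight, so the mapping need not be an analytical function. A minimal sketch, assuming toy calibration data and an illustrative kernel width (neither is from the paper):

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.5):
    """Nadaraya-Watson estimate: Gaussian-weighted average of the
    training targets, weighted by distance from the query x."""
    d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
    return w @ Y_train / np.sum(w)                # weighted mean of targets

# Toy calibration set: pupil feature vectors -> screen coordinates.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 600.0], [800.0, 600.0]])

print(grnn_predict(X, Y, np.array([0.5, 0.5])))   # near the screen centre
```

Because every training sample contributes, a GRNN needs no iterative training, which is one reason it suits quick per-session calibration.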

Real Time Gaze Discrimination for Computer Interface (컴퓨터 인터페이스를 위한 실시간 시선 식별)

  • Hwang, Suen-Ki;Kim, Moon-Hwan
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.3 no.1
    • /
    • pp.38-46
    • /
    • 2010
  • This paper describes a computer vision system based on active IR illumination for real-time gaze discrimination. Unlike most existing gaze discrimination techniques, which often assume a static head and require a cumbersome calibration process for each person, our system performs robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to individuals not used in the training. To further improve the gaze estimation accuracy, we employ a reclassification scheme that deals with the classes that tend to be misclassified, which leads to a 10% improvement in classification error. The angular gaze accuracy is about 5° horizontally and 8° vertically. The effectiveness of our gaze tracker is demonstrated by experiments involving a gaze-contingent interactive graphic display.

A Study on Real Time Gaze Discrimination System using GRNN (GRNN을 이용한 실시간 시선 식별 시스템에 관한 연구)

  • Lee Young-Sik;Bae Cheol-Soo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.2
    • /
    • pp.322-329
    • /
    • 2005
  • This paper describes a computer vision system based on active IR illumination for real-time gaze discrimination. Unlike most existing gaze discrimination techniques, which often assume a static head and require a cumbersome calibration process for each person, our system performs robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to individuals not used in the training. To further improve the gaze estimation accuracy, we employ a reclassification scheme that deals with the classes that tend to be misclassified, which leads to a 10% improvement in classification error. The angular gaze accuracy is about 5° horizontally and 8° vertically. The effectiveness of our gaze tracker is demonstrated by experiments involving a gaze-contingent interactive graphic display.

A Study on Manipulating Method of 3D Game in HMD Environment by using Eye Tracking (HMD(Head Mounted Display)에서 시선 추적을 통한 3차원 게임 조작 방법 연구)

  • Park, Kang-Ryoung;Lee, Eui-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.2
    • /
    • pp.49-64
    • /
    • 2008
  • Recently, much research on making more comfortable input devices based on gaze detection technology has been carried out in human-computer interfaces. However, system cost becomes high due to the complicated hardware, and the complicated user calibration procedure makes gaze detection systems difficult to use. In this paper, we propose a new gaze detection method based on 2D analysis and a simple user calibration. Our method uses a small USB (Universal Serial Bus) camera attached to an HMD (Head-Mounted Display), a hot mirror, and an IR (Infra-Red) light illuminator. Because the HMD moves together with the user's head, we can implement a gaze detection system whose performance is not affected by facial movement. In addition, we apply our gaze detection system to a 3D first-person shooting game: the gaze direction of the game character is controlled by our gaze detection method, so the player can target an enemy character and shoot, which increases the immersion and interest of the game. Experimental results showed that the game and the gaze detection system could run at real-time speed on one desktop computer, with a gaze detection accuracy of 0.88 degrees. These results suggest that our gaze detection technology could replace the conventional mouse in 3D first-person shooting games.
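The abstract does not detail its 2D analysis, but a common first step in IR-illuminated gaze systems of this kind is locating the pupil as the centroid of bright pixels in the IR frame. A hypothetical sketch, assuming a synthetic frame and an illustrative threshold (not the authors' actual pipeline):

```python
import numpy as np

def pupil_center(ir_frame, thresh=200):
    """Centroid (x, y) of pixels at or above the threshold,
    a crude stand-in for bright-pupil detection under IR light."""
    ys, xs = np.nonzero(ir_frame >= thresh)
    if len(xs) == 0:
        return None            # no bright region found
    return xs.mean(), ys.mean()

frame = np.zeros((10, 10), dtype=np.uint8)
frame[4:6, 4:6] = 255          # synthetic bright-pupil blob
print(pupil_center(frame))     # centroid of the blob
```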

A Study on Controlling IPTV Interface Based on Tracking of Face and Eye Positions (얼굴 및 눈 위치 추적을 통한 IPTV 화면 인터페이스 제어에 관한 연구)

  • Lee, Won-Oh;Lee, Eui-Chul;Park, Kang-Ryoung;Lee, Hee-Kyung;Park, Min-Sik;Lee, Han-Kyu;Hong, Jin-Woo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.6B
    • /
    • pp.930-939
    • /
    • 2010
  • Recently, much research on making more comfortable input devices based on gaze detection has been vigorously performed in human-computer interaction. However, these previous approaches are difficult to use in an IPTV environment because they need additional wearable devices or do not work at a distance. To overcome these problems, we propose a new way of controlling an IPTV interface by using face and eye positions detected with a single static camera. Moreover, even when the face or eyes are not successfully detected by the Adaboost algorithm, we can still control the IPTV interface by using motion vectors calculated with the pyramidal KLT (Kanade-Lucas-Tomasi) feature tracker. These are the two novelties of our research compared to previous works. This research has the following advantages: unlike previous research, the proposed method can be used at a distance of about 2 m, and since it does not require the user to wear additional equipment, there is no limitation on face movement and it is highly convenient. Experimental results showed that the proposed method could operate at a real-time speed of 15 frames per second. We confirmed that the previous input device could be sufficiently replaced by the proposed method.
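The pyramidal KLT tracker the authors fall back on solves, per feature and per pyramid level, a small least-squares problem on image gradients. A single-window Lucas-Kanade step can be sketched as follows (no pyramid, synthetic ramp image; a real system would use OpenCV's `calcOpticalFlowPyrLK`):

```python
import numpy as np

def lucas_kanade_shift(prev, curr):
    """One Lucas-Kanade step for a single window: least-squares
    motion vector (dx, dy) from spatial and temporal gradients."""
    Ix = np.gradient(prev, axis=1)                    # horizontal gradient
    Iy = np.gradient(prev, axis=0)                    # vertical gradient
    It = curr - prev                                  # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)         # min-norm solution
    return d                                          # (dx, dy)

# Synthetic test: a horizontal ramp shifted one pixel to the right.
# (A pure ramp is degenerate in y -- the aperture problem -- so the
# least-squares solver returns dy = 0.)
prev = np.tile(np.arange(16, dtype=float), (16, 1))
curr = prev - 1.0
print(lucas_kanade_shift(prev, curr))                 # ~ (1, 0)
```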

The complexity of opt-in procedures in mobile shopping: Moderating effects of visual attention using the eyetracker (모바일 쇼핑에서 옵트인의 절차적 복잡성 연구: 아이트래커(eyetracker) 기반 시각적 주의의 조절효과)

  • Kim, Sang-Hu;Kim, Yerang;Yang, Byunghwa
    • Journal of Digital Convergence
    • /
    • v.15 no.8
    • /
    • pp.127-135
    • /
    • 2017
  • Consumers tend to be concerned about the disclosure of personal information and, at the same time, to avoid the inconvenience of procedural complexity caused by privacy protections. The purpose of the current paper is to investigate the relationship between opt-in procedural complexity and shopping behavior on smartphones, moderated by the amount of visual attention measured with an eye tracker. We created a virtual mobile Web site in which the complexity of the opt-in procedures was manipulated and measured. We also measured dwell time in areas of interest using an SMI RED 250 instrument to track real eye movements. Results indicated that the level of procedural complexity is related to repurchase, with a moderating effect of the amount of visual attention. Finally, we discuss several theoretical and practical implications for mobile commerce management.
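Dwell time in an area of interest, as measured here, is essentially the count of gaze samples falling inside the AOI multiplied by the sample period; the SMI RED 250 samples at 250 Hz, so each sample spans 4 ms. A sketch with illustrative coordinates (the study's actual AOIs and data are not shown):

```python
def dwell_time_ms(samples, aoi, rate_hz=250):
    """Total dwell time (ms) of (x, y) gaze samples inside a
    rectangular AOI given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    inside = sum(1 for x, y in samples
                 if x0 <= x <= x1 and y0 <= y <= y1)
    return inside * 1000.0 / rate_hz

# Five samples, three of which land inside the AOI.
gaze = [(100, 100), (105, 102), (400, 300), (103, 99), (900, 700)]
print(dwell_time_ms(gaze, (90, 90, 110, 110)))   # 3 samples * 4 ms
```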

Technical-note : Real-time Evaluation System for Quantitative Dynamic Fitting during Pedaling (단신 : 페달링 시 정량적인 동적 피팅을 위한 실시간 평가 시스템)

  • Lee, Joo-Hack;Kang, Dong-Won;Bae, Jae-Hyuk;Shin, Yoon-Ho;Choi, Jin-Seung;Tack, Gye-Rae
    • Korean Journal of Applied Biomechanics
    • /
    • v.24 no.2
    • /
    • pp.181-187
    • /
    • 2014
  • In this study, a real-time evaluation system for quantitative dynamic fitting during pedaling was developed. The system consists of LED markers, a digital camera connected to a computer, and a marker-detecting program. LED markers are attached to the hip, knee, and ankle joints and the fifth metatarsal in the sagittal plane. The PlayStation 3 Eye, selected as the main digital camera in this study, has many merits for motion capture, such as a high frame rate (about 180 FPS), 320×240 resolution, and low cost with ease of use. The marker-detecting program was built with LabVIEW 2010 and Vision Builder. The program is made up of three parts: image acquisition and processing, marker detection and joint angle calculation, and an output section. The camera image was acquired at 95 FPS, and the program was set up to measure the lower-limb joint angle in real time, display it to the user as a graph, and save it as a text file. The system was verified by pedaling at three saddle heights (knee angle: 25, 35, 45°) and three cadences (30, 60, 90 rpm) at each saddle height, using the Holmes method of measuring lower-limb angles to determine the saddle height. The results showed a low average error and a strong correlation, respectively 1.18±0.44° and 0.99±0.01. There was little error due to changes in saddle height, but absolute error occurred with cadence. Considering that the average error is approximately 1°, this is a suitable system for quantitative dynamic fitting evaluation. In future studies, it will be necessary to decrease the error by using two digital cameras covering the frontal and sagittal planes.
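The joint angle at a marker triple (e.g. hip-knee-ankle for the knee angle) follows from the dot product of the two segment vectors meeting at the middle marker. A minimal sketch with illustrative coordinates (not the authors' LabVIEW code):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by markers a-b-c,
    e.g. hip-knee-ankle in the sagittal plane."""
    u = np.asarray(a, float) - np.asarray(b, float)   # segment b -> a
    v = np.asarray(c, float) - np.asarray(b, float)   # segment b -> c
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Right-angle example: one marker above the joint, one to the side.
print(joint_angle((0, 1), (0, 0), (1, 0)))   # ~ 90 degrees
```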

Wavelet Transform-based Face Detection for Real-time Applications (실시간 응용을 위한 웨이블릿 변환 기반의 얼굴 검출)

  • 송해진;고병철;변혜란
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.9
    • /
    • pp.829-842
    • /
    • 2003
  • In this paper, we propose a new face detection and tracking method based on template matching for real-time applications such as teleconferencing, telecommunication, the front stage of face-recognition surveillance systems, and video-phone applications. Since the main purpose of this paper is to track a face regardless of the environment, we use a template-based face tracking method. To generate robust face templates, we apply a wavelet transform to the average face image and extract three types of wavelet templates from the transformed low-resolution average face. However, since template matching is generally sensitive to changes in illumination, we apply min-max normalization with histogram equalization according to the variation in intensity. A tracking method is also applied to reduce the computation time and predict a precise face candidate region. Finally, facial components are detected, and from the relative distance between the two eyes, we estimate the size of the facial ellipse.
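The illumination compensation mentioned here, min-max normalization and histogram equalization, can be sketched generically as follows (a textbook implementation, not the authors' exact code; the sample image is illustrative):

```python
import numpy as np

def min_max_normalize(img, lo=0.0, hi=255.0):
    """Linearly rescale intensities to [lo, hi] to reduce
    global illumination bias before template matching."""
    img = img.astype(float)
    mn, mx = img.min(), img.max()
    return (img - mn) / (mx - mn) * (hi - lo) + lo

def hist_equalize(img):
    """Map grey levels through the normalized cumulative
    histogram (CDF) of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return np.round(cdf[img] * 255).astype(np.uint8)

# A low-contrast patch: both steps stretch it to the full range.
img = np.array([[50, 60], [70, 80]], dtype=np.uint8)
print(min_max_normalize(img))
print(hist_equalize(img))
```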

Gaze Tracking System Using Feature Points of Pupil and Glints Center (동공과 글린트의 특징점 관계를 이용한 시선 추적 시스템)

  • Park Jin-Woo;Kwon Yong-Moo;Sohn Kwang-Hoon
    • Journal of Broadcast Engineering
    • /
    • v.11 no.1 s.30
    • /
    • pp.80-90
    • /
    • 2006
  • A simple 2D gaze tracking method using a single camera and the Purkinje image is proposed. This method employs a single camera with an infrared filter to capture one eye, and two infrared light sources to make reflection points for estimating the corresponding gaze point on the screen. The camera, the infrared light sources, and the user's head can all move slightly, which makes for a simple and flexible system without any inconvenient fixed equipment or the assumption of a fixed head. The system also includes a simple and accurate personal calibration procedure: before using the system, each user only has to stare at two target points for a few seconds so that the system can initialize the user's individual factors for the estimation algorithm. The proposed system runs in real time at over 10 frames per second with XGA (1024×768) resolution. Test results on nine objects from three subjects show that the system achieves an average estimation error of less than 1 degree.
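The personal calibration described here fits user-specific factors of the mapping from pupil-glint features to screen coordinates. As an illustration of the idea only, here is a first-order (affine) mapping fitted by least squares from four calibration targets; the paper's own model and its two-point procedure may differ:

```python
import numpy as np

def fit_gaze_map(pg_vectors, screen_pts):
    """Fit screen = [vx, vy, 1] @ A per axis by least squares,
    a common first-order pupil-glint-to-screen model."""
    V = np.hstack([pg_vectors, np.ones((len(pg_vectors), 1))])
    A, *_ = np.linalg.lstsq(V, screen_pts, rcond=None)
    return A

def gaze_point(A, v):
    """Map one pupil-glint vector v to a screen coordinate."""
    return np.append(v, 1.0) @ A

# Illustrative calibration: pupil-glint vectors observed while the
# user fixates the four corners of an XGA (1024x768) screen.
V = np.array([[-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0], [1.0, 1.0]])
S = np.array([[0.0, 0.0], [1024.0, 0.0], [0.0, 768.0], [1024.0, 768.0]])
A = fit_gaze_map(V, S)
print(gaze_point(A, np.array([0.0, 0.0])))   # the screen centre
```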

USE OF A COMPUTER NAVIGATION SYSTEM FOR OSTEOTOMIES IN THE ORTHOGNATHIC SURGERY: TECHNICAL NOTE (악교정수술 골절단술시 컴퓨터 네비게이션 시스템의 이용: Technical Note)

  • Kim, Moon-Key;Kang, Sang-Hoon;Choi, Young-Su;Kim, Jung-In;Byun, In-Young;Park, Won-Se;Lee, Sang-Hwy
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.32 no.3
    • /
    • pp.282-288
    • /
    • 2010
  • Surgery with a computer navigation system makes it possible to identify important anatomical structures that are difficult to confirm with the naked eye during an operation, and such systems have extended their applications to various surgical fields. Head and neck surgery in particular requires detailed anatomical knowledge, which influences the postoperative function and esthetics of the patient. In orthognathic surgery, we must make osteotomies at precise locations in the jawbones and move the segments to the intended positions. There are so many important anatomical structures around the osteotomy sites in orthognathic surgery that preventing damage to these structures is essential to obtaining satisfactory results without complications. Vessels of the pterygoid plexus lie posterior to the pterygoid plate in the maxilla, and the mandibular nerve enters the mandibular foramen in the mandibular ramus; these locations should be confirmed perioperatively to avoid any injury to these structures. Navigation-assisted surgery may be helpful for this purpose. We performed navigation-assisted orthognathic surgeries with preoperative CT images and obtained satisfactory results: the osteotomies were performed in the proper locations, and damage to the surrounding important anatomical structures was avoided by keeping the saw away from them with real-time navigation. It remains necessary to develop proper devices and protocols for navigation-assisted orthognathic surgery.