• Title/Abstract/Keywords: pupil detection.

Concealed information test using ERPs and pupillary responses (ERP와 동공 반응을 이용한 숨긴정보검사)

  • Eom, Jin-Sup; Park, Kwang-Bai; Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.15 no.2 / pp.259-268 / 2012
  • In a P300-based concealed information test (P300 CIT), the result of the test is greatly affected by the value of the probe stimulus; with a probe stimulus of low value, the detection rate decreases. The aim of this study was to determine whether a pupil-based concealed information test (Pupil CIT) could be used in addition to the P300 CIT for probes of low value. Participants were told to choose one card from a deck of five cards (spade 2, 3, 4, 5, 6), and then a P300 CIT and a Pupil CIT for the selected card were administered. P300s were measured at three scalp sites (Fz, Cz, and Pz), and the pupil sizes of the left and right eyes were recorded. The P300 amplitude measured at Fz, Cz, and Pz differed significantly between the probe and irrelevant stimuli, and in the Pupil CIT the pupil size also differed between the two stimuli for both eyes. The detection rates of the P300 CIT were 44% at the Fz and Cz sites and 36% at the Pz site, while the detection rates of the Pupil CIT were 52% for the left eye and 60% for the right eye. There was a trend for the detection rate of the Pupil CIT to be higher than that of the P300 CIT, but the difference did not reach significance, partly because of the relatively small sample size. The correlation between the decision based on the P300 CIT and that based on the Pupil CIT was not significant. In conclusion, a Pupil CIT is recommended instead of a P300 CIT when the value of the probe is low, and a combination of the two measures may yield a higher detection rate than either one alone.
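A pupil-based CIT decision ultimately rests on whether responses to the probe reliably exceed responses to irrelevant items. The sketch below shows one generic way to formalize that comparison with a bootstrap criterion; the function, the 0.9 cutoff, and the per-trial input format are assumptions for illustration, not the decision rule used in this study.

```python
# Hedged sketch: flag "information present" when the probe's mean pupil
# response reliably exceeds the irrelevant baseline under resampling.
import numpy as np

def pupil_cit_decision(probe_trials, irrelevant_trials,
                       n_boot=10000, criterion=0.9, seed=0):
    """probe_trials, irrelevant_trials: 1-D arrays of per-trial pupil sizes (assumed units)."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_boot):
        p = rng.choice(probe_trials, size=len(probe_trials), replace=True).mean()
        q = rng.choice(irrelevant_trials, size=len(irrelevant_trials), replace=True).mean()
        wins += p > q
    return wins / n_boot >= criterion   # True: probe response reliably larger
```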

Detection of Pupil Center using Projection Function and Hough Transform (프로젝션 함수와 허프 변환을 이용한 눈동자 중심점 찾기)

  • Choi, Yeon-Seok; Mun, Won-Ho; Kim, Cheol-Ki; Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.10a / pp.167-170 / 2010
  • In this paper, we propose a novel algorithm to detect the center of the pupil in a frontal-view face image. The algorithm first extracts an eye region from the face image using the integral projection function and the variance projection function. Within the eye region, the pupil center is detected using a circular Hough transform with a Sobel edge mask. The experimental results show good performance in detecting the pupil center in FERET face images.
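A rough sketch of the two-stage pipeline the abstract describes, using OpenCV: projection functions to localize a coarse eye band, then a circular Hough transform (whose gradient method internally uses Sobel derivatives) to find the pupil center. The band heuristic and all Hough parameters are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def eye_band_by_projection(gray):
    """Coarse eye band from the integral (mean) and variance projection functions."""
    ipf = gray.mean(axis=1)          # integral projection along each row
    vpf = gray.var(axis=1)           # variance projection along each row
    row = int(np.argmax(vpf - ipf))  # eyes: dark (low mean) yet textured (high variance)
    h = gray.shape[0]
    top, bottom = max(0, row - h // 10), min(h, row + h // 10)
    return gray[top:bottom, :], top

def pupil_center(gray):
    band, top = eye_band_by_projection(gray)
    # HOUGH_GRADIENT builds its accumulator from Sobel-based gradients,
    # approximating "circular Hough transform with Sobel edge mask".
    circles = cv2.HoughCircles(band, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=100, param2=15, minRadius=3, maxRadius=25)
    if circles is None:
        return None
    x, y, r = circles[0][0]
    return int(x), int(y) + top      # map back to full-image coordinates
```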

Design and Implementation of Eye-Gaze Estimation Algorithm based on Extraction of Eye Contour and Pupil Region (눈 윤곽선과 눈동자 영역 추출 기반 시선 추정 알고리즘의 설계 및 구현)

  • Yum, Hyosub; Hong, Min; Choi, Yoo-Joo
    • The Journal of Korean Association of Computer Education / v.17 no.2 / pp.107-113 / 2014
  • In this study, we design and implement an eye-gaze estimation system based on the extraction of the eye contour and pupil region. In order to extract the eye contour and pupil region effectively, the face candidate regions were extracted first. For face detection, a YCbCr value range for typical Asian face color was defined through a pre-study of Asian face images. The biggest skin-color region was defined as the face candidate region, and the eye regions were extracted by applying contour and color feature analysis to the upper 50% of the face candidate region. The detected eye region was divided into three segments and the pupil pixels in each segment were counted. The eye gaze was classified into one of three directions, namely left, center, or right, according to the number of pupil pixels in the three segments. In experiments using 5,616 images of 20 test subjects, the eye gaze was estimated with about 91 percent accuracy.
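A minimal sketch of the color-based face step and the segment-counting gaze step described in the abstract. The YCrCb skin range and the fixed pupil threshold are assumptions; the paper defines its own color range from a pre-study of Asian face images.

```python
import cv2
import numpy as np

def skin_mask(bgr, lo=(0, 133, 77), hi=(255, 173, 127)):
    """Face-candidate mask from an assumed YCrCb skin-color range."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, np.array(lo, np.uint8), np.array(hi, np.uint8))

def gaze_direction(eye_region_bgr, pupil_thresh=60):
    """Split the eye region into three horizontal segments and report the
    direction of the segment with the most dark (pupil-like) pixels."""
    gray = cv2.cvtColor(eye_region_bgr, cv2.COLOR_BGR2GRAY)
    _, pupil_mask = cv2.threshold(gray, pupil_thresh, 255, cv2.THRESH_BINARY_INV)
    segments = np.array_split(pupil_mask, 3, axis=1)   # left / center / right
    counts = [int(np.count_nonzero(s)) for s in segments]
    return ("left", "center", "right")[int(np.argmax(counts))]
```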

Visual Modeling and Content-based Processing for Video Data Storage and Delivery

  • Hwang Jae-Jeong; Cho Sang-Gyu
    • Journal of information and communication convergence engineering / v.3 no.1 / pp.56-61 / 2005
  • In this paper, we present a video rate control scheme for storage and delivery in which time-varying viewing interest is controlled by human gaze. To track the gaze, the pupil's movement is detected using a three-step process: detecting the face region, the eye region, and the pupil point. To control bit rates, the quantization parameter (QP) is changed by considering the static parameters, the video object priority derived from pupil tracking, the target PSNR, and the weighted distortion value of the coder. As a result, we achieved a human-interfaced visual model and a corresponding region-of-interest rate control system.
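The rate-control idea can be illustrated with a toy per-macroblock QP update: blocks belonging to the gazed-at object (high priority from pupil tracking) get a lower QP, and a simple PSNR feedback term nudges quality toward the target. The offsets, gain, and clamping range are assumptions, not the coder parameters used in the paper.

```python
def adjust_qp(base_qp, roi_priority, measured_psnr, target_psnr,
              roi_offset=4, feedback_gain=0.5, qp_min=10, qp_max=51):
    """roi_priority in [0, 1]: 1.0 for the gazed-at object, 0.0 for background."""
    qp = base_qp - roi_offset * roi_priority             # spend more bits where the viewer looks
    qp += feedback_gain * (measured_psnr - target_psnr)  # relax QP when quality already exceeds the target
    return int(min(qp_max, max(qp_min, round(qp))))
```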

A STUDY ON PUPIL DETECTION AND TRACKING METHODS BASED ON IMAGE DATA ANALYSIS

  • CHOI, HANA; GIM, MINJUNG; YOON, SANGWON
    • Journal of the Korean Society for Industrial and Applied Mathematics / v.25 no.4 / pp.327-336 / 2021
  • In this paper, we introduce image processing methods for remote pupillary light reflex measurement using video taken with an ordinary smartphone camera, without a special device such as an infrared camera. We propose an algorithm for estimating the size of the pupil as it changes with light, using image data analysis without a learning process. In addition, we present the results of visualizing the change in pupil size after removing noise from the per-frame measurements recorded from the video. We expect that this study will contribute to the construction of an objective indicator for remote pupillary light reflex measurement, now that non-face-to-face communication has become common due to COVID-19 and the demand for remote diagnosis is increasing.
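One hedged way to realize the per-frame measurement and noise removal described above: threshold each frame to isolate dark pupil pixels, record the pixel count as a size proxy, and median-filter the series to suppress blinks and spurious detections. The threshold and filter window are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def pupil_sizes(video_path, thresh=40):
    """Per-frame pupil area (in pixels) from a plain smartphone video."""
    cap = cv2.VideoCapture(video_path)
    sizes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
        sizes.append(int(np.count_nonzero(mask)))
    cap.release()
    return np.array(sizes, dtype=float)

def denoise(sizes, window=9):
    """Sliding median filter to suppress blink and detection spikes."""
    pad = window // 2
    padded = np.pad(sizes, pad, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(sizes))])
```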

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju; Kim, Jin-Suh; Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.21 no.10 / pp.11-19 / 2016
  • Previous gaze-tracking methods do not apply well when the display is large and comes in various forms; placing the gaze-tracking camera above the display can solve the problem of display size or height, but this setup cannot use the infrared corneal-reflection information that previous methods rely on. In this paper, we propose a pupil detection method that is robust to eye occlusion, and a method for simply calculating the gaze position from the inner eye corner, the pupil center, and face pose information. In the proposed method, frames for gaze tracking are captured by switching the camera between wide-angle and narrow-angle modes according to the position of the person. When a face is detected in the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode after calculating the face position; the frames captured in narrow-angle mode contain the gaze direction information of a person at a long distance. Calculating the gaze direction consists of a face pose estimation step and a gaze direction calculation step. Face pose is estimated by mapping the feature points of the detected face to a 3D model. To calculate the gaze direction, an ellipse is first fitted to the iris edge information of the pupil; if the pupil is occluded, its position is estimated with a deformable template. The gaze position on the display is then calculated from the pupil center, the inner eye corner, and the face pose information. In the experiments, the proposed gaze-tracking algorithm removes the constraints imposed by the form of the display and effectively calculates the gaze direction of a person at a long distance using a single camera, as demonstrated at varying distances.
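The occlusion-tolerant pupil step can be sketched as follows: fit an ellipse to iris edge points when enough of them survive, and otherwise fall back to a prediction (here simplified to reusing the previous center). The Canny thresholds and the fallback are assumptions and stand in for the paper's deformable template.

```python
import cv2
import numpy as np

def detect_pupil_center(eye_gray, prev_center=None):
    """Ellipse fit on iris edges; fall back to the previous center under occlusion."""
    edges = cv2.Canny(eye_gray, 50, 150)
    ys, xs = np.nonzero(edges)
    points = np.column_stack([xs, ys]).astype(np.float32)
    if len(points) >= 5:                       # cv2.fitEllipse needs at least 5 points
        (cx, cy), _axes, _angle = cv2.fitEllipse(points)
        return (cx, cy)
    return prev_center                         # occluded: reuse/predict instead of fitting
```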

Webcam-Based 2D Eye Gaze Estimation System By Means of Binary Deformable Eyeball Templates

  • Kim, Jin-Woo
    • Journal of information and communication convergence engineering / v.8 no.5 / pp.575-580 / 2010
  • Eye gaze as a form of input was primarily developed for users who are unable to use usual interaction devices such as the keyboard and the mouse; however, with increasing accuracy and decreasing cost of eye gaze detection, it is likely to become a practical interaction method for able-bodied users in the near future as well. This paper explores a low-cost, robust, rotation- and illumination-independent eye gaze system for gaze-enhanced user interfaces. We introduce two new algorithms for fast, sub-pixel-precise pupil center detection and 2D eye gaze estimation by means of a deformable template matching methodology. We propose a deformable angular integral search algorithm based on minimum intensity (DAISMI) to localize the eyeball (iris outer boundary) in grayscale eye region images; it finds the center of the pupil, which is then used by our second proposed algorithm for 2D eye gaze tracking. First, we detect the eye regions by means of Intel OpenCV AdaBoost Haar cascade classifiers and assign the approximate size of the eyeball depending on the eye region size. Second, the pupil center is detected using the DAISMI algorithm. Then, using the percentage of black pixels over the eyeball circle area, we convert the image into binary (black and white) for use in the next stage, the DTBGE (Deformable Template Based 2D Gaze Estimation) algorithm. Finally, DTBGE is initialized with the pupil center coordinates, refines them, and estimates the final gaze direction and eyeball size. We performed extensive experiments, achieved very encouraging results, and discuss the effectiveness of the proposed method through several experimental results.
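The phrase "deformable angular integral search by minimum intensity" suggests integrating image intensity along rays around candidate centers and keeping the darkest total. The brute-force sketch below follows that reading only; the radius, number of angles, and exhaustive search are my assumptions, not the published DAISMI implementation.

```python
import numpy as np

def daismi_like_center(eye_gray, radius=10, n_angles=16):
    """Naive angular-integral search: the candidate whose rays sum to the
    lowest intensity is taken as the pupil center (brute force, for clarity)."""
    h, w = eye_gray.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    best, best_score = None, np.inf
    for cy in range(radius, h - radius):
        for cx in range(radius, w - radius):
            score = 0.0
            for a in angles:                   # integrate intensity along each ray
                for r in range(1, radius):
                    score += eye_gray[int(cy + r * np.sin(a)), int(cx + r * np.cos(a))]
            if score < best_score:
                best, best_score = (cx, cy), score
    return best
```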

The Difference in Pupil Size Responding to Cognitive Load and Emotional Arousal Questions between Guilty and Innocent Groups (유죄 및 무죄 집단 간 인지적 부하 및 정서적 각성 질문에 따른 동공크기의 변화의 차이)

  • Cho, Ara; Kim, Kiho; Lee, Jang-Han
    • Korean Journal of Forensic Psychology / v.11 no.2 / pp.155-171 / 2020
  • The purpose of this study is to examine the effects of emotional arousal and cognitive load on pupil diameter during a lie detection interview. The guilty group (n = 30) committed a mock crime (i.e., stealing cash) and the innocent group (n = 30) performed a mission (i.e., sending a message) in the research assistant's office. Afterwards, their pupil size was measured using a wearable eye-tracker during the interview. The interview consisted of three cognitive load, three emotional arousal, and three neutral questions. The results indicate that the main effects of group and time were not significant, but the interaction between group and time was significant: when answering cognitive load questions, the guilty group showed a larger increase in pupil diameter than the innocent group. The present study suggests that inducing cognitive load is more effective than inducing emotional arousal during an interview when using pupil diameter as an index of deception, and it is expected to improve the accuracy of lie detection.
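As a simplified stand-in for the group-by-time analysis summarized above, one can compare per-participant pupil-diameter changes on cognitive-load questions between the two groups; the Welch t-test below is an illustrative assumption, not the study's statistical model.

```python
import numpy as np
from scipy import stats

def compare_groups(guilty_changes, innocent_changes):
    """Each argument: 1-D array of per-participant mean pupil-diameter changes
    (baseline-corrected, assumed units) for the cognitive-load questions."""
    t, p = stats.ttest_ind(np.asarray(guilty_changes),
                           np.asarray(innocent_changes), equal_var=False)
    return t, p   # a larger guilty-group increase would show up as t > 0 with small p
```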

Real-Time Multiple Face Detection Using Active Illumination (능동적 조명을 이용한 실시간 복합 얼굴 검출)

  • 한준희; 심재창; 설증보; 나상동; 배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.05a / pp.155-160 / 2003
  • This paper presents a multiple face detector based on a robust pupil detection technique. The pupil detector uses active illumination that exploits the retro-reflectivity property of eyes to facilitate detection. The detection range of this method is appropriate for interactive desktop and kiosk applications. Once the locations of the pupil candidates are computed, the candidates are filtered and grouped into pairs that correspond to faces using heuristic rules. To demonstrate the robustness of the face detection technique, a dual-mode face tracker was developed, which is initialized with the most salient detected face. Recursive estimators are used to guarantee the stability of the process and to combine the measurements from the multi-face detector and a feature correlation tracker. The estimated position of the face is used to control a pan-tilt servo mechanism in real time, moving the camera to keep the tracked face centered in the image.
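The active-illumination trick relies on the retro-reflectivity of the eye: an on-axis infrared frame shows bright pupils while an off-axis frame does not, so subtracting the two leaves strong blobs at the pupils, which can then be paired into faces with geometric heuristics. The sketch below assumes 8-bit grayscale frames, and its thresholds and pairing rules are illustrative, not the paper's exact rules.

```python
import cv2
import numpy as np

def pupil_candidates(bright_frame, dark_frame, min_area=4, max_area=200):
    """Pupil blob centroids from the bright-pupil / dark-pupil difference image."""
    diff = cv2.subtract(bright_frame, dark_frame)        # retro-reflection stands out
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    n, _labels, stats_, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if min_area <= stats_[i, cv2.CC_STAT_AREA] <= max_area]

def pair_into_faces(candidates, min_dx=20, max_dx=120, max_dy=15):
    """Heuristic pairing: roughly horizontal and a plausible eye distance apart."""
    pairs = []
    for i, (x1, y1) in enumerate(candidates):
        for x2, y2 in candidates[i + 1:]:
            if min_dx <= abs(x1 - x2) <= max_dx and abs(y1 - y2) <= max_dy:
                pairs.append(((x1, y1), (x2, y2)))
    return pairs
```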

Development of Tracking Equipment for Real-Time Multiple Face Detection (실시간 복합 얼굴 검출을 위한 추적 장치 개발)

  • 나상동; 송선희; 나하선; 김천석; 배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.8 / pp.1823-1830 / 2003
  • This paper presents a multiple face detector based on a robust pupil detection technique. The pupil detector uses active illumination that exploits the retro-reflectivity property of eyes to facilitate detection. The detection range of this method is appropriate for interactive desktop and kiosk applications. Once the locations of the pupil candidates are computed, the candidates are filtered and grouped into pairs that correspond to faces using heuristic rules. To demonstrate the robustness of the face detection technique, a dual-mode face tracker was developed, which is initialized with the most salient detected face. Recursive estimators are used to guarantee the stability of the process and to combine the measurements from the multi-face detector and a feature correlation tracker. The estimated position of the face is used to control a pan-tilt servo mechanism in real time, moving the camera to keep the tracked face centered in the image.
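The tracking loop implied by the abstract can be sketched with a simple recursive (exponential) estimator that fuses the multi-face detector and the correlation tracker, plus a proportional pan-tilt command toward the image center; the smoothing factor, fusion weight, and gain are assumptions, not the recursive estimators actually used.

```python
class FaceTracker:
    """Toy recursive estimator plus proportional pan-tilt command."""

    def __init__(self, alpha=0.4, gain=0.05):
        self.alpha, self.gain = alpha, gain
        self.x = self.y = None                 # current estimate of the face center

    def update(self, detector_xy, correlation_xy, w=0.5):
        # Fuse the multi-face detector and the feature-correlation tracker.
        mx = w * detector_xy[0] + (1 - w) * correlation_xy[0]
        my = w * detector_xy[1] + (1 - w) * correlation_xy[1]
        if self.x is None:
            self.x, self.y = mx, my
        else:                                  # recursive exponential smoothing
            self.x += self.alpha * (mx - self.x)
            self.y += self.alpha * (my - self.y)
        return self.x, self.y

    def pan_tilt_command(self, frame_w, frame_h):
        # Proportional command toward the image center (pixel error -> assumed degrees).
        return (self.gain * (self.x - frame_w / 2),
                self.gain * (self.y - frame_h / 2))
```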