• Title/Summary/Keyword: eye-position tracking

Automatic Depth-of-Field Control for Stereoscopic Visualization (입체영상 가시화를 위한 자동 피사계 심도 조절기법)

  • Kang, Dong-Soo;Kim, Yang-Wook;Park, Jun;Shin, Byeong-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.4
    • /
    • pp.502-511
    • /
    • 2009
  • Several studies in computer graphics have simulated the real-world depth-of-field effect. The effect renders an out-of-focus scene by computing the focal plane: when a point in 3D space lies farther from or nearer than the focal plane, it is drawn as a blurred circle on the image plane whose size depends on the characteristics of the aperture and the lens. Simulating this effect produces realistic images because it reproduces the out-of-focus appearance seen by the human eye. In this paper, we propose a method that computes the viewer's disparity value with a customized stereoscopic eye-tracking system, together with a GPU-based depth-of-field control method. Together they generate more realistic images while reducing side effects such as dizziness. Because a stereoscopic imaging system forces users to fix their focal position, viewers usually feel discomfort while watching stereoscopic images. The proposed method reduces this side effect of stereoscopic display systems and generates more immersive images.

  • PDF
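  • The abstract above models the blur of points nearer or farther than the focal plane as a circle whose diameter depends on the aperture and lens. A minimal sketch of the standard thin-lens circle-of-confusion calculation such a method typically rests on is shown below (Python); the focal length, f-number, and distances are illustrative assumptions, not values from the paper.

      # Thin-lens circle-of-confusion diameter for a point at distance d (metres),
      # given focal length f, f-number N, and focus distance d_focus.
      def circle_of_confusion(d, d_focus, f=0.05, N=2.8):
          aperture = f / N                                  # effective aperture diameter
          # classic CoC formula: A * |d - d_focus| / d * f / (d_focus - f)
          return aperture * abs(d - d_focus) / d * f / (d_focus - f)

      # points nearer to or farther from the 2 m focal plane blur progressively more
      for depth in (0.5, 1.0, 2.0, 4.0, 8.0):
          print(f"depth {depth:4.1f} m -> CoC {circle_of_confusion(depth, 2.0) * 1000:.2f} mm")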

Eye-Tracking 3D Display System Using Variable Parallax Barrier and DDC/CI (가변형 패럴랙스 배리어와 DDC 통신을 이용한 시점추적형 3D 디스플레이 시스템)

  • Che, Ho-Byoung;Yoo, Young-Rok;Kim, Jin-Soo;Lee, Sang-Hun;Lee, Seung-Hyun
    • Korean Journal of Optics and Photonics
    • /
    • v.20 no.2
    • /
    • pp.102-109
    • /
    • 2009
  • In this paper, we introduce an eye-tracking 3D display system using a variable parallax barrier and DDC/CI communication. A variable parallax barrier composed of 4 sub-barriers and a commercially available web camera are used to implement the eye-tracking system. The viewer's coordinates extracted from the web camera are transferred to the 3D display via DDC/CI communication, and the variable barrier attached to the LCD shifts electrically according to the position of the right eye to present the 3D images. The system is compared experimentally with commercial parallax barrier methods.
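  • The entry above extracts the viewer's coordinates from a web camera and shifts a variable parallax barrier built from 4 sub-barriers accordingly. A rough sketch of quantizing a horizontal eye position into one of four sub-barrier phases follows; the camera geometry and the direct position-to-phase mapping are assumptions for illustration, not the paper's calibration.

      # Map the viewer's horizontal eye position (pixels in the camera image)
      # to one of 4 sub-barrier phases of a variable parallax barrier.
      def select_sub_barrier(eye_x, image_width=640, num_phases=4):
          u = min(max(eye_x / image_width, 0.0), 0.999)     # normalise to [0, 1)
          return int(u * num_phases)                        # phase index 0..3, sent via DDC/CI

      print(select_sub_barrier(eye_x=180))                  # e.g. phase 1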

A Study on the Moving Iris Tracking and the Screen Cursor Controlling (홍채의 이동추적과 화면커서 제어에 관한 연구)

  • Chai, Duck-Hyun;Lee, Seung-Yong;Lee, Young-Woo;Ryu, Kwang-Ryol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.2
    • /
    • pp.332-335
    • /
    • 2005
  • A study on tracking the moving iris and controlling the screen cursor is presented in this paper. The screen cursor is moved according to the center position of the iris within the extent of the eye. The experimental results show that the movements of the iris and of the screen cursor agree for the given screen distance and size, and that the tracking error is reduced to within the optimal tolerance.

  • PDF
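  • The abstract above drives the screen cursor from the center position of the iris within the extent of the eye. A minimal sketch of a linear mapping from the iris-center offset inside the eye region to screen coordinates is shown below; the eye-box representation and screen resolution are illustrative assumptions, not the paper's setup.

      # Linear mapping from the iris centre inside the eye region to a screen cursor position.
      def iris_to_cursor(iris_cx, iris_cy, eye_box, screen_w=1920, screen_h=1080):
          x0, y0, w, h = eye_box                 # bounding box of the eye in the camera image
          u = (iris_cx - x0) / w                 # normalised horizontal iris position, 0..1
          v = (iris_cy - y0) / h                 # normalised vertical iris position, 0..1
          return int(u * screen_w), int(v * screen_h)

      print(iris_to_cursor(332, 248, eye_box=(300, 230, 60, 30)))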

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • Kim, Tae-Woo;Kang, Yong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.3
    • /
    • pp.53-60
    • /
    • 2009
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is its primary technical issue. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to a variety of lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that a facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy of each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among the detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.

  • PDF
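  • The abstract above predicts each facial feature with a Kalman filter under a locally smooth motion assumption and then searches the Gabor space near the predicted location. A minimal constant-velocity Kalman predictor for a single feature point, sketched with NumPy, is given below; the state model, frame rate, and noise levels are assumptions rather than the paper's parameters.

      import numpy as np

      # Constant-velocity Kalman filter for one facial-feature point: state (x, y, vx, vy).
      dt = 1.0 / 30.0                                   # assumed frame interval
      F = np.array([[1, 0, dt, 0],
                    [0, 1, 0, dt],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1]], float)               # state transition
      H = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0]], float)               # only position is observed
      Q = np.eye(4) * 1e-2                              # process noise (assumed)
      R = np.eye(2) * 2.0                               # measurement noise (assumed)
      x = np.array([100.0, 120.0, 0.0, 0.0])            # initial feature state
      P = np.eye(4)

      def step(z):
          """Predict the next feature position, then correct with measurement z."""
          global x, P
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
          x = x_pred + K @ (np.asarray(z) - H @ x_pred)
          P = (np.eye(4) - K @ H) @ P_pred
          return x[:2]                                   # search the Gabor space around this point

      print(step((103.0, 121.5)))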

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • 박호식;정연숙;손동주;나상동;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.603-607
    • /
    • 2004
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is its primary technical issue. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to a variety of lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that a facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy of each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among the detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.

  • PDF

A Study on Real-time Tracking Method of Horizontal Face Position for Optimal 3D T-DMB Content Service (지상파 DMB 단말에서의 3D 컨텐츠 최적 서비스를 위한 경계 정보 기반 실시간 얼굴 수평 위치 추적 방법에 관한 연구)

  • Kang, Seong-Goo;Lee, Sang-Seop;Yi, June-Ho;Kim, Jung-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.6
    • /
    • pp.88-95
    • /
    • 2011
  • An embedded mobile device generally has lower computational power than a general-purpose computer because of its lower system specifications. Consequently, conventional face tracking and face detection methods, which require complex algorithms to achieve high recognition rates, are unsuitable in a mobile environment that aims for real-time detection. On the other hand, applying a real-time tracking and detection algorithm makes a two-way interactive multimedia service between a user and a mobile device possible, providing far better quality of service than a one-way service. It is therefore necessary to develop a real-time face and eye tracking technique optimized for a mobile environment. For this reason, this paper proposes a method of tracking the horizontal face position of a user on a T-DMB device to enhance the quality of 3D DMB content. The proposed method uses the orientation of edges to estimate the left and right boundaries of the face, and the horizontal position and size of the face are finally determined from the color edge information. The Sobel gradient vector is projected vertically to select candidate face boundaries, and a smoothing method and a peak-detection method are proposed for a precise decision. Because general face detection algorithms use multi-scale feature vectors, their detection time is too long in a mobile environment; the proposed single-scale detection method can detect the face faster than conventional face detection methods.
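  • The abstract above projects the Sobel gradient vertically, smooths the resulting profile, and picks peaks to locate the left and right face boundaries. The sketch below approximates that pipeline with a column-wise gradient projection, moving-average smoothing, and peak picking; the gradient operator, window size, and peak rule are simplifications, not the paper's exact method.

      import numpy as np

      def horizontal_face_bounds(gray, smooth=9):
          """Estimate left/right face boundary columns from a grayscale frame (0..255)."""
          gx = np.abs(np.diff(gray.astype(float), axis=1))     # simple horizontal gradient
          profile = gx.sum(axis=0)                             # vertical projection per column
          kernel = np.ones(smooth) / smooth
          profile = np.convolve(profile, kernel, mode="same")  # smoothing step
          half = profile.size // 2
          left = int(np.argmax(profile[:half]))                # strongest edge in the left half
          right = int(np.argmax(profile[half:]) + half)        # strongest edge in the right half
          return left, right

      frame = np.random.randint(0, 256, (240, 320))            # stand-in for a camera frame
      print(horizontal_face_bounds(frame))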

Characteristics of Visual Attention for the Different Type of Material Finishing in Cafe Space Using by Eye-tracking (시선추적을 이용한 카페 공간 마감재 차이의 시각주의력 특성)

  • Choi, Jin-Kyung;Kim, Ju-Yeon
    • Korean Institute of Interior Design Journal
    • /
    • v.27 no.2
    • /
    • pp.3-11
    • /
    • 2018
  • This study investigates whether eye-gaze patterns change intentionally across cafe space images with different floor finishing materials. In Yarbus' experiment, he argued that changing the information an observer is asked to obtain from an image changes the pattern of eye movements. Based on this scan-path evidence, the research questions are: (1) the difference in visual attention across floor finishing material stimuli, (2) visual attention during the initial activity time and the type of movement paths across the AOIs, and (3) the visual relationship between the floor area and the other AOIs. Eye movements were recorded with the SMI REDn Scientific, which sampled eye position at 30Hz, and each recording lasted 2 minutes (120s). Although viewing was binocular, only the right eye was tracked. The 66 observers (mean age 22 years, standard deviation ±1.82) who participated in the experiment completed four-point calibration and validation procedures at the beginning of the tasks. Qualitative data on the number and duration of fixations were analyzed for the AOIs, which were divided into four parts (AOI I-Floor, AOI II-Wall, AOI III-Ceiling, and AOI IV-Counter) in the stimulus. The results were as follows. First, there was a significant difference in the average number of fixations on the AOIs between the spatial image using the wood tile flooring material and the one using the polishing tile. The wood tile flooring stimulus drew more fixations on AOI-II, AOI-III, and AOI-IV than the polishing tile, while AOI-I received more attention in the polishing tile stimulus. Second, observers examined AOI-II intensively in both stimuli; the visual intensity was followed by AOI-IV and AOI-I in the wood tile flooring stimulus, and by AOI-I and AOI-IV in the polishing tile stimulus. Third, the visual attention data for each AOI were divided into "5 sec" time ranges for both images. In the wood tile stimulus, a horizontal movement path followed AOI-II, AOI-IV, and AOI-II; in the polished tile stimulus, the path moved vertically through AOI-II, AOI-I, and AOI-II. Through eye-tracking experiments, this study identified the characteristics of visual attention and showed how the pathways of the visual mechanism appear and are activated according to different intentions of visual attention.
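  • The abstract above counts fixations and dwell durations per AOI from gaze data sampled at 30Hz. A minimal sketch of tallying fixation counts and dwell times per AOI from timestamped fixation records is given below; the record format and toy values are assumptions for illustration, not the study's data.

      from collections import defaultdict

      # (timestamp_s, aoi_label) fixation records; dwell lasts until the next fixation.
      fixations = [(0.10, "AOI-II"), (0.43, "AOI-II"), (1.20, "AOI-I"),
                   (2.05, "AOI-IV"), (2.90, "AOI-II")]          # toy data
      end_of_trial = 3.5

      counts = defaultdict(int)
      durations = defaultdict(float)
      for (t0, aoi), (t1, _) in zip(fixations, fixations[1:] + [(end_of_trial, None)]):
          counts[aoi] += 1
          durations[aoi] += t1 - t0

      for aoi in sorted(counts):
          print(f"{aoi}: {counts[aoi]} fixations, {durations[aoi]:.2f} s dwell")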

A New Ergonomic Interface System for the Disabled Person (장애인을 위한 새로운 감성 인터페이스 연구)

  • Heo, Hwan;Lee, Ji-Woo;Lee, Won-Oh;Lee, Eui-Chul;Park, Kang-Ryoung
    • Journal of the Ergonomics Society of Korea
    • /
    • v.30 no.1
    • /
    • pp.229-235
    • /
    • 2011
  • Objective: To build a new ergonomic interface system based on a camera vision system that helps the handicapped in a home environment. Background: The proposed interface system enables the handicapped to operate consumer electronics. Method: A wearable device that captures the eye image with a near-infrared (NIR) camera and illuminators is proposed for tracking the eye gaze position (Heo et al., 2011). A frontal-viewing camera attached to the wearable device recognizes the consumer electronics to be controlled (Heo et al., 2011). The amount of the user's eye fatigue is measured from the eye blink rate, and when the user's fatigue exceeds a predetermined level, the proposed system automatically changes the gaze-based interface mode to manual selection. Results: The experimental results showed that the gaze estimation error of the proposed method was 1.98 degrees, with successful recognition of the object by the frontal-viewing camera (Heo et al., 2011). Conclusion: We built a new ergonomic interface system based on gaze tracking and object recognition. Application: The proposed system can be used to help the handicapped in a home environment.
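  • The abstract above estimates eye fatigue from the blink rate and switches from gaze-based selection to manual selection once fatigue exceeds a predetermined level. A minimal sketch of such a mode switch over a sliding blink-rate window follows; the window length and threshold are assumptions, not the paper's values.

      # Switch interface mode when the blink rate over a sliding window exceeds
      # an assumed fatigue threshold (blinks per minute).
      def choose_mode(blink_times_s, now_s, window_s=60.0, fatigue_blinks_per_min=25):
          recent = [t for t in blink_times_s if now_s - t <= window_s]
          rate = len(recent) * 60.0 / window_s
          return "manual" if rate > fatigue_blinks_per_min else "gaze"

      print(choose_mode([1, 4, 7, 11, 15, 20, 26, 31, 37, 44, 52, 58], now_s=60.0))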

Dynamic tracking control of robot manipulators using vision system (비전 시스템을 이용한 로봇 머니퓰레이터의 동력학 추적 제어)

  • 한웅기;국태용
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1997.10a
    • /
    • pp.1816-1819
    • /
    • 1997
  • Using a vision system, robotic tasks in unstructured environments can be accomplished, which greatly reduces the cost and setup time needed to fit the robotic system to well-defined and structured working environments. This paper proposes a dynamic control scheme for a robot manipulator with an eye-in-hand camera configuration. To perform tasks defined in the image plane, the camera motion Jacobian (image Jacobian) matrix is used to transform camera motion into the change of the object position. In addition, a dynamic learning controller is designed to improve the tracking performance of the robotic system. The proposed control scheme is implemented for tasks of tracking moving objects and is shown to outperform the conventional visual servo system in convergence and in robustness to parameter uncertainty, disturbances, low sampling rate, etc.

  • PDF
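  • The abstract above uses the camera motion (image) Jacobian to relate camera motion to feature motion in the image plane for an eye-in-hand configuration. A minimal sketch of the classic point-feature interaction matrix with a proportional visual-servo velocity command is given below; the focal length, depth, gain, and feature coordinates are illustrative assumptions, and the paper's dynamic learning controller is not reproduced here.

      import numpy as np

      def image_jacobian(u, v, Z, f=800.0):
          """Interaction matrix of a point feature (u, v) pixels at depth Z, focal length f (pixels)."""
          return np.array([
              [-f / Z,      0, u / Z,      u * v / f, -(f + u * u / f),  v],
              [     0, -f / Z, v / Z,  f + v * v / f,       -u * v / f, -u],
          ])

      # proportional image-based visual servoing: drive the feature toward the image centre
      L = image_jacobian(u=40.0, v=-25.0, Z=1.2)
      error = np.array([40.0, -25.0])                       # current minus desired pixel coordinates
      camera_velocity = -0.5 * np.linalg.pinv(L) @ error    # 6-DOF twist (vx, vy, vz, wx, wy, wz)
      print(camera_velocity)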

Human Spatial Cognition Using Visual and Auditory Stimulation

  • Yu, Mi;Piao, Yong-Jun;Kim, Yong-Yook;Kwon, Tae-Kyu;Hong, Chul-Un;Kim, Nam-Gyun
    • International Journal of Precision Engineering and Manufacturing
    • /
    • v.7 no.2
    • /
    • pp.41-45
    • /
    • 2006
  • This paper deals with human spatial cognition using visual and auditory stimulation. More specifically, the investigation observes the relationship between the head and the eye motor systems in localizing the direction of a visual target in space and examines the role of the right-side versus the left-side pinna. In the visual stimulation experiment, nineteen red LEDs (light-emitting diodes, brightness 210 cd/m²) arrayed in the horizontal plane of the surrounding panel were used, with the LEDs located 10 degrees apart from each other. Physiological parameters such as EOG (electro-oculography), head movement, and their synergic control were measured with a BIOPAC system and a 3SPACE FASTRAK. In the auditory stimulation experiment, the function of one pinna was deliberately distorted by inserting a short tube into the ear canal, and the localization error caused by right- and left-side pinna distortion was investigated. Since a laser pointer showed much less error (0.5%) in localizing the target position than the FASTRAK (30%) that has generally been used, the laser pointer was used for the pointing task. It was found that harmonic components are not essential for auditory target localization; nearby non-harmonic frequency components were found to be more important in localizing the direction of a sound. The right pinna was found to carry out one of the most important functions in localizing target direction, and a pure tone with only one frequency component is difficult to localize. It was also found that the latency time is shorter in self-moved tracking (SMT) than in eye-alone tracking (EAT) and eye-hand tracking (EHT). These results can be used in further studies on the characterization of human spatial cognition.
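  • The abstract above compares pointing errors between devices when localizing targets spaced 10 degrees apart in the horizontal plane. A small sketch of computing the signed angular localization error between a target direction and a pointed direction is given below, with toy values rather than measured data.

      def angular_error_deg(target_azimuth_deg, pointed_azimuth_deg):
          """Smallest signed angular difference between the target and pointed directions."""
          return (pointed_azimuth_deg - target_azimuth_deg + 180.0) % 360.0 - 180.0

      # targets mimic LEDs spaced 10 degrees apart in the horizontal plane
      for target, pointed in [(-30, -28.5), (0, 1.2), (40, 36.0)]:
          print(f"target {target:+4d} deg -> error {angular_error_deg(target, pointed):+5.1f} deg")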