• Title/Summary/Keyword: eye camera

Search Results: 309

Geometric Correction of Vehicle Fish-eye Lens Images (차량용 어안렌즈영상의 기하학적 왜곡 보정)

  • Kim, Sung-Hee; Cho, Young-Ju; Son, Jin-Woo; Lee, Joong-Ryoul; Kim, Myoung-Hee
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.601-605 / 2009
  • Because a fish-eye lens provides a super-wide field of view of more than 180 degrees with a minimum number of cameras, many vehicles are being equipped with such camera systems. Camera calibration must be performed first, and geometric correction of the radial distortion is needed before the images can be used for driver assistance. However, vehicle fish-eye cameras produce diagonal (full-frame) images rather than circular images and show asymmetric distortion beyond the horizontal angle of view. In this paper, we introduce a camera model and a metric calibration method for vehicle cameras that uses feature points of the image, and we undistort the input image through a perspective projection in which straight lines appear straight. The method was fitted to vehicle fish-eye lenses with different fields of view.
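The abstract does not spell out the correction itself, so the following is only a generic fish-eye undistortion sketch using OpenCV's fisheye module under assumed intrinsics and distortion coefficients, not the authors' calibration model:

```python
import cv2
import numpy as np

# Hypothetical intrinsics and distortion coefficients; real values would come
# from a prior fisheye calibration step, not from this listing.
K = np.array([[420.0, 0.0, 640.0],
              [0.0, 420.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0])  # equidistant-model coefficients k1..k4

img = cv2.imread("fisheye_frame.png")  # placeholder file name
h, w = img.shape[:2]

# Build per-pixel remap tables for a rectilinear (perspective) output view,
# then remap so that straight lines in the scene appear straight.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("undistorted.png", undistorted)
```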


Compensation for Fast Head Movements on Non-intrusive Eye Gaze Tracking System Using Kalman Filter (Kalman 필터를 이용한 비접촉식 응시점 추정 시스템에서의 빠른 머리 이동의 보정)

  • Kim, Soo-Chan; Yoo, Jae-Ha; Nam, Ki-Chang; Kim, Deok-Won
    • Proceedings of the KIEE Conference / 2005.05a / pp.33-35 / 2005
  • We propose an eye gaze tracking system that works under natural head movements. The system consists of one CCD camera and two front-surface mirrors. The mirrors rotate to follow head movements so that the eye stays within the view of the camera. However, the mirror controller cannot keep up with fast head movements because the frame rate is only about 30 Hz. To overcome this problem, we apply a Kalman predictor to estimate the next eye position from the current eye image. As a result, our system allows the subject's head to move 50 cm horizontally and 40 cm vertically, at speeds of about 10 cm/s and 6 cm/s, respectively. The spatial gaze resolution is about 4.5 degrees in each direction, and the gaze estimation accuracy is 92% under natural head movements.
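The abstract does not give the filter details, so here is only a minimal constant-velocity Kalman predictor sketch for one image axis, with assumed noise parameters rather than the authors' values:

```python
import numpy as np

class EyeKalmanPredictor:
    """Constant-velocity Kalman filter for one image axis of the eye center.
    dt, q (process noise) and r (measurement noise) are assumed values."""

    def __init__(self, dt=1.0 / 30.0, q=1e-2, r=1.0):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # [position, velocity] model
        self.H = np.array([[1.0, 0.0]])             # only position is measured
        self.Q = q * np.eye(2)
        self.R = np.array([[r]])
        self.x = np.zeros((2, 1))
        self.P = np.eye(2)

    def step(self, z):
        """Update with measured position z, then return the one-step-ahead
        prediction that a mirror controller could act on before the next frame."""
        x_pred = self.F @ self.x
        P_pred = self.F @ self.P @ self.F.T + self.Q
        y = np.array([[z]]) - self.H @ x_pred                  # innovation
        S = self.H @ P_pred @ self.H.T + self.R
        K = P_pred @ self.H.T @ np.linalg.inv(S)               # Kalman gain
        self.x = x_pred + K @ y
        self.P = (np.eye(2) - K @ self.H) @ P_pred
        return float((self.F @ self.x)[0, 0])

kf = EyeKalmanPredictor()
for u in [120.0, 123.5, 127.0, 131.0]:       # measured eye x-coordinates (pixels)
    print(kf.step(u))
```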


A Study on an Infrared Illumination Stabilization Method in a Head Mounted Eye Tracking System for Sport Applications (착용형 시선 추적 장치의 스포츠 분야 적용을 위한 적외선 조명 변화 최소화에 관한 연구)

  • Lee, Sang-Cheol
    • Journal of Institute of Control, Robotics and Systems / v.15 no.3 / pp.265-272 / 2009
  • In this paper, a simple optical method that uses an infrared (IR) cut filter is proposed to minimize variation of the eye image caused by external IR sources in a video-based head-mounted eye tracking system used in the field of sports. For this, the IR cut filter is attached to the head mount of the eye tracking system, and the camera with an IR LED is located between the IR cut filter and the eye. In this structure, external IR is blocked by the IR cut filter, and the IR intensity on the eye can be controlled by the IR LED. Therefore, the illumination condition of the camera capturing the eye remains stable without being affected by external IR illumination. To verify the proposed idea, the variation of the eye image and the IR intensity with and without the IR cut filter are measured under various illumination conditions. The measured data show that the IR cut filter blocks external IR effectively, so that complex pupil detection algorithms can be replaced by a simple binarization method.
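The closing claim, that stable IR illumination lets a simple binarization replace complex pupil detection, can be illustrated with a generic threshold-and-centroid sketch; the file name and threshold value are assumptions, not taken from the paper:

```python
import cv2
import numpy as np

eye = cv2.imread("eye_ir.png", cv2.IMREAD_GRAYSCALE)  # placeholder IR eye image

# With stable IR lighting the pupil is reliably the darkest region, so a fixed
# threshold (assumed value 40) can isolate it; opening removes small specks.
_, mask = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# Take the largest dark blob as the pupil and report its centroid.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] > 0:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"pupil center: ({cx:.1f}, {cy:.1f})")
```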

Eye Contact System Using Depth Fusion for Immersive Videoconferencing (실감형 화상 회의를 위해 깊이정보 혼합을 사용한 시선 맞춤 시스템)

  • Jang, Woo-Seok; Lee, Mi Suk; Ho, Yo-Sung
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.7 / pp.93-99 / 2015
  • In this paper, we propose a gaze correction method for realistic video teleconferencing. Typically, the cameras used in teleconferencing are installed at the side of the display monitor rather than at its center. This arrangement makes it difficult for users to make eye contact, yet eye contact is essential for immersive videoconferencing. In the proposed method, we use a stereo camera and a depth camera to correct the gaze. The depth camera is a Kinect camera, which is relatively inexpensive and estimates depth information efficiently. However, the Kinect camera has some inherent disadvantages, so we fuse it with the stereo camera to compensate for them. Then, to obtain the gaze-corrected image, view synthesis is performed by 3D warping according to the depth information. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.
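The abstract mentions 3D warping to a virtual viewpoint but not its mechanics; the following per-pixel reprojection sketch only illustrates the general idea, with made-up intrinsics and a made-up virtual-camera offset:

```python
import numpy as np

def warp_to_virtual_view(u, v, depth, K, t_virtual):
    """Reproject one pixel (u, v) with metric depth into 3D, shift it into a
    virtual camera translated by t_virtual (meters), and project it back."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Back-project to a 3D point in the real camera frame.
    X = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])
    # Express the point in the virtual (gaze-corrected) camera frame.
    Xv = X - t_virtual
    # Project into the virtual image plane.
    return fx * Xv[0] / Xv[2] + cx, fy * Xv[1] / Xv[2] + cy

K = np.array([[525.0, 0, 320.0], [0, 525.0, 240.0], [0, 0, 1.0]])  # assumed intrinsics
t = np.array([0.0, -0.15, 0.0])  # camera ~15 cm below the screen center (assumed)
print(warp_to_virtual_view(300.0, 250.0, 1.2, K, t))
```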

Head tracking system using image processing (영상처리를 이용한 머리의 움직임 추적 시스템)

  • 박경수; 임창주; 반영환; 장필식
    • Journal of the Ergonomics Society of Korea / v.16 no.3 / pp.1-10 / 1997
  • This paper is concerned with the development and evaluation of a camera calibration method for a real-time head tracking system. Tracking of head movements is important in the design of eye-controlled human/computer interfaces and in virtual environments. We propose a video-based head tracking system. A camera is mounted on the subject's head and captures the front view containing eight 3-dimensional reference points (passive retro-reflective markers) fixed at known positions on a computer monitor. The reference points are captured by an image processing board and used to calculate the 3-dimensional position and orientation of the camera. A camera calibration method for providing accurate extrinsic camera parameters is proposed. The method has three steps. In the first step, the image center is calibrated using the method of varying focal length. In the second step, the focal length and the scale factor are calibrated from the Direct Linear Transformation (DLT) matrix obtained from the known position and orientation of the camera. In the third step, the position and orientation of the camera are calculated from the DLT matrix using the calibrated intrinsic camera parameters. Experimental results showed that the average error of the 3-dimensional camera position is about 0.53°, the angular errors of the camera orientation are less than 0.55°, and the data acquisition rate is about 10 Hz. The results of this study can be applied to tracking head movements for eye-controlled human/computer interfaces and virtual environments.
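The Direct Linear Transformation (DLT) step referred to above can be illustrated with a generic least-squares estimate of the 3x4 projection matrix from 3D-2D correspondences; the marker coordinates below are invented for the example, and this is not the paper's full three-step calibration procedure:

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix P with x ~ P X via SVD (>= 6 points)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)           # null-space vector = flattened P

# Hypothetical reference markers (e.g., points around a monitor, in cm) and pixels.
world = [(0, 0, 0), (30, 0, 0), (30, 20, 0), (0, 20, 0), (0, 0, 5), (30, 20, 5)]
image = [(102, 98), (411, 101), (408, 305), (99, 300), (110, 90), (400, 298)]
P = dlt_projection_matrix(world, image)
print(P)
```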


Real-Time Eye Tracking Using IR Stereo Camera for Indoor and Outdoor Environments

  • Lim, Sungsoo; Lee, Daeho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.8 / pp.3965-3983 / 2017
  • We propose a novel eye tracking method that can estimate 3D world coordinates using an infrared (IR) stereo camera in indoor and outdoor environments. The method first detects dark evidence such as eyes, eyebrows, and mouths by fast multi-level thresholding. Among these, eye-pair evidence is detected by evidential reasoning and geometrical rules. For robust accuracy, two classifiers based on multilayer perceptrons (MLPs) using gradient local binary patterns (GLBPs) verify whether the detected evidence corresponds to real eye pairs. Finally, the 3D world coordinates of the detected eyes are calculated by region-based stereo matching. Compared with other eye detection methods, the proposed method can detect the eyes of people wearing sunglasses thanks to the use of the IR spectrum. In particular, when people are in dark environments such as driving at night, driving in an indoor car park, or passing through a tunnel, human eyes can still be detected robustly because active IR illuminators are used. Experimental results show that the proposed method detects eye pairs with high performance in real time under variable illumination conditions. Therefore, the proposed method can contribute to human-computer interaction (HCI) and intelligent transportation system (ITS) applications such as gaze tracking, windshield head-up displays, and drowsiness detection.
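The last step in this entry, turning matched eye pixels into 3D coordinates, is standard pinhole stereo triangulation. A minimal sketch with assumed focal length, principal point, and baseline (not the authors' calibration values):

```python
def eye_to_3d(u, v, disparity, fx=700.0, fy=700.0, cx=320.0, cy=240.0, baseline=0.06):
    """Back-project a matched eye pixel (u, v) with the given disparity (pixels)
    into left-camera 3D coordinates (meters) using the pinhole stereo relation."""
    z = fx * baseline / disparity        # depth: Z = f * B / d
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# e.g., an eye detected at pixel (350, 230) with a 21-pixel disparity (~2 m away)
print(eye_to_3d(350.0, 230.0, 21.0))
```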

A New Ergonomic Interface System for the Disabled Person (장애인을 위한 새로운 감성 인터페이스 연구)

  • Heo, Hwan; Lee, Ji-Woo; Lee, Won-Oh; Lee, Eui-Chul; Park, Kang-Ryoung
    • Journal of the Ergonomics Society of Korea / v.30 no.1 / pp.229-235 / 2011
  • Objective: To develop a new camera-vision-based ergonomic interface system that helps the handicapped in a home environment. Background: The proposed interface system enables the handicapped to operate consumer electronics. Method: A wearable device that captures the eye image with a near-infrared (NIR) camera and illuminators is proposed for tracking the eye gaze position (Heo et al., 2011). A frontal-viewing camera attached to the wearable device recognizes the consumer electronics to be controlled (Heo et al., 2011). The amount of the user's eye fatigue is measured from the eye blink rate, and when the fatigue exceeds a predetermined level, the proposed system automatically changes the gaze-based interface mode to manual selection. Results: The experimental results showed that the gaze estimation error of the proposed method was 1.98 degrees, with successful recognition of the object by the frontal-viewing camera (Heo et al., 2011). Conclusion: We built a new ergonomic interface system based on gaze tracking and object recognition. Application: The proposed system can be used to help the handicapped in a home environment.

Real-time Eye Contact System Using a Kinect Depth Camera for Realistic Telepresence (Kinect 깊이 카메라를 이용한 실감 원격 영상회의의 시선 맞춤 시스템)

  • Lee, Sang-Beom; Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4C / pp.277-282 / 2012
  • In this paper, we present a real-time eye contact system for realistic telepresence using a Kinect depth camera. In order to generate the eye contact image, we capture a pair of color and depth videos and separate the single foreground user from the background. Since the raw depth data include several types of noise, we apply joint bilateral filtering, followed by a discontinuity-adaptive depth filter on the filtered depth map to reduce the disocclusion area. From the color image and the preprocessed depth map, we construct a user mesh model at the virtual viewpoint. The entire system is implemented with GPU-based parallel programming for real-time processing. Experimental results show that the proposed eye contact system is efficient in realizing eye contact and provides realistic telepresence.
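The joint bilateral filtering mentioned above is a standard way to denoise a depth map while respecting edges in the registered color image. Below is only a naive CPU-bound reference sketch (the paper's system is GPU-parallel); the window size and sigma values are assumptions:

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=4, sigma_s=3.0, sigma_r=12.0):
    """Smooth the noisy depth map, weighting each neighbor by spatial distance
    and by intensity similarity in the grayscale guide (color) image."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    pad_d = np.pad(depth.astype(np.float64), radius, mode="edge")
    pad_g = np.pad(guide.astype(np.float64), radius, mode="edge")
    for y in range(h):
        for x in range(w):
            d_win = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_win = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-((g_win - pad_g[y + radius, x + radius]) ** 2)
                             / (2.0 * sigma_r ** 2))
            weights = spatial * range_w
            out[y, x] = np.sum(weights * d_win) / np.sum(weights)
    return out

# Synthetic example: a noisy depth map guided by a grayscale image of the same size.
depth = (np.random.rand(48, 64) * 1000).astype(np.uint16)
gray = (np.random.rand(48, 64) * 255).astype(np.uint8)
print(joint_bilateral_filter(depth, gray).shape)
```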

A New Landmark-Based Visual Servoing with Stereo Camera for Door Opening

  • Han, Myoung-Soo; Lee, Soon-Geul; Park, Sung-Kee; Kim, Munsang
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2002.10a / pp.100.2-100 / 2002
  • In this paper we propose a new visual servoing method for door opening with a mobile manipulator. We use an eye-to-hand system in which a stereo camera is mounted on the mobile platform, and we adopt a position-based method. Previous methods for door opening mostly used an eye-in-hand system with a mono camera and required predefined knowledge such as the radius and position of the door grip, a constraint mainly caused by the use of a mono camera and a severe obstacle to a general-purpose door-opening algorithm. To overcome this drawback, we use a stereo camera and suggest a new method that detects the door grip and estimates its pose from stereo depth information without predefined knowledge. Al...


A Driver's Condition Warning System using Eye Aspect Ratio (눈 영상비를 이용한 운전자 상태 경고 시스템)

  • Shin, Moon-Chang; Lee, Won-Young
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.15 no.2 / pp.349-356 / 2020
  • This paper introduces the implementation of a driver's condition warning system that uses the eye aspect ratio to prevent car accidents. The proposed system consists of a camera to detect the eyes, a Raspberry Pi that processes the eye information from the camera, and a buzzer and vibrator to warn the driver. In order to detect and recognize the driver's eyes, the histogram of oriented gradients and deep-learning-based face landmark estimation are used. The system first calculates the driver's eye aspect ratio from six coordinates around the eye and then obtains the eye aspect ratio values for the open-eye and closed-eye states. These two values are used to calculate the threshold that determines the eye state. Because the threshold is determined adaptively from the driver's own eye aspect ratio, the system can use an optimal threshold to judge the driver's condition. In addition, the system synthesizes the input image from the gray-scale and LAB model images so that it can operate in low-lighting conditions.
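The eye-aspect-ratio value described in this last entry is commonly computed from the six eye landmarks as the ratio of the two vertical distances to the horizontal distance. A small sketch follows; the landmark values and the fallback threshold are illustrative, whereas the paper derives its threshold adaptively per driver:

```python
import numpy as np

def eye_aspect_ratio(pts):
    """pts: six (x, y) eye landmarks p1..p6, where p1/p4 are the eye corners,
    p2/p3 the upper lid and p6/p5 the lower lid (dlib 68-landmark ordering)."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, dtype=float) for p in pts)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# An open eye gives a noticeably larger ratio than a closed one; a ratio staying
# below a threshold (e.g., the midpoint of the measured open/closed values) for
# several consecutive frames would trigger the buzzer and vibrator.
open_eye = [(0, 5), (3, 2), (7, 2), (10, 5), (7, 8), (3, 8)]
closed_eye = [(0, 5), (3, 4.5), (7, 4.5), (10, 5), (7, 5.5), (3, 5.5)]
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))
```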