• Title/Summary/Keyword: Eye Camera


Center Position Tracking Enhancement of Eyes and Iris on the Facial Image

  • Chai Duck-hyun;Ryu Kwang-ryol
    • Journal of information and communication convergence engineering
    • /
    • v.3 no.2
    • /
    • pp.110-113
    • /
    • 2005
  • An enhancement of center-position tracking for the eyes and iris in facial images is presented. A facial image is acquired with a CCD camera and converted into a binary image. The eye region, characterized by a specified brightness and shape, is located with the FRM method using five neighboring mask areas, and the iris within the eye is tracked with the FPDP method. Experimental results show that the proposed methods improve center-position tracking compared with the pixel-average-coordinate-values method.
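
The pixel-average-coordinate-values baseline mentioned above can be sketched as follows. This is a minimal illustration on a synthetic image; the threshold value and the toy blob are assumptions, and the paper's FRM/FPDP methods themselves are not reproduced here.

```python
import numpy as np

def binarize(gray, threshold=60):
    """Convert a grayscale image to a binary mask (eye regions assumed dark)."""
    return (gray < threshold).astype(np.uint8)

def centroid(mask):
    """Baseline estimate: average the coordinates of the foreground pixels."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

# Toy 'facial image' with one dark blob standing in for an eye region
img = np.full((10, 10), 200, dtype=np.uint8)
img[3:6, 4:7] = 30
cx, cy = centroid(binarize(img))  # centre of the dark blob
```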

Thermographic Assessment in Dry Eye Syndrome, Compared with Normal Eyes by Using Thermography (열화상카메라를 이용한 정상안과 건성안의 서모그래피 비교)

  • Park, Chang Won;Lee, Ok Jin;Lee, Seung Won
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.20 no.2
    • /
    • pp.247-253
    • /
    • 2015
  • Purpose: The purpose of this study was to compare and analyze the ocular surface and palpebral conjunctiva of subjects divided into a normal eye group and a dry eye group, using a thermal camera. Methods: Subjects were 144 eyes of 72 university students with no corneal disease, lacrimal duct abnormality, history of ocular surgery, or contact lens wear. Subjects were divided into a normal eye group and a dry eye group based on the results of the TBUT, Schirmer I, and McMonnies tests. The temperature of the subjects' ocular surface and palpebral conjunctiva was then measured and analyzed with a thermal camera (Cox CX series, Answer Co., Korea). Results: In the normal eye group, the rates of temperature change measured at Central Ar.1, Nasal Ar.2, Temporal Ar.3, Superior Ar.4, and Inferior Ar.5 were -0.13±0.08, -0.14±0.08, -0.12±0.08, -0.14±0.08, and -0.10±0.09 °C/sec, respectively. The dry eye group's results were -0.17±0.08, -0.16±0.07, -0.16±0.08, -0.17±0.09, and -0.15±0.08 °C/sec. Compared with the normal eye group, the values at Ar.1, Ar.3, and Ar.5 were significantly different in the dry eye group (p<0.05). The temperatures measured by thermography on the palpebral conjunctiva (Ar.1: central, Ar.2: nasal, Ar.3: temporal) of the normal eyes were 34.36±1.12, 34.17±1.10, and 34.07±1.12 °C, respectively. The corresponding values for the dry eye group were 33.55±0.94, 33.43±0.97, and 33.51±1.06 °C. The Ar.1 value of the dry eye group differed significantly from that of the normal eye group (p=0.05). Conclusion: The temperature of the ocular surface decreased faster in the dry eyes than in the normal eyes. The temperatures measured on the palpebral conjunctiva of the dry eyes were also lower than those of the normal eyes. Ocular-surface temperature changes observed with a thermal camera are objective values for assessing tear-film stability and may provide useful data for studies of dry eye syndrome.
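
The per-area figures above are rates of temperature change in °C/sec. Such a rate can be estimated as the least-squares slope of a temperature-versus-time series; the sketch below uses synthetic data (the sampling interval and values are assumptions, chosen near the normal-eye central-area figure).

```python
import numpy as np

def cooling_rate(times_s, temps_c):
    """Least-squares slope of ocular-surface temperature vs. time (deg C / sec)."""
    slope, _intercept = np.polyfit(times_s, temps_c, 1)
    return slope

t = np.arange(0.0, 10.0, 1.0)        # one sample per second
temps = 34.4 - 0.13 * t              # synthetic linear cooling
rate = cooling_rate(t, temps)        # recovered slope in deg C / sec
```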

Image Data Loss Minimized Geometric Correction for Asymmetric Distortion Fish-eye Lens (비대칭 왜곡 어안렌즈를 위한 영상 손실 최소화 왜곡 보정 기법)

  • Cho, Young-Ju;Kim, Sung-Hee;Park, Ji-Young;Son, Jin-Woo;Lee, Joong-Ryoul;Kim, Myoung-Hee
    • Journal of the Korea Society for Simulation
    • /
    • v.19 no.1
    • /
    • pp.23-31
    • /
    • 2010
  • Because a fish-eye lens provides a super-wide field of view, over 180 degrees, with a minimal number of cameras, many vehicles are being fitted with such camera systems. To use the camera not only as a viewing system but also as a sensor, camera calibration must be performed first, and geometric correction of the radial distortion is needed to provide images for driver assistance. In this paper, we introduce a geometric correction technique that minimizes image data loss for a vehicle fish-eye lens with a field of view over 180° and asymmetric distortion. Geometric correction is a process in which a camera model with a distortion model is established and a corrected view is generated after the camera parameters are computed through calibration. First, the FOV model, which imitates the asymmetric distortion, is used as the distortion model. Because the horizontal view of the vehicle fish-eye lens is asymmetrically wide for the driver, we unify the axis ratio and estimate the parameters with a non-linear optimization algorithm. Finally, we create a corrected view by backward mapping and provide a function to optimize the ratio of the horizontal and vertical axes. This minimizes image data loss and improves visual perception when the input image is undistorted through perspective projection.
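
The FOV distortion model referred to above is commonly written (in the Devernay–Faugeras form) as r_d = (1/ω)·arctan(2·r_u·tan(ω/2)). Below is a minimal sketch of the forward and inverse radial mappings used in backward warping; the ω value is an assumption for illustration, and the paper's asymmetric extension is not reproduced.

```python
import math

def fov_distort(r_u, omega):
    """FOV model: undistorted (perspective) radius -> distorted image radius."""
    return math.atan(2.0 * r_u * math.tan(omega / 2.0)) / omega

def fov_undistort(r_d, omega):
    """Inverse mapping, as used when backward-mapping the corrected view."""
    return math.tan(r_d * omega) / (2.0 * math.tan(omega / 2.0))

omega = math.radians(160)  # assumed field-of-view parameter of the lens
r = 0.3
round_trip = fov_undistort(fov_distort(r, omega), omega)  # should recover r
```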

A Head-Eye Calibration Technique Using Image Rectification (영상 교정을 이용한 헤드-아이 보정 기법)

  • Kim, Nak-Hyun;Kim, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.37 no.8
    • /
    • pp.11-23
    • /
    • 2000
  • Head-eye calibration is the process of estimating the unknown orientation and position of a camera with respect to a mobile platform, such as a robot wrist. We present a new head-eye calibration technique that can be applied to platforms with rather limited motion capability. In particular, the proposed technique can find the relative orientation of a camera mounted on a linear translation platform that has no rotation capability. The algorithm finds the rotation from calibration data obtained by pure translation of the camera along two different axes. We derive the calibration algorithm by exploiting the rectification technique so that the rectified images satisfy the epipolar constraint. We present the calibration procedure for both the rotation and translation components of the camera relative to the platform coordinates. The efficacy of the algorithm is demonstrated through simulations and real experiments.
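
As a generic illustration of the idea of recovering rotation from pure translations (not the paper's rectification-based derivation): if the directions of two platform translation axes are observed in camera coordinates, the camera-to-platform rotation can be estimated by a least-squares alignment such as the Kabsch algorithm. All names and values below are assumptions; the cross product of the two measured axes supplies a third correspondence.

```python
import numpy as np

def rotation_from_axes(platform_axes, camera_dirs):
    """Kabsch: proper rotation R with camera_dirs[i] ~= R @ platform_axes[i]."""
    H = camera_dirs.T @ platform_axes
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Two unit translation axes of the platform, plus their cross product
a1, a2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
platform_axes = np.vstack([a1, a2, np.cross(a1, a2)])

# Ground-truth camera orientation (rotation about z, then about x)
cz, sz = np.cos(0.3), np.sin(0.3)
cx, sx = np.cos(0.2), np.sin(0.2)
Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
R_true = Rx @ Rz

camera_dirs = (R_true @ platform_axes.T).T    # axes as seen by the camera
R_est = rotation_from_axes(platform_axes, camera_dirs)
```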


Development of a Fall Detection System Using Fish-eye Lens Camera (어안 렌즈 카메라 영상을 이용한 기절동작 인식)

  • So, In-Mi;Han, Dae-Kyung;Kang, Sun-Kyung;Kim, Young-Un;Jong, Sung-tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.4
    • /
    • pp.97-103
    • /
    • 2008
  • This study presents a method for recognizing fainting motions from fish-eye lens images in order to detect emergency situations. A camera with a fish-eye lens mounted at the center of the living-room ceiling supplies images, from which foreground pixels are extracted by adaptive background modeling based on a Gaussian mixture model; the outer points of the foreground region are then traced and mapped to an ellipse. During the elliptical tracking, the fish-eye images are converted to perspective images, and changes in size and location and the moving speed are extracted to judge whether the movement, pause, and motion resemble a fainting motion. The results show that extracting the size and location changes and the moving speed from the converted perspective images yields better recognition rates than using the fish-eye images directly.
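
The elliptical-mapping step can be illustrated by fitting an ellipse to the foreground pixels via second-order central moments; the elongation of the fitted ellipse is one cue for posture. This is a sketch on synthetic masks, not the paper's exact procedure, and the blob shapes are assumptions.

```python
import numpy as np

def ellipse_elongation(mask):
    """Fit an ellipse to foreground pixels via second-order central moments
    and return (centre, major/minor axis ratio)."""
    ys, xs = np.nonzero(mask)
    mx, my = xs.mean(), ys.mean()
    cov = np.cov(np.vstack([xs - mx, ys - my]))
    evals, _ = np.linalg.eigh(cov)              # ascending eigenvalues
    minor, major = np.sqrt(evals)
    return (mx, my), major / minor

# Compact blob (person seen from directly overhead) vs. elongated blob (lying)
compact = np.zeros((40, 40), dtype=np.uint8); compact[18:22, 18:22] = 1
lying = np.zeros((40, 40), dtype=np.uint8); lying[18:22, 5:35] = 1
_, r_compact = ellipse_elongation(compact)
_, r_lying = ellipse_elongation(lying)
```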


Experimental Studies on Eye Injury Risks by Different BB Pellet Materials (BB Pellet 재질에 따른 안구 손상 위험성에 관한 실험적 연구)

  • Kim, Hyung-Suk;Park, Dal-Jae
    • Journal of the Korean Society of Safety
    • /
    • v.27 no.2
    • /
    • pp.20-24
    • /
    • 2012
  • Experimental studies were performed to investigate eye injury risks for different BB pellet materials. Four materials were used: plastic (P), silicon (S), rubber (R), and plastic covered with silicon (SR). Images of BB pellets penetrating a gelatine simulant were recorded with a high-speed video camera. The results for the different pellet materials are discussed in terms of impact velocity and penetration depth; threshold velocity and projectile sectional density; and eye injury risks based on normalized energies. The P pellets showed the highest impact velocity and the SR pellets the lowest. The penetration depth and threshold velocity of the pellets depended on the impact velocity, and the P pellets posed the highest eye injury risk while the SR pellets posed the lowest.

Saccadic Movement as a Performance Measure of Vigilance Task (경계작업 척도로서의 안구운동 특성)

  • Lee, Myeon-U;Lee, Gwan-Haeng;Jo, Yeong-Jin
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.8 no.1
    • /
    • pp.13-21
    • /
    • 1982
  • Experiments on eye movement behavior were performed using a Vidicon eye camera. A 3×3 factorial design was used to evaluate the validity of eye movement as a performance measure in a vigilance task. Eye movement data were recorded on video tape, converted to digital signals, and reduced to quantitative fixation and saccadic movement data by a microcomputer. For comparison with existing vigilance performance measures, response time and the number of false alarms were also recorded. The results showed that saccadic movement is a good measure of performance in a vigilance task: 1. Both response time and saccadic movement increased significantly during the initial two time blocks. 2. High correlations were found between response time and saccadic movement. 3. Locational uncertainty affected saccadic movement, the number of fixations, and response time, but not the duration of eye fixations.
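
A common way to separate saccades from fixations in digitized gaze records (not necessarily the authors' reduction procedure) is a simple velocity threshold: samples whose angular velocity exceeds the threshold are saccades, and runs of slow samples are fixations. The sampling interval and threshold below are assumptions.

```python
def detect_saccades(gaze_deg, dt=0.01, vel_threshold=30.0):
    """Flag inter-sample movements whose angular velocity (deg/sec) exceeds
    the threshold; unflagged stretches correspond to fixations."""
    return [abs(b - a) / dt > vel_threshold
            for a, b in zip(gaze_deg, gaze_deg[1:])]

# One fixation, a 5-degree saccade, then another fixation
flags = detect_saccades([0.0, 0.0, 0.0, 5.0, 5.0, 5.0])
```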


Elimination of the Red-Eye Area using Skin Color Information

  • Kim, Kwang-Baek;Song, Doo-Heon
    • Journal of information and communication convergence engineering
    • /
    • v.7 no.2
    • /
    • pp.131-134
    • /
    • 2009
  • The red-eye effect in photography occurs when a flash is used very close to the camera lens in low ambient light, often due to inexperience. Once it occurs, the photographer must remove it with an image-editing tool, which is a time-consuming process requiring skill. In this paper, we propose a new method to extract and remove red-eye areas automatically. Our method transforms RGB space to YCbCr and HSI space and extracts the face area using skin color information. The red-eye area is then extracted by applying an 8-direction contour tracking algorithm and removed. Experiments show the effectiveness of our method.
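
The RGB-to-YCbCr step can be sketched with the standard BT.601 conversion; red-eye pixels stand out in the Cr (red-difference) channel. The Cr threshold below is an assumption for illustration, not the paper's value.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def looks_red_eye(r, g, b, cr_threshold=160.0):
    """Simple redness test on the Cr channel (threshold is an assumption)."""
    _, _, cr = rgb_to_ycbcr(r, g, b)
    return cr > cr_threshold

flag_red = looks_red_eye(220, 40, 60)     # saturated red pupil pixel
flag_skin = looks_red_eye(200, 160, 140)  # typical skin-tone pixel
```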

Implementation of eye-controlled mouse by real-time tracking of the three dimensional eye-gazing point (3차원 시선 추적에 의한 시각 제어 마우스 구현 연구)

  • Kim Jae-Han
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2006.05a
    • /
    • pp.209-212
    • /
    • 2006
  • This paper presents the design and implementation of an eye-controlled mouse using real-time tracking of the three-dimensional gazing point. The proposed method is based on three-dimensional processing of eye images in 3D world coordinates. The system hardware consists of two conventional CCD cameras for stereoscopic image acquisition and a computer for processing. The advantages of the proposed algorithm and test results are described.
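
With two calibrated cameras, a 3-D point seen in both images can be recovered by standard linear (DLT) triangulation; the abstract does not give the paper's exact formulation, so the projection matrices and point below are assumptions for a minimal sketch.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3-D point from two camera projections."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null-space vector, homogeneous point
    return X[:3] / X[3]

# Two assumed pinhole cameras with a 0.1 m horizontal baseline
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 1.0])

# Project the point into each camera, then triangulate it back
p1 = P1 @ np.append(X_true, 1.0); x1 = p1[:2] / p1[2]
p2 = P2 @ np.append(X_true, 1.0); x2 = p2[:2] / p2[2]
X = triangulate(P1, P2, x1, x2)
```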


Neural Network Based Camera Calibration and 2-D Range Finding (신경회로망을 이용한 카메라 교정과 2차원 거리 측정에 관한 연구)

  • 정우태;고국원;조형석
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1994.10a
    • /
    • pp.510-514
    • /
    • 1994
  • This paper deals with an application of neural networks to camera calibration with a wide-angle lens and 2-D range finding. A wide-angle lens has the advantage of a wide viewing angle for mobile environment recognition and robot eye-in-hand systems, but it suffers from severe radial distortion. A multilayer neural network is used to calibrate the camera, taking lens distortion into account, and is trained by the error back-propagation method. The MLP can map between the camera image plane and the plane made by the structured light. In the experiments, the camera was calibrated with a calibration chart printed on a laser printer at 300 d.p.i. resolution. A high-distortion lens, a COSMICAR 4.2 mm, was used to test whether the neural network could effectively calibrate the camera distortion. The 2-D range of several objects was measured with a range finding system composed of a camera, a frame grabber, and laser structured light. The performance of the range finding system was evaluated through experiments and analysis of the results.
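
The idea of training a network by error back-propagation to absorb lens distortion can be sketched with a one-hidden-layer MLP fitted to a synthetic radial-distortion curve. The distortion function, network size, and learning rate below are all assumptions; the paper's network maps full 2-D image coordinates to the structured-light plane.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: distorted image radius -> true plane radius
r_img = np.linspace(0.0, 1.0, 64).reshape(-1, 1)
r_world = np.tan(0.8 * r_img) / np.tan(0.8)     # assumed radial distortion

# One-hidden-layer MLP trained by plain error back-propagation
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, y0 = forward(r_img)
loss_before = float(np.mean((y0 - r_world) ** 2))

lr = 0.1
for _ in range(2000):
    h, y = forward(r_img)
    err = (y - r_world) / len(r_img)            # output-layer error signal
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = err @ W2.T * (1.0 - h ** 2)            # back-propagate through tanh
    gW1 = r_img.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, y1 = forward(r_img)
loss_after = float(np.mean((y1 - r_world) ** 2))
```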
