• Title/Summary/Keyword: Eye Image

Search Results: 821

Optimum design of the finite schematic eye using spherical aberration (구면수차를 이용한 정밀모형안의 최적화)

  • 김상기;박성찬
    • Korean Journal of Optics and Photonics
    • /
    • v.13 no.3
    • /
    • pp.266-271
    • /
    • 2002
  • A finite schematic eye based on spherical aberration and the Stiles-Crawford effect is designed by an optimization method. It consists of four aspherical surfaces. The radius of curvature, thickness, asphericity, and spherical aberration are used as constraints in the optimization process. The Stiles-Crawford effect in the pupil is treated as a weighting value for the optimum design. The designed schematic eye has an effective focal length of 20.8169 mm, a back focal length of 15.4820 mm, a front focal length of -13.8528 mm, and an image distance of 15.7150 mm. When the pupil diameter is 4 mm, the diameters of the entrance and exit pupils are 4.6919 mm and 4.2395 mm, respectively. Based on data from 75 measured Korean emmetropic eyes, this is the first finite schematic eye designed in Korea.

A Human-Robot Interface Using Eye-Gaze Tracking System for People with Motor Disabilities

  • Kim, Do-Hyoung;Kim, Jae-Hean;Yoo, Dong-Hyun;Lee, Young-Jin;Chung, Myung-Jin
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.3 no.4
    • /
    • pp.229-235
    • /
    • 2001
  • Recently, the service sector has become an emerging field of robotic applications. Even though assistant robots play an important role for the disabled and the elderly, these users still struggle to operate robots with conventional interface devices such as joysticks or keyboards. In this paper we propose an efficient computer interface using a real-time eye-gaze tracking system. The inputs to the proposed system are images taken by a camera and data from a magnetic sensor. The measured data are sufficient to describe eye and head movement because the camera and the receiver of the magnetic sensor are stationary with respect to the head. Thus the proposed system can obtain the eye-gaze direction despite head movement, as long as the distance between the system and the transmitter of the magnetic position sensor is within 2 m. Experimental results show the validity of the proposed system in practical terms and also verify its feasibility as a new computer interface for the disabled.


Estimating Leaf Area Index of Paddy Rice from RapidEye Imagery to Assess Evapotranspiration in Korean Paddy Fields

  • Na, Sang-Il;Hong, Suk Young;Kim, Yi-Hyun;Lee, Kyoung-Do;Jang, So-Young
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.46 no.4
    • /
    • pp.245-252
    • /
    • 2013
  • Leaf area index (LAI) is important in explaining the ability of crops to intercept solar energy for biomass production and the amount of plant transpiration, and in understanding the impact of crop management practices on crop growth. This paper describes a procedure for estimating LAI as a function of image-derived vegetation indices from a temporal series of RapidEye imagery obtained from 2010 to 2012, using empirical models in a rice plain in Seosan, Chungcheongnam-do. Rice plants were sampled every two weeks from late May to early October to measure LAI and fresh and dry biomass. RapidEye images were taken from June to September every year and corrected geometrically and atmospherically to calculate the normalized difference vegetation index (NDVI). Linear, exponential, and expolinear models were developed to relate temporal satellite NDVIs to measured LAI. Based on root mean square error, the expolinear model predicted LAI more accurately than the linear or exponential models. When RapidEye imagery was applied to the expolinear model, the LAI distribution was in strong agreement with the field measurements in terms of geographical variation and relative numerical values. The spatial trend of LAI corresponded with the variation in vegetation growth conditions.
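The model comparison described above can be sketched in code. The NDVI and LAI values below are hypothetical stand-ins (the paper's Seosan field data are not reproduced here), and the expolinear parameters are illustrative, not the study's fitted values:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def expolinear(x, cm, rm, xb):
    """Expolinear curve: exponential at small x, asymptotically linear at large x."""
    return (cm / rm) * np.log1p(np.exp(rm * (x - xb)))

# Hypothetical NDVI/LAI pairs standing in for the field measurements.
ndvi_obs = np.array([0.25, 0.40, 0.55, 0.68, 0.78, 0.85])
lai_obs  = np.array([0.40, 1.00, 2.10, 3.50, 4.80, 5.60])

# Linear model fitted by least squares; expolinear with illustrative parameters.
a, b = np.polyfit(ndvi_obs, lai_obs, 1)
lin_pred = a * ndvi_obs + b
exp_pred = expolinear(ndvi_obs, cm=8.7, rm=10.0, xb=0.55)

def rmse(pred, obs):
    """Root mean square error, the comparison metric used in the paper."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))
```

Computing `rmse(lin_pred, lai_obs)` and `rmse(exp_pred, lai_obs)` reproduces the paper's comparison criterion, though which model wins depends on the real data and fitted parameters.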

Eye Tracking Research on Cinemagraph e-Magazine

  • Park, Ji Seob;Bae, Jin Hwa;Cho, Kwang Su
    • Agribusiness and Information Management
    • /
    • v.7 no.2
    • /
    • pp.1-11
    • /
    • 2015
  • This study compares e-magazines produced with cinemagraph images against e-magazines produced with regular still images, performing a between-group analysis of the eye-tracking indicators Time To First Fixation, Fixation Duration, Fixation Count, and Total Visit Duration to measure differences in visual attention. The experimental material consisted of e-magazines of nine pages, with an AOI (area of interest) set up on each page by classifying image and text regions. A total of 30 people took part in the experiment, with 15 randomly assigned to the experiment group and 15 to the control group. According to the results, the experiment group recorded a shorter Time To First Fixation than the control group on the e-magazine produced with cinemagraph images. Though no significant difference was found between the experiment and control groups in Fixation Duration, substantial differences did appear in Fixation Count and Total Visit Duration.
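Three of the four indicators above can be computed directly from a fixation log once each fixation is assigned to an AOI. A minimal sketch with hypothetical fixation records (the `Fixation` fields and values are assumptions, not the study's data; Total Visit Duration additionally requires grouping consecutive fixations into visits and is omitted):

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    start: float      # seconds from stimulus onset
    duration: float   # seconds
    aoi: str          # "image", "text", or "" when outside any AOI

# Hypothetical fixation sequence for one participant on one page.
fixations = [
    Fixation(0.20, 0.30, "image"),
    Fixation(0.55, 0.25, "text"),
    Fixation(0.85, 0.40, "image"),
]

def metrics(fixations, aoi):
    """Per-AOI indicators: Time To First Fixation, Fixation Count, Fixation Duration."""
    hits = [f for f in fixations if f.aoi == aoi]
    return {
        "time_to_first_fixation": hits[0].start if hits else None,
        "fixation_count": len(hits),
        "fixation_duration": sum(f.duration for f in hits),
    }

m = metrics(fixations, "image")
```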

Fast Eye-Detection Algorithm for Embedded System (임베디드시스템을 위한 고속 눈검출 알고리즘)

  • Lee, Seung-Ik
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.12 no.4
    • /
    • pp.164-168
    • /
    • 2007
  • In this paper, we propose an eye-detection algorithm that can be applied to real-time embedded systems. To detect the eye region, feature vectors are obtained in the first step; then PCA (Principal Component Analysis) and an amplitude projection method are applied to compose the feature vectors. In the decision stage, the estimated probability density functions (PDFs) are used by the proposed Bayesian method to detect the eye region in an image from a CCD camera. The simulation results show that the proposed method has a good detection rate on frontal faces, and its low computational complexity makes it suitable for embedded systems.
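A minimal sketch of the decision scheme the abstract describes: PCA projection of feature vectors followed by a Bayesian (maximum-likelihood) comparison of class-conditional Gaussian PDFs. The toy feature vectors and their dimensions are assumptions; the paper's actual features come from amplitude projections of eye patches:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training features: eye vs non-eye vectors (toy data).
eye_feats = rng.normal(loc=1.0, scale=0.3, size=(200, 10))
non_feats = rng.normal(loc=0.0, scale=0.3, size=(200, 10))

# PCA: project onto the top eigenvectors of the pooled data.
X = np.vstack([eye_feats, non_feats])
mean = X.mean(axis=0)
_, _, vt = np.linalg.svd(X - mean, full_matrices=False)
W = vt[:3].T                      # 10-D -> 3-D projection

def project(v):
    return (v - mean) @ W

def gaussian_logpdf(z, mu, var):
    """Log-density of a diagonal-covariance Gaussian."""
    return -0.5 * np.sum((z - mu) ** 2 / var + np.log(2 * np.pi * var))

# Estimate class-conditional PDFs in the PCA subspace.
z_eye, z_non = project(eye_feats), project(non_feats)
mu_e, var_e = z_eye.mean(axis=0), z_eye.var(axis=0)
mu_n, var_n = z_non.mean(axis=0), z_non.var(axis=0)

def is_eye(v):
    """Bayesian decision with equal priors: pick the likelier class."""
    z = project(v)
    return gaussian_logpdf(z, mu_e, var_e) > gaussian_logpdf(z, mu_n, var_n)
```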


An Effective Eye Location for Face Recognition (얼굴 인식을 위한 효과적인 눈 위치 추출)

  • Jung Jo Nam;Rhee Phill Kyu
    • The KIPS Transactions:PartB
    • /
    • v.12B no.2 s.98
    • /
    • pp.109-114
    • /
    • 2005
  • Many researchers are interested in user authentication using biometric information, and face recognition is among the most active areas of biometric recognition because it can identify a person without physical contact with a device. This paper proposes a method to effectively extract eye locations in the face-detection step, which precedes face recognition. Iterative threshold selection is adopted to obtain a proper binary image, and a Gaussian filter is used to intensify the properties of the eyes. Correlation is then used to verify whether an extracted eye location is correct. The proposed method considers not only accuracy but also applicability to online systems, and showed satisfactory performance when applied to an online system.
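Iterative threshold selection, named above as the binarization step, repeatedly sets the threshold to the midpoint of the means of the two classes it induces, stopping when the threshold settles. A sketch on a toy bimodal intensity distribution (the pixel values are assumed for illustration):

```python
import numpy as np

def iterative_threshold(image, tol=0.5):
    """Iterative threshold selection: start from the global mean, then
    repeatedly set T to the midpoint of the two class means until stable."""
    t = image.mean()
    while True:
        below = image[image <= t]
        above = image[image > t]
        new_t = 0.5 * (below.mean() + above.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# Toy bimodal "image": dark pupil pixels around 40, bright skin pixels around 200.
img = np.concatenate([np.full(500, 40.0), np.full(500, 200.0)])
t = iterative_threshold(img)
```

Thresholding `img > t` then yields the binary image from which eye candidates are extracted.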

Real-Time Eye Detection and Tracking Under Various Light Conditions (다양한 조명하에서 실시간 눈 검출 및 추적)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.2
    • /
    • pp.456-463
    • /
    • 2004
  • Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued those methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. By combining the bright-pupil effect produced by IR light with a conventional appearance-based object recognition technique, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated in both eye detection and tracking via a support vector machine and mean shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.
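Bright-pupil methods of this kind typically subtract an off-axis (dark-pupil) IR frame from an on-axis (bright-pupil) frame, so the pupil dominates the difference image. A toy sketch with synthetic frames (the frame sizes, intensities, and threshold are assumptions for illustration, not values from the paper):

```python
import numpy as np

# Hypothetical co-registered frames: one grabbed with on-axis IR (bright pupil),
# one with off-axis IR (dark pupil). The pupil is the main region that changes.
h, w = 120, 160
bright = np.full((h, w), 90.0)
dark = np.full((h, w), 90.0)
bright[40:50, 60:70] = 230.0      # pupil glows under on-axis illumination
dark[40:50, 60:70] = 30.0         # same pupil appears dark off-axis

# Difference image: background cancels, pupil region stands out.
diff = np.clip(bright - dark, 0, None)
candidates = diff > 100           # threshold isolates pupil candidates
ys, xs = np.nonzero(candidates)
center = (ys.mean(), xs.mean())   # seed for SVM verification / mean shift tracking
```

In the paper's pipeline, candidates like `center` would then be verified by the appearance model rather than accepted directly.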

The Relationship between Visual Perception and Emotion from Fear Appeals and Size of Warning Images on Cigarette Packages

  • Hwang, Mi Kyung;Jin, Xin;Zhou, Yi Mou;Kwon, Mahn Woo
    • Journal of Multimedia Information System
    • /
    • v.9 no.2
    • /
    • pp.137-144
    • /
    • 2022
  • This research aims to identify the relationship between visual perception and emotion for the types of fear responses elicited by warning images on cigarette packages, as well as the effectiveness of the size of such images, through questionnaires and eye-tracking experiments with twenty university students from colleges based in Busan. The warning images were classified and analyzed as rational appeals or emotional appeals by the degree of fear and disgust, and the results concurred with Maynard's conclusion that people naturally avoid eye contact when presented with a warning image on a cigarette package. Eye avoidance was also identified most strongly with larger (75%) warning images. While previous research mostly adopted self-rated validation methods, this research sought a more objective methodology by combining questionnaires with eye-tracking experiments. Through this research, the authors contribute to finding effective warning images for cigarette packages that increase public awareness of the dangers of smoking and discourage smoking. Further research is recommended to explore the effectiveness of explicit images on cigarette packages by smoker type, such as heavy smokers, normal smokers, and non-smokers.

Visual Performances of the Corrected Navarro Accommodation-Dependent Finite Model Eye (안구의 굴절능 조절을 고려한 수정된 Navarro 정밀모형안의 시성능 분석)

  • Choi, Ka-Ul;Song, Seok-Ho;Kim, Sang-Gee
    • Korean Journal of Optics and Photonics
    • /
    • v.18 no.5
    • /
    • pp.337-344
    • /
    • 2007
  • In recent years, there has been rapid progress in different areas of vision science, such as refractive surgical procedures, contact lenses and spectacles, and near vision. This progress requires highly accurate modeling of the optical performance of human eyes in different accommodation states. A novel model eye was designed using the Navarro accommodation-dependent finite model eye. Using the new model eye, ocular wavefront error, accommodative response, and visual acuity were calculated for six vergence stimuli: -0.17D, 1D, 2D, 3D, 4D, and -5D. The 3rd- and 4th-order aberrations, modulation transfer function, and visual acuity of the accommodation-dependent model eye were also analyzed. These results are well matched to anatomical, biometric, and optical realities. Our corrected accommodation-dependent model eye may provide a more accurate way to evaluate optical transfer functions and the optical performance of the human eye.

Deep Learning-based Gaze Direction Vector Estimation Network Integrated with Eye Landmark Localization (딥 러닝 기반의 눈 랜드마크 위치 검출이 통합된 시선 방향 벡터 추정 네트워크)

  • Joo, Heeyoung;Ko, Min-Soo;Song, Hyok
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.748-757
    • /
    • 2021
  • In this paper, we propose a gaze estimation network in which eye-landmark position detection and gaze direction vector estimation are integrated into one deep learning network. The proposed network uses the Stacked Hourglass Network as a backbone and is largely composed of three parts: a landmark detector, a feature map extractor, and a gaze direction estimator. The landmark detector estimates the coordinates of 50 eye landmarks, and the feature map extractor generates a feature map of the eye image for estimating the gaze direction. The gaze direction estimator then estimates the final gaze direction vector by combining the two outputs. The proposed network was trained on virtual synthetic eye images and landmark coordinates generated with the UnityEyes dataset, and the MPIIGaze dataset of real human eye images was used for performance evaluation. In experiments, the gaze estimation error was 3.9 and the network ran at 42 frames per second (FPS).