• Title/Summary/Keyword: eye pairs

Search results: 26

Accuracy Investigation of RPC-based Block Adjustment Using High Resolution Satellite Images GeoEye-1 and WorldView-2 (고해상도 위성영상 GeoEye-1과 WorldView-2의 RPC 블록조정모델 정확도 분석)

  • Choi, Sun-Yong;Kang, Jun-Mook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.2
    • /
    • pp.107-116
    • /
    • 2012
  • In this research, we investigated the accuracy of three-dimensional geo-positioning derived from four high-resolution satellite images, acquired by two different sensors, using vendor-provided rational polynomial coefficient (RPC)-based block adjustment. We used two in-track stereo pairs from the GeoEye-1 and WorldView-2 satellites together with DGPS surveying data. In this experiment, we separately analyzed the accuracies of RPC block adjustment models for two homogeneous stereo pairs, four heterogeneous stereo pairs, three triplet image combinations, and one quadruplet image combination. The results show that the accuracies of the models are nearly the same. The accuracy without any GCPs reaches about CEP(90) 2.3 m and LEP(90) 2.5 m, and the accuracy with a single GCP is about CEP(90) 0.3 m and LEP(90) 0.5 m.
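The CEP(90) and LEP(90) figures quoted above are percentile-based accuracy measures over checkpoint residuals. As a minimal sketch (the function names and the simple empirical-percentile definition are assumptions for illustration, not taken from the paper), they can be computed like this:

```python
import math

def cep(errors_xy, p=0.90):
    """Circular error at probability p: the radius that contains a
    fraction p of the horizontal (dx, dy) position errors."""
    radial = sorted(math.hypot(dx, dy) for dx, dy in errors_xy)
    k = max(0, math.ceil(p * len(radial)) - 1)
    return radial[k]

def lep(errors_z, p=0.90):
    """Linear error at probability p for vertical (dz) errors."""
    absolute = sorted(abs(dz) for dz in errors_z)
    k = max(0, math.ceil(p * len(absolute)) - 1)
    return absolute[k]
```

With residuals from independent checkpoints, `cep(residuals, 0.90)` gives the horizontal radius within which 90% of the errors fall.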

Real-Time Eye Tracking Using IR Stereo Camera for Indoor and Outdoor Environments

  • Lim, Sungsoo;Lee, Daeho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.8
    • /
    • pp.3965-3983
    • /
    • 2017
  • We propose a novel eye tracking method that estimates 3D world coordinates using an infrared (IR) stereo camera for indoor and outdoor environments. The method first detects dark evidence regions such as eyes, eyebrows, and mouths by fast multi-level thresholding. Among these, eye-pair evidences are detected by evidential reasoning and geometrical rules. For robust accuracy, two classifiers based on a multi-layer perceptron (MLP) using gradient local binary patterns (GLBPs) verify whether the detected evidences are real eye pairs. Finally, the 3D world coordinates of the detected eyes are calculated by region-based stereo matching. Compared with other eye detection methods, the proposed method can detect the eyes of people wearing sunglasses thanks to the use of the IR spectrum. In particular, when people are in dark environments, such as driving at night, driving in an indoor car park, or passing through a tunnel, human eyes can still be robustly detected because active IR illuminators are used. Experimental results show that the proposed method detects eye pairs with high performance in real time under variable illumination conditions. The proposed method can therefore contribute to human-computer interaction (HCI) and intelligent transportation system (ITS) applications such as gaze tracking, windshield head-up displays, and drowsiness detection.
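The final step of the pipeline above, recovering 3D world coordinates from a matched point in a stereo pair, follows standard rectified-stereo triangulation. A minimal sketch (the function and parameter names are illustrative assumptions, not the paper's notation):

```python
def stereo_to_world(u_left, v_left, disparity, f, baseline, cx, cy):
    """Triangulate a matched point from a rectified stereo pair.
    disparity = u_left - u_right (pixels); f is the focal length in
    pixels; baseline is the camera separation in metres; (cx, cy) is
    the principal point. Returns (x, y, z) in metres."""
    z = f * baseline / disparity          # depth from disparity
    x = (u_left - cx) * z / f             # back-project to world X
    y = (v_left - cy) * z / f             # back-project to world Y
    return x, y, z
```

For example, with a 700 px focal length and a 10 cm baseline, a 70 px disparity corresponds to a depth of 1 m.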

Fine Structural Analysis of Principal and Secondary Eyes in Wandering Spider, Pardosa astrigera (배회성 거미 (Pardosa astrigera) 주안과 부안의 미세구조적 분석)

  • Jeong, Moon-Jin;Lim, Do-Seon;Moon, Myung-Jin
    • Applied Microscopy
    • /
    • v.30 no.1
    • /
    • pp.1-9
    • /
    • 2000
  • The wandering spider, Pardosa astrigera, has four pairs of ocelli arranged in three rows on the cephalothorax. Along the anterior margin lies a pair of small anterior median (AM) eyes, flanked on each side by an anterior lateral (AL) eye. Two large posterior median (PM) eyes are situated on the clypeus behind the anterior row, and still more posteriorly lies a pair of posterior lateral (PL) eyes. The visual cells of the retina consist of a cell body, rhabdome, and intermediate segment. Bipolar neurons are found in the anterior median eyes (principal eyes) and unipolar neurons in the others (secondary eyes). Rhabdomes are regularly arranged in the PM and PL eyes, but irregularly arranged in the retinas of the AM and AL eyes. Except in the AM eyes, a discontinuous tapetum is found in the AL, PM, and PL eyes. The anterior median eyes are similar to the anterior lateral eyes in length, and the posterior median eyes are similar to the posterior lateral eyes. Corneal size is similar across all four pairs of eyes. The sizes of the lens, cell body, and rhabdome are similar not only between the anterior median and anterior lateral eyes but also between the posterior median and posterior lateral eyes. The vitreous body is larger in the posterior median eyes than in the others.


Eye Detection using Edge Information and SVM (에지 정보와 SVM의 결합을 통한 눈 검출)

  • 지형근;이경희;정용화
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.347-350
    • /
    • 2002
  • This paper describes an eye detection algorithm using edge information and a Support Vector Machine (SVM). We adopt an edge detection and labelling algorithm to detect isolated components. Detected candidate eye pairs are finally verified by an SVM using a Radial Basis Function (RBF) kernel. A detection rate of more than 90% has been achieved over the test set; compared with a template matching method, the proposed method significantly reduced the false acceptance rate (FAR).
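The verification step described above scores each candidate eye pair with an RBF-kernel SVM. A minimal sketch of the decision function (the support vectors, weights, bias, and γ shown are hypothetical placeholders for a trained model):

```python
import math

def rbf_kernel(x, y, gamma):
    """Gaussian RBF kernel: exp(-gamma * ||x - y||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def svm_decision(x, support_vectors, weights, bias, gamma):
    """SVM decision value for feature vector x; a positive value
    accepts the candidate as a real eye pair."""
    return sum(w * rbf_kernel(x, sv, gamma)
               for w, sv in zip(weights, support_vectors)) + bias
```

In practice the support vectors and weights would come from training on labelled eye-pair and non-eye-pair patches.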


Real-Time Face Detection by Estimating the Eye Region Using Neural Network (신경망 기반 눈 영역 추정에 의한 실시간 얼굴 검출 기법)

  • 김주섭;김재희
    • Proceedings of the IEEK Conference
    • /
    • 2001.06d
    • /
    • pp.21-24
    • /
    • 2001
  • In this paper, we present a fast face detection algorithm that estimates the eye region using a neural network. To implement a real-time face detection system, it is necessary to reduce the search space; we limit it to just a few pairs of eye candidates. To select them, we first isolate possible eye regions in a fast and robust way by modified histogram equalization. The eye candidates are then paired, and each pair is evaluated for how close it is to a true eye pair in two respects: how similar the two eye candidates are in shape, and how close each of them is to a true eye image. A multi-layer perceptron neural network is used to measure an eye candidate region's closeness to the true eye image. Only a few of the best candidates are then verified by eigenfaces. The experimental results show that this approach is fast and reliable: we achieved a 94% detection rate with an average processing time of 0.1 s on a Pentium III PC in experiments on 424 gray-scale images from the MIT, Yale, and Yonsei databases.
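The search-space reduction above hinges on pairing isolated eye candidates by simple geometry before any expensive verification. A sketch of such pairing (the thresholds and function name are illustrative assumptions):

```python
def pair_eye_candidates(candidates, max_dy=5, min_dx=20, max_dx=80):
    """Form candidate eye pairs from detected region centres (x, y):
    the two candidates must be roughly level vertically and separated
    horizontally by a plausible inter-ocular distance (in pixels)."""
    pairs = []
    for i, (x1, y1) in enumerate(candidates):
        for x2, y2 in candidates[i + 1:]:
            left, right = sorted([(x1, y1), (x2, y2)])
            if abs(y1 - y2) <= max_dy and min_dx <= right[0] - left[0] <= max_dx:
                pairs.append((left, right))
    return pairs
```

Only the pairs surviving these constraints would then be scored by the neural network and verified with eigenfaces.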


Pupil Detection using PCA and Hough Transform

  • Jang, Kyung-Shik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.22 no.2
    • /
    • pp.21-27
    • /
    • 2017
  • In this paper, we propose a pupil detection method using PCA (principal component analysis) and the Hough transform. To reduce errors from detecting eyebrows as pupils, the eyebrows are detected using a projection function in the eye region, and the eye region is then set so as not to include them. Within the eye region, pupil candidates are detected using a rank order filter, and false candidates are removed by using symmetry. The pupil candidates are grouped into pairs based on geometric constraints. A similarity measure is obtained for the two eyes of each pair using PCA and the Hough transform, and we select the pair with the smallest similarity measure as the final two pupils. The experiments were performed on 1000 images of the BioID face database. The results show that the method achieves a higher detection rate than the existing method.
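A common way to use PCA for scoring eye candidates, as in the pair-ranking step above, is reconstruction error against a learned eye subspace. A minimal sketch (the function name and the assumption of pre-learned orthonormal components are illustrative, not the paper's exact formulation):

```python
import numpy as np

def pca_reconstruction_error(patch, mean, components):
    """Distance-from-eye-space: project a vectorised candidate patch
    onto the leading principal components (rows of `components`,
    assumed orthonormal, shape (k, d)) and measure the reconstruction
    error. A smaller error means the patch is more eye-like."""
    centred = patch - mean
    coeffs = components @ centred            # coordinates in eye space
    reconstruction = components.T @ coeffs   # back-projection
    return float(np.linalg.norm(centred - reconstruction))
```

Candidate pairs can then be ranked by the sum of the two patches' errors, keeping the pair with the smallest measure.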

Face Component Extraction Using Multiresolution Image (다해상도 영상을 이용한 얼굴 구성요소 추출)

  • Jang, Kyung-Shik
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.12
    • /
    • pp.3675-3682
    • /
    • 1999
  • This paper proposes a method to extract face components from a gray image without using color or motion information. A Laplacian pyramid of the original image is built. Eye and nose candidates are extracted using only the gray-level information in a low-resolution Laplacian image, and pairs consisting of two eye candidates and one nose candidate are found. At full resolution, horizontal and vertical edges are found in the regions of the face components established from the candidates. Using this edge information, the face components are extracted. The experiments were performed on images with various face sizes and positions, and show very encouraging results.
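The Laplacian pyramid underlying the coarse-to-fine search above stores, at each level, the detail lost when the image is downsampled. A minimal sketch using block-average downsampling and nearest-neighbour upsampling (a simplification of the usual Gaussian-filtered construction; image dimensions are assumed divisible by 2^levels):

```python
import numpy as np

def laplacian_pyramid(image, levels):
    """Build a Laplacian pyramid: each level holds the difference
    between the current image and its downsample/upsample round trip;
    the final entry is the coarsest low-pass residual."""
    pyramid, current = [], image.astype(float)
    for _ in range(levels):
        h, w = current.shape
        low = current.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
        pyramid.append(current - up)   # band-pass detail at this level
        current = low
    pyramid.append(current)
    return pyramid
```

Summing each detail level back onto the upsampled residual reconstructs the original image exactly, which is why candidates found at a low-resolution level can be refined at full resolution.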


Three-dimensional analysis of the arrangement of microtubules of the outer segment in the ciliary-type photoreceptor cell in the Onchidium dorsal eye

  • Katagiri, Nobuko;Shimatani, Yuichi;Katagiri, Yasuo
    • Journal of Photoscience
    • /
    • v.9 no.2
    • /
    • pp.284-286
    • /
    • 2002
  • The inverted retina of the Onchidium dorsal eye (DE) is composed only of ciliary-type photoreceptor cells (CCs). The outer segment (OS) of the CC is a concentric lamellar structure consisting of many modified ciliary membranes and stains positively with anti-β-tubulin antibody. Near the base of the OS there are about 30 basal bodies, each connecting individually to a cilium. The cilia are rod-shaped at the base, progressing upwards to a flattened, sheet-like shape with increasing surface area. Three-dimensional analysis of serial sections demonstrates the ladle shape of a modified cilium. Many modified cilia wrap around each other like the leaves of a cabbage. Nine pairs of microtubules (MTs) are located regularly in a ring at the base of the cilium, gradually losing their regular arrangement towards the periphery, where they separate into two subgroups contained within two swollen portions of a modified cilium. Within the CC of the Onchidium DE, MTs in the modified cilium exist as two poles extending longitudinally in a thin, expanded ciliary membrane. This arrangement may support the photoreceptive OS and serve to maintain its structural integrity.


Pupil Detection using Hybrid Projection Function and Rank Order Filter (Hybrid Projection 함수와 Rank Order 필터를 이용한 눈동자 검출)

  • Jang, Kyung-Shik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.8
    • /
    • pp.27-34
    • /
    • 2014
  • In this paper, we propose a pupil detection method using a hybrid projection function and a rank order filter. To reduce errors from detecting eyebrows as pupils, the eyebrows are detected using the hybrid projection function in the face region, and the eye region is then set so as not to include them. Within the eye region, potential pupil candidates are detected using the rank order filter, and the positions of the pupil candidates are then corrected. The pupil candidates are grouped into pairs based on geometric constraints. A similarity measure is obtained for the two eyes of each pair using template matching, and we select the pair with the smallest similarity measure as the final two pupils. The experiments were performed on 700 images of the BioID face database. The pupil detection rate is 92.4%, and the proposed method improves by about 21.5% over the existing method.
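A hybrid projection function, as used above to locate eyebrow rows, typically mixes a mean-intensity projection with a variance projection so that dark, high-contrast rows (pupils, eyebrows) produce strong peaks. One common form, sketched here under assumed weighting (the mixing weight and the intensity inversion are illustrative choices, not the paper's exact definition):

```python
def hybrid_projection(rows, alpha=0.6):
    """Row-wise hybrid projection over a grayscale region: a weighted
    mix of the variance projection and the inverted mean-intensity
    projection (0 = black, 255 = white). Larger values flag rows that
    are dark and/or high-contrast."""
    means = [sum(r) / len(r) for r in rows]
    variances = [sum((v - m) ** 2 for v in r) / len(r)
                 for r, m in zip(rows, means)]
    return [alpha * var + (1 - alpha) * (255 - m)
            for var, m in zip(variances, means)]
```

Peaks in this profile mark candidate eyebrow/pupil rows; the eye region is then clipped below the eyebrow peak.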

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon;Kim, Yeseul;Na, Sang-Il;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.807-821
    • /
    • 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, originally developed for fusing mid- and low-resolution satellite images, for constructing a set of time-series high-resolution images for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated in light of the principle of spatio-temporal fusion. An experiment fusing multi-temporal Sentinel-2 and RapidEye images over agricultural fields was conducted to evaluate prediction performance. Three representative fusion models were applied in this comparative experiment: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three spatio-temporal fusion models exhibited different prediction performance in terms of prediction errors and spatial similarity. However, regardless of model type, the correlation between the coarse-resolution images acquired on the pair dates and on the prediction date mattered more for prediction performance than the temporal difference between the pair dates and the prediction date. In addition, using a vegetation index as the fusion input showed better prediction performance, by alleviating error propagation, than computing the vegetation index from fused reflectance values. These experimental results can serve as basic information both for selecting optimal image pairs and input types and for developing advanced spatio-temporal fusion models for crop monitoring.
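The core idea shared by the fusion models above is to transfer the coarse-scale temporal change between the pair date and the prediction date onto the fine-resolution image. As a deliberately naive sketch of that principle (not STARFM, SPSTFM, or FSDAF themselves, which add spectral/spatial weighting, sparse coding, and unmixing on top of it), assuming co-registered, same-size per-pixel grids:

```python
def naive_spatiotemporal_fusion(fine_t1, coarse_t1, coarse_t2):
    """Predict the fine-resolution image at t2 by adding the coarse
    temporal change (coarse_t2 - coarse_t1) to the fine image observed
    at t1, pixel by pixel. All inputs are flat reflectance lists on
    the same grid."""
    return [f + (c2 - c1) for f, c1, c2 in zip(fine_t1, coarse_t1, coarse_t2)]
```

This baseline makes the paper's finding intuitive: the prediction inherits whatever the coarse pair-date/prediction-date relationship captures, so a well-correlated coarse pair matters more than a temporally close one.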