• Title/Summary/Keyword: Iris Image


A Study on Extraction of Irregular Iris Patterns (비정형 홍채 패턴 분리에 관한 연구)

  • Won, Jung-Woo;Cho, Seong-Won;Kim, Jae-Min;Baik, Kang-Chul
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.2
    • /
    • pp.169-174
    • /
    • 2008
  • Recently, biometric systems have attracted interest for reliable security applications. Iris recognition is one of the biometric technologies with the highest reliability. Various iris recognition methods have been proposed for automatic personal identification and verification. These methods require accurate iris segmentation for successful processing, because the iris occupies only a small part of an acquired image. The iris boundaries have typically been modeled parametrically and detected as circles or parabolic arcs. Since the iris boundaries exhibit a wide range of edge contrast and irregular border shapes, the assumption that they can be fitted by circles or parabolic arcs is not always valid. In some cases, the shape of a dilated pupil differs slightly from that of a constricted one, especially when the pupil has an irregular shape, which motivates this research. This paper addresses how to accurately detect iris boundaries for improved iris recognition that is robust to noise.
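
The conventional parametric model that this abstract questions treats the pupil and iris boundaries as circles. A minimal sketch of that baseline is a least-squares (Kasa) circle fit to edge points; the point generation below is purely illustrative and not part of the paper.

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit to 2-D edge points.

    Illustrates the conventional parametric iris-boundary model: a circle
    (xc, yc, r) fitted to boundary edge points.
    """
    x, y = points[:, 0], points[:, 1]
    # Rewrite (x-xc)^2 + (y-yc)^2 = r^2 as a linear system:
    # x^2 + y^2 = 2*xc*x + 2*yc*y + (r^2 - xc^2 - yc^2)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + xc ** 2 + yc ** 2)
    return xc, yc, r

# Hypothetical usage: noisy edge points sampled from a circular pupil boundary.
theta = np.linspace(0, 2 * np.pi, 200)
pts = np.column_stack([120 + 40 * np.cos(theta), 110 + 40 * np.sin(theta)])
pts += np.random.normal(0, 1.0, pts.shape)
print(fit_circle(pts))  # approximately (120, 110, 40)
```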

Iris Localization using the Pupil Center Point based on Deep Learning in RGB Images (RGB 영상에서 딥러닝 기반 동공 중심점을 이용한 홍채 검출)

  • Lee, Tae-Gyun;Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation
    • /
    • v.16 no.2
    • /
    • pp.135-142
    • /
    • 2020
  • In this paper, we describe an iris localization method for RGB images. Most iris localization methods have been developed for infrared images, so an iris localization method for RGB images is required for various applications. The proposed method consists of four stages: i) detection of candidate irises using the circular Hough transform (CHT) on the input image, ii) detection of the pupil center based on deep learning, iii) determination of the iris using the pupil center, and iv) correction of the iris region. The candidate irises are detected in order of the number of intersections of the center-point candidates after generating the Hough space, and the iris among the candidates is determined based on the detected pupil center. In addition, the error due to distortion of the iris shape is corrected by finding new boundary points based on the detected iris center. In experiments, the proposed method improves accuracy by about 27.4% compared with the CHT method.
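
A minimal sketch of stages i) and iii) of the pipeline described above, using OpenCV's circular Hough transform for the candidates and a nearest-center rule for selecting the iris. The deep-learning pupil-center detector of stage ii) is out of scope here and is passed in as a plain coordinate; the file name, thresholds, and coordinates are placeholders.

```python
import cv2
import numpy as np

def candidate_irises(gray, min_r=30, max_r=120):
    """Stage i) sketch: circular Hough transform candidates from a grayscale image."""
    blurred = cv2.GaussianBlur(gray, (7, 7), 1.5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=20, param1=120, param2=40,
                               minRadius=min_r, maxRadius=max_r)
    return [] if circles is None else circles[0]      # rows of (x, y, r)

def select_iris(candidates, pupil_center):
    """Stage iii) sketch: keep the candidate whose center is nearest the pupil center."""
    px, py = pupil_center
    return min(candidates, key=lambda c: (c[0] - px) ** 2 + (c[1] - py) ** 2)

# Hypothetical usage; "eye.png" and the pupil center are placeholders
# (in the paper the pupil center would come from the deep-learning detector).
gray = cv2.cvtColor(cv2.imread("eye.png"), cv2.COLOR_BGR2GRAY)
cands = candidate_irises(gray)
if len(cands):
    iris = select_iris(cands, pupil_center=(160, 120))
    print("iris (x, y, r):", iris)
```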

Iris Pattern Positioning with Preserved Edge Detector and Overlay Matching

  • Ryu, Kwang-Ryol
    • Journal of information and communication convergence engineering
    • /
    • v.8 no.3
    • /
    • pp.339-342
    • /
    • 2010
  • An iris image pattern positioning method using a preserved edge detector, ring and clock zones, frequency distributions, and overlay matching is presented in this paper. The edge detector needs to be powerful and to preserve detail; this is achieved by overlaying the Canny detector with the Laplacian of Gaussian (CLOG). Two reference patterns are made by assigning gray levels to the clock zone and the ring zone, respectively. The normalized target image is overlaid with the clock-zone and ring-zone reference patterns to count overlapped pixels, and a matched frequency distribution is built to examine the symptom and position of human organs and tissue. Repeated experiments provide an evaluation of positioning on the ring and clock zones.
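
The abstract describes CLOG only as overlaying Canny with LoG. A minimal sketch of one plausible reading, OR-combining Canny edges with zero-crossings of a Laplacian-of-Gaussian response, is shown below; the thresholds and the zero-crossing test are assumptions, not the paper's exact formulation.

```python
import cv2
import numpy as np

def clog_edges(gray):
    """Sketch of a 'CLOG' edge map: Canny edges overlaid (OR-combined) with
    zero-crossings of the Laplacian of Gaussian. Thresholds are illustrative."""
    canny = cv2.Canny(gray, 60, 150)

    # Laplacian of Gaussian: Gaussian smoothing followed by the Laplacian.
    log = cv2.Laplacian(cv2.GaussianBlur(gray, (5, 5), 1.0), cv2.CV_32F, ksize=3)

    # Crude zero-crossing detection: sign changes between neighbouring pixels.
    sign = np.sign(log)
    zc = np.zeros_like(gray)
    zc[:-1, :][sign[:-1, :] * sign[1:, :] < 0] = 255
    zc[:, :-1][sign[:, :-1] * sign[:, 1:] < 0] = 255

    return cv2.bitwise_or(canny, zc)
```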

A New Ocular Torsion Measurement Method Using Iterative Optical Flow

  • Lee InBum;Choi ByungHun;Kim SangSik;Park Kwang Suk
    • Journal of Biomedical Engineering Research
    • /
    • v.26 no.3
    • /
    • pp.133-138
    • /
    • 2005
  • This paper presents a new method for measuring ocular torsion using optical flow. Images of the iris were cropped and transformed into rectangular images that are orientation invariant. Feature points in the iris region were selected from a reference and a target image, and the shift of each feature was calculated using the iterative Lucas-Kanade method. The feature points were selected according to corner strength on the iris image. The accuracy of the algorithm was tested using printed eye images, in which torsion was measured with 0.15° precision. The proposed method remains robust under gaze-direction changes and pupillary reflex, and is suitable for real-time processing.
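
A minimal sketch of the core steps named in the abstract: corner-strength feature selection followed by iterative (pyramidal) Lucas-Kanade tracking, with the horizontal shift on a polar-unwrapped iris strip converted to a torsion angle. The unwrapping step, window sizes, and the median aggregation are assumptions.

```python
import cv2
import numpy as np

def torsion_deg(ref_strip, tgt_strip):
    """Sketch: estimate ocular torsion from two polar-unwrapped iris strips.

    Both inputs are 8-bit grayscale strips whose horizontal axis spans 0..360
    degrees, so a horizontal feature shift maps directly to a rotation angle.
    """
    # Corner-strength based feature selection on the reference strip.
    pts = cv2.goodFeaturesToTrack(ref_strip, maxCorners=100,
                                  qualityLevel=0.01, minDistance=7)
    # Iterative (pyramidal) Lucas-Kanade tracking into the target strip.
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(ref_strip, tgt_strip, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    dx = nxt[good, 0, 0] - pts[good, 0, 0]             # horizontal shift in pixels
    return np.median(dx) * 360.0 / ref_strip.shape[1]  # pixels -> degrees
```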

A Realization for the Iris Image Recognition System Using the DSP Processor (DSP프로세서를 이용한 홍채영상인식 시스템구현에 관한 연구)

  • Kim, Ja-Hwan;Jung, Eun-Suk;Sung, Kyeong;Ryu, Kwang-Ryol
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.4
    • /
    • pp.833-837
    • /
    • 2004
  • A realization of an iris image recognition system using a DSP processor for faster real-time processing is presented in this paper. The system consists of a CCD camera, DSP processing, and a network part for communication. The high-speed DSP processor shortens the iris recognition processing time; in simulation, processing takes approximately 0.9 seconds or less.

A Realization for the Iris Image Recognition System Using the DSP Processor (DSP프로세서를 이용한 홍채영상 인식 시스템 구현에 관한 연구)

  • Kim, Ja-Hwan;Jung, Eun-Suk;Sung, Kyeong;Ryu, Kwang-Ryol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.129-132
    • /
    • 2004
  • A realization of an iris image recognition system using a DSP processor (TMS320DM642) for faster real-time processing is presented in this paper. The system consists of a CCD camera, DSP processing, and a network part for communication. The DSP-based system shortens the iris recognition processing time; in simulation, processing takes approximately 0.9 seconds or less.


A Study on Iris Image Restoration Using Focus Value of Iris Image (영상의 초점값을 이용한 홍채 영상 복원 연구)

  • Kang, Byung-Jun;Park, Kang-Ryoung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2005.05a
    • /
    • pp.781-784
    • /
    • 2005
  • Iris recognition is one of the most reliable biometric technologies; it determines whether a person is genuine or an impostor by using the donut-shaped iris pattern that lies between the pupil and the sclera. Because recognition extracts an iris code from the iris pattern in the image, acquiring a good-quality iris image is very important for accurate iris recognition. One of the key factors that determines the quality of an iris image is focus. A blurred, out-of-focus image increases the false reject rate (FRR), in which a genuine user is rejected as an impostor. Cameras for iris recognition systems use either a fixed-focus or a variable-focus scheme. In the fixed-focus scheme the focusing lens is fixed, so when an out-of-focus image is acquired the user must present the eye again, which is inconvenient. The variable-focus scheme measures the distance to the user and moves the focusing lens to obtain a sharp, well-focused image, but it requires additional hardware such as a distance sensor and a motor to drive the lens, which makes the camera bulkier and more expensive. Therefore, this paper proposes a software method that uses a fixed-focus camera without extra hardware and handles blurred, out-of-focus images with an iris image restoration algorithm. The degree of degradation is estimated from a focus value, a point spread function is designed according to the focus value, and the iris image is restored.
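
A minimal sketch of the pipeline this abstract outlines: estimate a focus value, design a blur kernel (point spread function) whose width depends on that value, and restore the image by deconvolution. The focus measure (variance of the Laplacian), the Gaussian PSF, the mapping from focus value to PSF width, and the use of Wiener deconvolution are all stand-in assumptions, not the paper's exact design.

```python
import cv2
import numpy as np
from skimage.restoration import wiener

def focus_value(gray):
    """Stand-in focus measure: variance of the Laplacian (higher = sharper)."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def gaussian_psf(sigma, size=15):
    """Illustrative Gaussian point spread function, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def restore(gray, sharp_threshold=300.0):
    fv = focus_value(gray)
    if fv >= sharp_threshold:           # already in focus; nothing to restore
        return gray
    # Assumed mapping: the lower the focus value, the wider the blur kernel.
    sigma = np.interp(fv, [0, sharp_threshold], [4.0, 0.5])
    img = gray.astype(np.float64) / 255.0
    restored = wiener(img, gaussian_psf(sigma), balance=0.01)
    return np.clip(restored * 255, 0, 255).astype(np.uint8)
```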


Detection of Special Effects with Circular Moving Borders (원형의 이동 경계선을 가지는 특수효과 검출)

  • Jang, Seok-Woo;Byun, Si-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.12 no.7
    • /
    • pp.3184-3190
    • /
    • 2011
  • In this paper, we propose a method to detect Iris Round wipe transitions with circular moving borders in digital video data. The suggested method robustly extracts circular moving borders from the input image using an improved Hough transform, and then detects Iris Round wipes by analyzing their moving directions and shapes. Experimental results on various video data show that the proposed method can effectively detect Iris Round wipes with circular moving borders.
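
A minimal sketch of the idea: detect the dominant circle in each frame and flag an Iris Round wipe when the radius grows or shrinks monotonically over several consecutive frames. The plain OpenCV circular Hough transform stands in for the paper's improved variant, and the thresholds are assumptions.

```python
import cv2
import numpy as np

def dominant_circle(gray):
    """Strongest Hough circle in one frame, or None (plain CHT stands in for the
    paper's improved transform)."""
    c = cv2.HoughCircles(cv2.medianBlur(gray, 5), cv2.HOUGH_GRADIENT, dp=2,
                         minDist=gray.shape[0], param1=100, param2=60,
                         minRadius=10, maxRadius=0)
    return None if c is None else c[0][0]          # (x, y, r)

def is_iris_round_wipe(radii, min_len=5, min_step=3.0):
    """Flag a wipe when the detected radius changes monotonically over several frames."""
    if len(radii) < min_len:
        return False
    steps = np.diff(radii[-min_len:])
    return bool(np.all(steps > min_step) or np.all(steps < -min_step))
```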

Measurement of Spatial Traffic Information by Image Processing (영상처리를 이용한 공간 교통정보 측정)

  • 권영탁;소영성
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.2
    • /
    • pp.28-38
    • /
    • 2001
  • Traffic information can be broadly categorized into point information and spatial information. Point information can be obtained by checking only the presence of vehicles at prespecified points (small areas), whereas spatial information is obtained by monitoring a large area of the traffic scene. To obtain spatial information by image processing, we need to track vehicles over the whole traffic scene. An image detector system based on global tracking consists of video input, vehicle detection, vehicle tracking, and traffic information measurement. For video input, conventional approaches used an auto iris, which adapts very poorly to sudden brightness changes. Conventional methods for background generation do not yield good results at intersections with heavy traffic, and most early studies measure only point information. In this paper, we propose a user-controlled iris method to remedy the deficiency of the auto iris and design a frame-difference-based background generation method that performs far better at complicated intersections. We also propose measurement methods for spatial traffic information such as interval volume/time/velocity, queue length, and turning/forward traffic flow. We obtain a measurement accuracy of 95%∼100% when applying the above methods.
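
A minimal sketch of a frame-difference-based background update of the kind named above: pixels that barely change between consecutive frames are blended into the background, while moving-vehicle pixels are left untouched. The threshold and learning rate are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def update_background(bg, prev, curr, diff_thresh=15, alpha=0.05):
    """Frame-difference-based background update (sketch, grayscale frames).

    Only pixels with a small inter-frame difference are treated as background
    and blended in, which helps at busy intersections where vehicles queue.
    """
    diff = cv2.absdiff(curr, prev)
    static = diff < diff_thresh                       # likely background pixels
    bg = bg.astype(np.float32)
    bg[static] = (1 - alpha) * bg[static] + alpha * curr[static].astype(np.float32)
    return bg.astype(np.uint8)
```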


Deep learning framework for bovine iris segmentation

  • Heemoon Yoon;Mira Park;Hayoung Lee;Jisoon An;Taehyun Lee;Sang-Hee Lee
    • Journal of Animal Science and Technology
    • /
    • v.66 no.1
    • /
    • pp.167-177
    • /
    • 2024
  • Iris segmentation is an initial step for identifying the biometrics of animals when establishing a traceability system for livestock. In this study, we propose a deep learning framework for pixel-wise segmentation of bovine iris with a minimized use of annotation labels utilizing the BovineAAEyes80 public dataset. The proposed image segmentation framework encompasses data collection, data preparation, data augmentation selection, training of 15 deep neural network (DNN) models with varying encoder backbones and segmentation decoder DNNs, and evaluation of the models using multiple metrics and graphical segmentation results. This framework aims to provide comprehensive and in-depth information on each model's training and testing outcomes to optimize bovine iris segmentation performance. In the experiment, U-Net with a VGG16 backbone was identified as the optimal combination of encoder and decoder models for the dataset, achieving an accuracy and dice coefficient score of 99.50% and 98.35%, respectively. Notably, the selected model accurately segmented even corrupted images without proper annotation data. This study contributes to the advancement of iris segmentation and the establishment of a reliable DNN training framework.
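
A minimal sketch of the best-performing combination reported above, a U-Net decoder with a VGG16 encoder, together with a Dice score. The choice of the segmentation_models_pytorch library and the input size are assumptions; the paper does not specify its implementation framework.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net decoder with a VGG16 encoder, matching the combination reported above.
model = smp.Unet(encoder_name="vgg16", encoder_weights="imagenet",
                 in_channels=3, classes=1, activation=None)

def dice_score(logits, target, eps=1e-7):
    """Dice coefficient for binary masks (target in {0,1}, same shape as logits)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return (2 * inter + eps) / (prob.sum() + target.sum() + eps)

# Hypothetical forward pass on one RGB eye image resized to 224x224.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    mask_logits = model(x)          # shape (1, 1, 224, 224)
print(mask_logits.shape)
```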