• Title/Abstract/Keyword: model eye

Search results: 478

Anti-Spoofing Method for Iris Recognition by Combining the Optical and Textural Features of Human Eye

  • Lee, Eui Chul;Son, Sung Hoon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 6 No. 9
    • /
    • pp.2424-2441
    • /
    • 2012
  • In this paper, we propose a fake-iris detection method that combines the optical and textural features of the human eye. To extract the optical features, we used dual Purkinje images generated on the anterior cornea and the posterior lens surface, based on an analytic model of the human eye's optical structure. To extract the textural features, we measured the change in a given iris pattern (based on wavelet decomposition) with respect to the direction of illumination. The method improves on previous research in two ways. First, to obtain the optical and textural features simultaneously, we used five illuminators. Second, to improve fake-iris detection performance, we combined the optical and textural features with an SVM (Support Vector Machine). Combining the features resolves the problems of earlier single-feature approaches. Experimental results showed an EER (Equal Error Rate) of 0.133%.
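The fusion step described above can be sketched in a few lines. This is a minimal illustration with synthetic feature values, and a nearest-centroid rule stands in for the paper's SVM; the real features come from Purkinje-image measurements and wavelet decomposition, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "live" and "fake" samples: 2 optical + 3 textural scores each
live = np.hstack([rng.normal(1.0, 0.3, (50, 2)),    # optical features
                  rng.normal(1.0, 0.3, (50, 3))])   # textural features
fake = np.hstack([rng.normal(-1.0, 0.3, (50, 2)),
                  rng.normal(-1.0, 0.3, (50, 3))])

# Feature-level fusion is just concatenation of the two feature groups;
# the classifier then sees a single combined vector per sample.
centroids = {1: live.mean(axis=0), 0: fake.mean(axis=0)}

def classify(x):
    # Nearest-centroid rule standing in for the paper's SVM
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

acc = np.mean([classify(x) == 1 for x in live] +
              [classify(x) == 0 for x in fake])
print(acc)
```

The point of the sketch is only the fusion pattern: both feature groups enter one classifier, so a spoof that defeats a single cue still has to pass the other.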

Acceptable Values of Kappa for Comparison of Two Groups

  • Seigel Daniel G.;Podgor Marvin J.;Remaley Nancy A.
    • Korean Society for Preventive Medicine: Conference Proceedings
    • /
    • 1994 Faculty Workshop (Epidemiology), Korean Society for Preventive Medicine
    • /
    • pp.129-136
    • /
    • 1994
  • A model was developed for a simple clinical trial in which graders had defined probabilities of misclassifying pathologic material as disease present or absent. The authors compared Kappa between graders with efficiency and bias in the clinical trial in the presence of misclassification. Though related to bias and efficiency, Kappa did not predict these two statistics well. These results pertain generally to the evaluation of systems for encoding medical information, and to the relevance of Kappa in determining whether such systems are ready for use in comparative studies. The authors conclude that, by itself, Kappa is not informative enough to evaluate the appropriateness of a grading scheme for comparative studies. Additional, and perhaps difficult, questions must be addressed for such evaluation.
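Cohen's kappa, the statistic under evaluation here, can be computed directly from two graders' binary present/absent calls. The grades below are invented for illustration:

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two binary (0/1) rating sequences."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n  # raw agreement
    # Expected agreement if the two graders rated independently
    pa1, pb1 = sum(a) / n, sum(b) / n
    expected = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (observed - expected) / (1 - expected)

grader1 = [1, 1, 0, 0, 1, 0, 1, 0]
grader2 = [1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohen_kappa(grader1, grader2), 3))  # → 0.5
```

Kappa corrects raw agreement for chance, but, as the abstract argues, it says nothing by itself about the bias or efficiency a misclassifying grader induces in a trial.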


Product Images Attracting Attention: Eye-tracking Analysis

  • Pavel Shin;Kil-Soo Suh;Hyunjeong Kang
    • Asia Pacific Journal of Information Systems
    • /
    • Vol. 29 No. 4
    • /
    • pp.731-751
    • /
    • 2019
  • This study examined the impact of various product-photo features on the attention of potential consumers in online apparel retail. The way apparel product photos are presented in online shopping stores has changed considerably from the classic product photos of the early days. To investigate whether this shift is effective in attracting consumer attention, we reviewed the related theory and verified its effect through laboratory experiments, collecting and analyzing the data with eye-tracking technology. The results showed that asymmetric product photos are more attractive than symmetrical ones, that a fully emphasized object within a photo is more attractive than a partially emphasized one, that smiling faces attract customers more than neutral or sad ones, and that photos with off-center models draw more attention than photos with the model in the center. These results are expected to help design online shopping stores that better capture customer attention.

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images

  • 박소연;김예슬;나상일;박노욱
    • Korean Journal of Remote Sensing
    • /
    • Vol. 36 No. 5-1
    • /
    • pp.807-821
    • /
    • 2020
  • In this study, we evaluated the applicability of representative spatio-temporal fusion models, originally developed for fusing medium- and low-resolution satellite images, to the construction of high-resolution time-series imagery for crop monitoring. In particular, considering the principles of spatio-temporal fusion, we compared the prediction performance of the models according to differences in the characteristics of the input image pairs. Prediction performance was assessed through fusion experiments on time-series Sentinel-2 and RapidEye images acquired over cropland. Three spatio-temporal fusion models were applied: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three models produced different results in terms of prediction error and spatial similarity. Regardless of the model, however, the correlation between the low-resolution image at the prediction date and the image pair proved more important for prediction performance than the time difference between the prediction date and the acquisition date of the image pair. We also confirmed that, for crop monitoring, vegetation indices should be used as the input to spatio-temporal fusion to mitigate error propagation. These results are expected to serve as basic information for selecting optimal image pairs and input data types, and for developing improved models, in spatio-temporal fusion for crop monitoring.
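The finding that correlation with the prediction-date coarse image matters more than the temporal gap suggests a simple pair-selection criterion, sketched below with synthetic arrays rather than actual Sentinel-2/RapidEye data:

```python
import numpy as np

rng = np.random.default_rng(42)
# Coarse-resolution image at the prediction date (synthetic)
target_coarse = rng.normal(size=(50, 50))
# Candidate pair A: same scene plus mild noise (high correlation)
pair_a = target_coarse + rng.normal(0.1, 0.2, (50, 50))
# Candidate pair B: an unrelated scene (near-zero correlation)
pair_b = rng.normal(size=(50, 50))

def corr(x, y):
    # Pearson correlation between two images, flattened to vectors
    return float(np.corrcoef(x.ravel(), y.ravel())[0, 1])

best = max([("pair_a", corr(target_coarse, pair_a)),
            ("pair_b", corr(target_coarse, pair_b))], key=lambda t: t[1])
print(best[0])  # pair_a
```

Under this criterion a pair acquired further away in time can still be preferred, as long as its coarse image resembles the prediction-date scene more closely.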

Parafoveal Information Processing of Adults and Adolescents in Reading: Diffusion Model Analysis on Distributions of Eye Fixation Durations

  • 주혜리;고성룡
    • Korean Journal of Cognitive Science
    • /
    • Vol. 31 No. 4
    • /
    • pp.103-136
    • /
    • 2020
  • The purpose of this study was to verify the importance of the parafoveal preview effect, a key phenomenon in reading, and to compare the effect between two age groups, adults and adolescents, through eye-tracking experiments. We also examined whether the data obtained from the eye-tracking experiments could be explained by the starting-point parameter of a single-boundary diffusion model. Parafoveal processing was observed using the boundary technique. In Experiment 1, a high-frequency word presented as the parafoveal preview was compared with a masked preview. In Experiment 2, a low-frequency word was provided as the parafoveal preview and compared with a masked preview. In both experiments, a parafoveal preview benefit was observed in both the adolescent and adult groups when parafoveal information was available. We also examined group differences in first-fixation duration, single-fixation duration, and gaze duration according to the nature of the parafoveal information, namely word frequency. Dividing the first-fixation durations from the two experiments into quantiles and fitting them to the single-boundary diffusion model confirmed that parafoveal processing is captured by the starting-point parameter.
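The single-boundary diffusion model fitted here can be illustrated by simulation. The parameter values below are invented, but they show the qualitative claim: a higher starting point (more parafoveal preview) shortens first-passage times, i.e., fixation durations.

```python
import random

def first_passage_time(drift, boundary, start, dt=0.001, noise=1.0):
    """Accumulate noisy evidence until it crosses the single boundary."""
    t, x = 0.0, start
    while x < boundary:
        x += drift * dt + noise * random.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return t

random.seed(1)
# Starting point z = 0.0 (masked preview) vs. z = 0.5 (valid preview);
# drift and boundary are arbitrary illustrative values.
low_start = sum(first_passage_time(2.0, 1.0, 0.0) for _ in range(300)) / 300
high_start = sum(first_passage_time(2.0, 1.0, 0.5) for _ in range(300)) / 300
print(round(low_start, 3), round(high_start, 3))
```

For a positive drift v, the mean first-passage time is (boundary − start)/v, so moving the starting point up by preview information shortens the predicted fixation without changing the rate of evidence accumulation.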

Real-Time Eye Detection and Tracking Under Various Light Conditions

  • Park Ho Sik;Nam Kee Hwan;Seol Jeung Bo;Cho Hyeon Seob;Ra Sang Dong;Bae Cheol Soo
    • Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • 2004 Conference of the Institute of Electronics Engineers of Korea
    • /
    • pp.862-866
    • /
    • 2004
  • Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued these methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. By combining the bright-pupil effect produced by IR light with a conventional appearance-based object recognition technique, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated into both eye detection and tracking via a support vector machine and mean-shift tracking. Additional improvement is achieved by modifying the image-acquisition apparatus, including the illuminator and the camera.


SI Analysis for Quality Assurance of High-Speed Signals Interfaced Between a Processor and DDR2 Memory on a PCB Module

  • 하현수;김민성;하판봉;김영희
    • Korea Institute of Information and Communication Engineering: Conference Proceedings
    • /
    • 2013 Fall Conference of the Korea Institute of Information and Communication Engineering
    • /
    • pp.386-389
    • /
    • 2013
  • In this paper, for signal-integrity analysis of the high-speed signals interfaced between a processor and DDR2 memory, we performed transient analysis using the IBIS models of the IC chips and the S-parameters of the transmission lines, and generated eye diagrams. For the high-speed DQ and DQS/DQSb signals, as well as the clock, address, and control signals, we measured the timing and voltage margins during the setup/hold windows on the eye diagrams, and verified signal quality by confirming that the over-/undershoot and the crossing points of the differential signals met the specification.
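The eye-diagram measurement described above amounts to folding the waveform into one unit interval (UI) and reading margins off the overlay. A toy sketch with an ideal, noiseless NRZ waveform (not the paper's IBIS/S-parameter transient results):

```python
import numpy as np

samples_per_ui = 20
bits = np.array([0, 1, 1, 0, 1, 0, 0, 1])
# Ideal NRZ waveform: each bit held at its level for one UI
wave = np.repeat(bits.astype(float), samples_per_ui)

# Fold every unit interval on top of the others: one row per UI
eye = wave.reshape(-1, samples_per_ui)

# Crude voltage margin at the mid-UI sampling point:
# gap between the lowest '1' trace and the highest '0' trace
mid = eye[:, samples_per_ui // 2]
ones, zeros = mid[bits == 1], mid[bits == 0]
print(ones.min() - zeros.max())  # 1.0 for this noiseless waveform
```

With simulated jitter, crosstalk, and reflections folded in, the same overlay yields the timing margin (horizontal eye opening) and voltage margin (vertical opening) checked against the DDR2 setup/hold specification.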


Bird's Eye View Semantic Segmentation based on Improved Transformer for Automatic Annotation

  • Tianjiao Liang;Weiguo Pan;Hong Bao;Xinyue Fan;Han Li
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 17 No. 8
    • /
    • pp.1996-2015
    • /
    • 2023
  • High-definition (HD) maps provide precise road information that enables an autonomous driving system to navigate a vehicle effectively. Recent research has focused on leveraging semantic segmentation to achieve automatic annotation of HD maps. However, existing methods suffer from low recognition accuracy in autonomous driving scenarios, leading to inefficient annotation. In this paper, we propose a novel semantic segmentation method for automatic HD map annotation. Our approach introduces a new encoder, the convolutional transformer hybrid encoder, to enhance the model's feature-extraction capability. We also propose a multi-level fusion module that enables the model to aggregate different levels of detail and semantic information, and a novel decoupled boundary joint decoder to improve the model's handling of boundaries between categories. To evaluate our method, we conducted experiments on the Bird's Eye View point-cloud image dataset and the Cityscapes dataset. Comparative analysis against state-of-the-art methods demonstrates that our model achieves the highest performance: an mIoU of 56.26%, surpassing SegFormer by 1.47 percentage points. This innovation promises to significantly enhance the efficiency of automatic HD map annotation.
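The mIoU figure reported above is the per-class intersection-over-union averaged over classes present in the data. A minimal sketch with tiny synthetic masks (class IDs are arbitrary, not from the BEV dataset):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Average IoU over classes that appear in prediction or target."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0, 1],
                 [1, 2, 2]])
target = np.array([[0, 1, 1],
                   [1, 2, 2]])
print(round(mean_iou(pred, target, 3), 3))  # → 0.722
```

Because each class contributes equally regardless of pixel count, mIoU rewards exactly the boundary-region accuracy the decoupled boundary decoder targets.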

Biometric identification of Black Bengal goat: unique iris pattern matching system vs deep learning approach

  • Menalsh Laishram;Satyendra Nath Mandal;Avijit Haldar;Shubhajyoti Das;Santanu Bera;Rajarshi Samanta
    • Animal Bioscience
    • /
    • Vol. 36 No. 6
    • /
    • pp.980-989
    • /
    • 2023
  • Objective: Iris pattern recognition is well developed and widely practiced in humans; however, there is little information on applying iris recognition to animals under field conditions, where the major challenge is capturing a high-quality iris image from a constantly moving, non-cooperative animal even when properly restrained. The aim of the study was to validate biometric identification of Black Bengal goats to improve animal management in a traceability system. Methods: Forty-nine healthy, disease-free female Black Bengal goats, 3 months ± 6 days old, were randomly selected in farmers' fields. Eye images were captured from the left eye of each goat at 3, 6, 9, and 12 months of age using a specialized camera made for human iris scanning. iGoat software was used to match the same individual goats across the 3-, 6-, 9-, and 12-month ages. The Resnet152V2 deep learning algorithm was further applied to the same image sets to predict matching percentages using only the captured eye images, without extracting iris features. Results: The matching threshold computed within and between goats was 55%. The template-matching accuracies at 3, 6, 9, and 12 months of age were 81.63%, 90.24%, 44.44%, and 16.66%, respectively. As the accuracies at 9 and 12 months were low and below the minimum threshold matching percentage, this iris pattern matching process was not acceptable. After training, the validation accuracies of the Resnet152V2 deep learning model were 82.49%, 92.68%, 77.17%, and 87.76% for identifying goats at 3, 6, 9, and 12 months of age, respectively. Conclusion: This study strongly supports that a deep learning method using eye images could serve as a signature for biometric identification of individual goats.

OpenCV-Based Pet Health Age Prediction System for Reasonable Insurance Premium Calculation

  • 지민규;김요한;박승민
    • The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • Vol. 19 No. 3
    • /
    • pp.577-582
    • /
    • 2024
  • Pet insurance was first introduced in Korea in 2007; many insurance products have since appeared, and the pet-insurance market continues to grow every year. In reality, however, as of 2022 only 0.8% of pet owners had purchased pet insurance, and owners remain reluctant to enroll because of expensive premiums, limited coverage, and strict eligibility criteria. This paper proposes a model that recognizes eye diseases in pets, locates the lesions, and predicts the pet's health age. EfficientNet is first used to recognize the pet's eye disease, and OpenCV is then used to detect the location and size of the lesion, from which the pet's health age is computed. The computed health age is intended to assist insurers in setting pet-insurance premiums. This model can support reasonable premium calculation based on eye disease and health age.
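The final step, turning a detected lesion into a health-age estimate, might look like the sketch below. The mask, the ages, and the linear adjustment rule are all invented for illustration; the paper's actual pipeline uses EfficientNet for disease recognition and OpenCV for lesion localization, neither of which is reproduced here.

```python
import numpy as np

# Hypothetical binary lesion mask produced by a localization step
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:70] = True           # invented lesion region

lesion_ratio = mask.mean()          # fraction of the eye image affected
chronological_age = 5.0             # years; hypothetical pet
# Toy rule: each percentage point of affected area ages the eye slightly
health_age = chronological_age + 10.0 * lesion_ratio
print(round(health_age, 2))
```

Any real premium-support system would replace the toy rule with a calibration learned from veterinary outcome data; the sketch only shows the shape of the computation.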