• Title/Summary/Keyword: Eye detection

Search results: 432

Tiny and Blurred Face Alignment for Long Distance Face Recognition

  • Ban, Kyu-Dae;Lee, Jae-Yeon;Kim, Do-Hyung;Kim, Jae-Hong;Chung, Yun-Koo
    • ETRI Journal
    • /
    • v.33 no.2
    • /
    • pp.251-258
    • /
    • 2011
  • Face alignment applied after face detection strongly influences face recognition performance. Many researchers have recently investigated face alignment using databases collected from images taken at close distances and with low magnification. However, in the case of home-service robots, the captured images are generally of low resolution and low quality, so previous face alignment approaches, such as eye detection, are not appropriate for robot environments. The main purpose of this paper is to provide a new and effective approach to the alignment of small and blurred faces. We propose a face alignment method that uses the confidence value of Real-AdaBoost with a modified census transform feature. We also evaluate the face recognition system to compare the proposed face alignment module with those of other systems. Experimental results show that the proposed method achieves a recognition rate higher than that of face alignment methods using manually marked eye positions.
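
The modified census transform (MCT) feature mentioned in this abstract compares each pixel's 3x3 neighborhood against the local mean to produce a 9-bit code. The sketch below is a minimal illustration of that feature only (not the paper's Real-AdaBoost training or confidence computation); the function name and the fixed 3x3 neighborhood are assumptions.

```python
# Minimal MCT sketch: one 9-bit code per interior pixel of a grayscale image.
import numpy as np

def modified_census_transform(gray):
    g = gray.astype(np.float32)
    h, w = g.shape
    # Gather the 3x3 neighborhood of every interior pixel into shape (h-2, w-2, 9).
    patches = np.stack([g[y:y + h - 2, x:x + w - 2]
                        for y in range(3) for x in range(3)], axis=-1)
    mean = patches.mean(axis=-1, keepdims=True)
    bits = (patches > mean).astype(np.uint16)          # compare each pixel to the local mean
    weights = (1 << np.arange(9)).astype(np.uint16)    # bit weights 1, 2, ..., 256
    return (bits * weights).sum(axis=-1).astype(np.uint16)
```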

Deep learning based face mask recognition for access control (출입 통제에 활용 가능한 딥러닝 기반 마스크 착용 판별)

  • Lee, Seung Ho
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.8
    • /
    • pp.395-400
    • /
    • 2020
  • Coronavirus disease 2019 (COVID-19) was identified in December 2019 in China and has spread globally, resulting in an ongoing pandemic. Because COVID-19 is spread mainly from person to person, everyone is required to wear a facemask in public. However, many people still do not wear facemasks despite official advice. This paper proposes a method to predict whether a human subject is wearing a facemask or not. In the proposed method, two eye regions are detected, and the mask region (i.e., the face region below the two eyes) is predicted and extracted based on the two eye locations. For more accurate extraction of the mask region, the facial region is aligned by rotating it so that the line connecting the two eye centers is horizontal. The mask region extracted from the aligned face is fed into a convolutional neural network (CNN), which produces the classification result (with or without a mask). The experimental result on 186 test images shows that the proposed method achieves a very high accuracy of 98.4%.
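
The alignment step described here (rotating the face so the line through the two eye centers is horizontal, then cropping the region below the eyes) can be sketched as follows. The function name, the crop rule, and the downstream CNN are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of eye-based face alignment and mask-region cropping.
import cv2
import numpy as np

def align_and_crop_mask_region(face_bgr, left_eye, right_eye):
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))    # tilt of the line through the eyes
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    h, w = face_bgr.shape[:2]
    M = cv2.getRotationMatrix2D(center, angle, 1.0)     # rotate so the eye line is horizontal
    aligned = cv2.warpAffine(face_bgr, M, (w, h))
    eye_y = int(center[1])
    return aligned[eye_y:h, :]                          # face region below the eyes
```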

Stereo Vision-Based Obstacle Detection and Vehicle Verification Methods Using U-Disparity Map and Bird's-Eye View Mapping (U-시차맵과 조감도를 이용한 스테레오 비전 기반의 장애물체 검출 및 차량 검증 방법)

  • Lee, Chung-Hee;Lim, Young-Chul;Kwon, Soon;Lee, Jong-Hun
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.47 no.6
    • /
    • pp.86-96
    • /
    • 2010
  • In this paper, we propose stereo vision-based obstacle detection and vehicle verification methods using a U-disparity map and bird's-eye view mapping. First, we extract a road feature using the most frequent values in each row and column, and then extract obstacle areas on the road using this road feature. To extract the obstacle areas precisely, we use the U-disparity map, on which obstacle areas can be extracted with a threshold derived from the disparity values and the camera parameters. Because the extracted obstacle areas may still contain multiple obstacles, an additional segmentation step is performed: the extracted obstacle areas are converted into a bird's-eye view using camera modeling and parameters, and they can be segmented robustly there because obstacles are represented on the bird's-eye view according to their range. Finally, we verify whether each obstacle is a vehicle using various vehicle features, namely road contact, constant horizontal length, aspect ratio, and texture information. We conduct experiments to demonstrate the performance of the proposed algorithms in real traffic situations.
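
A U-disparity map of the kind used above is a per-column histogram of disparity values; the sketch below shows one common way to build it. The paper's road-feature extraction, threshold, and bird's-eye view mapping are not reproduced, and the names are illustrative.

```python
# Hedged sketch: build a U-disparity map from a dense disparity image.
import numpy as np

def u_disparity(disparity, max_disp=64):
    """Histogram the disparity values of every image column."""
    h, w = disparity.shape
    u_map = np.zeros((max_disp, w), dtype=np.int32)
    d = np.clip(disparity.astype(np.int32), 0, max_disp - 1)
    for u in range(w):
        u_map[:, u] = np.bincount(d[:, u], minlength=max_disp)
    return u_map  # large counts mark vertical structures (obstacles) in column u
```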

Face Detection Based on Thick Feature Edges and Neural Networks

  • Lee, Young-Sook;Kim, Young-Bong
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.12
    • /
    • pp.1692-1699
    • /
    • 2004
  • Many researchers have developed various techniques for detecting human faces in ordinary still images. Face detection is the first and essential step of a face recognition system. The two main problems of face detection are how to reduce the running time and how to reduce the number of false positives. In this paper, we present a frontal and near-frontal face detection algorithm for still gray images that uses a thick edge image and a neural network. We have devised a new filter that produces the thick edge image. Our overall detection scheme consists of two main phases. In the first phase, we create the thick edge image using the filter and search for face candidates with a whole-face detector; this step is very helpful in discarding many windows that contain no face. The second phase verifies the remaining candidates using component-based eye detectors and the whole-face detector. The experimental results show that our algorithm reduces both the running time and the number of false positives.
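
The paper's custom thick-edge filter is not described in detail here, so the sketch below only approximates the idea with a generic substitute: gradient-magnitude edges thickened by dilation before a window-based face search. Thresholds and names are illustrative, not the paper's filter.

```python
# Hedged approximation of a thick-edge image (generic substitute, not the paper's filter).
import cv2
import numpy as np

def thick_edge_image(gray, edge_thresh=60, thickness=3):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)                         # gradient magnitude
    edges = (mag > edge_thresh).astype(np.uint8) * 255
    kernel = np.ones((thickness, thickness), np.uint8)
    return cv2.dilate(edges, kernel)                    # thin edges become thick edge blobs
```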

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • 박강령
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.2
    • /
    • pp.79-88
    • /
    • 2004
  • Gaze detection is the task of locating, by computer vision, the position on a monitor screen where a user is looking. Gaze detection systems have numerous fields of application, such as man-machine interfaces that help the handicapped use computers and view control in three-dimensional simulation programs. In our work, we implement gaze detection with a computer vision system using a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position due to facial movement is then computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position due to eye movement. Experimental results show that we can obtain the facial and eye gaze position on a monitor with an RMS error of about 4.8 cm between the computed and actual positions.
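
The final facial-gaze step described above takes the gaze direction from the normal of the plane spanned by the reconstructed 3D feature points. A minimal sketch of that geometric step is given below, assuming the SVM feature detection and 3D reconstruction are already done; names are illustrative.

```python
# Hedged sketch: unit normal of the plane through three reconstructed 3D feature points.
import numpy as np

def facial_plane_normal(p1, p2, p3):
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)      # normal of the plane spanned by the three points
    return n / np.linalg.norm(n)
```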

Development of Sleepy Status Monitoring System using the Histogram and Edge Information of Eyes (눈의 히스토그램과 에지를 이용한 졸린 상태 감시 시스템 개발)

  • Kang, Su Min;Huh, Kyung Moo;Joo, Young-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.5
    • /
    • pp.361-366
    • /
    • 2016
  • In this paper, we propose a technique for drowsiness detection using the histogram and edge information of the eyes. Driver drowsiness is a major cause of vehicle accidents, so checking eye images to detect a driver's drowsy state is very important for accident prevention. In the proposed method, we analyze changes in the histograms and edges of eye-region images acquired with a CCD camera. The experimental results show that the proposed method raises the accuracy of drowsiness detection to nearly 99% and can be used to prevent vehicle accidents caused by driver drowsiness.
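
In the spirit of this abstract, a simple eye-openness check can combine the edge density and the intensity histogram of the eye region; the sketch below is a hedged illustration with made-up thresholds, not the paper's detector.

```python
# Hedged sketch: decide eye openness from edge density and dark-pixel ratio (8-bit gray ROI).
import cv2
import numpy as np

def eye_is_open(eye_gray, edge_ratio_thresh=0.08, dark_ratio_thresh=0.05):
    edges = cv2.Canny(eye_gray, 50, 150)
    edge_ratio = np.count_nonzero(edges) / edges.size          # iris/eyelid contours add edges
    hist = cv2.calcHist([eye_gray], [0], None, [256], [0, 256]).ravel()
    dark_ratio = hist[:60].sum() / eye_gray.size               # dark pixels from the pupil/iris
    return edge_ratio > edge_ratio_thresh and dark_ratio > dark_ratio_thresh
```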

Real Time Eye and Gaze Tracking (실시간 Eye와 Gaze 트래킹)

  • Min Jin-Kyoung;Cho Hyeon-Seob
    • Proceedings of the KAIS Fall Conference
    • /
    • 2004.11a
    • /
    • pp.234-239
    • /
    • 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real-time gaze tracking for interactive graphic display. Unlike most existing gaze tracking techniques, which often assume a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to other individuals not used in the training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments that involve gaze-contingent interactive graphic display.
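
A GRNN of the kind used above is essentially Nadaraya-Watson kernel regression from pupil-parameter vectors to screen coordinates; the sketch below shows that mapping under the assumption of a given training set, with illustrative names and a single smoothing parameter.

```python
# Hedged GRNN sketch: kernel-weighted average of training screen coordinates.
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=1.0):
    """X_train: (n, d) pupil parameters, Y_train: (n, 2) screen coords, x: (d,) query."""
    d2 = np.sum((X_train - x) ** 2, axis=1)             # squared distances to training samples
    w = np.exp(-d2 / (2.0 * sigma ** 2))                # Gaussian kernel weights
    return (w[:, None] * Y_train).sum(axis=0) / w.sum()
```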

A Study on the Feature Region Segmentation for the Analysis of Eye-fundus Images (안저영상(眼底映像) 해석(解析)을 위한 특징영역(特徵領域)의 분할(分割)에 관한 연구(硏究))

  • Kang, Jeon-Kwun;Kim, Seung-Bum;Ku, Ja-Yl;Han, Young-Hwan;Hong, Hong-Seung
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1993 no.11
    • /
    • pp.27-30
    • /
    • 1993
  • Information about retinal blood vessels can be used in grading disease severity or as part of the process of automated diagnosis of diseases with ocular manifestations. In this paper, we address the problem of detecting retinal blood vessels and the optic disk (papilla) in eye-fundus images. We introduce an algorithm for feature extraction based on fuzzy clustering (FCM). The results are compared to those obtained with other methods. The automatic detection of retinal blood vessels and the optic disk in eye-fundus images could help physicians in diagnosing ocular diseases.
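
Reading FCM as fuzzy c-means, the clustering step named in this abstract can be sketched as below; the initialization, the number of clusters, and the choice of pixel features are assumptions, not the paper's settings.

```python
# Hedged fuzzy c-means sketch, e.g. on pixel intensities reshaped to (n_samples, 1).
import numpy as np

def fuzzy_c_means(data, c=3, m=2.0, n_iter=50, eps=1e-9):
    """data: (n_samples, n_features). Returns cluster centers and fuzzy memberships."""
    n = data.shape[0]
    u = np.random.dirichlet(np.ones(c), size=n)          # random fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ data / um.sum(axis=0)[:, None]  # weighted cluster centers
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + eps
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = 1.0 / (d ** (2.0 / (m - 1)) * np.sum(d ** (-2.0 / (m - 1)), axis=1, keepdims=True))
    return centers, u
```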

Eye Detection Based on Texture Information (텍스처 기반의 눈 검출 기법)

  • Park, Chan-Woo;Park, Hyun;Moon, Young-Shik
    • Annual Conference of KIPS
    • /
    • 2007.05a
    • /
    • pp.315-318
    • /
    • 2007
  • Various research areas related to facial images, such as automatic face recognition and facial expression recognition, generally require normalization of the input face image. Because human faces vary widely in shape with expression, illumination, and other factors, finding accurate representative feature points in every input image is a difficult problem. In particular, closed eyes and small eyes are hard to detect, which is a major cause of performance degradation in face-related research. To achieve eye detection that is robust to such variations, this paper proposes an eye detection method that uses the texture information of the eye. We define the texture characteristics of the eye within the face region and design two types of Eye filters. The proposed method consists of four steps: AdaBoost-based face region detection, illumination normalization, eye candidate region detection using the Eye filters, and eye position detection. Experimental results show that the proposed method is robust to facial pose, expression, and illumination conditions, and also gives robust results for images with closed eyes.
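
The paper's Eye filters are not specified here, so as a hedged stand-in for the same pipeline (face detection followed by eye localization inside the face region), the sketch below uses OpenCV's stock Haar cascades rather than the proposed filters; parameters are illustrative.

```python
# Hedged stand-in pipeline: detect faces, then locate eye centers inside each face region.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eyes(gray):
    eyes_found = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]                     # search for eyes inside the face only
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, 1.1, 5):
            eyes_found.append((x + ex + ew // 2, y + ey + eh // 2))   # eye centers
    return eyes_found
```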

Mapping Studies on Visual Search, Eye Movement, and Eye track by Bibliometric Analysis

  • Rhie, Ye Lim;Lim, Ji Hyoun;Yun, Myung Hwan
    • Journal of the Ergonomics Society of Korea
    • /
    • v.34 no.5
    • /
    • pp.377-399
    • /
    • 2015
  • Objective: The aim of this study is to understand and identify the critical issues in the vision research area using content analysis and network analysis. Background: Vision, the most influential factor in information processing, has been studied in a wide range of areas. As studies on vision are dispersed across a broad range of research and the number of published studies is ever increasing, a bibliometric analysis of the literature would assist researchers in understanding and identifying critical issues in their research. Method: In this study, content and network analysis were applied to the metadata of literature collected using three search keywords: 'visual search', 'eye movement', and 'eye tracking'. Results: Content analysis extracts meaningful information from the text and yielded seven categories of research areas: 'stimuli and task', 'condition', 'measures', 'participants', 'eye movement behavior', 'biological system', and 'cognitive process'. Network analysis extracts the relational aspects of research areas, presenting the characteristics of sub-groups identified by a community detection algorithm. Conclusion: Using these methods, studies on vision were quantitatively analyzed, and the results helped clarify the overall relations between concepts and keywords. Application: The results of this study suggest that the use of content and network analysis helps identify not only the trends of specific research areas but also the relational aspects of each research issue while minimizing researchers' bias. Moreover, the investigated structural relationships would help identify interrelated subjects from a macroscopic view.
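
The network-analysis step described above groups related keywords by community detection; the sketch below builds a keyword co-occurrence graph and applies greedy modularity maximization from networkx as a stand-in for the (unnamed) algorithm used in the paper. The input format is an assumption.

```python
# Hedged sketch: keyword co-occurrence network plus community detection.
from itertools import combinations
import networkx as nx
from networkx.algorithms import community

def keyword_communities(papers_keywords):
    """papers_keywords: one list of keywords per paper (illustrative input format)."""
    G = nx.Graph()
    for keywords in papers_keywords:
        ks = sorted(set(keywords))
        G.add_nodes_from(ks)
        for a, b in combinations(ks, 2):
            # Weight an edge by how many papers mention both keywords.
            w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=w)
    return list(community.greedy_modularity_communities(G, weight="weight"))
```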