• Title/Summary/Keyword: Eye detection

Search Results: 432

Design and Implementation of Face Direction Recognition System using Face Detection (얼굴 검출을 이용한 얼굴 방향 인식 시스템의 설계 및 구현)

  • Yum, Hyo Sub; Lee, Joo-Hyung; Hong, Min
    • Annual Conference of KIPS / 2012.11a / pp.583-585 / 2012
  • This paper proposes a system that recognizes the direction a face is looking in using a web camera. A face is first detected with a Haar-like face detector, only the detected face area is set as the region of interest within the whole image, and the eye regions are then detected inside it with a Haar-like eye detector. The average of the detected eye positions is used to decide whether the face is turned to the left or to the right. Experimental results show that face and eye regions were detected fairly accurately and that the computed eye positions gave good face direction recognition performance.
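
A minimal sketch of the kind of cascade-based pipeline described above, using OpenCV's bundled Haar cascades; the cascade files, detection parameters, and the offset threshold used to decide left/right are assumptions for illustration, not the authors' implementation.

```python
import cv2

# Bundled OpenCV Haar cascades (an assumption; the paper's cascades may differ).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def face_direction(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]                   # search eyes only inside the face ROI
        eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
        if len(eyes) == 0:
            continue
        # Mean horizontal eye position relative to the face box centre.
        mean_x = sum(ex + ew / 2 for (ex, ey, ew, eh) in eyes) / len(eyes)
        offset = (mean_x - w / 2) / w                  # normalised to roughly [-0.5, 0.5]
        if offset < -0.1:                              # illustrative threshold
            return "left"
        if offset > 0.1:
            return "right"
        return "front"
    return None
```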

A Drowsy Driver Monitoring System through Eye Closure State Detection Algorithm on Mobile Device (모바일 환경에서 눈 폐쇄 상태 검출을 통한 졸음운전 감지)

  • Park, Yoo-Jin; Choi, Young-Ho; Cho, Hae-Hyun; Kim, Gye-Young
    • Annual Conference of KIPS / 2012.11a / pp.597-600 / 2012
  • The goal of this study is to develop an eye-closure detection algorithm and, based on it, to implement a drowsy-driving detection system for the mobile environment. The developed algorithm binarizes the detected eye region with a threshold obtained experimentally from histogram analysis and then decides whether the driver's eyes are closed. Once face and eye detection is complete, the implemented system judges whether the detected eyes are closed; if the closed state persists, the system concludes that the driver is drowsy and issues a warning. Because mobile devices have limited resources, the efficiency of processing speed matters as much as the accuracy of the image processing, so an algorithm suited to these constraints was developed and used to successfully implement the drowsiness detection system.
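
A minimal sketch of eye-closure detection by thresholding a detected eye region, in the spirit of the abstract above; the binarization threshold, the dark-pixel ratio, and the number of consecutive closed frames that trigger a warning are illustrative assumptions rather than the experimentally tuned values of the paper.

```python
import cv2
import numpy as np

def eye_closed(eye_roi_gray, bin_threshold=60, dark_ratio_closed=0.02):
    # Dark pixels roughly correspond to the pupil/iris: an open eye shows many,
    # a closed eye shows few. Both thresholds are placeholders.
    _, binary = cv2.threshold(eye_roi_gray, bin_threshold, 255, cv2.THRESH_BINARY_INV)
    return np.count_nonzero(binary) / binary.size < dark_ratio_closed

class DrowsinessMonitor:
    """Warn when the eyes stay closed for several consecutive frames."""
    def __init__(self, closed_frames_for_alarm=15):
        self.closed_frames_for_alarm = closed_frames_for_alarm
        self.closed_run = 0

    def update(self, eye_roi_gray):
        self.closed_run = self.closed_run + 1 if eye_closed(eye_roi_gray) else 0
        return self.closed_run >= self.closed_frames_for_alarm
```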

A Study on Analysis and Service of the Face Detection to Prevent Drowsiness (졸음방지를 위한 안면검출 해석과 서비스에 관한 연구)

  • Lee, Dae-Yeon; Lee, Soo-Yong; Park, Jong-Won; Kim, Jeong-Ho
    • Annual Conference of KIPS / 2020.11a / pp.508-510 / 2020
  • Over the five years from 2015 to 2019, 1,079 people were killed on expressways, and drowsy or inattentive driving accounted for the largest share with 729 deaths (67.6%). Rest areas and drowsiness shelters have been provided to prevent drowsy driving, but accidents caused by it still continue to occur. To help prevent such accidents, this study captures video with an infrared camera and implements face detection analysis and an associated service. PERCLOS (Percentage of Eye Closure) was applied as the criterion for judging the pupil state obtained through face detection and for deciding whether the driver is falling asleep. When the major-to-minor axis ratio of the driver's pupil fell below 1 : 0.35, the driver was judged to be drowsy and a voice alarm was issued, improving drowsiness prevention.
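
A minimal sketch of the two cues mentioned above, the pupil axis ratio and PERCLOS; the 1 : 0.35 axis ratio comes from the abstract, while the sliding-window length and the PERCLOS alarm level are assumptions.

```python
from collections import deque

AXIS_RATIO_CLOSED = 0.35   # from the abstract: eye treated as closed when minor/major < 0.35

def eye_closed(major_axis, minor_axis):
    return (minor_axis / major_axis) < AXIS_RATIO_CLOSED

class Perclos:
    """PERCLOS = fraction of recent frames in which the eye was closed."""
    def __init__(self, window_frames=900, alarm_level=0.3):   # e.g. 30 s at 30 fps (placeholder)
        self.window = deque(maxlen=window_frames)
        self.alarm_level = alarm_level

    def update(self, major_axis, minor_axis):
        self.window.append(eye_closed(major_axis, minor_axis))
        perclos = sum(self.window) / len(self.window)
        return perclos, perclos >= self.alarm_level
```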

Advanced lane detection algorithm using YOLOPv2 and OpenCV (YOLOPv2 와 OpenCV 를 적용한 차선 검출 알고리즘)

  • Ho-Jae Kim; Donggyu-Seo; Inhyuk Jeong; Yeongseok Hwang; Eunbyung Park
    • Annual Conference of KIPS / 2023.11a / pp.1165-1166 / 2023
  • This paper proposes an algorithm that maximizes lane detection performance by adding an OpenCV-based post-processing stage on top of YOLOPv2. The main steps are lane recognition with the YOLOPv2 model, bird's-eye-view transformation, distortion correction with Sobel and morphology filters, histogram-based lane detection, and a final post-processing algorithm. The technique is expected to be applicable to autonomous driving and road information services and can improve lane detection accuracy.
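
A minimal sketch of the OpenCV post-processing steps named above (bird's-eye-view warp, morphology clean-up, histogram-based lane search), applied to a binary lane mask such as the one a YOLOPv2 lane branch produces; the perspective points and kernel size are placeholders.

```python
import cv2
import numpy as np

def birds_eye_view(lane_mask, src_pts, dst_pts):
    # Warp the binary lane mask to a top-down view; src/dst points are placeholders.
    h, w = lane_mask.shape[:2]
    M = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(lane_mask, M, (w, h))

def lane_base_positions(bev_mask):
    # Clean the warped mask with a closing, then locate the left/right lane bases
    # from the column histogram of the lower half of the image.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    cleaned = cv2.morphologyEx(bev_mask, cv2.MORPH_CLOSE, kernel)
    histogram = np.sum(cleaned[cleaned.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    left_base = int(np.argmax(histogram[:midpoint]))
    right_base = int(np.argmax(histogram[midpoint:])) + midpoint
    return left_base, right_base
```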

Tamper Detection of Digital Images using Hash Functions (해쉬 함수를 이용한 디지털 영상의 위변조 검출)

  • Woo, Chan-Il; Lee, Seung-Dae
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.7 / pp.4516-4521 / 2014
  • Digital watermarking schemes for digital image authentication and integrity are based on fragile watermarking and can detect any modification of a watermarked image by comparing the embedded watermark with a regenerated watermark. The digital watermark for image authentication and integrity should therefore be destroyed easily when the image is changed by digital image processing such as scaling or filtering. This paper proposes an effective tamper detection scheme for digital images. In the proposed scheme, the original image is divided into non-overlapping 2×2 blocks, and the watermark is divided and embedded into the two LSBs of each block, so the distortion is imperceptible to the human eye. The watermark extraction process can then determine whether the watermarked image has been tampered with. The experimental results demonstrated the effectiveness of the proposed scheme.
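
A minimal sketch of block-wise fragile watermarking in the spirit of the abstract: a hash-derived bit string is embedded in the two LSBs of each 2×2 block of an 8-bit grayscale image and re-checked at verification time. The use of SHA-256 and the exact block-to-bit mapping are assumptions for illustration, not the paper's scheme.

```python
import hashlib
import numpy as np

def _block_bits(block_msb, key, n_bits=8):
    # Hash the block's upper six bit-planes together with a key and keep n_bits bits.
    digest = hashlib.sha256(key + block_msb.tobytes()).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))[:n_bits]

def embed(gray_image, key=b"secret"):
    img = gray_image.copy()                            # uint8 grayscale assumed
    h, w = img.shape
    for y in range(0, h - h % 2, 2):
        for x in range(0, w - w % 2, 2):
            block = img[y:y + 2, x:x + 2]
            bits = _block_bits(block >> 2, key)        # 4 pixels x 2 LSBs = 8 bits
            block &= 0b11111100                        # clear the two LSBs
            block |= (bits[0::2] << 1 | bits[1::2]).reshape(2, 2).astype(img.dtype)
    return img

def tampered_blocks(watermarked, key=b"secret"):
    """Return (row, col) indices of 2x2 blocks whose embedded bits no longer match."""
    bad = []
    h, w = watermarked.shape
    for y in range(0, h - h % 2, 2):
        for x in range(0, w - w % 2, 2):
            block = watermarked[y:y + 2, x:x + 2]
            expected = _block_bits(block >> 2, key)
            pixels = block.reshape(-1)
            stored = np.empty(8, dtype=np.uint8)
            stored[0::2] = (pixels >> 1) & 1
            stored[1::2] = pixels & 1
            if not np.array_equal(stored, expected):
                bad.append((y // 2, x // 2))
    return bad
```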

Robust Real-time Face Detection Scheme on Various illumination Conditions (다양한 조명 환경에 강인한 실시간 얼굴확인 기법)

  • Kim, Soo-Hyun; Han, Young-Joon; Cha, Hyung-Tai; Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.7 / pp.821-829 / 2004
  • Face recognition has been used to verify and authorize valid users, but its applications have been restricted by lighting conditions. To minimize these restrictions, this paper proposes a new algorithm for detecting a face in an input image obtained under irregular lighting. First, the proposed algorithm extracts an edge difference image from the input image, in which the skin color and face contour may be lost because of the background color or the lighting direction. Next, it extracts a face region using the histogram of the edge difference image and the intensity information. Using the intensity information, the face region is divided into horizontal regions that may contain facial features. Each horizontal region is classified into one of three groups according to the facial features it may contain (eyes, nose, and mouth), and the facial features are extracted using their empirical properties. The face region is accepted as a face only when the facial features satisfy their topological rules. Experiments showed that the proposed algorithm can detect faces even when a large portion of the face contour is lost due to inadequate lighting or when the image background color is similar to the skin color.
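
A minimal sketch of two steps mentioned above, building an edge image that does not depend on skin color and splitting the candidate face region into horizontal bands from a row projection of the edge response; the kernel size, threshold, and band-grouping rule are assumptions.

```python
import cv2
import numpy as np

def edge_difference(gray):
    # Morphological gradient (dilation minus erosion) highlights contours even
    # when absolute intensity is skewed by the lighting direction.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.dilate(gray, kernel) - cv2.erode(gray, kernel)

def horizontal_bands(face_roi_gray, n_bands_expected=3):
    edges = edge_difference(face_roi_gray)
    row_profile = edges.sum(axis=1).astype(np.float64)
    row_profile /= row_profile.max() + 1e-9
    strong = row_profile > 0.5                 # illustrative threshold
    # Group consecutive strong rows into candidate feature bands (eyes, nose, mouth).
    bands, start = [], None
    for i, s in enumerate(strong):
        if s and start is None:
            start = i
        elif not s and start is not None:
            bands.append((start, i))
            start = None
    if start is not None:
        bands.append((start, len(strong)))
    return bands[:n_bands_expected]
```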

A Novel Eyelashes Removal Method for Improving Iris Data Preservation Rate (홍채영역에서의 홍채정보 보존율 향상을 위한 새로운 속눈썹 제거 방법)

  • Kim, Seong-Hoon; Han, Gi-Tae
    • KIPS Transactions on Software and Data Engineering / v.3 no.10 / pp.429-440 / 2014
  • Iris recognition is a biometric technology that extracts and encodes unique iris features from an image of the human eye and compares them with the various iris codes stored in a system. Eyelashes in the iris image, however, are an external factor that degrades the iris recognition rate. If eyelashes are not removed accurately from the iris area, two kinds of false recognition occur: eyelashes are taken for iris features, or iris features are taken for eyelashes, and both cause a large loss of iris information. In this paper, to solve this problem, we remove eyelashes with a Gabor filter, which is commonly used for analyzing frequency features, and thereby improve the preservation rate of the iris information. By extracting various features in the iris area through the angle, frequency, and Gaussian parameters of the Gabor filter, the proposed method can accurately remove eyelashes of various lengths and shapes. As a result, the proposed method improves the iris preservation rate by about 4% compared with previous methods based on GMM or histogram analysis.
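
A minimal sketch of Gabor-based eyelash masking as outlined above: a small bank of oriented Gabor kernels responds strongly to thin eyelash structures, and the pooled response is thresholded into a mask. The kernel parameters and the threshold are illustrative assumptions, not the parameter sets used in the paper.

```python
import cv2
import numpy as np

def eyelash_mask(iris_gray, thetas=(np.pi / 2, np.pi / 3, 2 * np.pi / 3)):
    responses = []
    for theta in thetas:
        # Kernel size, sigma, wavelength, and aspect ratio are placeholders.
        kernel = cv2.getGaborKernel((15, 15), sigma=3.0, theta=theta,
                                    lambd=6.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(iris_gray.astype(np.float32), -1, kernel))
    response = np.max(responses, axis=0)
    # A strong oriented response marks a likely eyelash pixel.
    return (response > response.mean() + 2 * response.std()).astype(np.uint8) * 255
```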

Properties and Application of Azo based Dyes for Detecting Hazardous Acids (유해 산 검출용 아조계 색소의 특성 및 응용 연구)

  • Shin, Seung-Rim; Jun, Kun; An, Kyoung-Lyong; Kim, Sang Woong; Kim, Tae-Hwan; Seo, Dong Sung; Lee, Chang Ick
    • Textile Coloration and Finishing / v.33 no.2 / pp.49-63 / 2021
  • In this study, a convenient approach for the sensitive, quick, and simple detection of hazardous acids was investigated. A series of azo dyes was synthesized and applied as chemosensors for acid detection both on fibers and in solution. Various aniline, benzothiazole, or isoxazole derivatives were used as diazo components and coupled with N-benzyl-N-ethylaniline or 2,2'-(phenylimino)bis-ethanol to give the azo-based dyes. The acid-sensing behavior was observed with the naked eye, and detection was further confirmed by UV-Vis spectrophotometry and hue difference (ΔH*) evaluation. The developed sensors showed a distinct and quick color change from yellow to magenta upon addition of trace amounts of the hazardous acids. The absorption maximum shifted to longer wavelengths by 70~150 nm, and the hue difference (ΔH*) was 60~120°. A cotton fiber coated with Dye 1 exhibited excellent storage stability under various temperature (-30~43℃) and humidity (30~80%) conditions without discoloration or fading of the fiber sensors, and the acid-sensing properties were maintained.

Driver Drowsiness Detection System using Image Recognition and Bio-signals (영상 인식 및 생체 신호를 이용한 운전자 졸음 감지 시스템)

  • Lee, Min-Hye; Shin, Seong-Yoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.6 / pp.859-864 / 2022
  • Drowsy driving, one of the biggest causes of traffic accidents every year, arises from a variety of factors. Common approaches to determining whether a driver is drowsy include analyzing the driver's facial expression and driving pattern and analyzing bio-signals. This paper proposes a driver fatigue detection system that uses deep learning and bio-signal measurement. In the first stage of the proposed method, deep learning detects the driver's eye state, yawning, and body movement as indicators of drowsiness. In the second stage, the system improves its accuracy by assessing the driver's fatigue from the pulse wave signal and body temperature. Experimental results showed that the driver's drowsiness and fatigue could be determined reliably from real-time images.
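
A minimal sketch of the two-stage decision described above, visual drowsiness cues first and bio-signals as confirmation; the thresholds and the fusion rule are placeholders, since the abstract does not give the paper's actual criteria.

```python
from dataclasses import dataclass

@dataclass
class VisualCues:
    eyes_closed_ratio: float   # fraction of recent frames with closed eyes
    yawning: bool
    head_nodding: bool

@dataclass
class BioSignals:
    pulse_bpm: float
    body_temp_c: float

def visually_drowsy(v: VisualCues) -> bool:
    return v.eyes_closed_ratio > 0.3 or v.yawning or v.head_nodding

def fatigued(b: BioSignals) -> bool:
    # Placeholder cut-offs; the abstract does not state the paper's criteria.
    return b.pulse_bpm < 55 or b.body_temp_c < 36.0

def drowsiness_alert(v: VisualCues, b: BioSignals) -> bool:
    # Stage 1: visual screening; stage 2: bio-signal confirmation.
    return visually_drowsy(v) and fatigued(b)
```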

Concealed information test using ERPs and pupillary responses (ERP와 동공 반응을 이용한 숨긴정보검사)

  • Eom, Jin-Sup; Park, Kwang-Bai; Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.15 no.2 / pp.259-268 / 2012
  • In a P300-based concealed information test (P300 CIT), the result of the test is greatly affected by the value of the probe stimulus; with a probe stimulus of low value, the detection rate decreases. The aim of this study was to determine whether a pupil-based concealed information test (Pupil CIT) could be used in addition to the P300 CIT for probes of low value. Participants were told to choose one card from a deck of five cards (spade 2, 3, 4, 5, 6), and a P300 CIT and a Pupil CIT for the selected card were then administered. P300s were measured at three scalp sites (Fz, Cz, and Pz), and the pupil sizes of the left and right eyes were recorded. The P300 amplitude measured at Fz, Cz, and Pz differed significantly between the probe and irrelevant stimuli, and in the Pupil CIT the pupil size also differed between the two stimuli for both eyes. The detection rates of the P300 CIT were 44% at the Fz and Cz sites and 36% at the Pz site, while the detection rates of the Pupil CIT were 52% for the left eye and 60% for the right eye. There was a trend for the detection rate of the Pupil CIT to be higher than that of the P300 CIT, but the difference did not reach significance, partly because of the relatively small sample size. The correlation between decisions based on the P300 CIT and those based on the Pupil CIT was not significant. In conclusion, it is recommended to use a Pupil CIT instead of a P300 CIT when the value of the probe is low, and a combination of the two measures may achieve a higher detection rate than either one alone.
