• Title/Summary/Keyword: eye recognition

Search results: 297 (processing time: 0.031 seconds)

Intelligent CCTV for Port Safety, "Smart Eye" (항만 안전을 위한 지능형 CCTV, "Smart Eye")

  • Baek, Seung-Ho; Ji, Yeong-Il; Choi, Han-Saem
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.1056-1058 / 2022
  • This study proposes Smart Eye, an intelligent CCTV system that detects, in real time, accidents and abnormal behavior caused by violations of safety rules at ports, and helps managers respond quickly and accurately to hazardous situations. Smart Eye combines several computer-vision object detection models and an action recognition model to jointly identify falling and tipping-over accidents, people violating safety rules, and people exhibiting violent behavior. It also implements object tracking, region-of-interest (ROI), and inter-object distance-measurement algorithms to assess restricted-area access, intrusion, loitering, failure to wear safety protective equipment, and the risk of fire and collision accidents. The automated 24-hour surveillance system built in this study analyzes and evaluates real-time video data, promptly delivers the data collected at each site to managers, and is integrated into the port's central control center, establishing an "intelligent infrastructure" for efficient management and operation. Such a system is expected to contribute to the adoption of smart port systems.
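As a rough illustration of the ROI and distance checks mentioned in the abstract above (not the paper's implementation), the sketch below screens per-frame detector output for restricted-zone entry and unsafe proximity; the zone polygon, object IDs, class names, and thresholds are invented for the example.

```python
# Hypothetical per-frame screening of detector output: restricted-zone entry
# and unsafe proximity. Zone polygon, IDs, and thresholds are made up.
import numpy as np
import cv2  # used only for the point-in-polygon test

RESTRICTED_ZONE = np.array([[100, 400], [500, 400], [500, 700], [100, 700]],
                           dtype=np.int32).reshape(-1, 1, 2)
MIN_SAFE_DIST_PX = 80  # assumed pixel threshold for a proximity/collision warning

def box_center(box):
    """Center (x, y) of an [x1, y1, x2, y2] bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def check_frame(detections):
    """detections: list of dicts like {"id": 1, "cls": "person", "box": [x1, y1, x2, y2]}."""
    alerts = []
    centers = {d["id"]: box_center(d["box"]) for d in detections}
    for d in detections:
        cx, cy = centers[d["id"]]
        # >= 0 means the box center lies inside (or on) the restricted polygon.
        if cv2.pointPolygonTest(RESTRICTED_ZONE, (float(cx), float(cy)), False) >= 0:
            alerts.append(("restricted_area", d["id"]))
    ids = list(centers)
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            dist = np.hypot(centers[ids[i]][0] - centers[ids[j]][0],
                            centers[ids[i]][1] - centers[ids[j]][1])
            if dist < MIN_SAFE_DIST_PX:
                alerts.append(("proximity", ids[i], ids[j]))
    return alerts

print(check_frame([{"id": 1, "cls": "person", "box": [120, 420, 180, 560]},
                   {"id": 2, "cls": "forklift", "box": [160, 430, 300, 600]}]))
```

In a full system such alerts would be pushed to the control-center dashboard the abstract describes, alongside the action-recognition results.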

Face Recognition Using a Neuro-Fuzzy Algorithm (뉴로-퍼지 알고리듬을 이용한 얼굴인식)

  • 이상영; 함영국; 박래홍
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.1 / pp.50-63 / 1995
  • In this paper, we propose a face recognition method using a neuro-fuzzy algorithm. In the preprocessing step, we extract the face from the background image by tracking the face boundary. Then, based on a priori knowledge of human faces, we extract features such as the widths of the eyes and mouth and the distances from eye to nose and from nose to mouth. In the recognition step, we use a neuro-fuzzy algorithm that employs a fuzzy membership function and a modified error backpropagation algorithm: the former absorbs variation in the feature values, and the latter gives good learning efficiency. Computer simulation results with 20 persons show that the proposed method gives a higher recognition rate than conventional ones. (An illustrative sketch follows this entry.)

  • PDF
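A minimal sketch of the fuzzification idea described in the abstract above, assuming Gaussian membership functions and made-up prototype feature values (the abstract does not specify either); the fuzzified features would then feed the modified backpropagation network.

```python
# Geometric face features (eye/mouth widths, eye-nose and nose-mouth distances)
# are mapped through fuzzy membership functions before classification. The
# Gaussian membership and the prototype values below are illustrative only.
import numpy as np

def gaussian_membership(x, center, width):
    """Degree (0..1) to which x matches a prototype value; tolerates small variation."""
    return np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

# Hypothetical per-person prototype feature values (pixels) and tolerance widths.
prototype = np.array([62.0, 55.0, 48.0, 30.0])   # eye width, mouth width, eye-nose, nose-mouth
width = np.array([4.0, 5.0, 4.0, 3.0])

measured = np.array([64.5, 53.0, 47.0, 31.5])    # features measured on a probe image
memberships = gaussian_membership(measured, prototype, width)

# A simple aggregate score; the paper instead trains a modified backpropagation
# network on such fuzzified inputs.
print("memberships:", np.round(memberships, 3), "score:", round(float(memberships.mean()), 3))
```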

Fear and Surprise Facial Recognition Algorithm for Dangerous Situation Recognition

  • Kwak, NaeJoung; Ryu, SungPil; Hwang, IlYoung
    • International Journal of Internet, Broadcasting and Communication / v.7 no.2 / pp.51-55 / 2015
  • This paper proposes an algorithm for recognizing dangerous situations from facial expressions. The proposed method recognizes surprise and fear among the various human emotional expressions in order to identify dangerous situations. It first extracts the facial region from the input using Haar-like features, then detects the eye and lip regions within the extracted face. Uniform LBP is then applied to each region to classify the facial expression and recognize the dangerous situation. The method is evaluated on MUCT database images and webcam input; it classifies the target expressions well, discriminates dangerous situations reliably, and achieves an average recognition rate of 91.05%.
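A hedged sketch of the detection front end described above, using OpenCV's stock Haar cascades; the input file name, the cascade choices, and the crude lower-face crop for the lip region are assumptions, not the authors' configuration.

```python
# Haar cascades locate the face and eyes; sub-regions are cropped for later
# LBP-based expression analysis. "frame.jpg" is a placeholder input.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("frame.jpg")  # hypothetical input frame (webcam grab or MUCT image)
if img is None:
    raise SystemExit("supply a real frame.jpg to run this sketch")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(face)   # eye regions inside the face
    lip_region = face[int(0.6 * h):, :]         # crude lower-face crop for the lips
    # Each region would next be described with a Uniform LBP histogram and
    # classified as fear / surprise / other.
    print("face at", (x, y, w, h), "eyes found:", len(eyes), "lip crop:", lip_region.shape)
```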

Face Detection and Recognition Using Ellipsoidal Information and Wavelet Packet Analysis (타원형 정보와 웨이블렛 패킷 분석을 이용한 얼굴 검출 및 인식)

  • 정명호; 김은태; 박민용
    • Proceedings of the IEEK Conference / 2003.07e / pp.2327-2330 / 2003
  • This paper deals with face detection and recognition using ellipsoidal information and wavelet packet analysis. We propose two methods. First, the face detection method uses the generally ellipsoidal shape of the human face contour, and eye positions are found in wavelet-transformed face images; this yields a method for recognizing views of human faces under roughly constant illumination. Second, the proposed face recognition scheme is based on the analysis of a wavelet packet decomposition of the face images. Each face image is first located and then described by a subset of band-filtered images containing wavelet coefficients. From these wavelet coefficients, which characterize the face texture, the Euclidean distance can be used to classify the face feature vectors into person classes. Experimental results are presented using images from the FERET and MIT FACES databases. The efficiency of the proposed approach is analyzed according to the FERET evaluation procedure and by comparison with results obtained using the well-known Eigenfaces method. The proposed system achieved recognition rates of 97% (MIT data) and 95.8% (FERET database). (An illustrative sketch follows this entry.)

  • PDF
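A loose sketch of the recognition side under stated assumptions: wavelet-packet subband energies (computed here with PyWavelets, using an arbitrary wavelet and decomposition level) stand in for the paper's band-filtered coefficient description, and probes are matched to enrolled identities by Euclidean distance.

```python
# Each face image is summarized by the energy of its wavelet-packet subbands;
# random arrays stand in for face images. Wavelet and level are illustrative.
import numpy as np
import pywt

def packet_energy_features(img, wavelet="db2", level=2):
    wp = pywt.WaveletPacket2D(data=img, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    # One energy value per subband characterizes the face texture.
    return np.array([np.sum(np.square(n.data)) for n in nodes])

rng = np.random.default_rng(0)
gallery = {name: packet_energy_features(rng.random((64, 64))) for name in ["A", "B"]}
probe = packet_energy_features(rng.random((64, 64)))

# Nearest gallery vector in Euclidean distance gives the predicted identity.
best = min(gallery, key=lambda name: np.linalg.norm(gallery[name] - probe))
print("closest identity:", best)
```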

Iris Recognition using MPEG-7 Homogeneous Texture Descriptor (MPEG-7 Homogeneous Texture 기술자를 이용한 홍채인식)

  • 이종민; 한일호; 김희율
    • Proceedings of the IEEK Conference / 2002.06d / pp.45-48 / 2002
  • In this paper, we propose an iris recognition system using the Homogeneous Texture descriptor of the MPEG-7 standard. The texture of the iris is widely used in iris recognition systems. We segment the pupil with the Hough transform and locate the iris boundary from the gray-level difference between the iris and the white of the eye. To extract the Homogeneous Texture descriptor, the iris image is transformed into polar coordinates. The extracted descriptor is then compared with references in the database; if their distance is larger than a threshold, they are recognized as different irises. Test results show that the Homogeneous Texture descriptor can be a good measure for an iris recognition system. (An illustrative sketch follows this entry.)

  • PDF
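A simplified sketch of the preprocessing described above: a circular Hough transform locates the pupil, and the iris ring is unwrapped into polar coordinates ready for a texture descriptor. The input file name, the Hough parameters, and the assumed outer iris radius are placeholders, not the authors' settings.

```python
# Pupil localization via Hough circles, then a manual polar unwrap of the iris ring.
import cv2
import numpy as np

gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical eye image
if gray is None:
    raise SystemExit("supply a real eye.png to run this sketch")

blur = cv2.medianBlur(gray, 5)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                           param1=100, param2=30, minRadius=20, maxRadius=60)
if circles is None:
    raise SystemExit("no pupil candidate found")
cx, cy, r_pupil = np.round(circles[0, 0]).astype(int)  # strongest circle = pupil

# Unwrap the annulus between the pupil and an assumed outer iris radius.
r_iris = int(2.5 * r_pupil)
thetas = np.linspace(0, 2 * np.pi, 256, endpoint=False)
radii = np.linspace(r_pupil, r_iris, 64)
xs = (cx + np.outer(radii, np.cos(thetas))).astype(int).clip(0, gray.shape[1] - 1)
ys = (cy + np.outer(radii, np.sin(thetas))).astype(int).clip(0, gray.shape[0] - 1)
polar_iris = gray[ys, xs]                              # 64 x 256 polar strip
print("polar iris patch:", polar_iris.shape)           # input for the texture descriptor
```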

Design of 2D face recognition security planning to vulnerability (2차원 안면인식의 취약성 보안 방안 설계)

  • Lee, Jaeung; Jang, Jong-wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.243-245 / 2017
  • In widely studied face recognition technology, the known weakness of 2D recognition is usually addressed by adding depth data. In this paper, we instead take the eye-blink (flicker) pattern that each person possesses as a new data feature, strengthening the security of 2D face recognition while offering an expected cost reduction. (An illustrative sketch follows this entry.)

  • PDF
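The entry above uses per-person eye-blink behavior as an extra liveness cue for 2D face recognition. One common way to detect blinks (not necessarily the authors' method) is the eye aspect ratio (EAR) over eye landmarks; the landmark coordinates and threshold below are fabricated for illustration.

```python
# Eye aspect ratio (EAR) blink check over six eye landmarks.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: 6 (x, y) landmarks ordered around the eye; a low EAR means a closed eye."""
    p = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.21   # assumed threshold below which the eye counts as closed

open_eye = [(0, 0), (2, -2), (4, -2), (6, 0), (4, 2), (2, 2)]
closed_eye = [(0, 0), (2, -0.4), (4, -0.4), (6, 0), (4, 0.4), (2, 0.4)]
for label, eye in [("open", open_eye), ("closed", closed_eye)]:
    ear = eye_aspect_ratio(eye)
    print(label, round(ear, 3), "blink" if ear < EAR_THRESHOLD else "no blink")
```

A sequence of such blink events over time could then serve as the per-person flicker pattern the abstract refers to.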

A Study on the Improvement of the Facial Image Recognition by Extraction of Tilted Angle (기울기 검출에 의한 얼굴영상의 인식의 개선에 관한 연구)

  • 이지범; 이호준; 고형화
    • The Journal of Korean Institute of Communications and Information Sciences / v.18 no.7 / pp.935-943 / 1993
  • In this paper, a recognition system robust to tilted facial images was developed. First, a standard facial image and a tilted facial image are captured by a CCTV camera and transformed into binary images. Each binary image is processed with a Laplacian edge operator to obtain a contour image. We trace and delete the outermost edge line and use the inner contour lines, labeling four of them in order. We then extract the left and right eyes using a known distance relationship and calculate the tilt slope from the two eye coordinates. Finally, we rotate the tilted image according to this slope and calculate ten distance features between facial elements. To make the system invariant to image scale, these features are normalized by the distance between the left and right eyes. Experimental results on twenty-five face images show an 88% recognition rate when the tilt angle is compensated and 60% when it is not. (An illustrative sketch follows this entry.)

  • PDF
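A small sketch of the tilt-compensation step described above, assuming the two eye centers are already known; the blank image and eye coordinates are placeholders, and OpenCV's standard rotation utilities stand in for the paper's own rotation code.

```python
# Estimate the in-plane tilt from the eye line and rotate the image back upright.
import cv2
import numpy as np

img = np.zeros((300, 300, 3), np.uint8)               # stand-in for a captured face image
left_eye, right_eye = (120.0, 160.0), (200.0, 180.0)  # assumed eye centers (pixels)

dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
angle = np.degrees(np.arctan2(dy, dx))                # tilt of the eye line, in degrees
center = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)

M = cv2.getRotationMatrix2D(center, angle, 1.0)       # rotate back by the measured tilt
upright = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))

inter_eye = float(np.hypot(dx, dy))                   # used to scale-normalize distance features
print("tilt:", round(float(angle), 2), "deg; inter-eye distance:", round(inter_eye, 1), "px")
```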

Risk Situation Recognition Using Facial Expression Recognition of Fear and Surprise Expression (공포와 놀람 표정인식을 이용한 위험상황 인지)

  • Kwak, Nae-Jong; Song, Teuk Seob
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.3 / pp.523-528 / 2015
  • This paper proposes an algorithm for recognizing risk situations from facial expressions. The proposed method recognizes surprise and fear among the various human emotional expressions in order to identify risk situations. It first extracts the facial region from the input and detects the eye and lip regions within the extracted face. Uniform LBP is then applied to each region to discriminate the facial expression and recognize the risk situation. The method is evaluated on the Cohn-Kanade database, which contains the six basic human facial expressions: smile, sadness, surprise, anger, disgust, and fear. The proposed method classifies facial expressions well and discriminates risk situations reliably.
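A minimal sketch of the Uniform LBP description used above, implemented here with scikit-image (one possible implementation, not the authors' code); the random patches stand in for cropped eye and lip regions.

```python
# Per-region Uniform LBP histograms, concatenated into an expression feature vector.
import numpy as np
from skimage.feature import local_binary_pattern

def uniform_lbp_histogram(region, points=8, radius=1):
    """Normalized Uniform LBP histogram for one facial region."""
    lbp = local_binary_pattern(region, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

rng = np.random.default_rng(1)
eye_patch = rng.integers(0, 256, size=(32, 48)).astype(np.uint8)   # placeholder eye crop
lip_patch = rng.integers(0, 256, size=(24, 48)).astype(np.uint8)   # placeholder lip crop

# The concatenated histograms form the feature vector given to the expression
# classifier (fear / surprise / other).
feature = np.concatenate([uniform_lbp_histogram(eye_patch), uniform_lbp_histogram(lip_patch)])
print("feature length:", feature.shape[0])
```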

Effects of the facial expression presenting types and facial areas on the emotional recognition (얼굴 표정의 제시 유형과 제시 영역에 따른 정서 인식 효과)

  • Lee, Jung-Hun; Park, Soo-Jin; Han, Kwang-Hee; Ghim, Hei-Rhee; Cho, Kyung-Ja
    • Science of Emotion and Sensibility / v.10 no.1 / pp.113-125 / 2007
  • The aim of the experimental studies described in this paper is to investigate the effects of the face, eye, and mouth areas, shown as dynamic or static facial expressions, on emotion recognition. Using seven-second displays, Experiment 1 examined basic emotions and Experiment 2 examined complex emotions. The results of the two experiments showed that dynamic facial expressions support emotion recognition better than static ones, and that for dynamic images the eye area contributed more to emotion recognition than the mouth area. These results suggest that dynamic properties should be considered in facial-expression studies of emotion, for complex as well as basic emotions. However, the properties of each emotion should be taken into account, because the benefit of dynamic images was not equal across emotions. Furthermore, which facial area conveys an emotional state most clearly depends on the particular emotion.

  • PDF

Head Gesture Recognition using Facial Pose States and Automata Technique (얼굴의 포즈 상태와 오토마타 기법을 이용한 헤드 제스처 인식)

  • Oh, Seung-Taek; Jun, Byung-Hwan
    • Journal of KIISE: Software and Applications / v.28 no.12 / pp.947-954 / 2001
  • In this paper, we propose a method for recognizing various head gestures by applying an automata technique to a sequence of facial pose states. Facial regions are detected using the optimal facial color of the I component in the YIQ model and adaptively selected image differences, and eye regions are extracted using the Sobel operator, projection, and the geometric location of the eyes. Hierarchical feature analysis is used to classify facial states, and the automata technique is applied to the sequence of facial pose states to recognize 13 gestures, including Gaze Upward, Downward, Leftward, Rightward, Forward, Backward, Left Wink, Right Wink, Left Double Wink, Right Double Wink, Yes, and No. In experiments with a total of 1,488 frames acquired from 8 persons, the method shows a 99.3% extraction rate for facial regions, a 95.3% extraction rate for eye regions, a 94.1% recognition rate for facial states, and a 99.3% recognition rate for head gestures. (An illustrative sketch follows this entry.)

  • PDF
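An illustrative sketch of the automata idea from the abstract above: per-frame facial pose states drive a small finite-state machine, and reaching an accepting state reports a gesture. The states, transitions, and the "Yes" nod pattern are invented to show the mechanism, not the paper's actual automata.

```python
# Finite-state machine over a sequence of facial pose states.
POSES = {"forward", "down", "up", "left", "right"}

# Accept a "Yes" nod: forward -> down -> forward (any unexpected pose resets).
TRANSITIONS = {
    ("start", "forward"): "armed",
    ("armed", "forward"): "armed",
    ("armed", "down"): "nodding",
    ("nodding", "down"): "nodding",
    ("nodding", "forward"): "yes",        # accepting state
}

def recognize(pose_sequence):
    state = "start"
    for pose in pose_sequence:
        state = TRANSITIONS.get((state, pose), "start")
        if state == "yes":
            return "Yes gesture recognized"
    return "no gesture"

print(recognize(["forward", "forward", "down", "down", "forward"]))
print(recognize(["left", "right", "up"]))
```

In the paper's setting, one such automaton per gesture (winks, gaze shifts, Yes, No) would consume the same pose-state stream in parallel.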