• Title/Summary/Keyword: Facial Feature Extraction


Implementation of Drowsiness Driving Warning System based on Improved Eyes Detection and Pupil Tracking Using Facial Feature Information (얼굴 특징 정보를 이용한 향상된 눈동자 추적을 통한 졸음운전 경보 시스템 구현)

  • Jeong, Do Yeong;Hong, KiCheon
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.5 no.2
    • /
    • pp.167-176
    • /
    • 2009
  • In this paper, a system that detects driver drowsiness is implemented based on the automatic extraction and tracking of the pupils. The work also addresses illumination compensation and the reduction of background noise that naturally occur under driving conditions. The system, based on Haar-like features, automatically locates the driver's face and eye regions within a complex background. It then decides whether the driver is drowsy by recognizing the characteristics of the pupil regions, detecting the pupils, and tracking their movements. The implemented system has been evaluated and verified for practical use in preventing drowsy driving.
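The drowsiness decision in the final step is typically a rule over how long and how often the detected pupils disappear. A minimal sketch of one common rule, PERCLOS (fraction of eye-closed frames in a sliding window), is shown below; the window size and threshold are illustrative assumptions, not the paper's actual criterion:

```python
from collections import deque

def perclos_drowsiness(eye_closed_flags, window=30, threshold=0.4):
    """Return True if, in any sliding window of frames, the fraction
    of frames with the eyes closed reaches the threshold
    (a PERCLOS-style drowsiness rule)."""
    recent = deque(maxlen=window)
    for closed in eye_closed_flags:
        recent.append(1 if closed else 0)
        if len(recent) == window and sum(recent) / window >= threshold:
            return True
    return False
```

In a real system the per-frame `eye_closed_flags` would come from the pupil detector; here they are assumed given.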

An Explainable Deep Learning-Based Classification Method for Facial Image Quality Assessment

  • Kuldeep Gurjar;Surjeet Kumar;Arnav Bhavsar;Kotiba Hamad;Yang-Sae Moon;Dae Ho Yoon
    • Journal of Information Processing Systems
    • /
    • v.20 no.4
    • /
    • pp.558-573
    • /
    • 2024
  • Considering factors such as illumination, variations in camera quality, and background-specific variations, identifying a face using a smartphone-based facial image capture application is challenging. Face image quality assessment refers to the process of taking a face image as input and producing some form of "quality" estimate as output. Quality assessment techniques typically use deep learning methods to categorize images, but deep learning models act as black boxes, which raises the question of their trustworthiness. Several explainability techniques have gained importance in building this trust; they provide visual evidence of the active regions within an image on which the deep learning model bases its prediction. Here, we developed a technique for the reliable prediction of facial image quality before medical analysis and security operations. A combination of gradient-weighted class activation mapping (Grad-CAM) and local interpretable model-agnostic explanations (LIME) was used to explain the model. This approach has been applied to the preselection of facial images for skin feature extraction, which is important in critical medical applications. We demonstrate that the combined explanations provide better visual explanations for the model, where both the saliency-map and perturbation-based explainability techniques verify the predictions.
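One simple way to fuse a Grad-CAM heatmap with a LIME mask, sketched here under the assumption that both have already been computed and resized to the same shape, is to normalize each to [0, 1] and take their elementwise product so only regions highlighted by both explainers survive. This normalize-and-multiply rule is an illustrative choice, not necessarily the paper's fusion method:

```python
import numpy as np

def combine_explanations(gradcam_map, lime_mask):
    """Fuse two explanation maps: min-max normalize each to [0, 1]
    and take their elementwise product, keeping only regions that
    BOTH explainers mark as important."""
    def normalize(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return normalize(gradcam_map) * normalize(lime_mask)
```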

Normalized Region Extraction of Facial Features by Using Hue-Based Attention Operator (색상기반 주목연산자를 이용한 정규화된 얼굴요소영역 추출)

  • 정의정;김종화;전준형;최흥문
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.6C
    • /
    • pp.815-823
    • /
    • 2004
  • A hue-based attention operator and a combinational integral projection function (CIPF) are proposed to extract normalized face and facial-feature regions robustly against illumination variation. Face candidate regions are efficiently detected using a skin-color filter, and the eyes are located accurately and robustly against illumination variation by applying the proposed hue- and symmetry-based attention operator to the face candidate regions. The faces are then confirmed by verifying the eyes with a color-based eye variance filter. The proposed CIPF, which combines weighted hue and intensity, is applied to detect the exact vertical locations of the eyebrows and the mouth under illumination variations and in the presence of a mustache. The global face and its local feature regions are precisely located and normalized based on this geometrical information. Experimental results on the AR face database [8] show that the proposed eye detection method yields a detection rate about 39.3% better than the conventional gray-level GST-based method. As a result, the normalized facial features can be extracted robustly and consistently based on the exact eye locations under illumination variations.
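The idea of an integral projection over a weighted combination of hue and intensity can be sketched as below; the weight `w` and the use of a simple row mean are illustrative assumptions, not the paper's exact formulation. Row minima of the projection indicate dark horizontal features such as eyebrows and the mouth:

```python
import numpy as np

def combined_integral_projection(hue, intensity, w=0.7):
    """Vertical integral projection of a weighted hue/intensity
    combination (a CIPF-style sketch). Returns one value per image
    row; local minima mark dark horizontal facial features."""
    combined = w * hue.astype(float) + (1.0 - w) * intensity.astype(float)
    return combined.mean(axis=1)
```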

Facial Expression Analysis System based on Image Feature Extraction (이미지 특징점 추출 기반 얼굴 표정 분석 시스템)

  • Jeon, Jin-Hwan;Song, Jeo;Lee, Sang-Moon
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2016.07a
    • /
    • pp.293-294
    • /
    • 2016
  • Smartphones, dashboard cameras, CCTV, and similar devices are generating diverse and massive amounts of video data. In particular, various studies aim to recognize and identify individuals and to analyze their emotional state from facial images. In this paper, we propose a system that extracts feature points from facial images using the SIFT algorithm, which is widely used in digital image processing, and classifies gender, age, and basic emotional state based on these features.


Communication-system using the BCI (뇌-컴퓨터 인터페이스를 이용한 의사전달기)

  • 조한범;양은주;음태완;김응수
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.05a
    • /
    • pp.113-116
    • /
    • 2003
  • People communicate with one another using language. Disabled persons, however, may be unable to communicate their ideas through writing or gesture. We implemented a communication system using the EEG so that disabled persons can communicate. After feature extraction from EEG signals that include facial-muscle activity, the facial-muscle component is converted into a control signal, which is then used to select characters and convey the user's intent.


A Study on A Biometric Bits Extraction Method of A Cancelable face Template based on A Helper Data (보조정보에 기반한 가변 얼굴템플릿의 이진화 방법의 연구)

  • Lee, Hyung-Gu;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.1
    • /
    • pp.83-90
    • /
    • 2010
  • Cancelable biometrics is a robust and secure biometric recognition method that uses a revocable biometric template in order to prevent possible compromise of the original biometric data. In this paper, we present a new cancelable bit-extraction method for facial data. We use our previous cancelable feature template for the bit extraction. The adopted cancelable template is generated from two different original face feature vectors extracted by two different appearance-based approaches. Each element of the feature vectors is re-ordered, and the scrambled features are added. From the added feature, a biometric bit string is extracted using a helper-data-based method. In this technique, the helper data is generated from statistical properties of the added feature vector and can easily be replaced, enabling straightforward revocation. Because the helper data uses only partial information of the added feature, the proposed method is more secure than our previous one. The proposed method uses the helper data to reduce feature variance within the same individual and to increase the distinctiveness of bit strings across different individuals, for good recognition performance. For a security evaluation of the proposed method, a scenario in which the system is compromised by an adversary is also considered. In our experiments, we analyze the proposed method with respect to performance and security using the Extended Yale B face database.
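The scramble-add-binarize pipeline can be sketched as follows. The user-specific permutations stand in for the re-ordering step, and the mean of the added vector stands in for the helper data; both are illustrative simplifications of the paper's statistics-based scheme, not its exact construction:

```python
import numpy as np

def extract_bit_string(feat_a, feat_b, perm_a, perm_b):
    """Sketch of cancelable bit extraction: permute (scramble) two
    face feature vectors with user-specific permutations, add them,
    then binarize each element against helper data derived from the
    added vector (here simply its mean).  Revocation amounts to
    issuing new permutations."""
    added = np.asarray(feat_a)[perm_a] + np.asarray(feat_b)[perm_b]
    helper = added.mean()                 # stored helper data
    bits = (added > helper).astype(int)   # 1 where above the threshold
    return bits, helper
```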

Face Recognition Based on Facial Landmark Feature Descriptor in Unconstrained Environments (비제약적 환경에서 얼굴 주요위치 특징 서술자 기반의 얼굴인식)

  • Kim, Daeok;Hong, Jongkwang;Byun, Hyeran
    • Journal of KIISE
    • /
    • v.41 no.9
    • /
    • pp.666-673
    • /
    • 2014
  • This paper proposes a scalable face recognition method for unconstrained face databases and presents experimental results. Existing face recognition research has usually focused on improving the recognition rate in a constrained environment where illumination, face alignment, facial expression, and background are controlled, and therefore cannot be applied to unconstrained face databases. The proposed system is a face feature extraction algorithm for unconstrained face recognition. First, we extract the areas that represent the important features (landmarks) of the face, such as the eyes, nose, and mouth. Each landmark is represented by a high-dimensional LBP (Local Binary Pattern) histogram feature vector. The multi-scale LBP histogram vector corresponding to a single landmark becomes a low-dimensional face feature vector through a feature reduction process using PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). We use the rank acquisition method and the precision-at-k (p@k) metric to verify the face recognition performance of the low-dimensional features produced by the proposed algorithm. The experimental results were generated on the FERET, LFW, and PubFig83 databases. The face recognition system using the proposed algorithm showed better classification performance than existing methods.
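The basic 8-neighbour LBP code that underlies such histogram features can be sketched as a minimal reference implementation (the single-scale, radius-1 variant, not the paper's multi-scale version):

```python
import numpy as np

def lbp_image(gray):
    """Compute the basic 8-neighbour Local Binary Pattern code for
    each interior pixel: a neighbour >= the centre contributes a
    1-bit.  The 256-bin histogram of these codes is the usual LBP
    feature vector."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets in clockwise order from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = gray[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if gray[i + di, j + dj] >= c:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out
```

The histogram feature is then `np.bincount(lbp_image(gray).ravel(), minlength=256)`.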

A Study on the Improvement of Skin Loss Area in Skin Color Extraction for Face Detection (얼굴 검출을 위한 피부색 추출 과정에서 피부색 손실 영역 개선에 관한 연구)

  • Kim, Dong In;Lee, Gang Seong;Han, Kun Hee;Lee, Sang Hun
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.5
    • /
    • pp.1-8
    • /
    • 2019
  • In this paper, we propose an improved facial skin-color extraction method that addresses the problem of facial regions being lost during skin-color extraction because of shadows or illumination, making extraction impossible. In the conventional HSV method, when the face is brightly illuminated, skin-color components are lost during extraction, so a loss region appears on the face surface. To solve this, after extracting the skin color we identify, among the lost pixels, those whose H-channel values fall within the skin-color range in HSV color space, and merge the coordinates of the lost parts with those of the original image, minimizing the loss region. In the face detection stage, faces are detected from the extracted skin-color image using an LBP cascade classifier, which encodes texture feature information. Experimental results show that the proposed method improves the detection rate and accuracy by 5.8% and 9.6%, respectively, compared with conventional RGB- and HSV-based skin-color extraction followed by LBP cascade face detection.
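The loss-region recovery idea can be sketched on per-pixel HSV channels as below. All thresholds (the hue band and the saturation/value limits) are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def recover_skin_mask(h, s, v):
    """Sketch of loss-region recovery: start from a full HSV skin
    mask, then re-admit pixels that failed only the value test but
    whose hue still lies in the skin band (bright, washed-out skin)."""
    hue_ok = (h >= 0) & (h <= 25)                          # skin-tone hue band
    full_mask = hue_ok & (s >= 40) & (v >= 60) & (v <= 250)
    lost = hue_ok & ~full_mask & (v > 250)                 # over-bright skin pixels
    return full_mask | lost
```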

Face and Its Components Extraction of Animation Characters Based on Dominant Colors (주색상 기반의 애니메이션 캐릭터 얼굴과 구성요소 검출)

  • Jang, Seok-Woo;Shin, Hyun-Min;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.10
    • /
    • pp.93-100
    • /
    • 2011
  • The need for research on extracting information about the faces and facial components of animation characters has been increasing, since these can effectively express a character's emotion and personality. In this paper, we introduce a method to extract the face and facial components of animation characters by defining a mesh model suited to such characters and by using dominant colors. The suggested algorithm first generates a mesh model for animation characters and extracts dominant colors for the face and facial components by fitting the mesh model to the face of a model character. Then, using the dominant colors, we extract candidate areas of the face and facial components from input images and verify whether the extracted areas are a real face or facial components by means of a color similarity measure. The experimental results show that our method can reliably detect the face and facial components of animation characters.
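A simple stand-in for dominant-color extraction over a region of pixels, using channel quantization and a frequency count (an illustrative sketch, not the paper's exact method):

```python
from collections import Counter

def dominant_color(pixels, levels=4):
    """Find the dominant color of a pixel region by quantizing each
    RGB channel into `levels` bins and returning the centre of the
    most frequent bin as an RGB triple."""
    step = 256 // levels
    quantized = [tuple(c // step for c in px) for px in pixels]
    bin_, _ = Counter(quantized).most_common(1)[0]
    return tuple(b * step + step // 2 for b in bin_)
```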

Emotion Recognition Based on Facial Expression by using Context-Sensitive Bayesian Classifier (상황에 민감한 베이지안 분류기를 이용한 얼굴 표정 기반의 감정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB
    • /
    • v.13B no.7 s.110
    • /
    • pp.653-662
    • /
    • 2006
  • In ubiquitous computing, which aims to build environments that provide appropriate services according to the user's context, emotion recognition based on facial expressions is an essential means of HCI, both to make human-machine interaction more efficient and to support user context awareness. This paper addresses the problem of recognizing basic emotions in context-sensitive facial expressions through a new Bayesian classifier. The emotion recognition task consists of two steps: a facial-feature extraction step based on a color-histogram method, and a classification step that employs a new Bayesian learning algorithm for efficient training and testing. A new context-sensitive Bayesian learning algorithm based on EADF (Extended Assumed-Density Filtering) is proposed to recognize emotions more accurately, as it uses different classifier complexities for different contexts. Experimental results show an expression classification accuracy of over 91% on the test database and an error rate of 10.6% when facial expression is modeled with hidden context.
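The structural idea of a context-sensitive classifier — one model per context, selected at prediction time — can be sketched as below. The inner nearest-mean model is a toy stand-in for illustration only, not the paper's EADF-based Bayesian learner:

```python
import numpy as np

class NearestMean:
    """Toy per-context classifier: label a sample by its nearest
    class mean in feature space."""
    def fit(self, X, y):
        self.means = {c: np.mean([x for x, l in zip(X, y) if l == c], axis=0)
                      for c in set(y)}
        return self

    def predict(self, x):
        return min(self.means,
                   key=lambda c: np.linalg.norm(np.asarray(x) - self.means[c]))

class ContextSensitiveClassifier:
    """Keep a separate classifier per context (e.g. a lighting
    condition) and dispatch to the matching one at prediction time."""
    def __init__(self):
        self.models = {}

    def fit(self, context, X, y):
        self.models[context] = NearestMean().fit(X, y)

    def predict(self, context, x):
        return self.models[context].predict(x)
```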