• Title/Summary/Keyword: Facial Feature Extraction


Facial Expression Analysis System based on Image Feature Extraction (이미지 특징점 추출 기반 얼굴 표정 분석 시스템)

  • Jeon, Jin-Hwan;Song, Jeo;Lee, Sang-Moon
    • Proceedings of the Korean Society of Computer Information Conference / 2016.07a / pp.293-294 / 2016
  • Diverse and massive amounts of video data are being generated by smartphones, dashcams, CCTV, and similar devices. Among these, a variety of studies are under way to recognize and identify individuals and to analyze their emotional states from facial images. This paper proposes a system that extracts feature points from facial images using the SIFT algorithm, which is widely used in digital image processing, and classifies gender, age, and basic emotional states based on those features.

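The SIFT-based pipeline summarized above can be approximated with OpenCV. The sketch below, a minimal illustration rather than the paper's code, detects a face with a stock Haar cascade and extracts SIFT keypoints and descriptors from the face region; the filename and detector parameters are assumptions.

```python
# Minimal sketch: SIFT keypoints on a detected face region (OpenCV).
# "face.jpg" and all detector parameters are illustrative assumptions.
import cv2

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Locate the face first so keypoints come from the facial area only.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

sift = cv2.SIFT_create()
for (x, y, w, h) in faces:
    roi = gray[y:y + h, x:x + w]
    keypoints, descriptors = sift.detectAndCompute(roi, None)
    # The 128-D descriptors would feed a downstream gender/age/emotion classifier.
    print(f"face at ({x},{y}): {len(keypoints)} keypoints")
```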

Communication-system using the BCI (뇌-컴퓨터 인터페이스를 이용한 의사전달기)

  • 조한범;양은주;음태완;김응수
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.05a / pp.113-116 / 2003
  • People communicate with one another through language, but a person with severe disabilities may be unable to convey ideas through speech, writing, or gestures. We implemented a communication system based on the EEG so that such users can communicate. After feature extraction from EEG signals that contain facial-muscle activity, the facial-muscle component is converted into a control signal, which is then used to select characters and convey the user's intent.

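The abstract above only outlines the EEG/facial-muscle processing, so the following is a heavily hedged stand-in: it band-pass filters a signal and thresholds its short-time energy to produce a binary "select" control signal. The sampling rate, frequency band, window length, and threshold are all assumptions, not values from the paper.

```python
# Hedged sketch: turn a facial-muscle-contaminated EEG channel into a
# binary control signal by band-pass filtering and energy thresholding.
# Sampling rate, band, window, and threshold are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def control_signal(eeg, fs=256, band=(20.0, 45.0), win=0.25, thresh=5.0):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    emg_like = filtfilt(b, a, eeg)                  # keep the muscle-dominated band
    n = int(win * fs)
    energy = np.convolve(emg_like ** 2, np.ones(n) / n, mode="same")
    baseline = np.median(energy)
    return energy > thresh * baseline               # True = "select" command

# Example with synthetic data: a burst of high-frequency activity mid-record.
fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = np.random.randn(t.size) * 2.0
eeg[fs:2 * fs] += 30 * np.sin(2 * np.pi * 30 * t[fs:2 * fs])  # simulated clench
print(control_signal(eeg, fs).sum(), "samples flagged as a selection")
```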

A Study on A Biometric Bits Extraction Method of A Cancelable face Template based on A Helper Data (보조정보에 기반한 가변 얼굴템플릿의 이진화 방법의 연구)

  • Lee, Hyung-Gu;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.83-90 / 2010
  • Cancelable biometrics is a robust and secure biometric recognition method that uses a revocable biometric template in order to prevent possible compromise of the original biometric data. In this paper, we present a new cancelable bit-extraction method for facial data. We use our previous cancelable feature template for the bit extraction. The adopted cancelable template is generated from two different original face feature vectors extracted by two different appearance-based approaches. Each element of the feature vectors is re-ordered, and the scrambled features are added. From the added feature, a biometric bit string is extracted using a helper-data-based method. The helper data is generated from statistical properties of the added feature vector and can easily be replaced, which allows straightforward revocation. Because the helper data uses only partial information of the added feature, the proposed method is more secure than our previous one. The helper data is used to reduce feature variance within the same individual and to increase the distinctiveness of bit strings across different individuals for good recognition performance. For the security evaluation, a scenario in which the system is compromised by an adversary is also considered. In our experiments, we analyze the proposed method with respect to performance and security using the extended Yale B face database.
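
To make the bit-extraction idea concrete, here is a simplified sketch under stated assumptions: two feature vectors are permuted with revocable user keys, summed, and binarized against per-dimension statistics kept as helper data. The paper's actual helper-data construction and thresholds are not reproduced.

```python
# Simplified sketch of cancelable bit extraction: permute two feature
# vectors with revocable keys, add them, then binarize against helper data.
# The permutation keys and the median-based helper data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def scramble(vec, key):
    # Revocable re-ordering: the permutation is derived from a user key.
    perm = np.random.default_rng(key).permutation(vec.size)
    return vec[perm]

def extract_bits(f1, f2, key1, key2, helper):
    added = scramble(f1, key1) + scramble(f2, key2)
    return (added > helper).astype(np.uint8)        # biometric bit string

# Toy data: two appearance-based feature vectors per sample for 50 samples.
feats1 = rng.normal(size=(50, 64))
feats2 = rng.normal(size=(50, 64))

# Helper data: per-dimension medians of the added (already cancelable)
# features, so it exposes only partial information about the originals.
added_all = np.stack([scramble(a, 11) + scramble(b, 29)
                      for a, b in zip(feats1, feats2)])
helper = np.median(added_all, axis=0)

bits = extract_bits(feats1[0], feats2[0], key1=11, key2=29, helper=helper)
print(bits[:16])
```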

Face Recognition Based on Facial Landmark Feature Descriptor in Unconstrained Environments (비제약적 환경에서 얼굴 주요위치 특징 서술자 기반의 얼굴인식)

  • Kim, Daeok;Hong, Jongkwang;Byun, Hyeran
    • Journal of KIISE / v.41 no.9 / pp.666-673 / 2014
  • This paper proposes a scalable face recognition method for unconstrained face databases and presents a simple experimental result. Existing face recognition research has usually focused on improving the recognition rate in constrained environments where illumination, face alignment, facial expression, and background are controlled, so it cannot be applied to unconstrained face databases. The proposed system is a face feature extraction algorithm for unconstrained face recognition. First, we extract the areas that represent the important features (landmarks) of the face, such as the eyes, nose, and mouth. Each landmark is represented by a high-dimensional LBP (Local Binary Pattern) histogram feature vector. The multi-scale LBP histogram vector corresponding to a single landmark becomes a low-dimensional face feature vector through a feature reduction process of PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). We use a rank-acquisition method and the precision-at-k (p@k) metric to verify the face recognition performance of the low-dimensional face features produced by the proposed algorithm. The experimental results were generated on the FERET, LFW, and PubFig83 databases. The face recognition system using the proposed algorithm showed better classification performance than existing methods.
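
A hedged sketch of the landmark-descriptor idea follows: a uniform LBP histogram is computed over a small patch around each landmark, the histograms are concatenated, and the result is reduced with PCA followed by LDA. The landmark coordinates, patch size, LBP parameters, and component counts are assumptions rather than the paper's settings.

```python
# Sketch: per-landmark LBP histograms -> PCA -> LDA face descriptor.
# Landmark coordinates, patch size, and component counts are assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def landmark_descriptor(gray, landmarks, patch=24, P=8, R=1):
    hists = []
    for (x, y) in landmarks:                        # e.g. eyes, nose, mouth corners
        roi = gray[y - patch // 2:y + patch // 2, x - patch // 2:x + patch // 2]
        lbp = local_binary_pattern(roi, P, R, method="uniform")
        h, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        hists.append(h)
    return np.concatenate(hists)                    # high-dimensional descriptor

# Toy training: random arrays stand in for aligned 128x128 face images.
rng = np.random.default_rng(1)
landmarks = [(40, 50), (88, 50), (64, 80), (50, 105), (78, 105)]
X = np.stack([landmark_descriptor(rng.integers(0, 256, (128, 128)), landmarks)
              for _ in range(60)])
y = np.repeat(np.arange(6), 10)                     # 6 identities, 10 images each

X_pca = PCA(n_components=20).fit_transform(X)       # remove redundancy first
X_lda = LinearDiscriminantAnalysis(n_components=5).fit_transform(X_pca, y)
print(X_lda.shape)                                  # low-dimensional face features
```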

A Study on the Improvement of Skin Loss Area in Skin Color Extraction for Face Detection (얼굴 검출을 위한 피부색 추출 과정에서 피부색 손실 영역 개선에 관한 연구)

  • Kim, Dong In;Lee, Gang Seong;Han, Kun Hee;Lee, Sang Hun
    • Journal of the Korea Convergence Society / v.10 no.5 / pp.1-8 / 2019
  • In this paper, we propose an improved facial skin-color extraction method to solve the problem that parts of the facial surface are lost during skin-color extraction due to shadow or illumination, making skin-color extraction impossible in those regions. In the conventional HSV method, when the facial surface is brightly illuminated, the skin-color component is lost during extraction, so a loss area appears on the face surface. To solve this problem, after extracting the skin color we identify, among the lost pixels, those whose H-channel value lies in the skin-color range of the HSV color space, and combine the coordinates of the lost part with the coordinates of the original image, minimizing the loss area. In the face detection step, the face was detected by applying the LBP cascade classifier, which represents texture feature information, to the extracted skin-color image. Experimental results show that the proposed method improves the detection rate and accuracy by 5.8% and 9.6%, respectively, compared with conventional RGB and HSV skin-color extraction combined with LBP cascade face detection.
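
The sketch below illustrates the general flow with OpenCV: extract a skin mask in HSV, recover "lost" pixels whose hue alone still falls in the skin range, and run an LBP cascade on the masked image. The HSV thresholds, morphology settings, and cascade path are assumptions, not the paper's values.

```python
# Sketch: HSV skin-color mask with a simple hue-based recovery step,
# followed by LBP-cascade face detection. Thresholds are assumptions.
import cv2
import numpy as np

img = cv2.imread("face.jpg")                        # illustrative filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Initial skin mask on H, S and V; bright highlights often fail the S/V test.
skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

# Recovery: pixels whose hue alone is skin-like are merged back in, which
# fills holes caused by strong illumination on the face surface.
hue_only = cv2.inRange(hsv[:, :, 0], 0, 25)
recovered = cv2.bitwise_or(skin, hue_only)
recovered = cv2.morphologyEx(recovered, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))

# Detect faces with an LBP cascade on the skin-masked grayscale image.
# The XML ships with OpenCV's source data; the local path is an assumption.
masked = cv2.bitwise_and(img, img, mask=recovered)
gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
lbp_cascade = cv2.CascadeClassifier("lbpcascade_frontalface_improved.xml")
faces = lbp_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
print(len(faces), "face(s) detected")
```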

Face and Its Components Extraction of Animation Characters Based on Dominant Colors (주색상 기반의 애니메이션 캐릭터 얼굴과 구성요소 검출)

  • Jang, Seok-Woo;Shin, Hyun-Min;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.16 no.10 / pp.93-100 / 2011
  • The need for research on extracting information about the face and facial components of animation characters has been increasing, since these effectively express a character's emotion and personality. In this paper, we introduce a method for extracting the face and facial components of animation characters by defining a mesh model suited to such characters and by using dominant colors. The suggested algorithm first generates a mesh model for animation characters and extracts dominant colors of the face and facial components by fitting the mesh model to the face of a model character. Then, using the dominant colors, we extract candidate areas of the face and facial components from input images and verify whether the extracted areas are a real face or facial components by means of a color-similarity measure. The experimental results show that our method can reliably detect the face and facial components of animation characters.
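
As a hedged illustration of the dominant-color step, the sketch below clusters the pixel colors of a (given) face region with k-means and builds candidate masks for pixels close to each dominant color. The region, the number of clusters, and the distance threshold are assumptions; the paper's mesh model is not reproduced.

```python
# Sketch: dominant colors of a character face via k-means, then candidate
# masks by color similarity. Region, k, and threshold are assumptions.
import cv2
import numpy as np

img = cv2.imread("character.png")                   # illustrative filename
face_roi = img[40:160, 60:180].reshape(-1, 3).astype(np.float32)

# Dominant colors of the face region (k-means in BGR space).
k = 4
_, labels, centers = cv2.kmeans(
    face_roi, k, None,
    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0),
    5, cv2.KMEANS_PP_CENTERS)

# Candidate areas: pixels of the whole image close to each dominant color.
flat = img.reshape(-1, 3).astype(np.float32)
for i, c in enumerate(centers):
    dist = np.linalg.norm(flat - c, axis=1)
    mask = (dist < 40).reshape(img.shape[:2]).astype(np.uint8) * 255
    print(f"dominant color {i}: {int(mask.sum() / 255)} candidate pixels")
```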

Emotion Recognition Based on Facial Expression by using Context-Sensitive Bayesian Classifier (상황에 민감한 베이지안 분류기를 이용한 얼굴 표정 기반의 감정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.13B no.7 s.110 / pp.653-662 / 2006
  • In ubiquitous computing, which builds computing environments that provide appropriate services according to the user's context, emotion recognition based on facial expressions is an essential means of HCI for making human-machine interaction more efficient and for achieving user context-awareness. This paper addresses the problem of recognizing basic emotions from context-sensitive facial expressions with a new Bayesian classifier. The emotion recognition task consists of two steps: a facial feature extraction step based on a color-histogram method, and a classification step that employs a new Bayesian learning algorithm for efficient training and testing. A new context-sensitive Bayesian learning algorithm, EADF (Extended Assumed-Density Filtering), is proposed to recognize emotions more accurately by using different classifier complexities for different contexts. Experimental results show an expression classification accuracy of over 91% on the test database and an error rate of 10.6% when facial expression is modeled with hidden context.
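
The EADF learning algorithm itself is beyond this abstract, so the sketch below is a deliberately simplified stand-in: color-histogram features are fed to a separate Gaussian naive Bayes classifier per context. The contexts, histogram bins, and classifier choice are assumptions and do not reproduce the paper's method.

```python
# Hedged simplification: color-histogram features classified by a separate
# Gaussian naive Bayes model per context (a stand-in for the paper's EADF).
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB

def color_hist_feature(bgr_img, bins=8):
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, None).flatten()      # 64-D feature

class ContextSensitiveClassifier:
    """One emotion classifier per context (e.g. lighting condition)."""
    def __init__(self):
        self.models = {}

    def fit(self, feats, labels, contexts):
        for ctx in set(contexts):
            idx = [i for i, c in enumerate(contexts) if c == ctx]
            self.models[ctx] = GaussianNB().fit(feats[idx], labels[idx])

    def predict(self, feat, context):
        return self.models[context].predict(feat[None, :])[0]

# Toy usage with random images standing in for face crops.
rng = np.random.default_rng(2)
imgs = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(40)]
X = np.stack([color_hist_feature(im) for im in imgs])
y = rng.integers(0, 4, 40)                          # 4 basic emotions
ctx = ["bright" if i % 2 else "dark" for i in range(40)]
clf = ContextSensitiveClassifier()
clf.fit(X, y, ctx)
print(clf.predict(color_hist_feature(imgs[0]), "dark"))
```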

Local Context based Feature Extraction for Efficient Face Detection (효율적인 얼굴 검출을 위한 지역적 켄텍스트 기반의 특징 추출)

  • Rhee, Phill-Kyu;Xu, Yong Zhe;Shin, Hak-Chul;Shen, Yan
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.1 / pp.185-191 / 2011
  • Recently, surveillance systems have been attracting considerable attention. Technologies that detect objects in an image and then determine and recognize whether those objects are persons are in widespread use. This paper therefore proposes a local-context-based facial feature detection algorithm for such detected objects. Candidate feature points are detected with a Gabor bunch, while a Bayesian detection method is applied at the same time to refine the located feature points. The overall system searches for the object area in the image and applies context-based face detection and feature extraction methods to improve performance.
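
A hedged sketch of the Gabor-bunch idea: build a small bank of Gabor kernels at several scales and orientations and collect the filter responses at a candidate point as its "jet"; a probabilistic (e.g. Bayesian) step would then score candidate locations. All kernel parameters below are assumptions.

```python
# Sketch: a small Gabor filter bank and the response vector ("jet") at a
# candidate facial feature point. All kernel parameters are assumptions.
import cv2
import numpy as np

def gabor_bank(ksize=21, sigmas=(3.0, 5.0), thetas=4, lambd=8.0, gamma=0.5):
    kernels = []
    for sigma in sigmas:
        for t in range(thetas):
            theta = t * np.pi / thetas
            kernels.append(cv2.getGaborKernel(
                (ksize, ksize), sigma, theta, lambd, gamma, psi=0))
    return kernels

def gabor_jet(gray, point, kernels):
    responses = [cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, k)
                 for k in kernels]
    x, y = point
    return np.array([r[y, x] for r in responses])   # feature vector at the point

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # illustrative filename
bank = gabor_bank()
jet = gabor_jet(gray, point=(64, 48), kernels=bank)
print(jet.shape)                                    # 2 scales x 4 orientations = 8
```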

An image analysis system Design using Arduino sensor and feature point extraction algorithm to prevent intrusion

  • LIM, Myung-Jae;JUNG, Dong-Kun;KWON, Young-Man
    • Korean Journal of Artificial Intelligence / v.9 no.2 / pp.23-28 / 2021
  • In this paper, we studied a system that can efficiently provide security management for single-person households using an Arduino, an ESP32-CAM, and PIR sensors, and we proposed an Android app with an Internet connection. The ESP32-CAM is an Arduino-compatible board based on an ESP32 processor that supports Wi-Fi, Bluetooth, and a camera. Its on-board PCB antenna can be used on its own, and sensitivity can be extended by connecting an external antenna. The implemented Arduino-based unauthorized-intrusion system, which connects an Arduino Uno and the ESP32-CAM with a smartphone application, can significantly help prevent crimes against single-person households. With daily quarantine measures in place and a need to verify the identity of visitors, applying this system for facial recognition and for restricting access is expected to help maintain a safety net. This technology is widely used to verify that the persons in two input images are the same, or to determine which person previously stored in an internal database an input image most resembles. Through image recognition, comparison, and feature point extraction and matching, the system can be implemented in a low-power, low-cost environment.
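
On the receiving side, the flow described above might look like the hedged sketch below: fetch a still image over HTTP from the ESP32-CAM and run a face detector on it. The camera address and the /capture URL (following the common CameraWebServer example) are assumptions, and a real deployment would be triggered by the PIR sensor rather than polled manually.

```python
# Hedged sketch: poll a still image from an ESP32-CAM's HTTP server and run
# face detection on the receiving side. The URL pattern follows the common
# CameraWebServer example and is an assumption, as is the camera's address.
import cv2
import numpy as np
import requests

ESP32_CAM_URL = "http://192.168.0.50/capture"       # hypothetical address

def fetch_frame(url=ESP32_CAM_URL, timeout=5):
    data = requests.get(url, timeout=timeout).content
    return cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)

def has_face(frame):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(cascade.detectMultiScale(gray, 1.1, 5)) > 0

if __name__ == "__main__":
    frame = fetch_frame()
    # A real deployment would be triggered by the PIR sensor and push an
    # alert (e.g. to the Android app) when an unknown face is detected.
    print("face present:", has_face(frame))
```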

A Design on Face Recognition System Based on pRBFNNs by Obtaining Real Time Image (실시간 이미지 획득을 통한 pRBFNNs 기반 얼굴인식 시스템 설계)

  • Oh, Sung-Kwun;Seok, Jin-Wook;Kim, Ki-Sang;Kim, Hyun-Ki
    • Journal of Institute of Control, Robotics and Systems / v.16 no.12 / pp.1150-1158 / 2010
  • In this study, Polynomial-based Radial Basis Function Neural Networks (pRBFNNs) are proposed as the recognition part of an overall face recognition system that consists of two parts, a preprocessing part and a recognition part. The design methodology and procedure of the proposed pRBFNNs are presented to solve this high-dimensional pattern recognition problem. First, in the preprocessing part, we use a CCD camera to obtain picture frames in real time. Using histogram equalization, we partially enhance images distorted by natural or artificial illumination. The AdaBoost algorithm proposed by Viola and Jones is exploited to separate the facial image area from non-facial areas. PCA is then used as the feature extraction algorithm to reduce the dimensionality of the high-dimensional facial image area. Second, we use pRBFNNs to identify each person by recognizing their unique pattern. The proposed pRBFNN architecture consists of three functional modules, the condition part, the conclusion part, and the inference part, expressed as fuzzy rules in 'If-then' format. In the condition part of the fuzzy rules, the input space is partitioned with Fuzzy C-Means clustering. In the conclusion part, the connection weights of the pRBFNNs are represented by three kinds of polynomials: constant, linear, and quadratic. The coefficients of the connection weights are identified by back-propagation using the gradient descent method. The output of the pRBFNN model is obtained by fuzzy inference in the inference part of the fuzzy rules. The essential design parameters of the networks (including the learning rate, momentum coefficient, and fuzzification coefficient) are optimized by means of Particle Swarm Optimization. The proposed pRBFNNs are applied to a real-time face recognition system and evaluated in terms of output performance and recognition rate.
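
The recognition network itself (FCM partitioning, polynomial consequents, PSO tuning) is too involved for a short example, but the preprocessing chain described above, histogram equalization, Viola-Jones (Haar/AdaBoost) face detection, and PCA dimension reduction, can be sketched as follows; the crop size, component count, and toy inputs are assumptions.

```python
# Sketch of the preprocessing part: histogram equalization, Viola-Jones
# (Haar/AdaBoost) face detection, and PCA dimension reduction. The crop
# size, component count, and toy input frames are illustrative assumptions.
import cv2
import numpy as np
from sklearn.decomposition import PCA

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(bgr_frame, size=(64, 64)):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                   # compensate illumination
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], size).flatten() / 255.0

# Toy PCA training on random frames standing in for captured CCD images.
rng = np.random.default_rng(3)
frames = [rng.integers(0, 256, (240, 320, 3), dtype=np.uint8) for _ in range(30)]
vectors = [v for v in (preprocess(f) for f in frames) if v is not None]
if vectors:
    n_comp = min(10, len(vectors))
    feats = PCA(n_components=n_comp).fit_transform(np.stack(vectors))
    # "feats" would be the low-dimensional input to the pRBFNN classifier.
    print(feats.shape)
else:
    print("no faces detected in the toy frames")
```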