• Title/Summary/Keyword: Facial Feature Extraction

Face Recognition using Extended Center-Symmetric Pattern and 2D-PCA (Extended Center-Symmetric Pattern과 2D-PCA를 이용한 얼굴인식)

  • Lee, Hyeon Gu;Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.9 no.2
    • /
    • pp.111-119
    • /
    • 2013
  • Face recognition has recently become one of the most popular research areas in the fields of computer vision, machine learning, and pattern recognition because it spans numerous applications, such as access control, surveillance, security, credit-card verification, and criminal identification. In this paper, we propose a simple descriptor called the ECSP (Extended Center-Symmetric Pattern) for illumination-robust face recognition. The ECSP operator encodes the texture information of a local face region by emphasizing the diagonal components of the conventional CS-LBP (Center-Symmetric Local Binary Pattern). The diagonal components are emphasized because facial textures along the diagonal direction carry much more information than those in other directions. The texture image produced by the ECSP operator is then used as the input to an image covariance-based feature extraction algorithm such as 2D-PCA (Two-Dimensional Principal Component Analysis). Performance evaluation of the proposed approach was carried out using various binary pattern operators and recognition algorithms on the Yale B database. The experimental results demonstrate that the proposed approach achieves better recognition accuracy than the alternatives and confirm that it is effective against illumination variation.
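
As a concrete illustration, here is a minimal Python sketch of the two stages the abstract describes, assuming 8-bit grayscale face crops: the standard CS-LBP comparison that ECSP extends (the abstract does not give ECSP's exact diagonal weighting, so it is not reproduced here) and 2D-PCA over the resulting code images. The threshold and component count are illustrative assumptions.

```python
import numpy as np

def cs_lbp(image, threshold=3):
    """4-bit Center-Symmetric LBP: compare the four center-symmetric
    neighbour pairs of each pixel in a 3x3 window (threshold is in
    intensity units, an assumed value)."""
    img = image.astype(np.float64)
    # 8 neighbours, ordered so that index i and i+4 are center-symmetric
    n = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
         img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(n[0], dtype=np.uint8)
    for i in range(4):
        code |= ((n[i] - n[i + 4]) > threshold).astype(np.uint8) << i
    return code

def two_d_pca(code_images, num_components=5):
    """2D-PCA: eigenvectors of the image covariance matrix
    G = mean((A - mean)^T (A - mean)); each image projects to A @ X."""
    A = np.stack([im.astype(np.float64) for im in code_images])
    centered = A - A.mean(axis=0)
    G = np.einsum('nij,nik->jk', centered, centered) / len(code_images)
    _, vecs = np.linalg.eigh(G)       # eigenvalues in ascending order
    X = vecs[:, -num_components:]     # top principal directions
    return [a @ X for a in A]
```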

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min;Tang, Jun
    • Journal of Information Processing Systems
    • /
    • v.17 no.4
    • /
    • pp.754-771
    • /
    • 2021
  • In continuous dimensional emotion recognition, the parts that highlight emotional expression differ across modes, and the influence of each mode on the emotional state also differs. This paper therefore studies the fusion of the two most important modes in emotion recognition (voice and facial expression) and proposes a bimodal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, audio features are first extracted using prior knowledge. Facial expression features are then extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression features with the audio features, and an improved loss function mitigates the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. The experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which are superior to several comparative algorithms.
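
The fusion step lends itself to a short sketch. The following is a hedged PyTorch illustration of modality-level attention, assuming precomputed audio and facial feature vectors; the layer sizes, the two-dimensional (arousal, valence) head, and all names are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse audio and facial features with learned modality weights."""
    def __init__(self, audio_dim=128, visual_dim=256, fused_dim=128):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        self.visual_proj = nn.Linear(visual_dim, fused_dim)
        self.score = nn.Linear(fused_dim, 1)   # one scalar per modality
        self.head = nn.Linear(fused_dim, 2)    # arousal and valence

    def forward(self, audio_feat, visual_feat):
        # Project both modalities into a shared space: (batch, 2, fused_dim)
        streams = torch.stack([self.audio_proj(audio_feat),
                               self.visual_proj(visual_feat)], dim=1)
        # Softmax over the modality axis yields the attention weights
        weights = torch.softmax(self.score(torch.tanh(streams)), dim=1)
        fused = (weights * streams).sum(dim=1)
        return self.head(fused)

# usage: AttentionFusion()(torch.randn(4, 128), torch.randn(4, 256))
```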

Micro-Expression Recognition Based on Optical Flow Features and Improved MobileNetV2

  • Xu, Wei;Zheng, Hao;Yang, Zhongxue;Yang, Yingjie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.6
    • /
    • pp.1981-1995
    • /
    • 2021
  • When a person tries to conceal emotions, the real emotions manifest themselves in the form of micro-expressions. Facial micro-expression recognition remains extremely challenging in the field of pattern recognition, because micro-expressions involve small changes and short durations, which makes effective feature extraction difficult. Most methods rely on hand-crafted features to extract the subtle facial movements. In this study, we introduce a method that combines optical flow and deep learning. First, we extract the onset frame and the apex frame from each video sequence. Then, the motion features between these two frames are extracted using the optical flow method. Finally, the features are fed into an improved MobileNetV2 model, and an SVM is applied to classify the expressions. To evaluate the effectiveness of the method, we conduct experiments on the public spontaneous micro-expression database CASME II. Under leave-one-subject-out cross-validation, the recognition accuracy reaches 53.01% and the F-score reaches 0.5231. The results show that the proposed method significantly improves micro-expression recognition performance.
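
The onset-to-apex motion step can be sketched directly. This uses OpenCV's Farneback dense flow as a stand-in for whichever flow algorithm the paper applies; the channel layout fed to the CNN is an assumption.

```python
import cv2
import numpy as np

def onset_apex_flow(onset_gray, apex_gray):
    """Dense optical flow from the onset frame to the apex frame,
    returned as a 3-channel (dx, dy, magnitude) map for the CNN."""
    flow = cv2.calcOpticalFlowFarneback(
        onset_gray, apex_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2, keepdims=True)
    return np.concatenate([flow, magnitude], axis=2)
```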

Frontal Face Region Extraction & Features Extraction for Ocular Inspection (망진을 위한 정면 얼굴 영역 및 특징 요소 추출)

  • Cho Dong-Uk;Kim Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.6C
    • /
    • pp.585-592
    • /
    • 2005
  • Research on disease increasingly attaches more importance to prevention and the preservation of health than to treatment, and to foods rather than medicines. In this context, the most significant concern in examining a patient is to find the presence of a disease and, if one is present, to diagnose its type, after which pharmacotherapy follows. In this paper, various diagnostic methods of Oriental medicine are discussed, and ocular inspection, the most important of the four diagnostic methods of Oriental medicine, is studied. Observing a person's shape and color has been the major method of ocular inspection, and to this day it has usually depended on the doctor's intuition. We are developing an automatic system that provides objective basic data for ocular inspection. As the first stage, we applied signal processing techniques to the automatic extraction of facial features for ocular inspection. First, frontal face regions are extracted, followed by extraction of their features. An experiment on 20 persons showed that the frontal face regions, as well as features such as eyes, eyebrows, noses, and mouths, were extracted perfectly. Future work will address morphological operations for the few incomplete extraction results, such as hair merged with eyebrows.
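
For the future-work item on morphological operations, a plausible sketch (assuming a binary feature mask and an assumed structuring-element size) is an opening that removes the thin bridges where hair touches the eyebrows:

```python
import numpy as np
from scipy.ndimage import binary_opening

def separate_hair_from_eyebrows(feature_mask, size=5):
    """Morphological opening (erosion then dilation): thin connections
    between hair and eyebrow regions are removed."""
    structure = np.ones((size, size), dtype=bool)
    return binary_opening(feature_mask, structure=structure)
```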

Synthesis of Realistic Facial Expression using a Nonlinear Model for Skin Color Change (비선형 피부색 변화 모델을 이용한 실감적인 표정 합성)

  • Lee Jeong-Ho;Park Hyun;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.3 s.309
    • /
    • pp.67-75
    • /
    • 2006
  • Facial expressions exhibit not only facial feature motions but also subtle changes in illumination and appearance. Since it is difficult to generate realistic facial expressions using only geometric deformations, detailed features such as textures should also be deformed to achieve a more realistic expression. Existing methods such as the expression ratio image have drawbacks, in that detailed changes of complexion due to lighting cannot be generated properly. In this paper, we propose a nonlinear model of skin color change and a model-based synthesis method for facial expression that can apply realistic expression details under different lighting conditions. The proposed method is composed of three steps: automatic extraction of facial features using an active appearance model and geometric deformation of the expression using warping; generation of the facial expression using the nonlinear skin color change model; and synthesis of the original face with the generated expression using a blending ratio computed from the Euclidean distance transform. Experimental results show that the proposed method generates realistic facial expressions under various lighting conditions.
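
The final blending step is concrete enough for a sketch: the blending ratio rises with the Euclidean distance transform inside the expression region, so the generated expression fades smoothly into the original face. The falloff width is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def blend_expression(original, generated, region_mask, falloff=15.0):
    """Alpha-blend `generated` into `original`: alpha grows from 0 at the
    boundary of the boolean `region_mask` to 1 in its interior."""
    alpha = np.clip(distance_transform_edt(region_mask) / falloff, 0.0, 1.0)
    alpha = alpha[..., None]  # broadcast over color channels
    return (alpha * generated + (1.0 - alpha) * original).astype(original.dtype)
```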

A Study on A Biometric Bits Extraction Method Using Subpattern-based PCA and A Helper Data (영역기반 주성분 분석 방법과 보조정보를 이용한 얼굴정보의 비트열 변환 방법)

  • Lee, Hyung-Gu;Jung, Ho-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.5
    • /
    • pp.183-191
    • /
    • 2010
  • Unique and invariant biometric characteristics have been used for secure user authentication. Storing original biometric data is not acceptable due to the privacy and security concerns of biometric technology. To enhance the security of the biometric data, cancelable biometrics was introduced. Using a revocable and non-invertible transformation, cancelable biometrics provides a way of more secure biometric authentication. In this paper, we present a new cancelable bit extraction method for facial data. For feature extraction, Subpattern-based Principal Component Analysis (PCA) is adopted. Subpattern-based PCA divides a whole image into a set of partitioned subpatterns and extracts principal components from each subpattern area. The feature extracted using Subpattern-based PCA is discretized with a helper-data-based method. The elements of the obtained bits are evaluated and ordered according to a measure based on the Fisher criterion. Finally, the most discriminative bits are chosen as the biometric bit string and used for authentication of each identity. Even if the generated bit string is compromised, a new bit string can be generated simply by changing the helper data. Because the helper data utilizes only partial information of the feature, the proposed method does not reveal privacy-sensitive biometric information of the user. For the security evaluation of the proposed method, a scenario in which the helper data is compromised by an adversary is also considered.
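
A hedged sketch of the pipeline: per-subpattern PCA features are concatenated and then discretized against helper-data thresholds. The Fisher-criterion bit ordering is omitted, and the block size, component count, and the use of per-dimension thresholds as helper data are assumptions.

```python
import numpy as np

def subpattern_pca_features(images, block=16, k=4):
    """Divide each image into block x block subpatterns, run PCA inside
    each subpattern region, and concatenate the projections."""
    A = np.stack([im.astype(np.float64) for im in images])
    n, h, w = A.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = A[:, r:r + block, c:c + block].reshape(n, -1)
            patch = patch - patch.mean(axis=0)
            _, _, vt = np.linalg.svd(patch, full_matrices=False)
            feats.append(patch @ vt[:k].T)  # top-k principal components
    return np.hstack(feats)

def to_bit_string(features, thresholds):
    """One bit per dimension: set when the feature exceeds the helper-data
    threshold. Issuing new helper data revokes a compromised bit string."""
    return (features > thresholds).astype(np.uint8)
```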

Locating and Extracting the Mouth in Human Face Images (얼굴 이미지에서 입 영역 분할)

  • Choe, Jeong-Il;Kim, Su-Hwan;Lee, Pil-Gyu
    • Korean Journal of Cognitive Science
    • /
    • v.8 no.4
    • /
    • pp.55-62
    • /
    • 1997
  • We propose a method for locating the mouth using a deformable template, described by a set of parameters. An energy function is defined which links edges, peaks, and valleys in image intensity to corresponding properties of the template. The template deforms itself by altering its parameter values to minimize the energy function. The parameter values that minimize the energy function can then be used as descriptors of the feature. The mouth is located quickly and accurately by limiting the range of the parameter values and obtaining initial parameter values through preprocessing.
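
A minimal sketch of the template idea, assuming a valley (dark-intensity) map and a simplified two-parabola lip template; the paper's actual template and energy terms (edges, peaks, and valleys) are richer than this.

```python
import numpy as np
from scipy.optimize import minimize

def template_energy(params, valley_map):
    """Energy of a mouth template (xc, yc, half_width, height): the mean
    valley response sampled along two parabolic lip curves, negated so
    that deeper valleys give lower energy."""
    xc, yc, half_width, height = params
    xs = np.linspace(-1.0, 1.0, 50)
    energy = 0.0
    for sign in (-1.0, 1.0):                      # upper and lower lips
        rows = (yc + sign * height * (1.0 - xs ** 2)).astype(int)
        cols = (xc + half_width * xs).astype(int)
        rows = np.clip(rows, 0, valley_map.shape[0] - 1)
        cols = np.clip(cols, 0, valley_map.shape[1] - 1)
        energy -= valley_map[rows, cols].mean()
    return energy

def locate_mouth(valley_map, init_params):
    # Preprocessing supplies init_params; the simplex search then refines
    # the parameters, mirroring the paper's fast, restricted optimization.
    return minimize(template_energy, init_params, args=(valley_map,),
                    method='Nelder-Mead').x
```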

Face region detection algorithm for natural images (자연 영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.7 no.1
    • /
    • pp.55-60
    • /
    • 2014
  • In this paper, we propose a method for face region extraction in natural images using skin-color hue and saturation together with facial feature extraction. The proposed algorithm is composed of a lighting correction step and a face detection step. The lighting correction step applies a correction function to compensate for lighting changes. The face detection step extracts skin-color areas by calculating Euclidean distances between the input image and characteristic vectors of color and chroma derived from 20 skin-color sample images. For the extracted candidate areas, eyes are detected using the C element of the CMY color model and mouths using the Q element of the YIQ color model. The face area is then determined from the candidate areas using knowledge of the human face. In an experiment with 10 natural face images as input, the method showed a face detection rate of 100%.
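
The skin-color step can be sketched as follows, using hue and saturation in OpenCV's HSV space as the "color and chroma" characteristic vector; the sample mean and distance threshold are assumed values, not the paper's.

```python
import cv2
import numpy as np

def skin_mask(bgr_image, skin_mean_hs, max_dist=20.0):
    """Keep pixels whose (hue, saturation) lie within a Euclidean
    distance of the mean computed from skin-color sample images."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(np.float64)
    diff = hsv[..., :2] - np.asarray(skin_mean_hs, dtype=np.float64)
    return np.linalg.norm(diff, axis=2) < max_dist
```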

Texture Mapping and 3D Face Modeling using Two Views of 2D Face Images (2장의 2차원 얼굴영상을 이용한 텍스쳐 생성과 자동적인 3차원 얼굴모델링)

  • Weon, Sun-Hee;Kim, Gye-Young
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.9
    • /
    • pp.705-709
    • /
    • 2009
  • In this paper, we propose 3D face modeling using two orthogonal views of 2D face images with automatic facial feature extraction. The proposed technique consists of two parts: personalization of the 3D face model and texture mapping.
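
The abstract gives little detail, but the classical two-orthogonal-view recipe is to take (x, y) of each landmark from the frontal image and the depth coordinate from the matching landmark in the profile image. This sketch assumes the landmarks are already in correspondence; it is not the paper's exact procedure.

```python
import numpy as np

def landmarks_to_3d(frontal_xy, profile_zy):
    """frontal_xy: (n, 2) landmark coordinates in the frontal view;
    profile_zy: (n, 2) coordinates in the profile view, whose horizontal
    axis supplies depth. Returns (n, 3) vertices for the personalized model."""
    xy = np.asarray(frontal_xy, dtype=np.float64)
    z = np.asarray(profile_zy, dtype=np.float64)[:, :1]
    return np.hstack([xy, z])
```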

Face Detection and Facial Feature Extraction for Person Identification (신원확인을 위한 얼굴 영역 탐지 및 얼굴 구성 요소 추출)

  • Lee, Sun-Hwa;Cha, Eui-Young
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.04b
    • /
    • pp.517-519
    • /
    • 2001
  • In this paper, we propose a method for detecting the face region and extracting facial components for a person identification system. The method exploits two observations: motion occurs when a user operates the system for identification, and the eye regions consist of pixels that are distinctly darker than the surrounding regions. The face region is detected in the video input from a CCD camera using a difference-image technique, and within the detected face region the eye regions, which yield the most stable detection results, are extracted. After correcting the tilt of the whole face using the positions of the two extracted eyes, the detected face region is verified using the proposed variable Ratio Template, and other facial components such as the nose and mouth are extracted. Because the method does not go through a binarization step, whose results fluctuate with changes in intensity, it is robust to common constraints of face recognition such as local changes in illumination brightness and changes in face tilt.
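
A hedged sketch of the two cues the abstract relies on: frame differencing to find the moving face region and distinctly dark pixels for the eye candidates. Both thresholds are assumptions.

```python
import numpy as np

def motion_bounding_box(prev_gray, curr_gray, motion_thresh=25):
    """Bounding box of pixels that changed between consecutive frames,
    used as the face-region candidate."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    rows, cols = np.nonzero(diff > motion_thresh)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()

def dark_eye_candidates(face_gray, dark_thresh=60):
    """Eye-candidate mask inside the detected face box; applied locally,
    so no global binarization of the whole frame is needed."""
    return face_gray < dark_thresh
```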
