• Title/Summary/Keyword: facial feature point

Search results: 60

A Flexible Feature Matching for Automatic Face and Facial Feature Points Detection (얼굴과 얼굴 특징점 자동 검출을 위한 탄력적 특징 정합)

  • 박호식;손형경;정연길;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.05a
    • /
    • pp.608-612
    • /
    • 2002
  • An automatic face and facial feature points (FFPs) detection system is proposed. A face is represented as a graph whose nodes are placed at facial feature points (FFPs) labeled by their Gabor features and whose edges describe their spatial relations. An innovative flexible feature matching is proposed to establish feature correspondences between models and the input image. This matching model works like a random diffusion process in the image space, employing a locally competitive and globally cooperative mechanism. The system works well on face images with complicated backgrounds, pose variations, and distortions caused by facial accessories. We demonstrate the benefits of our approach by implementing it in a face identification system.

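The abstract above hinges on Gabor-jet similarity and a locally competitive node search. Below is a minimal Python sketch of that general idea, assuming OpenCV and NumPy; the filter-bank parameters, the `refine_node` search window, and the `face.png` input are illustrative assumptions rather than the authors' settings, and the global cooperation/diffusion step is omitted.

```python
import cv2
import numpy as np

def gabor_bank(ksize=21, wavelengths=(4, 6, 8, 10), orientations=8):
    """Build a small bank of Gabor kernels (assumed parameters, for illustration)."""
    kernels = []
    for lambd in wavelengths:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma=lambd / 2.0,
                                              theta=theta, lambd=lambd,
                                              gamma=0.5, psi=0))
    return kernels

def gabor_jet(responses, x, y):
    """A 'jet': the vector of absolute filter responses at one pixel."""
    return np.array([abs(r[y, x]) for r in responses])

def jet_similarity(j1, j2):
    """Normalized dot product between two jets."""
    return float(np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-9))

def refine_node(responses, model_jet, x0, y0, radius=5):
    """Locally competitive step: move one graph node to the position whose jet
    best matches the model jet within a small search window."""
    best, best_xy = -1.0, (x0, y0)
    h, w = responses[0].shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = x0 + dx, y0 + dy
            if 0 <= x < w and 0 <= y < h:
                s = jet_similarity(model_jet, gabor_jet(responses, x, y))
                if s > best:
                    best, best_xy = s, (x, y)
    return best_xy, best

# Usage (hypothetical file name and node position):
# img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
# responses = [cv2.filter2D(img, cv2.CV_32F, k) for k in gabor_bank()]
# (x, y), score = refine_node(responses, model_jet, x0=120, y0=80)
```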

Recognition of Facial Expressions of Animation Characters Using Dominant Colors and Feature Points (주색상과 특징점을 이용한 애니메이션 캐릭터의 표정인식)

  • Jang, Seok-Woo;Kim, Gye-Young;Na, Hyun-Suk
    • The KIPS Transactions:PartB
    • /
    • v.18B no.6
    • /
    • pp.375-384
    • /
    • 2011
  • This paper suggests a method to recognize facial expressions of animation characters by means of dominant colors and feature points. The proposed method defines a simplified mesh model adequate for animation characters and detects the face and facial components using dominant colors. It also extracts edge-based feature points for each facial component. It then classifies the feature points into corresponding AUs (action units) with a neural network, and finally recognizes character facial expressions using the suggested AU specification. Experimental results show that the suggested method can recognize facial expressions of animation characters reliably.
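
For the dominant-color step described above, a hedged sketch using k-means over the pixels of a face region is shown below (scikit-learn assumed); the choice of k and the use of k-means are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def dominant_colors(rgb_region, k=3):
    """Cluster the pixels of a face region and return the k dominant colors
    together with their pixel share, largest cluster first."""
    pixels = rgb_region.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=k)
    order = np.argsort(counts)[::-1]
    return km.cluster_centers_[order], counts[order] / counts.sum()

# Usage (hypothetical crop): colors, shares = dominant_colors(face_crop, k=3)
```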

Markerless Image-to-Patient Registration Using Stereo Vision: Comparison of Registration Accuracy by Feature Selection Method and Location of Stereo Vision System (스테레오 비전을 이용한 마커리스 정합 : 특징점 추출 방법과 스테레오 비전의 위치에 따른 정합 정확도 평가)

  • Joo, Subin;Mun, Joung-Hwan;Shin, Ki-Young
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.1
    • /
    • pp.118-125
    • /
    • 2016
  • This study evaluates the performance of an image-to-patient registration algorithm that uses stereo vision and CT images for surgical navigation of the facial region. The image-to-patient registration process consists of feature extraction, 3D coordinate calculation, and registration of the 3D CT image to the 3D coordinates. Of the five combinations that can be generated from three facial feature extraction methods and three registration methods on the stereo vision images, this study identifies the one with the highest registration accuracy. In addition, image-to-patient registration accuracy was compared while changing the facial rotation angle. The experiments showed that when the facial rotation angle is within 20 degrees, registration using the Active Appearance Model and Pseudo Inverse Matching has the highest accuracy, and when the facial rotation angle exceeds 20 degrees, registration using Speeded Up Robust Features and Iterative Closest Point has the highest accuracy. These results indicate that the Active Appearance Model and Pseudo Inverse Matching should be used to reduce registration error when the facial rotation angle is within 20 degrees, and Speeded Up Robust Features and Iterative Closest Point should be used when the facial rotation angle is over 20 degrees.
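
Of the registration methods named above, Iterative Closest Point is the easiest to sketch. The following is a generic point-to-point ICP with a Kabsch rigid fit (NumPy/SciPy assumed), not the authors' implementation; the stereo 3D point extraction and the AAM/Pseudo Inverse Matching branch are outside this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Basic point-to-point ICP: match each source point to its nearest
    target point, fit a rigid transform, and repeat."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)
        R, t = rigid_fit(cur, dst[idx])
        cur = cur @ R.T + t
    return rigid_fit(src, cur)        # composite transform src -> aligned cloud
```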

Feature-Point Extraction by Dynamic Linking Model Based on Gabor Wavelets and Fuzzy C-Means Clustering Algorithm (Gabor 웨이브렛과 FCM 군집화 알고리즘에 기반한 동적 연결모형에 의한 얼굴표정에서 특징점 추출)

  • Sin, Yeong Suk
    • Korean Journal of Cognitive Science
    • /
    • v.14 no.1
    • /
    • pp.10-10
    • /
    • 2003
  • This paper extracts the edges of the main facial components from facial expression images using the Gabor wavelet transform. The FCM (Fuzzy C-Means) clustering algorithm then extracts low-dimensional representative feature points from the edges extracted in the neutral face. The feature points of the neutral face are used as a template to extract the feature points of the facial expression images. Matching each feature point on an expression face to the corresponding feature point on the neutral face is carried out in two steps with a dynamic linking model, called coarse mapping and fine mapping. This paper thus presents automatic feature-point extraction by a dynamic linking model based on Gabor wavelets and the fuzzy C-means (FCM) algorithm. The results of this study were applied to automatic feature extraction in dimension-based facial expression recognition [1].
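
A compact NumPy fuzzy c-means is sketched below as a hedged illustration of how representative points could be obtained from neutral-face edge pixels (the cluster centers); the fuzzifier m, the iteration count, and the number of clusters are assumptions.

```python
import numpy as np

def fuzzy_c_means(points, c, m=2.0, iters=100, eps=1e-5, seed=0):
    """Plain fuzzy c-means: returns (centers, membership matrix U of shape (n, c))."""
    rng = np.random.default_rng(seed)
    n = len(points)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ points) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_U = 1.0 / (d ** (2.0 / (m - 1)))          # standard FCM membership update
        new_U /= new_U.sum(axis=1, keepdims=True)
        converged = np.abs(new_U - U).max() < eps
        U = new_U
        if converged:
            break
    return centers, U

# e.g. reduce (x, y) edge pixels of a neutral face to c representative points:
# centers, U = fuzzy_c_means(edge_points.astype(float), c=30)
```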

Real Time Face Detection and Recognition using Rectangular Feature based Classifier and Class Matching Algorithm (사각형 특징 기반 분류기와 클래스 매칭을 이용한 실시간 얼굴 검출 및 인식)

  • Kim, Jong-Min;Kang, Myung-A
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.1
    • /
    • pp.19-26
    • /
    • 2010
  • This paper proposes a classifier based on rectangular features to detect faces in real time. The goal is a robust detection algorithm that satisfies both computational efficiency and detection performance. The proposed algorithm consists of three stages: feature creation, classifier learning, and real-time facial region detection. Feature creation builds a feature set from the proposed five rectangular features and computes feature values efficiently using SAT (Summed-Area Tables). Classifier learning creates classifiers hierarchically with the AdaBoost algorithm and achieves high detection performance by repeatedly applying important face patterns at the next level. Real-time facial region detection finds facial regions rapidly and efficiently with the rectangular-feature-based classifier. In addition, the recognition rate was improved by using the detected face region as the input image and by applying PCA and KNN with a class-to-class rather than the existing point-to-point matching technique.
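
The SAT (summed-area table) trick mentioned above is easy to show concretely. The sketch below computes an integral image and one example two-rectangle feature in constant time per rectangle; it does not reproduce the paper's specific five rectangular features.

```python
import numpy as np

def integral_image(gray):
    """Summed-area table with a zero top row/left column so any rectangle sum is O(1)."""
    sat = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    sat[1:, 1:] = np.cumsum(np.cumsum(gray, axis=0), axis=1)
    return sat

def rect_sum(sat, x, y, w, h):
    """Sum of pixels in the w*h rectangle whose top-left corner is (x, y)."""
    return int(sat[y + h, x + w] - sat[y, x + w] - sat[y + h, x] + sat[y, x])

def two_rect_feature(sat, x, y, w, h):
    """Example rectangular feature: left half minus right half (edge-like response)."""
    half = w // 2
    return rect_sum(sat, x, y, half, h) - rect_sum(sat, x + half, y, half, h)
```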

Facial Expression Recognition Using SIFT Descriptor (SIFT 기술자를 이용한 얼굴 표정인식)

  • Kim, Dong-Ju;Lee, Sang-Heon;Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.2
    • /
    • pp.89-94
    • /
    • 2016
  • This paper proposes a facial expression recognition approach using SIFT features and an SVM classifier. SIFT is generally employed as a feature descriptor at key-points in object recognition; this paper instead applies the SIFT descriptor as a feature vector for facial expression recognition. The facial features are extracted by applying the SIFT descriptor to each sub-block of the image without a key-point detection procedure, and facial expression recognition is performed with an SVM classifier. The performance evaluation was carried out through comparison with binary-pattern-feature-based approaches such as LBP and LDP, using the CK and JAFFE facial expression databases. The experimental results show that the proposed method using the SIFT descriptor improves performance by 6.06% and 3.87% over the previous approaches on the CK and JAFFE databases, respectively.
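
A minimal sketch of the sub-block SIFT idea: descriptors are computed at the centers of a fixed grid of blocks (no keypoint detection) and concatenated into one vector per face for an SVM. It assumes opencv-python >= 4.4 (for cv2.SIFT_create) and scikit-learn; the grid size, patch size, and kernel choice are assumptions.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def dense_sift(gray, grid=8, patch=16):
    """Compute SIFT descriptors at the centers of a grid x grid block layout,
    skipping the usual keypoint detection step, and flatten them."""
    sift = cv2.SIFT_create()
    h, w = gray.shape
    kps = [cv2.KeyPoint((j + 0.5) * w / grid, (i + 0.5) * h / grid, float(patch))
           for i in range(grid) for j in range(grid)]
    _, desc = sift.compute(gray, kps)
    return desc.reshape(-1)            # one fixed-length vector per face image

# Hypothetical usage with uint8 grayscale face crops:
# train_X = np.stack([dense_sift(img) for img in train_faces])
# clf = SVC(kernel="linear").fit(train_X, train_labels)
# pred = clf.predict(dense_sift(test_face)[None, :])
```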

Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Kang, Hyo-Seok;Baek, Jae-Ho;Kim, Eun-Tai;Park, Mignon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.1-5
    • /
    • 2009
  • This paper suggests applying a 2D emotion recognition database built with a mirror-reflected method to a 3D application, and models facial expressions with fuzzy modeling using emotion probabilities. The suggested facial expression function applies fuzzy theory to three basic movements for facial expressions. The method maps the feature vectors for emotion recognition, obtained from 2D mirror-reflected multi-images, into the 3D application. Thus, we obtain a fuzzy nonlinear facial expression model of a real model based on the 2D model. We use the average probabilities of the six basic expressions: happy, sad, disgust, angry, surprise, and fear. Furthermore, dynamic facial expressions are generated via fuzzy modeling. This paper compares and analyzes the feature vectors of the real model with those of a 3D human-like avatar.
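
As a rough illustration of driving a 3D face from emotion probabilities, the sketch below blends per-emotion vertex offsets weighted by the six probabilities, blendshape-style; the data layout and the linear blend are assumptions standing in for the paper's fuzzy nonlinear model.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "disgust", "angry", "surprise", "fear"]

def blend_expression(neutral_vertices, expression_offsets, probs):
    """Blend per-emotion vertex offsets by emotion probabilities.
    neutral_vertices: (V, 3) array; expression_offsets: dict emotion -> (V, 3);
    probs: dict emotion -> probability (assumed to sum to roughly 1)."""
    out = neutral_vertices.astype(float).copy()
    for emo in EMOTIONS:
        out += probs.get(emo, 0.0) * expression_offsets[emo]
    return out

# Hypothetical usage: verts = blend_expression(neutral, offsets,
#                                              {"happy": 0.7, "surprise": 0.3})
```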

Reconstructing 3-D Facial Shape Based on SR Image

  • Hong, Yu-Jin;Kim, Jaewon;Kim, Ig-Jae
    • Journal of International Society for Simulation Surgery
    • /
    • v.1 no.2
    • /
    • pp.57-61
    • /
    • 2014
  • We present a robust 3D facial reconstruction method using a single image generated by a face-specific super-resolution technique. From several consecutive low-resolution frames, we generate a single high-resolution image and a three-dimensional facial model based on it. To do this, we apply the PME method to compute patch similarities for super-resolution after two-phase warping according to facial attributes. Based on the super-resolution image, we extract facial features automatically and reconstruct a 3D facial model, with bases selected adaptively according to facial statistical data, in less than a few seconds. Thereby, we can provide facial images from various viewpoints that cannot be obtained from the single viewpoint of a camera.
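
A crude stand-in for the multi-frame super-resolution step: upsample consecutive grayscale frames, align them to the first with ECC, and average them. This is not the face-specific, PME-based method of the paper; it assumes a recent OpenCV 4.x build and translation-only motion between frames.

```python
import cv2
import numpy as np

def naive_multiframe_sr(frames, scale=2):
    """Upsample each uint8 grayscale frame, align it to the first with ECC
    (translation model), and average the aligned frames."""
    up = [cv2.resize(f, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
          for f in frames]
    ref = up[0].astype(np.float32)
    acc, n = ref.copy(), 1
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    for f in up[1:]:
        f = f.astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)
        try:
            _, warp = cv2.findTransformECC(ref, f, warp, cv2.MOTION_TRANSLATION,
                                           criteria)
            aligned = cv2.warpAffine(f, warp, (ref.shape[1], ref.shape[0]),
                                     flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
            acc += aligned
            n += 1
        except cv2.error:
            pass                       # skip frames where ECC fails to converge
    return (acc / n).astype(np.uint8)
```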

A Study on the Feature Point Extraction and Image Synthesis in the 3-D Model Based Image Transmission System (3차원 모델 기반 영상전송 시스템에서의 특징점 추출과 영상합성 연구)

  • 배문관;김동호;정성환;김남철;배건성
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.7
    • /
    • pp.767-778
    • /
    • 1992
  • A method to extract feature points and to synthesize human facial images in a 3-D model-based coding system is discussed. Facial feature points are extracted automatically using image processing techniques and prior knowledge of the human face. A wire-frame model matched to the human face is transformed according to the motion of the extracted feature points. The synthesized image is produced by mapping the texture of the initial front-view image onto the transformed wire frame. Experimental results show that the synthesized image appears with little unnaturalness.

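The texture-mapping step described above (mapping the initial front-view texture onto the transformed wire frame) can be sketched per triangle with an affine warp, as below; the triangle coordinates and the OpenCV-based warping are illustrative assumptions, and a full system would repeat this for every facet of the wire-frame model.

```python
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, tri_src, tri_dst):
    """Copy the triangle tri_src from src_img into dst_img at tri_dst using an
    affine warp (one wire-frame facet of the texture mapping). Modifies dst_img."""
    tri_src, tri_dst = np.float32(tri_src), np.float32(tri_dst)
    r1 = cv2.boundingRect(tri_src)
    r2 = cv2.boundingRect(tri_dst)
    src_off = tri_src - np.float32([r1[0], r1[1]])
    dst_off = tri_dst - np.float32([r2[0], r2[1]])
    patch = src_img[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    M = cv2.getAffineTransform(src_off, dst_off)
    warped = cv2.warpAffine(patch, M, (r2[2], r2[3]), flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_REFLECT_101)
    mask = np.zeros((r2[3], r2[2]), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_off), 255)
    roi = dst_img[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]]
    roi[mask > 0] = warped[mask > 0]
```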

Face Detection Using Skin Color and Geometrical Constraints of Facial Features (살색과 얼굴 특징들의 기하학적 제한을 이용한 얼굴 위치 찾기)

  • Cho, Kyung-Min;Hong, Ki-Sang
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.12
    • /
    • pp.107-119
    • /
    • 1999
  • There is no definitive solution to the face detection problem, even though it is an important part of pattern recognition with many diverse application fields. The reason is that there are many unpredictable deformations due to facial expression, viewpoint, rotation, scale, gender, age, etc. To overcome these problems, we propose an algorithm based on a feature-based method, which is known to be robust to such deformations. We detect a face by calculating the similarity between a real facial feature configuration and candidate feature configurations consisting of eyebrows, eyes, nose, and mouth. In this paper, we use a steerable filter instead of a general derivative edge detector in order to obtain more accurate feature components. We apply a deformable template to verify the detected face, which overcomes the weak point of the feature-based method. To address the low detection rate and cost of running face detection over whole input images, we design an adaptive skin-color filter applicable to diverse skin colors, minimizing the target area and processing time.

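As a hedged illustration of the skin-color filtering step, the sketch below thresholds the Cr/Cb channels in YCrCb and cleans the mask with morphology; the bounds are commonly quoted illustrative values, not the adaptive per-image filter the paper designs.

```python
import cv2
import numpy as np

def skin_mask(bgr, lower=(0, 133, 77), upper=(255, 173, 127)):
    """Threshold the Cr/Cb channels for a rough skin mask, then clean it up
    with morphological opening and closing."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array(lower, np.uint8), np.array(upper, np.uint8))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

# Face candidates would be the connected components of skin_mask(frame) that
# satisfy the geometric eye/nose/mouth constraints described above.
```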