• Title/Summary/Keyword: Facial Expression Template

Robust Facial Expression Recognition Against Various Expression Intensity (표정 강도에 강건한 얼굴 표정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions: Part B, v.16B no.5, pp.395-402, 2009
  • This paper proposes a novel facial expression recognition approach that handles different expression intensities in order to improve recognition performance. The variety of expressions and intensities across individuals degrades recognition performance, yet the effect of differing expression intensities has seldom been studied. A facial expression template and an expression-intensity distribution model are introduced to recognize facial expressions at different intensities. These techniques improve recognition performance by describing how the displacements between facial parts, and between multiple interest points in the vicinity of those parts, vary across facial expressions and their intensities. The proposed method has the distinct advantage that expressions of different intensities can be recognized with a simple calibration, on video sequences as well as still images. Experimental results show that the method is robust enough to recognize facial expressions even at weak intensities.
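
The abstract does not spell out the expression-intensity distribution model, so the following is only a rough sketch of the displacement-template idea it describes: an expression is a unit displacement field over facial interest points, and its intensity is the scale of the observed displacement along that field. All names and the least-squares formulation are assumptions for illustration.

```python
# Minimal sketch (not the paper's exact formulation): an expression is
# a template of landmark displacement directions; intensity is modeled
# as the scale of the observed displacement along that template.
import numpy as np

def fit_intensity(neutral_pts, expr_pts, template):
    """Least-squares intensity of `template` explaining the observed shift.

    neutral_pts, expr_pts : (N, 2) landmark coordinates
    template              : (N, 2) displacement field for one expression
    """
    d = (expr_pts - neutral_pts).ravel()
    t = template.ravel()
    alpha = float(d @ t) / float(t @ t)          # projected intensity
    residual = float(np.linalg.norm(d - alpha * t))
    return alpha, residual

def classify(neutral_pts, expr_pts, templates):
    """Pick the expression template with the smallest residual;
    `templates` maps expression name -> (N, 2) displacement field."""
    scores = {name: fit_intensity(neutral_pts, expr_pts, t)
              for name, t in templates.items()}
    best = min(scores, key=lambda k: scores[k][1])
    return best, scores[best][0]                 # label, estimated intensity
```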

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS), v.2 no.2, pp.120-133, 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Accurate head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and this paper addresses both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates the 3D head pose from input video images: given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image, and by updating the template dynamically the head pose can be recovered despite lighting variations and self-occlusion. In the facial expression synthesis phase, the movements of the major facial feature points are tracked using optical flow and retargeted to the 3D face model, while an RBF (Radial Basis Function) is exploited to deform the local area of the face model around the major feature points. Facial expression synthesis is thus done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments demonstrate that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
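
As a rough illustration of the RBF deformation step named above, here is a minimal sketch assuming a Gaussian kernel and exact interpolation at the tracked control points (the abstract fixes neither choice):

```python
# Minimal sketch (an assumption, not the authors' code): Gaussian RBF
# interpolation that spreads tracked feature-point displacements to the
# surrounding, non-tracked vertices of a face mesh.
import numpy as np

def rbf_deform(vertices, ctrl_pts, ctrl_disp, sigma=0.05):
    """vertices  : (V, 3) mesh vertices
       ctrl_pts  : (C, 3) tracked feature points on the mesh
       ctrl_disp : (C, 3) displacements measured by optical flow
       returns deformed (V, 3) vertices."""
    phi = lambda r: np.exp(-(r / sigma) ** 2)            # Gaussian kernel
    # Solve for weights so control points move exactly as measured.
    K = phi(np.linalg.norm(ctrl_pts[:, None] - ctrl_pts[None], axis=-1))
    w = np.linalg.solve(K + 1e-9 * np.eye(len(ctrl_pts)), ctrl_disp)
    # Evaluate the interpolant at every mesh vertex.
    G = phi(np.linalg.norm(vertices[:, None] - ctrl_pts[None], axis=-1))
    return vertices + G @ w
```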

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions: Part B, v.14B no.4, pp.311-320, 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed expression control itself rather than 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model combined with template matching detects the facial region efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template: given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning, we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the moving feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from a geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are displaced by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
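
Both the head tracking and the feature tracking above rely on optical flow. A minimal sketch of feature-point tracking with pyramidal Lucas-Kanade flow, using OpenCV's calcOpticalFlowPyrLK; the window size and pyramid depth are illustrative assumptions, not the authors' settings:

```python
# Minimal sketch (assumed pipeline, not the authors' implementation):
# track facial feature points across frames with pyramidal Lucas-Kanade
# optical flow, the family of method the paper relies on.
import cv2
import numpy as np

def track_features(prev_gray, next_gray, prev_pts):
    """prev_pts: (N, 1, 2) float32 feature locations in prev_gray."""
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1            # keep successfully tracked points
    return prev_pts[good], next_pts[good]

# The displacements of the surviving points, (next_pts - prev_pts),
# would feed the animation parameters after removing the head motion.
```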

Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed;Kabir, Md. Hasanul;Chae, Oksam
    • ETRI Journal, v.32 no.5, pp.784-794, 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction, but these are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in eight directions at each pixel and encoding them into an 8-bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost savings and classification accuracy. Two well-known machine learning methods, template matching and the support vector machine, are used for classification on the Cohn-Kanade and Japanese Female Facial Expression databases. The higher classification accuracy shows the superiority of the LDP descriptor over other appearance-based feature descriptors.
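
A minimal sketch of the LDP encoding as described: eight directional edge responses per pixel from the Kirsch compass masks, with the k strongest directions set as bits. Here k = 3, the value commonly used in the LDP literature, which is an assumption since the abstract does not state it:

```python
# Minimal sketch of LDP codes (k = 3 assumed): per pixel, set the bits
# of the k strongest absolute Kirsch edge responses.
import numpy as np
from scipy.ndimage import convolve

def kirsch_masks():
    """Generate the 8 Kirsch compass masks by rotating the east mask."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = np.array([-3, -3, 5, 5, 5, -3, -3, -3])   # east mask border
    masks = []
    for k in range(8):
        m = np.zeros((3, 3))
        for (i, j), v in zip(ring, np.roll(base, k)):
            m[i, j] = v
        masks.append(m)
    return masks

def ldp_codes(gray, k=3):
    """8-bit LDP code per pixel of a grayscale image."""
    resp = np.stack([np.abs(convolve(gray.astype(float), m))
                     for m in kirsch_masks()])       # (8, H, W)
    topk = np.argsort(resp, axis=0)[-k:]             # k strongest directions
    codes = np.zeros(gray.shape, dtype=np.uint8)
    for d in topk:
        codes |= (1 << d).astype(np.uint8)           # set direction bit
    return codes

# The LDP descriptor of a patch is then the code histogram:
# hist = np.bincount(codes.ravel(), minlength=256)
```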

An Improved LBP-based Facial Expression Recognition through Optimization of Block Weights (블록가중치의 최적화를 통해 개선된 LBP기반의 표정인식)

  • Park, Seong-Chun;Koo, Ja-Young
    • Journal of the Korea Society of Computer and Information, v.14 no.11, pp.73-79, 2009
  • In this paper, a method is proposed that enhances the performance of facial expression recognition based on template matching of Local Binary Pattern (LBP) histograms. The face image is segmented into blocks, and an LBP histogram is constructed as the feature of each block. A block dissimilarity is calculated between each block of the input image and the corresponding block of the model image, and the image dissimilarity is defined as the weighted sum of the block dissimilarities. In conventional methods the block weights are assigned by intuition; here, a new method is proposed that optimizes the weights from training samples. Experiments show that the proposed method improves the recognition rate.
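
A minimal sketch of the weighted dissimilarity described above, assuming the chi-square statistic as the block dissimilarity (a usual choice for LBP histograms; the abstract does not name the measure):

```python
# Minimal sketch (chi-square block distance assumed): image dissimilarity
# as a weighted sum of per-block LBP histogram distances.
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def image_dissimilarity(blocks_a, blocks_b, weights):
    """blocks_* : (B, 256) per-block LBP histograms
       weights  : (B,) block weights, here learned from training samples
                  rather than set by intuition."""
    d = np.array([chi_square(a, b) for a, b in zip(blocks_a, blocks_b)])
    return float(weights @ d)
```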

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association, v.14 no.11, pp.7-15, 2014
  • This paper proposes a facial expression recognition algorithm using PCA and template matching. First, the face image is acquired from the input image using a Haar-like feature mask and divided into two images: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. Extraction of the facial components, such as the eyes and mouth, begins by obtaining the eye and mouth images. An eigenface is produced by PCA training on learning images, and an eigeneye and an eigenmouth are derived from it. The eye image is obtained by template-matching the upper image against the eigeneye, and the mouth image by template-matching the lower image against the eigenmouth. Expression recognition then uses geometrical properties of the eyes and mouth. Simulation results show that the proposed method achieves a higher extraction rate than previous methods; the extraction rate for the mouth image in particular reaches 99%. A recognition system using the proposed method achieves a recognition rate above 80% for three facial expressions: fright, anger, and happiness.
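
A minimal sketch of the eigen-template matching step: scan the upper face image and score each window by its reconstruction error in the trained eigeneye subspace. The scoring rule and all names here are assumptions for illustration, since the abstract does not define the matching score:

```python
# Minimal sketch (assumed formulation): locate the eye by scanning the
# upper face image and scoring windows by PCA reconstruction error
# against the trained "eigeneye" subspace.
import numpy as np

def best_window(region, mean, basis, win):
    """region : 2-D gray image of the upper face
       mean   : (D,) mean of training eye patches, D = win[0] * win[1]
       basis  : (D, K) top-K eigeneye vectors (orthonormal columns)"""
    h, w = win
    best, best_pos = np.inf, None
    for y in range(region.shape[0] - h + 1):
        for x in range(region.shape[1] - w + 1):
            p = region[y:y+h, x:x+w].ravel().astype(float) - mean
            err = np.linalg.norm(p - basis @ (basis.T @ p))  # residual
            if err < best:
                best, best_pos = err, (y, x)
    return best_pos, best
```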

Realtime Facial Expression Recognition from Video Sequences Using Optical Flow and Expression HMM (광류와 표정 HMM에 의한 동영상으로부터의 실시간 얼굴표정 인식)

  • Chun, Jun-Chul;Shin, Gi-Han
    • Journal of Internet Computing and Services, v.10 no.4, pp.55-70, 2009
  • Vision-based human-computer interaction is an emerging field of science and industry that aims to provide a natural way for humans and computers to communicate, and inferring a person's emotional state from facial expression recognition is an important issue in this field. In this paper, we present a novel approach to recognizing facial expressions from a sequence of input images using emotion-specific HMMs (Hidden Markov Models) and facial motion tracking based on optical flow. Conventionally, in an HMM consisting of basic emotional states, transitions between emotions are required to pass through the neutral state. In this work we propose an enhanced transition framework that allows direct transitions between emotional states, without passing through the neutral state, in addition to the traditional transitions. Facial features are localized in the video sequence by template matching and optical flow, and the feature displacements traced by the optical flow are used as input parameters to the HMMs for expression recognition. The experiments show that the proposed framework can effectively recognize facial expressions in real time.
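
A minimal sketch of the transition structure being contrasted, with purely illustrative probabilities: the conventional model lets each emotion return only to neutral, while the enhanced model also allows small direct emotion-to-emotion transitions:

```python
# Minimal sketch (all probabilities are illustrative assumptions, not
# the paper's values); rows/columns are ordered as in STATES.
import numpy as np

STATES = ["neutral", "happy", "sad", "angry", "surprised"]

# Conventional model: every emotion can only stay or return to neutral.
A_conventional = np.array([
    [0.6, 0.1, 0.1, 0.1, 0.1],   # neutral -> any emotion
    [0.4, 0.6, 0.0, 0.0, 0.0],   # happy   -> neutral or stay
    [0.4, 0.0, 0.6, 0.0, 0.0],
    [0.4, 0.0, 0.0, 0.6, 0.0],
    [0.4, 0.0, 0.0, 0.0, 0.6],
])

# Enhanced model: add small direct emotion-to-emotion transitions,
# then re-normalize each row to sum to 1.
A_enhanced = A_conventional.copy()
A_enhanced[1:, 1:] += 0.05
A_enhanced /= A_enhanced.sum(axis=1, keepdims=True)
```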

Recognizing Human Facial Expressions and Gesture from Image Sequence (연속 영상에서의 얼굴표정 및 제스처 인식)

  • 한영환;홍승홍
    • Journal of Biomedical Engineering Research, v.20 no.4, pp.419-425, 1999
  • In this paper, we present a real-time algorithm for recognizing facial expressions and gestures in gray-level image sequences. A hybrid of template matching and knowledge-based geometric reasoning about the face is used to locate the face region in the input image, and an optical flow method is applied to that region to recognize facial expressions. We also propose a hand-region detection algorithm that separates the hand from the background image by analyzing image entropy; with this modified hand-region detection it is possible to recognize hand gestures. Experiments showed that the suggested algorithm recognizes a person's facial expressions and hand gestures well by detecting the dominant motion region in the images, without being constrained by the background.
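
A minimal sketch of the entropy analysis used for hand-region detection, computing per-block Shannon entropy over a gray-level frame; the block size and thresholding rule are assumptions, as the abstract gives no details:

```python
# Minimal sketch (assumed details): local entropy as a cue for the
# hand region, computed over non-overlapping blocks of a frame.
import numpy as np

def block_entropy(gray, block=16):
    """gray: 8-bit grayscale frame (values 0-255).
    Returns a (H // block, W // block) map of Shannon entropies."""
    h, w = gray.shape
    ent = np.zeros((h // block, w // block))
    for i in range(ent.shape[0]):
        for j in range(ent.shape[1]):
            patch = gray[i*block:(i+1)*block, j*block:(j+1)*block]
            p = np.bincount(patch.ravel(), minlength=256) / patch.size
            p = p[p > 0]
            ent[i, j] = -np.sum(p * np.log2(p))
    return ent

# High-entropy blocks, e.g. ent > ent.mean() + ent.std(), would be the
# candidate hand/motion regions to refine further.
```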

Design and Implementation of a Real-Time Emotional Avatar (실시간 감정 표현 아바타의 설계 및 구현)

  • Jung, Il-Hong;Cho, Sae-Hong
    • Journal of Digital Contents Society, v.7 no.4, pp.235-243, 2006
  • This paper presents the development of an efficient method for expressing the emotion of an avatar based on facial expression recognition. Rather than changing the avatar's facial expression manually, the method changes it in real time based on recognition of the facial patterns captured by a webcam, and it provides a tool for recognizing parts of the captured images. Because it uses a model-based approach, the tool recognizes the images faster than template-based or network-based approaches. It extracts the shape of the user's lips after detecting the eyes using the model-based approach. From the changes in lip patterns, six avatar facial expressions are defined on the basis of 13 standard lip patterns, and the avatar switches expressions quickly by displaying a predefined avatar face with the corresponding expression.
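
A minimal sketch of the lip-pattern lookup implied above: classify the extracted lip shape against the 13 stored standard patterns and display the avatar expression mapped to the winner. All names and the nearest-template rule are illustrative assumptions:

```python
# Minimal sketch (illustrative assumptions throughout): pick the closest
# standard lip pattern, then look up the avatar expression to display.
import numpy as np

def avatar_expression(lip_contour, templates, pattern_to_expr):
    """lip_contour     : (N, 2) normalized lip contour from the tracker
       templates       : pattern id -> (N, 2) standard lip contour
                         (13 patterns in the paper's setup)
       pattern_to_expr : pattern id -> one of the six avatar expressions"""
    dists = {pid: np.linalg.norm(lip_contour - t)
             for pid, t in templates.items()}
    return pattern_to_expr[min(dists, key=dists.get)]
```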

Feature-Point Extraction by Dynamic Linking Model Based on Gabor Wavelets and Fuzzy C-Means Clustering Algorithm (Gabor 웨이브렛과 FCM 군집화 알고리즘에 기반한 동적 연결모형에 의한 얼굴표정에서 특징점 추출)

  • 신영숙
    • Korean Journal of Cognitive Science, v.14 no.1, pp.11-16, 2003
  • This paper extracts the edges of the main facial components from facial expression images using the Gabor wavelet transform. The FCM (Fuzzy C-Means) clustering algorithm then extracts representative, low-dimensional feature points from the edges found in the neutral face, and these neutral-face feature points are used as a template for extracting feature points from facial expression images. Matching the feature points of an expression face point-to-point against those of the neutral face is carried out in two steps of a dynamic linking model, called coarse mapping and fine mapping. The paper thus presents automatic feature-point extraction by a dynamic linking model based on Gabor wavelets and the fuzzy C-means algorithm, and the result was applied to automatic feature extraction in dimension-based facial expression recognition [1].
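
A minimal sketch of the two stages named in the abstract: a small Gabor filter bank for component edges, followed by plain fuzzy C-means over edge-pixel coordinates. The filter parameters, cluster count, and pixel-selection rule are illustrative assumptions:

```python
# Minimal sketch (parameter values are assumptions): Gabor edge maps,
# then fuzzy C-means to pick representative feature points.
import cv2
import numpy as np

def gabor_edges(gray, thetas=(0, 45, 90, 135)):
    """Max response over a small bank of Gabor filters."""
    acc = np.zeros(gray.shape, dtype=float)
    for t in thetas:
        k = cv2.getGaborKernel((21, 21), sigma=4.0, theta=np.deg2rad(t),
                               lambd=10.0, gamma=0.5)
        acc = np.maximum(acc, np.abs(cv2.filter2D(gray.astype(float), -1, k)))
    return acc

def fcm(points, c, m=2.0, iters=50):
    """Plain fuzzy C-means on (N, 2) edge-pixel coordinates;
    returns the c cluster centers used as feature points."""
    rng = np.random.default_rng(0)
    centers = points[rng.choice(len(points), c, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=-1) + 1e-9
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                         axis=2)                      # fuzzy memberships
        centers = (u.T ** m @ points) / np.sum(u.T ** m, axis=1,
                                               keepdims=True)
    return centers

# Example use: pick high-response edge pixels, then cluster them.
# edges = gabor_edges(gray)
# pts = np.argwhere(edges > edges.mean() + 2 * edges.std()).astype(float)
# feature_points = fcm(pts, c=20)
```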
