• Title/Summary/Keyword: Active Shape Models (ASM)

9 search results

Facial Expression Recognition with Instance-based Learning Based on Regional-Variation Characteristics Using Models-based Feature Extraction (모델기반 특징추출을 이용한 지역변화 특성에 따른 개체기반 표정인식)

  • Park, Mi-Ae; Ko, Jae-Pil
    • Journal of Korea Multimedia Society / v.9 no.11 / pp.1465-1473 / 2006
  • In this paper, we present an approach for facial expression recognition in image sequences using Active Shape Models (ASM) and a state-based model. Given an image frame, we use ASM to locate the facial feature points and obtain the corresponding shape parameter vector. Collecting these vectors over all frames of a sequence yields a shape parameter vector set, which the state-based model converts into a state vector whose entries take one of three states. In the classification step, we use k-NN with a proposed similarity measure motivated by the observation that the regions of variation in an expression sequence differ from those of other expression sequences. In experiments on the public KCFD database, the proposed measure slightly outperforms the binary measure: with k = 1, the k-NN recognition rates for the proposed and existing binary measures are 89.1% and 86.2%, respectively.
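
As a rough illustration of the classification step described in this abstract, the sketch below implements a k-NN classifier driven by a pluggable similarity function over per-region state vectors. The `region_similarity` function and the three-state encoding are hypothetical stand-ins, not the paper's actual measure or code.

```python
import numpy as np

# Hypothetical three-state encoding per facial region: -1 (decrease), 0 (stable), +1 (increase).
def region_similarity(a, b):
    """Toy similarity: fraction of regions whose states agree (an assumption, not the paper's measure)."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a == b)

def knn_predict(query, train_states, train_labels, k=1, sim=region_similarity):
    """Classify `query` by majority vote among the k most similar training state vectors."""
    sims = np.array([sim(query, s) for s in train_states])
    top_k = np.argsort(-sims)[:k]            # indices of the k largest similarities
    votes = [train_labels[i] for i in top_k]
    return max(set(votes), key=votes.count)

# Usage with made-up state vectors (one entry per facial region):
train_states = [[1, 0, -1, 0], [0, 1, 1, -1], [1, 0, -1, 1]]
train_labels = ["happy", "surprise", "happy"]
print(knn_predict([1, 0, -1, 0], train_states, train_labels, k=1))
```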

Improvement of Active Shape Model for Detecting Face Features in iOS Platform (iOS 플랫폼에서 Active Shape Model 개선을 통한 얼굴 특징 검출)

  • Lee, Yong-Hwan; Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.15 no.2 / pp.61-65 / 2016
  • Facial feature detection is a fundamental task in computer vision applications such as security, biometrics, 3D modeling, and face recognition. Among the many algorithms for this task, the active shape model (ASM) is one of the most popular local texture models. This paper addresses issues in face detection and implements an efficient algorithm for extracting facial feature points on the iOS platform. We extend the original ASM algorithm with four modifications to improve its performance. First, to detect a face and initialize the shape model, we apply the face detection API provided by the iOS CoreImage framework. Second, we construct a weighted local structure model for the landmarks to exploit the edge points along the face contour. Third, we modify the model definition and fit more landmarks than the classical ASM. Last, we extend the profile model to two dimensions for detecting faces in input images. The proposed algorithm is evaluated on an experimental test set of over 500 face images and successfully extracts facial feature points, clearly outperforming the original ASM.
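
For context on the classical ASM that this paper modifies, the sketch below shows the standard shape-constraint step: project a candidate shape onto the PCA shape space and clamp each mode to ±3 standard deviations. It is a generic textbook ASM step with assumed variable names, not the iOS implementation described in the paper.

```python
import numpy as np

def constrain_shape(x, mean_shape, P, eigenvalues, limit=3.0):
    """Project candidate shape x onto the PCA shape space and clamp each mode b_i to ±limit*sqrt(lambda_i)."""
    b = P.T @ (x - mean_shape)              # shape parameters of the candidate
    bound = limit * np.sqrt(eigenvalues)
    b = np.clip(b, -bound, bound)           # keep the shape statistically plausible
    return mean_shape + P @ b               # reconstruct the constrained shape

# Toy example: 2 landmarks (4 coordinates), 2 shape modes with assumed values.
mean_shape = np.array([0.0, 0.0, 1.0, 1.0])
P = np.linalg.qr(np.random.randn(4, 2))[0]  # orthonormal mode matrix (random stand-in)
eigenvalues = np.array([0.5, 0.1])
candidate = mean_shape + np.array([0.3, -0.2, 0.1, 0.4])
print(constrain_shape(candidate, mean_shape, P, eigenvalues))
```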

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun; Park, Seung-Min; Park, Jun-Heong; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.512-517 / 2011
  • In this paper, we propose a feature extraction and analysis method based on the Active Shape Model (ASM) and Lucas-Kanade (LK) optical flow for analyzing emotional features in facial images. Since facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models from combinations of landmarks and extract LK optical flow vectors at each landmark, using the landmark as the center pixel of the motion-vector window. The facial emotion features are modeled as combinations of the optical flow vectors, and the emotional state of a facial image can be estimated by a probabilistic technique such as a Bayesian classifier. We also extract optimal emotional features, accounting for the high correlation between feature points and emotional states, by using common spatial pattern (CSP) analysis in order to improve the efficiency and accuracy of the emotional feature extraction process.
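
A minimal sketch of extracting LK optical flow vectors at landmark positions, assuming OpenCV is available; the landmark coordinates and window size are placeholders, and this is not the authors' pipeline.

```python
import cv2
import numpy as np

def flow_at_landmarks(prev_gray, next_gray, landmarks, win=15):
    """Track ASM landmarks between two grayscale frames with pyramidal Lucas-Kanade."""
    pts = np.asarray(landmarks, dtype=np.float32).reshape(-1, 1, 2)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None, winSize=(win, win), maxLevel=2)
    flow = (next_pts - pts).reshape(-1, 2)       # per-landmark motion vectors
    return flow, status.ravel().astype(bool)     # status flags mark successfully tracked points

# Usage sketch (frames and landmarks are assumed to come from an ASM fit on real images):
# prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
# next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
# flow, ok = flow_at_landmarks(prev_gray, next_gray, asm_landmarks)
```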

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae; Kim, Eung-Hee; Sohn, Jin-Hun; Kweon, In So
    • The Journal of Korea Robotics Society / v.8 no.4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in practical applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and these inaccurate positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework that combines ASM with LK optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failures caused by partial occlusions, which are a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
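
The abstract does not spell out the paper's occlusion-handling step. One common way to flag tracking failure, sketched below under that assumption, is a forward-backward consistency check on the LK-tracked points (the threshold is arbitrary); this is a generic technique, not necessarily the authors' method.

```python
import cv2
import numpy as np

def forward_backward_check(prev_gray, next_gray, pts, thresh=1.0):
    """Flag landmarks whose forward-then-backward LK track does not return near the start point."""
    p0 = np.asarray(pts, dtype=np.float32).reshape(-1, 1, 2)
    p1, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
    p0_back, st2, _ = cv2.calcOpticalFlowPyrLK(next_gray, prev_gray, p1, None)
    fb_error = np.linalg.norm((p0 - p0_back).reshape(-1, 2), axis=1)
    reliable = (fb_error < thresh) & st1.ravel().astype(bool) & st2.ravel().astype(bool)
    # Unreliable points (e.g. under partial occlusion) can be re-initialized by a fresh ASM fit.
    return p1.reshape(-1, 2), reliable
```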

Development of Tongue Diagnosis System Using ASM and SVM (ASM과 SVM을 이용한 설진 시스템 개발)

  • Park, Jin-Woong; Kang, Sun-Kyung; Kim, Young-Un; Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.18 no.4 / pp.45-55 / 2013
  • In this study, we propose a tongue diagnosis system that detects the tongue in a face image, divides the tongue region into six areas, and computes the tongue-fur ratio of each area. To detect the tongue region in the face image, we use the active shape model (ASM). The detected tongue region is divided into six areas, and the distribution of tongue coating in each area is examined by an SVM. For the SVM, we use a 3-dimensional vector obtained by applying PCA to a 12-dimensional vector consisting of RGB, HSV, Lab, and Luv color components. As a result, the tongue region was detected stably using ASM, and PCA and SVM helped raise the tongue-coating detection rate.
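
The color-feature pipeline described above (a 12-dimensional RGB/HSV/Lab/Luv vector reduced to 3 dimensions by PCA and classified by an SVM) can be sketched with scikit-learn as below; the training data here is random placeholder data, and the exact preprocessing in the paper may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder training set: each row is a 12-dim color vector (R,G,B, H,S,V, L,a,b, L,u,v)
# sampled from the tongue region; labels mark coated (1) vs. uncoated (0) samples.
rng = np.random.default_rng(0)
X_train = rng.random((200, 12))
y_train = rng.integers(0, 2, 200)

# PCA down to 3 dimensions followed by an SVM, mirroring the pipeline in the abstract.
model = make_pipeline(PCA(n_components=3), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print(model.predict(rng.random((5, 12))))   # classify five new color vectors
```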

Video-based Facial Emotion Recognition using Active Shape Models and Statistical Pattern Recognizers (Active Shape Model과 통계적 패턴인식기를 이용한 얼굴 영상 기반 감정인식)

  • Jang, Gil-Jin; Jo, Ahra; Park, Jeong-Sik; Seo, Yong-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.3 / pp.139-146 / 2014
  • This paper proposes an efficient method for automatically distinguishing various facial expressions. To recognize emotions from facial expressions, facial images are captured by digital cameras and a number of feature points are extracted. The extracted feature points are then transformed into 49-dimensional feature vectors that are robust to scale and translational variations, and the facial emotions are recognized by statistical pattern classifiers such as Naive Bayes, MLP (multi-layer perceptron), and SVM (support vector machine). Based on experimental results with 5-fold cross validation, SVM performed best among the classifiers, achieving 50.8% accuracy for 6-emotion classification and 78.0% for 3 emotions.
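
A hedged sketch of the classifier comparison with 5-fold cross validation, using scikit-learn and random stand-in data in place of the 49-dimensional shape features; it only illustrates the evaluation protocol, not the reported numbers.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in data: 300 samples of 49-dim feature vectors with 6 emotion labels.
rng = np.random.default_rng(0)
X = rng.random((300, 49))
y = rng.integers(0, 6, 300)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "MLP": MLPClassifier(hidden_layer_sizes=(50,), max_iter=500),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross validation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```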

Lung Segmentation Considering Global and Local Properties in Chest X-ray Images (흉부 X선 영상에서의 전역 및 지역 특성을 고려한 폐 영역 분할 연구)

  • Jeon, Woong-Gi; Kim, Tae-Yun; Kim, Sung Jun; Choi, Heung-Kuk; Kim, Kwang Gi
    • Journal of Korea Multimedia Society / v.16 no.7 / pp.829-840 / 2013
  • In this paper, we propose a new lung segmentation method for chest X-ray images that takes both global and local properties into account. First, an initial lung segmentation is computed by applying the active shape model (ASM), which constrains the deformable model to the shapes learned from training data while searching the image boundaries. In the second segmentation stage, we apply the localizing region-based active contour model (LRACM) to correct various regional errors in the initial segmentation. Finally, to measure similarity, we calculate the Dice coefficient between the area segmented by each semiautomatic method and the area manually segmented by a radiologist. The comparison experiments were performed on 5 chest X-ray images; for the proposed method, the Dice coefficient against the manual segmentation was 95.33% ± 0.93%. Effective segmentation methods will be essential for developing computer-aided diagnosis systems for more accurate early diagnosis and prognosis of lung cancer in chest X-ray images.
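
For reference, the Dice coefficient used above to compare an automatic segmentation with the radiologist's manual mask can be computed as in the short sketch below (binary NumPy masks assumed).

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks of equal shape."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Example with two small toy masks:
auto = np.array([[1, 1, 0], [0, 1, 0]])
manual = np.array([[1, 1, 0], [0, 0, 0]])
print(dice_coefficient(auto, manual))   # 2*2 / (3+2) = 0.8
```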

3D Face Alignment and Normalization Based on Feature Detection Using Active Shape Models : Quantitative Analysis on Aligning Process (ASMs을 이용한 특징점 추출에 기반한 3D 얼굴데이터의 정렬 및 정규화 : 정렬 과정에 대한 정량적 분석)

  • Shin, Dong-Won; Park, Sang-Jun; Ko, Jae-Pil
    • Korean Journal of Computational Design and Engineering / v.13 no.6 / pp.403-411 / 2008
  • The alignment of facial images is crucial for 2D face recognition, and the same holds for facial meshes in 3D face recognition. Most 3D face recognition methods mention 3D alignment but do not describe their approaches in detail. In this paper, we focus on describing an automatic 3D alignment from the viewpoint of quantitative analysis. We present a framework for 3D face alignment and normalization based on feature points obtained by Active Shape Models (ASMs). The positions of the eyes and mouth make it possible to align the 3D face exactly in three-dimensional space. The rotational transform about each axis is defined with respect to a reference position; in the alignment process, this transform converts input 3D faces with large pose variations to the reference frontal view. A facial region is then cropped from the aligned face using a sphere centered at the nose tip of the 3D face. The cropped face is shifted and brought into a frame of specified size for normalization. Subsequently, interpolation is applied to the face to resample it at equal intervals and to fill holes, and color interpolation is carried out at the same intervals. The outputs are normalized 2D and 3D faces that can be used for face recognition. Finally, we carry out two sets of experiments to measure alignment errors and evaluate the performance of the suggested process.
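
The alignment and cropping steps above can be sketched as follows: rotate the point cloud about the x, y, and z axes toward a reference frontal pose, then keep only vertices within a sphere around the nose tip. The angles and radius are placeholders, not values from the paper.

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Compose rotations about the x, y, and z axes (angles in radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def align_and_crop(vertices, nose_tip, rx, ry, rz, radius=80.0):
    """Rotate a 3D face toward the reference frontal view, then crop a sphere centered at the nose tip."""
    R = rotation_matrix(rx, ry, rz)
    rotated = (vertices - nose_tip) @ R.T                 # rotate about the nose tip
    keep = np.linalg.norm(rotated, axis=1) <= radius      # spherical crop
    return rotated[keep]
```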

Three-dimensional Model Generation for Active Shape Model Algorithm (능동모양모델 알고리듬을 위한 삼차원 모델생성 기법)

  • Lim, Seong-Jae; Jeong, Yong-Yeon; Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.6 s.312 / pp.28-35 / 2006
  • Statistical models of shape variability based on active shape models (ASMs) have been used successfully for segmentation and recognition tasks in two-dimensional (2D) images. Three-dimensional (3D) model-based approaches are more promising than 2D approaches because they impose more realistic shape constraints for recognizing and delineating object boundaries. For 3D model-based approaches, however, building the 3D shape model from a training set of segmented instances of an object is a major challenge and currently remains an open problem. One essential step in building the 3D shape model is generating a point distribution model (PDM). Corresponding landmarks must be selected in all training shapes to generate the PDM, and manual determination of landmark correspondences is very time-consuming, tedious, and error-prone. In this paper, we propose a novel automatic method for generating 3D statistical shape models. Given a set of training 3D shapes, we generate a 3D model by 1) building the mean shape from the distance transforms of the training shapes, 2) using a tetrahedron method to automatically select landmarks on the mean shape, and 3) propagating these landmarks to each training shape via a distance labeling method. We investigate the accuracy and compactness of a 3D model of the human liver built from 50 segmented individual CT data sets. The proposed method is general and can be applied to other data sets.
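
Once landmark correspondences have been propagated to every training shape, the point distribution model itself is built by PCA over the stacked landmark coordinates. The generic sketch below shows that final step with placeholder data; it does not reproduce the distance-transform or tetrahedron machinery described in the paper.

```python
import numpy as np

def build_pdm(landmark_sets, variance_kept=0.98):
    """Build a point distribution model: mean shape plus the principal modes of variation."""
    X = np.array([s.ravel() for s in landmark_sets])    # each row: one shape's landmarks, flattened
    mean_shape = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    eigvals = S**2 / (len(X) - 1)                       # variance explained by each mode
    n_modes = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), variance_kept) + 1
    return mean_shape, Vt[:n_modes].T, eigvals[:n_modes]

# Placeholder: 50 training shapes, each with 100 corresponded 3D landmarks.
rng = np.random.default_rng(0)
shapes = rng.random((50, 100, 3))
mean_shape, modes, eigvals = build_pdm(shapes)
print(modes.shape)   # (300, n_modes)
```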