• Title/Summary/Keyword: active appearance model


Face Tracking System using Active Appearance Model (Active Appearance Model을 이용한 얼굴 추적 시스템)

  • Cho, Kyoung-Sic;Kim, Yong-Guk
• Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02a
    • /
    • pp.1044-1049
    • /
    • 2006
  • Face tracking is a key technology that supports many other vision-based HCI techniques, such as face recognition, facial expression recognition, and gesture recognition. Existing face-tracking methods rely on invariant image features such as color or contour, or on templates or appearance; because these methods react sensitively to external conditions such as lighting and background, they cannot be used in diverse environments, and accurately extracting only the face region is also difficult. This paper therefore proposes a face tracking system based on the AAM (Active Appearance Model), a deformable model that finds the shape and appearance most similar to the model. The proposed system uses the Independent AAM rather than the conventional Combined AAM, and improves fitting speed by adopting Inverse Compositional Image Alignment as the fitting algorithm. The training set for building the AAM consisted of 150 gray-scale images of faces in four configurations. The shape model was built by triangulating 47 manually annotated vertices per image into a single mesh of 71 triangles, and the appearance model was built from all pixels inside the shape. System performance was evaluated by measuring the accuracy of the shape coordinates after fitting.

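The linear shape model described in the abstract above (mean shape plus weighted deviations over annotated landmarks) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the single hand-picked basis vector stands in for the PCA basis a real AAM would learn, and the toy shapes are invented.

```python
# Minimal sketch of a linear shape model: shapes are flattened landmark
# coordinate vectors, and the model is the mean shape plus a weighted
# deviation vector (stand-in for a learned PCA basis).

def mean_shape(shapes):
    n = len(shapes)
    return [sum(s[i] for s in shapes) / n for i in range(len(shapes[0]))]

def fit_coefficient(shape, mean, basis):
    # least-squares weight for one basis vector: b = <x - mean, v> / <v, v>
    num = sum((x - m) * v for x, m, v in zip(shape, mean, basis))
    den = sum(v * v for v in basis)
    return num / den

def synthesize(mean, basis, b):
    return [m + b * v for m, v in zip(mean, basis)]

# toy example: 3 training "shapes" of 2 landmarks, flattened as (x1, y1, x2, y2)
train = [[0.0, 0.0, 1.0, 1.0],
         [0.2, 0.0, 1.2, 1.0],
         [0.4, 0.0, 1.4, 1.0]]
mu = mean_shape(train)
basis = [1.0, 0.0, 1.0, 0.0]   # horizontal-shift mode (hand-picked here)
b = fit_coefficient(train[2], mu, basis)
print(mu, b, synthesize(mu, basis, b))
```

With 47 vertices per image, as in the paper, each shape vector would simply have 94 entries instead of 4; the fitting step is identical.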

Facial Feature Tracking Using Adaptive Particle Filter and Active Appearance Model (Adaptive Particle Filter와 Active Appearance Model을 이용한 얼굴 특징 추적)

  • Cho, Durkhyun;Lee, Sanghoon;Suh, Il Hong
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.2
    • /
    • pp.104-115
    • /
    • 2013
  • For natural human-robot interaction, we need to know the location and shape of facial features in real environments. To track facial features robustly, a particle filter can be combined with an Active Appearance Model; however, the processing speed of this combination is too slow. In this paper, we propose two ideas to improve its efficiency: first, changing the number of particles according to the situation, and second, switching the prediction model according to the situation. Experimental results show that the proposed method is about three times faster than the plain combination of particle filter and Active Appearance Model, while its tracking performance is maintained.
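The first idea above, varying the particle count with the situation, can be sketched with a common particle-filter health measure. This is a hedged illustration only: the effective-sample-size trigger and the bounds are assumptions, not the authors' exact rule.

```python
# Illustrative sketch: scale the particle count with how well the current
# particles explain the observation, using effective sample size (ESS).

def adapt_particle_count(weights, n_min=50, n_max=500):
    # ESS is high when weights are uniform (healthy tracking) and
    # collapses toward 1 when a single particle dominates (degeneracy)
    ess = 1.0 / sum(w * w for w in weights)
    frac = ess / len(weights)            # 1.0 = healthy, near 0 = degenerate
    # fewer particles when tracking is confident, more when it degrades
    n = int(n_min + (1.0 - frac) * (n_max - n_min))
    return max(n_min, min(n_max, n))

uniform = [1.0 / 100] * 100              # healthy: all particles plausible
peaked = [0.99] + [0.01 / 99] * 99       # degenerate: one particle dominates
print(adapt_particle_count(uniform))     # small count suffices
print(adapt_particle_count(peaked))      # count grows toward n_max
```

The speedup claim in the abstract is consistent with this shape of rule: most frames are "healthy" and run with few particles, and the budget grows only when tracking degrades.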

Facial Feature Extraction using Multiple Active Appearance Model (Multiple Active Appearance Model을 이용한 얼굴 특징 추출 기법)

  • Park, Hyun-Jun;Kim, Kwang-Baek;Cha, Eui-Young
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.8 no.8
    • /
    • pp.1201-1206
    • /
    • 2013
  • The Active Appearance Model (AAM) is one of the standard facial feature extraction techniques. In this paper, we propose the Multiple Active Appearance Model (MAAM). The proposed method uses two AAMs, each trained with different training parameters, so that each has different strengths and one compensates for the weaknesses of the other. We performed facial feature extraction on 100 images to verify the performance of MAAM. Experimental results show that MAAM gives more accurate results than a single AAM with fewer fitting iterations.
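One plausible reading of the MAAM idea above is to run both trained fitters and keep whichever result reconstructs the image better. This is only a sketch under that assumption; the paper's actual combination rule may differ, and the fitters here are toy stand-ins.

```python
# Sketch: run several independently trained fitters and keep the result
# whose reconstruction error is lowest, so one model's strengths cover
# the other's weaknesses.

def fit_with_best_model(image, fitters):
    best_shape, best_err = None, float("inf")
    for fit in fitters:
        shape, err = fit(image)          # each fitter returns (shape, error)
        if err < best_err:
            best_shape, best_err = shape, err
    return best_shape, best_err

# toy fitters standing in for two AAMs trained with different parameters
fitter_a = lambda img: ([1, 2], 0.30)
fitter_b = lambda img: ([1, 3], 0.12)
shape, err = fit_with_best_model(None, [fitter_a, fitter_b])
print(shape, err)                        # the lower-error fit wins
```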

3D Active Appearance Model for Face Recognition (얼굴인식을 위한 3D Active Appearance Model)

  • Cho, Kyoung-Sic;Kim, Yong-Guk
• Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.1006-1011
    • /
    • 2007
  • Active Appearance Models are widely used for object modeling; face models in particular are widely used in face tracking, pose recognition, expression recognition, and face recognition. The original AAM was the Combined AAM, in which shape and appearance are generated from a single set of coefficients; later came the Independent AAM, which separates the shape and appearance coefficients, and the Combined 2D+3D AAM, which can represent 3D. Although the Combined 2D+3D AAM can express 3D, all of these models are built from 2D images. In this paper, we propose a 3D AAM based on 3D data acquired with a stereo-camera based 3D face capturing device. Because our 3D AAM builds its model from 3D information, it can represent 3D more accurately than existing AAMs, and by using Inverse Compositional Image Alignment (ICIA) as the alignment algorithm it can generate model instances quickly. To evaluate the 3D AAM, we performed face recognition on a Korean face database [9] collected with the stereo-camera based 3D face capturing device.


Localizing Head and Shoulder Line Using Statistical Learning (통계학적 학습을 이용한 머리와 어깨선의 위치 찾기)

  • Kwon, Mu-Sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.2C
    • /
    • pp.141-149
    • /
    • 2007
  • Associating the shoulder line with the head location of the human body is useful in verifying, localizing and tracking persons in an image. Since the head line and the shoulder line, what we call the ${\Omega}$-shape, move together in a consistent way within a limited range of deformation, we can build a statistical shape model using the Active Shape Model (ASM). However, when the conventional ASM is applied to ${\Omega}$-shape fitting, it is very sensitive to background edges and clutter because it relies only on the local edge or gradient. Even though appearance is a good alternative feature for matching the target object to the image, it is difficult to learn the appearance of the ${\Omega}$-shape because of the significant differences between people's skin, hair and clothes, and because appearance does not remain the same throughout the entire video. Therefore, instead of learning appearance or updating appearance as it changes, we model the discriminative appearance, where each pixel is classified into head, torso and background classes, and update the classifier to obtain the appropriate discriminative appearance in the current frame. Accordingly, we make use of two features in fitting the ${\Omega}$-shape: edge gradient, which is used for localization, and discriminative appearance, which contributes to the stability of the tracker. The simulation results show that the proposed method is very robust to pose change, occlusion, and illumination change in tracking the head and shoulder line of people. Another advantage is that the proposed method operates in real time.
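The discriminative-appearance step above, classifying each pixel into head, torso, or background and updating the classifier per frame, can be sketched with the simplest possible classifier. This is illustrative only: a nearest-class-mean rule on scalar intensities stands in for whatever classifier the paper trains, and the running-mean update rate is an assumption.

```python
# Sketch: classify each pixel by its nearest class mean, then nudge the
# means toward the current frame's pixels so the classifier tracks
# appearance change over the video.

def classify(pixel, means):
    return min(means, key=lambda c: abs(pixel - means[c]))

def update_means(labeled_pixels, means, rate=0.2):
    for pixel, label in labeled_pixels:
        means[label] += rate * (pixel - means[label])  # running mean
    return means

means = {"head": 200.0, "torso": 100.0, "background": 20.0}
print(classify(190.0, means))            # nearest mean is "head"
means = update_means([(180.0, "head"), (30.0, "background")], means)
print(round(means["head"], 1))           # mean drifts toward new appearance
```

The point of updating the classifier rather than the appearance itself, as the abstract argues, is that the head/torso/background decision stays meaningful even as skin, hair, and clothing pixels change across frames.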

Recognition of Human Facial Expression in a Video Image using the Active Appearance Model

  • Jo, Gyeong-Sic;Kim, Yong-Guk
    • Journal of Information Processing Systems
    • /
    • v.6 no.2
    • /
    • pp.261-268
    • /
    • 2010
  • Tracking human facial expression within a video image has many useful applications, such as surveillance and teleconferencing, etc. Initially, the Active Appearance Model (AAM) was proposed for facial recognition; however, it turns out that the AAM has many advantages as regards continuous facial expression recognition. We have implemented a continuous facial expression recognition system using the AAM. In this study, we adopt an independent AAM using the Inverse Compositional Image Alignment method. The system was evaluated using the standard Cohn-Kanade facial expression database, the results of which show that it could have numerous potential applications.
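Several entries on this page cite Inverse Compositional Image Alignment as the reason their Independent AAM fits quickly. The core trick, precomputing the gradient and Gauss-Newton step on the template rather than recomputing them on the image every iteration, can be shown in one dimension. This is a flavor-only sketch under strong simplifications (pure translation, 1D signal), not AAM fitting itself.

```python
# 1D inverse compositional alignment for a pure translation: the template
# gradient and the "Hessian" are computed once, outside the loop, which is
# what makes the method fast in the 2D AAM setting.

def icia_translation(template, image, iters=5, p=0.0):
    n = len(template)
    # gradient precomputed on the TEMPLATE via central differences
    grad = [(template[i + 1] - template[i - 1]) / 2.0 for i in range(1, n - 1)]
    h = sum(g * g for g in grad)         # 1-parameter Gauss-Newton Hessian
    for _ in range(iters):
        err = [image(i + p) - template[i] for i in range(1, n - 1)]
        dp = sum(g * e for g, e in zip(grad, err)) / h
        p = p - dp                        # invert and compose the update
    return p

template = [float(x) for x in range(10)]           # T(x) = x
image = lambda x: x - 3.0                          # template shifted by 3
print(round(icia_translation(template, image), 6))
```

Because the signal is linear, a single Gauss-Newton step already recovers the shift exactly; on real images the same loop runs a few iterations per frame.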

Real-time Facial Expression recognition System using Active Appearance Model and EFM (Active Appearance Model과 EFM을 이용한 실시간 얼굴 표정 인식 시스템)

  • Cho, Kyoung-Sic;Kim, Y.G.;Lee, Y.B.
• Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10b
    • /
    • pp.437-441
    • /
    • 2006
  • This paper describes a real-time facial expression recognition system based on the Active Appearance Model and EFM. AAMs are widely used in systems such as face tracking, face recognition, and object recognition. The AAM used in this system is an Independent AAM with Inverse Compositional Image Alignment, whose fast fitting speed makes it well suited to real-time systems. System performance was evaluated using expression images and image sequences from the Cohn-Kanade image database.


Emotion Recognition based on Tracking Facial Keypoints (얼굴 특징점 추적을 통한 사용자 감성 인식)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology
    • /
    • v.18 no.1
    • /
    • pp.97-101
    • /
    • 2019
  • Understanding and classifying human emotion are important tasks in human-machine communication systems. This paper proposes an emotion recognition method that extracts facial keypoints using the Active Appearance Model and classifies the emotion with a proposed classification model of the facial features. The appearance model captures expression variations, which the proposed classification model evaluates as the facial expression changes. The method classifies four basic emotions (normal, happy, sad and angry). To evaluate its performance, we measured the success rate on common datasets, achieving up to 93% accuracy and 82.2% on average in facial emotion recognition. The results show that the proposed method performs well in emotion recognition compared to existing schemes.
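A bare-bones version of the classification step above is nearest-template matching on keypoint displacements from a neutral face. This is a hedged sketch, not the paper's classifier: the four displacement templates and the squared-distance metric are invented for illustration.

```python
# Sketch: represent a face by its keypoint displacements from neutral and
# assign the nearest of four emotion templates.

def classify_emotion(displacement, templates):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda e: dist(displacement, templates[e]))

# 2 tracked keypoints -> 4 displacement values (dx1, dy1, dx2, dy2)
templates = {
    "normal": [0.0, 0.0, 0.0, 0.0],
    "happy":  [0.2, -0.3, -0.2, -0.3],   # e.g. mouth corners raised
    "sad":    [0.1, 0.3, -0.1, 0.3],     # e.g. mouth corners lowered
    "angry":  [0.0, -0.2, 0.0, 0.2],
}
print(classify_emotion([0.18, -0.25, -0.2, -0.28], templates))
```

A real system would use the full AAM keypoint set and a trained classifier, but the input/output shape is the same: displacement vector in, one of four labels out.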

Human Face Tracking and Modeling using Active Appearance Model with Motion Estimation

  • Tran, Hong Tai;Na, In Seop;Kim, Young Chul;Kim, Soo Hyung
    • Smart Media Journal
    • /
    • v.6 no.3
    • /
    • pp.49-56
    • /
    • 2017
  • Images and videos that include the human face contain a great deal of information, so accurately extracting the human face is an important problem in computer vision. In real life, however, human faces have various shapes and textures. A model-based approach adapts to these variations by representing unknown data with a model built from training data, but it breaks down when the motion between two frames is large, whether from a sudden pose change or fast movement. In this paper, we propose an enhanced face-tracking model that combines face detection and motion estimation using Cascaded Convolutional Neural Networks with continuous face tracking and model correction using the Active Appearance Model. The proposed system detects the face in the first input frame and initializes the model; on later frames, cascaded CNN face detection estimates the target's motion, such as location or pose, before the previous model is applied and fitted to the new target.
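The pipeline above can be summarized as a per-frame loop: detect coarsely, then refine by model fitting from that coarse estimate, so large inter-frame motion never strands the fitter far from the face. The sketch below is structural only; `detect` and `fit_model` are toy stand-ins for the paper's cascaded CNN and AAM.

```python
# Structural sketch of detection-seeded tracking: a per-frame detector
# supplies the initialization for model fitting, so the fitter is robust
# to large jumps between frames.

def track(frames, detect, fit_model):
    results = []
    for frame in frames:
        estimate = detect(frame)                  # coarse location/pose
        fitted = fit_model(frame, init=estimate)  # refine via model fit
        results.append(fitted)
    return results

# toy stand-ins: "frames" are face x-positions, detection is slightly off
frames = [10, 18, 40]                     # includes a large jump (fast motion)
detect = lambda f: f + 1                  # detector is off by one
fit_model = lambda f, init: init - 1      # fitting corrects the estimate
print(track(frames, detect, fit_model))
```

The jump from 18 to 40 in the toy sequence is the case the abstract highlights: a plain model-based tracker initialized from the previous frame would start far from the target, while the detector-seeded loop does not.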

Collaborative Local Active Appearance Models for Illuminated Face Images (조명얼굴 영상을 위한 협력적 지역 능동표현 모델)

  • Yang, Jun-Young;Ko, Jae-Pil;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.10
    • /
    • pp.816-824
    • /
    • 2009
  • In the face space, face images subject to illumination and pose variations have a nonlinear distribution. Active Appearance Models (AAM), being linear models, have limitations with such nonlinear distributions of face images. In this paper, we assume that a few clusters of face images are given; we build local AAMs for each cluster and select the appropriate AAM model during the fitting phase. To handle the problem of updating fitting parameters when switching between models, we propose building, in advance, relationships among the clusters in the parameter space from the training images. In addition, we suggest gradual model changing to reduce improper model selections caused by serious fitting failures. In our experiments, we applied the proposed model to the Yale Face Database B and compared it with the previous method. The proposed method produced successful fitting results on strongly illuminated face images with deep shadows.
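The two mechanisms in the abstract above, selecting a local model by cluster and carrying fitting parameters across a model switch, can be sketched together. Everything concrete here is an assumption for illustration: the scalar illumination descriptor, the cluster names, and the additive offset table standing in for the parameter-space relationships learned from training images.

```python
# Sketch: pick the local AAM whose cluster center best matches the current
# face's descriptor, and transfer fitting parameters between models via a
# precomputed cluster-to-cluster relationship (an offset table here).

def select_model(descriptor, cluster_centers):
    return min(cluster_centers, key=lambda c: abs(descriptor - cluster_centers[c]))

def transfer_params(params, src, dst, offsets):
    # relationship among clusters, learned in advance (assumed additive form)
    return [p + offsets[(src, dst)] for p in params]

centers = {"frontal-lit": 0.1, "left-shadow": 0.7}
offsets = {("frontal-lit", "left-shadow"): 0.5}
model = select_model(0.65, centers)
print(model)
print(transfer_params([1.0, 2.0], "frontal-lit", model, offsets))
```

The gradual model changing the paper also proposes would amount to dampening how often `select_model`'s answer is allowed to take effect, so a single badly-fit frame cannot force a spurious switch.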