Facial Expression Recognition through Feature-Point-Based Adaptive Facial Motion Analysis

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression

  • 노성규 (Virtual Reality Lab, Department of Electronics, Communication and Computer Engineering, Hanyang University) ;
  • 박한훈 (Virtual Reality Lab, Department of Electronics, Communication and Computer Engineering, Hanyang University) ;
  • 신홍창 (Virtual Reality Lab, Department of Electronics, Communication and Computer Engineering, Hanyang University) ;
  • 진윤종 (Virtual Reality Lab, Department of Electronics, Communication and Computer Engineering, Hanyang University) ;
  • 박종일 (Virtual Reality Lab, Department of Electronics, Communication and Computer Engineering, Hanyang University)
  • Published: 2007.02.05

Abstract

Facial expressions provide significant clues about one's emotional state; however, recognizing facial expressions effectively and reliably has always been a great challenge for machines. In this paper, we present a method of feature-based adaptive motion energy analysis for recognizing facial expressions. Our method optimizes the information gain heuristic of the ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use a minimal reasonable set of facial features, suggested by the information gain heuristic of the ID3 tree, to represent the geometric face model. Feature extraction proceeds as follows. Features are first detected and then carefully "selected": features with high variability are differentiated from those with low variability, so that each feature's motion pattern can be estimated effectively. Motion analysis is then performed adaptively for each facial feature; that is, each feature's motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using an ID3 tree built from the 1,728 possible facial expressions, with test images drawn from the JAFFE database. The proposed method overcomes problems raised by previous methods. First, it is simple but effective: it reliably estimates the expressive facial features by differentiating features with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations; instead, it exploits the motion energy values of a few selected expressive features, acquired by intensity-based thresholding. Lastly, our method gives a reliable overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.
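The two computational steps the abstract leans on, intensity-based motion energy per feature region and ID3 information gain for feature selection, can be summarized in a short sketch. The code below is not the authors' implementation; motion_energy, information_gain, and the toy regions and labels are hypothetical names illustrating the general technique under assumed conventions (grayscale frames as NumPy arrays, feature motion discretized before computing gain).

    # Illustrative sketch only (not the paper's code): motion energy by
    # intensity-difference thresholding, and ID3 information gain for
    # ranking facial features by variability. All names are hypothetical.
    import numpy as np

    def motion_energy(neutral, expressed, region, thresh=25):
        """Fraction of pixels in a feature region whose gray-level change
        between the neutral and expressed frames exceeds a threshold."""
        y0, y1, x0, x1 = region
        diff = np.abs(expressed[y0:y1, x0:x1].astype(int)
                      - neutral[y0:y1, x0:x1].astype(int))
        return float((diff > thresh).mean())

    def entropy(labels):
        """Shannon entropy (bits) of a label array."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    def information_gain(feature_values, labels):
        """ID3 information gain of a discretized feature:
        H(labels) - H(labels | feature)."""
        gain = entropy(labels)
        for v in np.unique(feature_values):
            mask = feature_values == v
            gain -= mask.mean() * entropy(labels[mask])
        return gain

    # Toy usage: a feature that separates the classes perfectly scores
    # 1.0 bit of gain; an uninformative one scores 0.0.
    labels = np.array(["happy", "happy", "sad", "sad"])
    brow   = np.array([1, 1, 0, 0])   # high variability across expressions
    cheek  = np.array([0, 1, 0, 1])   # varies independently of the label
    print(information_gain(brow, labels), information_gain(cheek, labels))

In this toy run, the "brow" feature perfectly separates the two expressions and receives the maximum gain, while the "cheek" feature is uninformative; this mirrors the abstract's idea of keeping only the few features whose variability actually discriminates between expressions.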

Keywords