• Title/Summary/Keyword: Facial expression


High Efficiency Adaptive Facial Expression Recognition based on Incremental Active Semi-Supervised Learning (점진적 능동준지도 학습 기반 고효율 적응적 얼굴 표정 인식)

  • Kim, Jin-Woo;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.2 / pp.165-171 / 2017
  • It is difficult to recognize human facial expressions in the real world; high accuracy is typically achieved only when the training database and the test data share similar conditions. Solving this problem requires large amounts of facial expression data. In this paper, we propose an algorithm for gathering facial expression data from various environments and reaching high accuracy quickly. The algorithm trains an initial model with ASSL (Active Semi-Supervised Learning) using a deep learning network, then gathers unlabeled facial expression data and repeats the process. Through ASSL, we obtain suitable data and high accuracy with less labeling effort.
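
A minimal sketch of the active semi-supervised loop described above, with a logistic regression standing in for the authors' deep network; the confidence threshold, query size, and the helper name `assl_round` are illustrative assumptions, not details from the paper:

```python
# Hedged sketch of one ASSL round: self-label confident pool samples
# (semi-supervised step) and queue uncertain ones for human labeling
# (active step). A logistic regression stands in for the deep network.
import numpy as np
from sklearn.linear_model import LogisticRegression

def assl_round(model, X_lab, y_lab, X_pool, conf_hi=0.95, n_query=20):
    model.fit(X_lab, y_lab)
    proba = model.predict_proba(X_pool)
    conf = proba.max(axis=1)

    # Semi-supervised step: adopt the model's labels where it is confident.
    sure = conf >= conf_hi
    pseudo = model.classes_[proba[sure].argmax(axis=1)]
    X_lab = np.vstack([X_lab, X_pool[sure]])
    y_lab = np.concatenate([y_lab, pseudo])

    # Active step: route the least confident remaining samples to an annotator.
    cand = np.where(~sure)[0]
    query = cand[np.argsort(conf[cand])[:n_query]]
    keep = np.ones(len(X_pool), bool)
    keep[sure] = False
    keep[query] = False
    return model, X_lab, y_lab, X_pool[keep], X_pool[query]

model = LogisticRegression(max_iter=1000)  # repeat assl_round() until the pool
                                           # is exhausted or accuracy plateaus
```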

Feature Extraction Method of 2D-DCT for Facial Expression Recognition (얼굴 표정인식을 위한 2D-DCT 특징추출 방법)

  • Kim, Dong-Ju;Lee, Sang-Heon;Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering / v.3 no.3 / pp.135-138 / 2014
  • This paper devises a facial expression recognition method robust to overfitting using the 2D-DCT and the EHMM algorithm. In particular, it achieves enhanced recognition performance by setting a large window size for 2D-DCT feature extraction when extracting the observation vectors of the EHMM. Experimental results on the CK and JAFFE facial expression databases showed that recognition accuracy improved as the window size increased. When the CK database was used for training and the JAFFE database for testing, the proposed method achieved a recognition accuracy of 87.79%, an improvement of 46.01% to 50.05% over previous approaches based on histogram features.
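
For concreteness, a small sketch of block-wise 2D-DCT feature extraction of the kind described above, using SciPy; the window size, stride, and number of retained coefficients are illustrative, not the paper's settings:

```python
# Slide a window over the face image and keep the low-frequency 2D-DCT
# coefficients of each block as one observation vector (e.g. for an EHMM).
import numpy as np
from scipy.fft import dctn

def dct_observations(img, win=16, stride=8, n_coef=6):
    h, w = img.shape
    obs = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            block = img[y:y + win, x:x + win].astype(np.float64)
            coef = dctn(block, norm='ortho')            # 2D DCT-II of the block
            obs.append(coef[:n_coef, :n_coef].ravel())  # low-frequency corner
    return np.array(obs)

face = np.random.randint(0, 255, (64, 64))   # placeholder grayscale face
print(dct_observations(face).shape)          # (observations, n_coef * n_coef)
```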

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems / v.17 no.2 / pp.337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed and a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks are then used to extract the temporal-domain and spatial-domain facial features in the video: the spatial convolutional neural network extracts spatial information features from each frame of the static expression images, while the temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images. Multiplicative fusion is performed on the spatiotemporal features learned by the two networks. Finally, the fused features are input to a support vector machine to carry out the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the proposed method reaches recognition rates of 88.67%, 70.32%, and 63.84%, respectively; comparative experiments show that it obtains higher recognition accuracy than other recently reported methods.
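
The fusion-and-classification stage can be sketched briefly; random vectors stand in below for the two CNNs' outputs, and the element-wise product implements the multiplicative fusion ahead of the SVM:

```python
# Two-stream fusion sketch: element-wise (multiplicative) fusion of spatial
# and temporal feature vectors, then SVM classification. The CNN extractors
# are stubbed out; the feature vectors and labels are random placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 200, 128
spatial_feat = rng.normal(size=(n, d))    # per-frame appearance features
temporal_feat = rng.normal(size=(n, d))   # optical-flow motion features
labels = rng.integers(0, 6, size=n)       # six expression classes

fused = spatial_feat * temporal_feat      # multiplicative fusion
clf = SVC(kernel='rbf').fit(fused, labels)
print(clf.score(fused, labels))
```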

Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • The KIPS Transactions:PartA / v.11A no.4 / pp.277-284 / 2004
  • This paper presents a facial expression control method for a 3D avatar that enables the user to select a sequence of facial frames from a facial expression space whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames, but because there are too many facial expressions to select from, the user has difficulty navigating the space. We therefore visualize the space hierarchically, using fuzzy clustering to partition it into a hierarchy of subspaces. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions; the cluster centers are displayed on the 2D screen and serve as candidate key frames for key-frame animation. When the user zooms in (zoom is discrete), the user wants to see more detail, so the system creates more clusters for the new zoom level, doubling the number of clusters each time the zoom-in level increases. The user selects new key frames along the navigation path of the previous level, and at the maximum zoom-in completes the facial expression control specification. The user can also return to the previous level by zooming out and update the navigation path. We let users control the facial expression of a 3D avatar with the system and evaluate the system based on the results.
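
A hedged sketch of the clustering machinery behind this visualization: a minimal fuzzy c-means in NumPy (not the authors' code), doubling the cluster count at each zoom level as described; the 2D frame embedding and all parameters are placeholders:

```python
# Fuzzy c-means over the frame set, with the cluster count doubled per zoom
# level; cluster centers would be displayed as candidate key frames.
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))             # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

frames = np.random.default_rng(1).normal(size=(2400, 2))  # 2D-embedded frames
for level in range(4):                     # each zoom-in doubles the clusters
    c = 11 * 2 ** level                    # level 0: ~11 candidate key frames
    centers, _ = fuzzy_cmeans(frames, c)
    print(level, centers.shape)
```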

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu;Park, Han-Hoon;Shin, Hong-Chang;Jin, Yoon-Jong;Park, Jong-Il
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.667-674 / 2007
  • Facial expressions provide significant clues about one's emotional state; however, recognizing facial expressions effectively and reliably has always been a great challenge for machines. In this paper, we report a method of feature-based adaptive motion energy analysis for recognizing facial expressions. Our method optimizes the information gain heuristics of the ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use a minimal set of reasonable facial features, suggested by the information gain heuristics of the ID3 tree, to represent the geometric face model. For feature extraction, features are first detected and then carefully selected: feature selection means identifying the features with high variability and distinguishing them from those with low variability, so that each feature's motion pattern can be estimated effectively. Motion analysis is then performed adaptively for each facial feature; that is, each feature's motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1,728 possible facial expressions) and test images from the JAFFE database. The proposed method overcomes the problems raised by previous methods. First, it is simple but effective: it reliably estimates the expressive facial features by differentiating features with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations and instead exploiting the motion energy values of a few selected expressive features (acquired from an intensity-based threshold). Lastly, it gives reliable results, with an overall recognition rate of 77%, as demonstrated by the experimental results.
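
A brief illustration of the ID3-style information-gain heuristic used above to pick a minimal feature set; the discretized motion-energy features and expression labels are random placeholders:

```python
# Rank discrete features by ID3 information gain with respect to the labels.
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(x, y):
    """Gain of one discrete-valued feature column x for labels y."""
    h = entropy(y)
    for v, n in zip(*np.unique(x, return_counts=True)):
        h -= (n / len(y)) * entropy(y[x == v])
    return h

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(100, 5))   # discretized motion-energy features
y = rng.integers(0, 6, size=100)        # expression labels
ranking = sorted(range(X.shape[1]), key=lambda j: -information_gain(X[:, j], y))
print(ranking)                          # most informative features first
```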

A study on the Effect of Surface Processing and Expression Elements of Game Characters on the Uncanny Valley Phenomenon (게임 캐릭터의 표면처리와 표현요소가 Uncanny Valley 현상에 미치는 영향에 관한 연구)

  • Yin, Shuo Han;Kwon, Mahn Woo;Hwang, Mi Kyung
    • Journal of Korea Multimedia Society / v.25 no.7 / pp.964-972 / 2022
  • The Uncanny Valley phenomenon has already been established theoretically, and the characteristics of game character expression elements related to it were identified through case analysis. Through theoretical consideration and case studies, it was found that the influential elements of the Uncanny Valley phenomenon can be classified into two primary factors: character surface treatment and facial expression animation. The prepared experimental materials and adjectives were measured on a five-point Likert scale, and the results were evaluated for both influence and comparative analysis through basic statistical analysis and repeated-measures ANOVA in SPSS. The conclusions drawn from this research are as follows. The surface treatment of characters did not substantially affect the Uncanny Valley phenomenon; instead, the character's expression animation had a significant impact on it, leading to the further conclusion that facial expression animation has an overall deeper impact on the Uncanny Valley phenomenon than the character's surface treatment. Across all of the independent variables, it was unnatural facial expression animation that caused the Uncanny Valley phenomenon. For game characters to avoid the Uncanny Valley phenomenon and enhance game immersion, the character's facial expression animation must be natural.
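
The repeated-measures ANOVA the authors ran in SPSS can be reproduced in outline with statsmodels; the subjects, conditions, and Likert ratings below are random placeholders, not the study's data:

```python
# Repeated-measures ANOVA sketch with statsmodels (standing in for SPSS).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
conditions = ['surface_treatment', 'expression_animation']
rows = [{'subject': s, 'condition': c,
         'rating': rng.integers(1, 6)}          # five-point Likert response
        for s in range(30) for c in conditions]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar='rating', subject='subject',
              within=['condition']).fit()
print(res.anova_table)
```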

Facial Expression Recognition Using SIFT Descriptor (SIFT 기술자를 이용한 얼굴 표정인식)

  • Kim, Dong-Ju;Lee, Sang-Heon;Sohn, Myoung-Kyu
    • KIPS Transactions on Software and Data Engineering / v.5 no.2 / pp.89-94 / 2016
  • This paper proposes a facial expression recognition approach using SIFT features and an SVM classifier. SIFT has generally been employed as a feature descriptor at key-points in object recognition; this paper instead applies the SIFT descriptor as a feature vector for facial expression recognition. The facial features are extracted by applying the SIFT descriptor to each sub-block of the image without a key-point detection procedure, and facial expression recognition is performed using an SVM classifier. Performance was evaluated through comparison with binary-pattern-feature-based approaches such as LBP and LDP on the CK and JAFFE facial expression databases. The experimental results show that the proposed method using the SIFT descriptor improves performance by 6.06% and 3.87% over previous approaches on the CK and JAFFE databases, respectively.
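
A sketch of the central idea, SIFT descriptors computed on a fixed grid of sub-block centers rather than at detected key-points, using OpenCV and an SVM; the grid spacing, patch size, and placeholder images are illustrative:

```python
# Dense SIFT: describe each grid point with a 128-D SIFT descriptor (no
# keypoint detection), concatenate the descriptors, and classify with an SVM.
import cv2
import numpy as np
from sklearn.svm import SVC

def dense_sift(gray, step=16, size=16):
    kps = [cv2.KeyPoint(float(x), float(y), float(size))
           for y in range(step // 2, gray.shape[0], step)
           for x in range(step // 2, gray.shape[1], step)]
    _, desc = cv2.SIFT_create().compute(gray, kps)
    return desc.ravel()

imgs = [np.random.randint(0, 255, (128, 128), np.uint8) for _ in range(20)]
X = np.array([dense_sift(g) for g in imgs])   # one vector per face image
y = np.random.randint(0, 7, 20)               # seven expression labels
clf = SVC(kernel='linear').fit(X, y)
```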

3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee, Don-Soo;Choi, Soo-Mi;Kim, Hae-Hwang;Kim, Yong-Guk
    • The KIPS Transactions:PartB / v.12B no.7 s.103 / pp.795-802 / 2005
  • In this paper, we present a facial expression recognition-synthesis system that automatically recognizes seven basic emotions and renders a face in a non-photorealistic style on a PDA. To recognize facial expressions, we first detect the face area within the image acquired from the camera; a normalization procedure is then applied for geometric and illumination corrections. To classify a facial expression, we found that Gabor wavelets combined with the enhanced Fisher model give the best results; in our case, the output is a set of seven emotion weights. This weighting information, transmitted to the PDA via a mobile network, is used for non-photorealistic facial expression animation. To render a 3D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves expresses the timing of an expression more effectively than the linear interpolation method.
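
The recognition stage can be sketched as a Gabor filter bank followed by Fisher discriminant analysis; plain LDA stands in here for the enhanced Fisher model, and the filter-bank parameters and data are illustrative placeholders:

```python
# Gabor responses at several orientations, downsampled and concatenated,
# then classified with LDA (a stand-in for the enhanced Fisher model).
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_features(gray, n_orient=4):
    feats = []
    for k in range(n_orient):
        kern = cv2.getGaborKernel((21, 21), sigma=4.0,
                                  theta=np.pi * k / n_orient,
                                  lambd=10.0, gamma=0.5)
        resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
        feats.append(cv2.resize(resp, (16, 16)).ravel())
    return np.concatenate(feats)

imgs = [np.random.randint(0, 255, (64, 64), np.uint8) for _ in range(40)]
X = np.array([gabor_features(g) for g in imgs])
y = np.random.randint(0, 7, 40)             # seven basic emotions
lda = LinearDiscriminantAnalysis().fit(X, y)
weights = lda.predict_proba(X[:1])          # per-emotion weighting for one face
```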

The Congruent Effects of Gesture and Facial Expression of Virtual Character on Emotional Perception: What Facial Expression is Significant? (가상 캐릭터의 몸짓과 얼굴표정의 일치가 감성지각에 미치는 영향: 어떤 얼굴표정이 중요한가?)

  • Ryu, Jeeheon;Yu, Seungbeom
    • The Journal of the Korea Contents Association / v.16 no.5 / pp.21-34 / 2016
  • In designing and developing a virtual character, it is important to correctly deliver target emotions generated by the combination of facial expression and gesture. The purpose of this study is to examine the effect of congruence or incongruence between gesture and facial expression on the target emotion. Four emotions were applied: joy, sadness, fear, and anger. The results showed that the sadness emotion was incorrectly perceived; moreover, it was perceived as anger instead of sadness. Sadness was easily confused when facial expression and gesture were presented simultaneously, whereas for the other emotional states the intended emotional expressions were correctly perceived. The overall evaluation of the virtual character's emotional expression was significantly low when a joy gesture was combined with a sad facial expression. These results suggest that emotional gestures are more influential in correctly delivering target emotions to users. The study also suggests that social cues such as the gender or age of the virtual character should be studied further.

A Recognition Framework for Facial Expression by Expression HMM and Posterior Probability (표정 HMM과 사후 확률을 이용한 얼굴 표정 인식 프레임워크)

  • Kim, Jin-Ok
    • Journal of KIISE:Computing Practices and Letters / v.11 no.3 / pp.284-291 / 2005
  • I propose a framework for detecting, recognizing, and classifying facial features based on learned expression patterns. The framework recognizes facial expressions using PCA and an expression HMM (EHMM), a Hidden Markov Model approach that represents both the spatial information and the temporal dynamics of time-varying visual expression patterns. Because the low-level spatial feature extraction is fused with the temporal analysis, this unified spatio-temporal HMM approach to the common detection, tracking, and classification problems is effective. Recognition is accomplished by applying the posterior probability between current visual observations and previous visual evidence. Consequently, the framework shows accurate and robust recognition results on the six basic facial expression patterns as well as simple expressions. The method allows us to perform a set of important tasks such as facial expression recognition, HCI, and key-frame extraction.
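
A loose sketch of classification by posterior probability over per-expression HMMs; hmmlearn's 1D GaussianHMM stands in for the paper's 2D embedded EHMM, and the observation sequences are random placeholders:

```python
# Train one HMM per expression class; classify a sequence by the class whose
# model gives the highest posterior (equal priors reduce this to likelihood).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
classes = range(6)                          # six basic expression patterns
models = {}
for c in classes:
    seqs = rng.normal(c, 1.0, size=(10 * 20, 8))  # 10 sequences of 20 frames
    models[c] = GaussianHMM(n_components=3, n_iter=20).fit(
        seqs, lengths=[20] * 10)

test = rng.normal(2, 1.0, size=(20, 8))     # one observation sequence
pred = max(classes, key=lambda c: models[c].score(test))  # log p(O | class)
print(pred)
```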