• Title/Abstract/Keywords: facial recognition

Search results: 711 items (processing time: 0.029 seconds)

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed. Then, a double-layer cascade structure is used to detect a face in each video image. Two deep convolutional neural networks are then used to extract the temporal-domain and spatial-domain facial features from the video. The spatial convolutional neural network extracts spatial information features from each frame of the static expression images in the video, while the temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images. Multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method reach 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
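
The multiplicative fusion step this abstract describes can be sketched in a few lines. This is a minimal NumPy sketch under stated assumptions: the function name, the per-stream L2 normalization, and the toy vectors are illustrative, not taken from the paper, and the CNN feature extractors and SVM are not shown.

```python
import numpy as np

def fuse_features(spatial_feat, temporal_feat):
    """Element-wise (multiplicative) fusion of two feature vectors.

    Each stream is L2-normalized first (an assumption here, so neither
    stream dominates); the element-wise product would then be fed to a
    downstream classifier such as the SVM used in the paper.
    """
    s = spatial_feat / (np.linalg.norm(spatial_feat) + 1e-12)
    t = temporal_feat / (np.linalg.norm(temporal_feat) + 1e-12)
    return s * t

# Toy example: one feature vector per stream for a single video clip.
spatial = np.array([0.5, 1.0, 0.0, 2.0])   # from the spatial CNN (toy values)
temporal = np.array([1.0, 0.5, 3.0, 0.0])  # from the temporal CNN (toy values)
fused = fuse_features(spatial, temporal)
```

One property of multiplicative fusion this makes visible: a dimension that is zero in either stream is zeroed in the fused vector, so only features both streams respond to survive.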

Discrimination of Emotional States In Voice and Facial Expression

  • Kim, Sung-Ill;Yasunari Yoshitomi;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 21, No. 2E
    • /
    • pp.98-104
    • /
    • 2002
  • The present study describes a combination method to recognize human affective states such as anger, happiness, sadness, or surprise. For this, we extracted emotional features from voice signals and facial expressions, and then trained models to recognize emotional states using a hidden Markov model (HMM) and a neural network (NN). For voice, we used prosodic parameters such as pitch, energy, and their derivatives, which were trained by the HMM for recognition. For facial expressions, on the other hand, we used feature parameters extracted from thermal and visible images, which were trained by the NN for recognition. The recognition rates for the combined parameters obtained from voice and facial expressions showed better performance than either of the two isolated sets of parameters. The simulation results were also compared with human questionnaire results.
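
The abstract does not specify how the HMM and NN outputs are combined, so the following is only one common sketch of score-level fusion: per-emotion scores from the two modalities are normalized and mixed with a weight. The emotion list matches the abstract; everything else (function names, weight, toy scores) is an assumption.

```python
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "surprise"]

def combine_scores(hmm_scores, nn_scores, w=0.5):
    """Weighted score-level combination of per-emotion scores from the two
    modalities; each score vector is normalized to sum to 1 first."""
    h = np.asarray(hmm_scores, dtype=float)
    n = np.asarray(nn_scores, dtype=float)
    return w * (h / h.sum()) + (1 - w) * (n / n.sum())

def recognize(hmm_scores, nn_scores, w=0.5):
    """Pick the emotion with the highest combined score."""
    return EMOTIONS[int(np.argmax(combine_scores(hmm_scores, nn_scores, w)))]

# Voice alone is ambiguous between anger and surprise here; the facial
# scores tip the combined decision toward anger.
print(recognize([0.4, 0.1, 0.1, 0.4], [0.6, 0.2, 0.1, 0.1]))  # prints: anger
```

This illustrates why the combined parameters can beat either modality alone: one modality resolves ties the other cannot.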

Shift-Invariant Face Recognition Based on the Karhunen-Loeve Approximation of Amplitude Spectra of Fourier-Transformed Faces

  • 심영미;장주석;김종규
    • 전자공학회논문지C
    • /
    • Vol. 35C, No. 3
    • /
    • pp.97-107
    • /
    • 1998
  • In face recognition based on the Karhunen-Loeve approximation, amplitude spectra of Fourier-transformed facial images were used. We found that the use of amplitude spectra gives not only the shift-invariance property but also some improvement in recognition rate. This is because the distance between the varying faces of one person is small compared with that between different persons. We performed computer experiments on face recognition with varying facial images obtained from a total of 55 males and 25 females. We confirmed that the use of amplitude spectra of Fourier-transformed facial images gives a better recognition rate for a variety of varying facial images, including shifted ones, than using the facial images directly.

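
The shift-invariance property this paper relies on is a standard fact about the discrete Fourier transform: a circular shift changes only the phase, not the magnitude. A minimal NumPy demonstration (the random image is a stand-in for a face image):

```python
import numpy as np

def amplitude_spectrum(img):
    """Magnitude of the 2-D DFT; a circular shift of the input changes only
    the phase of the DFT, so the amplitude spectrum is unchanged."""
    return np.abs(np.fft.fft2(img))

rng = np.random.default_rng(0)
face = rng.random((8, 8))                            # stand-in for a face image
shifted = np.roll(face, shift=(2, 3), axis=(0, 1))   # circularly shifted copy

a1 = amplitude_spectrum(face)
a2 = amplitude_spectrum(shifted)
print(np.allclose(a1, a2))  # True
```

Because the amplitude spectra of the original and shifted images are identical, any recognizer built on them (here, the Karhunen-Loeve approximation) inherits shift invariance for free.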

Dynamic Emotion Classification through Facial Recognition

  • 한우리;이용환;박제호;김영섭
    • 반도체디스플레이기술학회지
    • /
    • Vol. 12, No. 3
    • /
    • pp.53-57
    • /
    • 2013
  • Human emotions are expressed in various ways: through language, facial expressions, and gestures. In particular, facial expressions carry a great deal of information about human emotion. These vague human emotions appear not as a single emotion but as a combination of various emotions. This paper proposes an emotional expression algorithm using the Active Appearance Model (AAM) and a Fuzzy k-Nearest Neighbor classifier, which yields facial expression labels that reflect this vagueness. Applying the Mahalanobis distance to the class centers, the degree of inclusion between the center class and each class is determined, and the intensity of each emotion follows from this inclusion level. Our emotion recognition system can recognize complex emotions using the Fuzzy k-NN classifier.
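
The inclusion-level computation described above can be sketched as follows. This assumes the fuzzy k-NN style inverse-distance soft assignment with Mahalanobis distances to per-class centers; the function names, the fuzzifier value, and the toy two-class example are illustrative, and the AAM feature extraction is not shown.

```python
import numpy as np

def mahalanobis(x, center, cov_inv):
    """Mahalanobis distance from sample x to a class center."""
    d = x - center
    return float(np.sqrt(d @ cov_inv @ d))

def memberships(x, centers, cov_invs, m=2.0):
    """Fuzzy membership of sample x in each emotion class.

    Inverse-distance weighting in the fuzzy k-NN style (fuzzifier m),
    normalized so the memberships sum to 1; each membership value can be
    read as the intensity of that emotion.
    """
    dists = np.array([mahalanobis(x, c, ci) for c, ci in zip(centers, cov_invs)])
    w = 1.0 / (dists ** (2.0 / (m - 1.0)) + 1e-12)
    return w / w.sum()

# Toy example: two emotion classes with identity covariance.
centers = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]
cov_invs = [np.eye(2), np.eye(2)]
u = memberships(np.array([1.0, 0.0]), centers, cov_invs)
```

A sample near one center but not far from another receives graded memberships in both classes, which is exactly how a combination of emotions, rather than a single hard label, can be reported.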

An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia pacific journal of information systems
    • /
    • Vol. 27, No. 1
    • /
    • pp.38-53
    • /
    • 2017
  • As sensor technologies and image processing technologies make it easy to collect information on users' behavior, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. Specifically, many multimodal studies using facial and body expressions have relied on normal cameras. Such studies therefore used a limited amount of information, because normal cameras generally produce only two-dimensional images. In the present research, we propose an artificial neural network-based model that uses a high-definition webcam and a Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The results of this research will be helpful for the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.

Real-time Recognition System of Facial Expressions Using Principal Component of Gabor-wavelet Features

  • 윤현섭;한영준;한헌수
    • 한국지능시스템학회논문지
    • /
    • Vol. 19, No. 6
    • /
    • pp.821-827
    • /
    • 2009
  • Facial expressions are an important means of conveying human emotions, and facial expression recognition is an effective way to infer emotional states. A typical facial expression recognition system finds feature points that describe the expression and extracts features from them without physical interpretation. However, feature-point extraction is time-consuming, and it is difficult to estimate the exact positions of the feature points. Moreover, to implement an expression recognition system on a real-time embedded system, the algorithm must be simplified and its resource usage reduced. The real-time facial expression recognition system proposed in this paper builds expression spaces from Gabor wavelet features obtained at fixed grid points, and classifies facial expressions by feeding the principal components obtained from each expression space to a neural network classifier. The proposed system can recognize five expressions, namely anger, happiness, neutrality, sadness, and surprise, and in various experiments it showed an average execution time of 10.25 ms and recognition performance of 87% to 93%.
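
The grid-point Gabor feature extraction and PCA steps described above can be sketched in NumPy. Kernel parameters, grid step, and orientation count are illustrative assumptions, and the neural network classifier stage is not shown; the key point is that responses are sampled at fixed grid positions, avoiding any feature-point search.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a Gabor kernel at orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def grid_gabor_features(img, step=8, ksize=9, n_orient=4):
    """Gabor responses sampled at fixed grid points: no costly search for
    facial feature points, as the abstract emphasizes."""
    half = ksize // 2
    feats = []
    for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
        k = gabor_kernel(ksize, theta=theta)
        for r in range(half, img.shape[0] - half, step):
            for c in range(half, img.shape[1] - half, step):
                patch = img[r - half:r + half + 1, c - half:c + half + 1]
                feats.append(float((patch * k).sum()))
    return np.array(feats)

def pca_project(X, n_components=2):
    """Project rows of X onto their top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

# Toy usage: features from a few random "face" images, then PCA.
rng = np.random.default_rng(1)
X = np.stack([grid_gabor_features(rng.random((32, 32))) for _ in range(5)])
Z = pca_project(X, n_components=2)
```

Fixed grid sampling plus a closed-form projection is what keeps the per-frame cost low enough for the embedded, real-time setting the paper targets.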

Facial Expression Recognition Using SIFT Descriptor

  • 김동주;이상헌;손명규
    • 정보처리학회논문지:소프트웨어 및 데이터공학
    • /
    • Vol. 5, No. 2
    • /
    • pp.89-94
    • /
    • 2016
  • This paper proposes a facial expression recognition method that uses SIFT descriptors as facial features and an SVM classifier. In object recognition, the SIFT descriptor is normally used as a feature descriptor for keypoints found by a keypoint detection step; in this paper, however, the SIFT descriptor is applied as a feature vector for facial expression recognition. Features for expression recognition are computed without any keypoint detection, by dividing the face image into sub-block images and applying the SIFT descriptor to each sub-block, and expression classification is performed with the SVM algorithm. Performance was evaluated against existing binary-pattern-based expression recognition methods such as LBP and LDP, using the public CK and JAFFE databases. The proposed SIFT-based method improved recognition results over the existing methods by 6.06% on the CK database and by 3.87% on the JAFFE database.
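
The dense, keypoint-free descriptor described above can be sketched as a per-block gradient-orientation histogram, the core ingredient of a SIFT descriptor. This is a simplified stand-in, not the full SIFT pipeline (no spatial weighting or trilinear binning), and the grid and bin counts are illustrative; the SVM stage is not shown.

```python
import numpy as np

def block_orientation_histograms(img, grid=4, n_bins=8):
    """SIFT-flavored dense descriptor: split the face image into grid x grid
    sub-blocks and build a magnitude-weighted gradient-orientation histogram
    per block, skipping keypoint detection entirely."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    h, w = img.shape
    bh, bw = h // grid, w // grid
    desc = []
    for i in range(grid):
        for j in range(grid):
            m = mag[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].ravel()
            a = ang[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 2 * np.pi), weights=m)
            norm = np.linalg.norm(hist)
            desc.append(hist / norm if norm > 0 else hist)
    return np.concatenate(desc)

# A 4x4 grid with 8 orientation bins yields a 128-dimensional descriptor,
# which would then be classified with an SVM as in the paper.
rng = np.random.default_rng(2)
desc = block_orientation_histograms(rng.random((32, 32)))
```

Fixing the block layout in advance is what removes the keypoint-detection step: every face yields a descriptor of the same length, which is exactly what a fixed-input classifier like an SVM needs.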

A Study on Improvement of Face Recognition Rate with Transformation of Various Facial Poses and Expressions

  • 최재영;황보 택근;김낙빈
    • 인터넷정보학회논문지
    • /
    • Vol. 5, No. 6
    • /
    • pp.79-91
    • /
    • 2004
  • Detecting and recognizing faces in various poses is a very difficult problem, because the distribution of varied poses in feature space is far more scattered and complex than that of frontal images. This paper therefore proposes a face recognition system that is robust to the varied poses and expressions of input images, which existing face recognition methods have treated as restrictions. The proposed method first detects the face region using a TLS model and then estimates the face pose from the facial components. The estimated pose is decomposed along the three-dimensional X, Y, and Z axes; in the second step, the face is matched using a deformable template built from the estimated vectors and the 3D CANDIDE model. Finally, the matched face is transformed, according to the analyzed pose and expression, into a normalized frontal face suitable for face recognition. Experiments demonstrated the validity of the face detection model and the pose estimation method, and confirmed that pose and expression normalization improves the recognition rate.

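
The final normalization step in the abstract undoes an estimated per-axis rotation to bring the face back to frontal. A minimal sketch on 3D landmark points: the rotation-composition order and the toy landmarks are assumptions, and the template-matching and CANDIDE-model steps are not shown.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Head-pose rotation composed from per-axis angles in radians
    (Z-roll after X-pitch after Y-yaw; this convention is an assumption)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # Y axis (yaw)
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # X axis (pitch)
    rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # Z axis (roll)
    return rz @ rx @ ry

def frontalize(points, yaw, pitch, roll):
    """Undo an estimated pose: apply the inverse (transpose) rotation to
    row-vector landmark points."""
    return points @ rotation_matrix(yaw, pitch, roll)

# Toy landmarks in frontal pose, rotated away and then recovered.
frontal = np.array([[0.0, 1.0, 0.2], [1.0, 0.0, 0.1], [-1.0, 0.0, 0.1]])
posed = frontal @ rotation_matrix(0.3, -0.2, 0.1).T
recovered = frontalize(posed, 0.3, -0.2, 0.1)
```

Because rotation matrices are orthogonal, applying the same angles in the inverse direction recovers the frontal geometry exactly, which is why accurate pose estimation is the step that makes the normalization work.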

Feature Variance and Adaptive Classifier for Efficient Face Recognition

  • ;남미영;이필규
    • 한국정보처리학회:학술대회논문집
    • /
    • Korea Information Processing Society 2007 Fall Conference Proceedings
    • /
    • pp.34-37
    • /
    • 2007
  • Face recognition is still a challenging problem in the pattern recognition field, affected by factors such as facial expression, illumination, and pose. Facial features such as the eyes, nose, and mouth constitute a complete face. The mouth region is particularly susceptible to the undesirable effects of facial expression, and many such factors contribute to low performance. We propose a new approach to face recognition under facial expression that applies two cascaded classifiers to improve the recognition rate. All facial expression images are first processed by a general-purpose classifier. All rejected images (determined by applying a threshold) are then used for adaptation with a genetic algorithm (GA) to improve the recognition rate. We apply Gabor wavelets as the general classifier and Gabor wavelets with a genetic algorithm for adaptation under expression variance. We have designed, implemented, and demonstrated the proposed approach. The FERET face image dataset was chosen for training and testing, and we achieved very good results.
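
The two-stage cascade with threshold-based rejection described above can be sketched as follows. The classifier stand-ins, threshold value, and labels are illustrative; the actual Gabor wavelet stages and the GA adaptation loop are not shown.

```python
def cascade_classify(x, general_clf, adapted_clf, threshold=0.7):
    """Two-stage cascade: the general classifier answers when its confidence
    clears the threshold; rejected samples fall through to the classifier
    adapted (in the paper, via a genetic algorithm) for expression variance."""
    label, confidence = general_clf(x)
    if confidence >= threshold:
        return label, "general"
    return adapted_clf(x)[0], "adapted"

# Toy stand-ins: each classifier returns a (label, confidence) pair.
general = lambda x: ("neutral", 0.9) if x < 5 else ("neutral", 0.3)
adapted = lambda x: ("smile", 0.8)

print(cascade_classify(3, general, adapted))  # ('neutral', 'general')
print(cascade_classify(7, general, adapted))  # ('smile', 'adapted')
```

The design point is that only the hard, expression-affected samples pay the cost of the adapted second stage; easy samples are settled cheaply by the general classifier.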

Facial Emotion Recognition in Older Adults With Cognitive Complaints

  • YongSoo Shim
    • 대한치매학회지
    • /
    • Vol. 22, No. 4
    • /
    • pp.158-168
    • /
    • 2023
  • Background and Purpose: Facial emotion recognition deficits impact daily life, particularly for Alzheimer's disease patients. We aimed to assess these deficits in the following three groups: subjective cognitive decline (SCD), mild cognitive impairment (MCI), and mild Alzheimer's dementia (AD). Additionally, we explored the associations between facial emotion recognition and cognitive performance. Methods: We used the Korean version of the Florida Facial Affect Battery (K-FAB) in 72 SCD, 76 MCI, and 76 mild AD subjects. The comparison was conducted using analysis of covariance (ANCOVA), with adjustments for age and sex. The Mini-Mental State Examination (MMSE) was used to gauge overall cognitive status, while the Seoul Neuropsychological Screening Battery (SNSB) was employed to evaluate performance in the following five cognitive domains: attention, language, visuospatial abilities, memory, and frontal executive functions. Results: The ANCOVA results showed significant differences in K-FAB subtests 3, 4, and 5 (p=0.001, p=0.003, and p=0.004, respectively), especially for the anger and fear emotions. Recognition of 'anger' in K-FAB subtest 5 declined from SCD to MCI to mild AD. Scores correlated with age and education, and after controlling for these factors, MMSE and frontal executive function were associated with the K-FAB tests, particularly subtest 5 (r=0.507, p<0.001 and r=-0.288, p=0.026, respectively). Conclusions: Emotion recognition deficits worsened from SCD to MCI to mild AD, especially for negative emotions. Complex tasks, such as matching, selection, and naming, showed greater deficits, with a connection to cognitive impairment, especially frontal executive dysfunction.