• Title/Summary/Keyword: face expression recognition


Facial Point Classifier using Convolution Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.22 no.3 / pp.241-246 / 2016
  • Nowadays many people are interested in facial expressions and human behavior. Human-robot interaction (HRI) researchers apply digital image processing, pattern recognition, and machine learning to these studies. Facial feature point detection algorithms are very important for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting the feature points from some images, because images differ in conditions such as size, color, and brightness. Therefore, this paper proposes an algorithm that modifies the cascade facial feature point detector by using a convolutional neural network. The structure of the convolutional neural network is based on Yann LeCun's LeNet-5. Outputs from the cascade facial feature point detector, as both color and gray images, were used as input data for the convolutional neural network; the images were resized to 32×32 and, in addition, the gray images were converted to the YUV format. The gray and color images are the inputs to the convolutional neural network. We then classified about 1,200 test images of subjects. This research found that the proposed method is more accurate than the cascade facial feature point detector alone, because the algorithm refines the detector's results.
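
As a rough illustration of the kind of LeNet-5-style network the abstract describes for 32×32 patches, here is a minimal PyTorch sketch; the channel counts, activations, and two-class output (valid vs. invalid feature point) are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class LeNetStylePatchClassifier(nn.Module):
    """LeNet-5-style CNN for 32x32 facial feature-point patches (illustrative)."""
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 6, kernel_size=5),  # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                           # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),           # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                           # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: score a batch of candidate patches produced by a cascade detector.
patches = torch.randn(8, 3, 32, 32)              # 8 color (or YUV) 32x32 patches
scores = LeNetStylePatchClassifier()(patches)    # shape: (8, 2)
```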

Facial Characteristic Point Extraction for Representation of Facial Expression (얼굴 표정 표현을 위한 얼굴 특징점 추출)

  • Oh, Jeong-Su;Kim, Jin-Tae
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.1 / pp.117-122 / 2005
  • This paper proposes an algorithm for Facial Characteristic Point (FCP) extraction. The FCP plays an important role in expression representation for face animation, avatar mimicry, and facial expression recognition. Conventional algorithms extract the FCP with an expensive motion capture device or by using markers, which are inconvenient and impose a psychological burden on the subjects. The proposed algorithm avoids these problems by using image processing alone. For efficient FCP extraction, we analyze and improve conventional algorithms for detecting the facial components that are the basis of FCP extraction.

The relationship between autistic features and empathizing-systemizing traits (자폐성향과 공감-체계화능력 간의 관계)

  • Cho, Kyung-Ja;Kim, Jung-K.
    • Science of Emotion and Sensibility / v.14 no.2 / pp.245-256 / 2011
  • This study consists of two sections examining the relationship between autistic traits and empathizing-systemizing traits. In the first section, 355 university students were measured on the EQ, SQ-R, and AQ. The results showed that AQ was negatively correlated with EQ and with the D score (the relative difference between an individual's EQ and SQ-R), but was not significantly related to SQ-R. That is, a subject has a high AQ if his or her EQ is relatively lower than the SQ-R. In the second section, the subjects were divided into two groups based on their AQ scores: subjects with a tendency toward autism and subjects without. The test measured how the two groups differed in facial expression recognition according to the tendency toward autism, the facial area presented (whole face, eyes alone, mouth alone), and the type of emotion (basic and complex emotions). The subjects with a tendency toward autism were less accurate at judging facial expressions than those without. The subjects also judged better for basic emotions than for complex emotions, and for the whole face than for the eyes alone or mouth alone. In particular, in the eyes-alone condition, the subjects with a tendency toward autism were less accurate at judging facial expressions than those without. This study suggests that empathizing traits and facial expression recognition are related to the tendency toward autism.


Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.2 / pp.271-279 / 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotion from physiological signals; such recognition is important for applying emotion detection to human-computer interaction systems. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to the participants, and physiological signals, i.e., EDA (electrodermal activity), SKT (skin temperature), PPG (photoplethysmogram), and ECG (electrocardiogram), were measured for 1 minute as baseline and for 1~1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds of the baseline and of the emotional state, and 27 features were extracted from them. Statistical analysis for emotion classification was done by DFA (discriminant function analysis; SPSS 15.0) using the difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from baseline, and the accuracy rate of emotion classification was 84.7%. Conclusion: Our study showed that emotions can be classified from various physiological signals. However, future work is needed to obtain additional signals from other modalities, such as facial expression, face temperature, or voice, to improve the classification rate, and to examine the stability and reliability of this result compared with the accuracy of emotion classification using other algorithms. Application: This gives emotion recognition studies a better chance of recognizing various human emotions using physiological signals and can be applied to human-computer interaction systems for emotion recognition. It can also be useful for developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis for emotion recognition systems in human-computer interaction.
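
A minimal sketch of the classification step, using scikit-learn's LinearDiscriminantAnalysis as a stand-in for the SPSS discriminant function analysis and synthetic data in place of the real physiological features (the array shapes mirror the paper's 122 participants and 27 baseline-subtracted features, but the values here are random):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 122 participants x 27 features, each value already
# expressed as (emotional state - baseline), as described in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(122, 27))
y = rng.integers(0, 3, size=122)   # 0 = boredom, 1 = pain, 2 = surprise

lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, y, cv=5).mean()
print(f"Cross-validated classification accuracy: {accuracy:.3f}")
```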

A Face Recognition Method Robust to Variations in Lighting and Facial Expression (조명 변화, 얼굴 표정 변화에 강인한 얼굴 인식 방법)

  • Yang, Hui-Seong;Kim, Yu-Ho;Lee, Jun-Ho
    • Journal of KIISE: Software and Applications / v.28 no.2 / pp.192-200 / 2001
  • This paper proposes an efficient face recognition method that is robust to face images with lighting changes, expression changes, and partial occlusion, and that requires little memory and computation. The method, named SKKUface (Sungkyunkwan University face), first applies PCA (principal component analysis) to the training images to reduce dimensionality, and generates a new feature vector space from which the subspaces corresponding to lighting changes, facial expression changes, and the like are excluded as much as possible. Because this feature vector space mainly contains only the intrinsic features of the face, applying a Fisher linear discriminant to it separates the classes more effectively and greatly improves the recognition rate. The SKKUface method also proposes and applies a technique that drastically reduces the memory and computation time needed to compute the between-class covariance matrix and the within-class covariance matrix. To evaluate the face recognition performance of the proposed SKKUface method, its recognition rate was compared with the well-known Eigenface and Fisherface methods on the YALE, SKKU, and ORL (Olivetti Research Laboratory) face databases. Experimental results show that the proposed SKKUface method achieves considerably better recognition rates than the Eigenface and Fisherface methods on face images with lighting changes and partial occlusion.

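As a rough sketch of the PCA-followed-by-Fisher-discriminant pipeline the abstract describes (not the SKKUface implementation itself, which adds its own subspace pruning and memory optimizations), the basic idea in scikit-learn could look like this; the image size, number of components, and synthetic data are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a face database: 200 flattened 32x32 images, 20 identities.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 32 * 32))
y = rng.integers(0, 20, size=200)

# PCA reduces dimensionality first; Fisher LDA then separates the identity classes.
model = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
model.fit(X, y)
predicted_ids = model.predict(X[:5])   # identities of the first five images
```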

Interaction Intent Analysis of Multiple Persons using Nonverbal Behavior Features (인간의 비언어적 행동 특징을 이용한 다중 사용자의 상호작용 의도 분석)

  • Yun, Sang-Seok;Kim, Munsang;Choi, Mun-Taek;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.19 no.8 / pp.738-744 / 2013
  • According to cognitive science research, the interaction intent of humans can be estimated through an analysis of their expressed behaviors. This paper proposes a novel methodology for reliable intention analysis of humans based on this approach. To identify the intention, 8 behavioral features are extracted from 4 characteristics of human-human interaction, and we outline a set of core components of nonverbal human behavior. These nonverbal behaviors are captured by recognition modules built on multimodal sensors: localization of the speaker's sound source in the auditory part; recognition of the frontal face and facial expression in the vision part; and estimation of human trajectories, body pose and leaning, and hand gestures in the spatial part. As a post-processing step, temporal confidence reasoning is utilized to improve the recognition performance, and an integrated human model quantitatively classifies the intention from multi-dimensional cues by applying weight factors. Thus, interactive robots can make informed engagement decisions to effectively interact with multiple persons. Experimental results show that the proposed scheme works successfully between human users and a robot in human-robot interaction.
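
The weighted integration of multi-dimensional cues can be pictured with a toy fusion function; the cue names, weights, and threshold below are invented for illustration and are not the paper's eight behavioral features or its integrated human model:

```python
# Toy weighted fusion of nonverbal-behavior cues into an engagement-intent score.
def intent_score(cues: dict, weights: dict) -> float:
    """Each cue is a confidence in [0, 1] reported by a recognition module."""
    total = sum(weights.values())
    return sum(weights[name] * cues.get(name, 0.0) for name in weights) / total

cues = {"facing_robot": 0.9, "speaking": 0.7, "approaching": 0.4, "hand_gesture": 0.2}
weights = {"facing_robot": 0.35, "speaking": 0.30, "approaching": 0.20, "hand_gesture": 0.15}

if intent_score(cues, weights) > 0.5:
    print("Robot decides to engage with this person.")
```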

Facial Expression Classification Using Deep Convolutional Neural Network (깊은 Convolutional Neural Network를 이용한 얼굴표정 분류 기법)

  • Choi, In-kyu;Song, Hyok;Lee, Sangyong;Yoo, Jisang
    • Journal of Broadcast Engineering / v.22 no.2 / pp.162-172 / 2017
  • In this paper, we propose facial expression recognition using a CNN (convolutional neural network), one of the deep learning technologies. Various databases are used to overcome the disadvantages of existing facial expression databases. In the proposed technique, we construct data sets for six facial expressions: 'expressionless', 'happiness', 'sadness', 'anger', 'surprise', and 'disgust'. Pre-processing and data augmentation techniques are also applied to improve learning efficiency and classification performance. Starting from an existing CNN structure, the optimal structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of nodes in the fully-connected layers. Experimental results show that the proposed scheme achieves the highest classification performance, 96.88%, while taking the least time to pass through the CNN structure compared to other models.
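
As an illustration of the pre-processing and data augmentation step the abstract mentions, a typical torchvision transform pipeline could look like the following; the specific operations, image size, and parameters are assumptions, not the authors' exact settings:

```python
from torchvision import transforms

# Illustrative augmentation pipeline for facial expression images.
train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((48, 48)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),
])
# Applied per image (e.g. inside a Dataset's __getitem__) before feeding a CNN
# whose final layer outputs 6 class scores: expressionless, happiness, sadness,
# anger, surprise, disgust.
```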

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3473-3487 / 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two kinds of example models, called key visemes and key expressions, are used for lip-synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on the phonetic transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During the synthesis process, an importance-based scheme is introduced to combine both lip-synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation with high accuracy (over 90%) in speech recognition.
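
A bare-bones sketch of the keyframe-interpolation idea behind viseme-based lip-sync (simple linear blending between two key viseme shapes; the mesh layout and timing are invented for illustration, and the paper's scattered data interpolation of key expressions is not reproduced here):

```python
import numpy as np

def blend_visemes(viseme_a: np.ndarray, viseme_b: np.ndarray, t: float) -> np.ndarray:
    """Linearly interpolate between two key viseme meshes; t is in [0, 1]."""
    return (1.0 - t) * viseme_a + t * viseme_b

# Toy key visemes: each is a flat array of 3D vertex positions of a face mesh.
viseme_mm = np.random.rand(300)   # lips closed, e.g. for the consonant /m/
viseme_ah = np.random.rand(300)   # mouth open, e.g. for the vowel /a/

# Sample five in-between frames for the /m/ -> /a/ transition.
frames = [blend_visemes(viseme_mm, viseme_ah, t) for t in np.linspace(0.0, 1.0, 5)]
```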

Facial Expression Recognition using the geometric features of the face (얼굴의 기하학적 특징을 이용한 표정 인식)

  • Woo, hyo-jeong;Lee, seul-gi;Kim, dong-woo;Song, Yeong-Jun;Ahn, jae-hyeong
    • Proceedings of the Korea Contents Association Conference / 2013.05a / pp.289-290 / 2013
  • This paper proposes a facial expression recognition system based on the geometric features of the face. First, face detection is performed using Haar-like feature masks. The detected face is divided into an upper region containing the eyes and a lower region containing the mouth, which simplifies the extraction of facial components. The facial components are extracted by template matching with eigen-eyes and eigen-mouths obtained through PCA. The facial components are the eyes and the mouth, and expressions are recognized from the geometric features of these two components. The feature values of the eyes and mouth are compared with per-expression thresholds determined experimentally to recognize the expression. Unlike most previous work, the proposed method additionally uses the pupil ratio to raise the recognition rate above existing expression recognition algorithms. Experimental results confirm that the recognition rate is improved compared with previous work.

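For the face detection step, a minimal OpenCV sketch using a Haar cascade and the upper/lower face split described above might look as follows; the image file name is a placeholder, and the eigen-eye/eigen-mouth template matching is not shown:

```python
import cv2

# OpenCV's bundled Haar cascade for frontal faces (shipped with opencv-python).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("face.jpg")                  # placeholder image file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    upper = gray[y:y + h // 2, x:x + w]         # region containing the eyes
    lower = gray[y + h // 2:y + h, x:x + w]     # region containing the mouth
```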

Effects of Image compression on face recognition (영상압축이 얼굴인식에 미치는 영향)

  • Kim, Chang-Han;Kim, Ji-Hoon;Lee, Chul-Hee
    • Proceedings of the KIEE Conference / 2007.10a / pp.447-448 / 2007
  • Face recognition acquires an image from a camera and judges the registered image with the highest similarity to be the identity of the acquired image. For accurate judgment, face recognition generally uses images that retain all of the information captured at acquisition time, i.e., uncompressed images. In large-scale face recognition systems, however, images may have to be compressed to save storage space. This paper examines the effect of image compression on face recognition. The compression formats used are JPEG, JPEG2K (JPEG 2000), and SPIHT. Even with the same face recognition algorithm, the recognition rate varies with the format of the acquired face images. Illumination, expression, and pose are the main factors that affect the recognition rate. Therefore, to explain the effect of compression on face recognition, experiments were conducted with data whose illumination conditions differ between images.

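As an illustration of how compressed test images for such an experiment might be generated (JPEG only; JPEG 2000 and SPIHT would require other encoders), a small Pillow-based sketch with placeholder file names:

```python
from PIL import Image

# Re-encode a face image at several JPEG quality levels so that recognition
# accuracy can later be measured as a function of compression strength.
source = Image.open("face.png").convert("RGB")   # placeholder input file

for quality in (90, 70, 50, 30, 10):
    source.save(f"face_q{quality}.jpg", format="JPEG", quality=quality)
    # Each saved file would then be fed to the face recognition system and
    # its recognition rate recorded against the quality setting.
```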