• Title/Summary/Keyword: Face expression

Search Result 453, Processing Time 0.024 seconds

Developmental Changes in Emotional-States and Facial Expression (정서 상태와 얼굴표정간의 연결 능력의 발달)

  • Park, Soo-Jin;Song, In-Hae;Ghim, Hei-Rhee;Cho, Kyung-Ja
    • Science of Emotion and Sensibility
    • /
    • v.10 no.1
    • /
    • pp.127-133
    • /
    • 2007
  • The present study investigated whether the ability to read emotional states from facial expressions changes with age (3-year-olds, 5-year-olds, and university students), sex (male, female), the presented facial area (whole face vs. eyes only), and the type of emotion (basic vs. complex). The stimuli were 32 facial expressions of emotional states that are relatively strongly linked with emotion vocabulary, collected by photographing professional actors performing the expressions. Each participant was presented with stories designed to evoke particular emotions and was then asked to choose the facial expression that the story's main character would have made in the situation described. The results showed that facial-expression reading ability improves with age. Participants also performed better in the whole-face condition than in the eyes-only condition, and with basic emotions than with complex emotions. While females showed no performance difference between the presented areas, males performed better in the whole-face condition than in the eyes-only condition. These results demonstrate that age, the presented facial area, and the type of emotion affect the estimation of other people's emotions from facial expressions.

  • PDF

Robust Facial Expression-Recognition Against Various Expression Intensity (표정 강도에 강건한 얼굴 표정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB
    • /
    • v.16B no.5
    • /
    • pp.395-402
    • /
    • 2009
  • This paper proposes a novel facial expression recognition approach that handles different expression intensities to improve recognition performance. Variation in expressions and their intensities across individuals degrades the performance of facial expression recognition, yet the effect of differing expression intensities has seldom been studied. This paper introduces a facial expression template and an expression-intensity distribution model to recognize facial expressions at different intensities. These techniques improve recognition performance by describing how the displacements among multiple interest points in the vicinity of facial parts vary across facial expressions and their intensities. A distinct advantage of the proposed method is that recognition across different intensities can be performed with a simple calibration, on video sequences as well as still images. Experimental results show that the method is robust enough to recognize facial expressions even at weak intensities.

The clinical study on 2 cases of patients with head and face symptoms of stress (Stress로 인한 두면부(頭面部) 증상(症狀) 치료(治療) 2례(例)에 대한 증례보고(證例報告))

  • Park, Jung-Hyeun;Lee, Hyun
    • Journal of Haehwa Medicine
    • /
    • v.15 no.1
    • /
    • pp.71-78
    • /
    • 2006
  • Objective : The purpose of this study is to report the treatment of two patients who had head and face symptoms caused by stress. Methods : Changes in the symptoms of heat on the right cheek, spasm of the upper lip, and left parietal pain, as expressed by each patient, were recorded as the patients were treated with an acupuncture therapy named An-sim-bang(安心方), moxibustion, and herbal medicine. Results : The symptoms of heat on the right cheek, spasm of the upper lip, and left parietal pain present at admission improved and gradually disappeared with the An-sim-bang(安心方) acupuncture therapy, moxibustion, and herbal medicine, and the patients were discharged with favorable recovery. Conclusion : In oriental medicine, stress is mainly treated by taking down the flaring-up of heart fire, relieving depression of Ki, and replenishing the Yin deficiency of the kidneys. We found that these treatments, delivered through the An-sim-bang(安心方) acupuncture therapy, moxibustion, and herbal medicine, are effective for head and face symptoms caused by stress.

  • PDF

Bayesian Network Model for Human Fatigue Recognition (피로 인식을 위한 베이지안 네트워크 모델)

  • Lee Young-sik;Park Ho-sik;Bae Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.9C
    • /
    • pp.887-898
    • /
    • 2005
  • In this paper, we introduce a probabilistic model based on Bayesian networks (BNs) for recognizing human fatigue. First, we measured facial feature information such as eyelid movement, gaze, head movement, and facial expression under IR illumination. However, no single facial feature provides enough information to determine human fatigue. Therefore, a Bayesian network model was constructed to fuse as many fatigue-related parameters and facial features as possible for probabilistic inference of human fatigue. The MSBNX simulation yielded a BN fatigue index threshold of 0.95. Comparing the inferred BN fatigue index with TOVA response times shows a mutual correlation, from which we conclude that this method is very effective at recognizing human fatigue.
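The fusion idea in the abstract above can be sketched with a tiny naive-Bayes network: several facial cues are combined into one posterior fatigue probability. All cue names, priors, and likelihood values below are invented for illustration and are not taken from the paper.

```python
# Minimal naive-Bayes sketch of fusing facial cues into a fatigue posterior.
# Every probability here is a hypothetical placeholder, not the paper's model.

def fatigue_posterior(evidence, prior=0.2):
    # (P(cue observed | fatigued), P(cue observed | alert)) -- assumed values
    likelihoods = {
        "slow_eyelid":  (0.85, 0.10),
        "gaze_drift":   (0.70, 0.20),
        "head_nodding": (0.60, 0.05),
    }
    p_f, p_a = prior, 1.0 - prior
    for cue, observed in evidence.items():
        lf, la = likelihoods[cue]
        if not observed:              # use the complementary probabilities
            lf, la = 1.0 - lf, 1.0 - la
        p_f *= lf
        p_a *= la
    return p_f / (p_f + p_a)          # normalize to a posterior probability

p = fatigue_posterior({"slow_eyelid": True, "gaze_drift": True, "head_nodding": False})
print(round(p, 3))                    # 0.758
```

A real BN would also model dependencies between cues and contextual causes (sleep, workload); the naive independence assumption here just keeps the fusion step visible.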

Evaluation of Histograms Local Features and Dimensionality Reduction for 3D Face Verification

  • Ammar, Chouchane;Mebarka, Belahcene;Abdelmalik, Ouamane;Salah, Bourennane
    • Journal of Information Processing Systems
    • /
    • v.12 no.3
    • /
    • pp.468-488
    • /
    • 2016
  • The paper proposes a novel framework for 3D face verification using dimensionality reduction based on highly distinctive local features in the presence of illumination and expression variations. Histograms of efficient local descriptors are used to distinctively represent the facial images. For this purpose, several local descriptors are evaluated: Local Binary Patterns (LBP), Three-Patch Local Binary Patterns (TPLBP), Four-Patch Local Binary Patterns (FPLBP), Binarized Statistical Image Features (BSIF), and Local Phase Quantization (LPQ). Furthermore, experiments on feature-level combinations of the local descriptors via simple histogram concatenation are provided. The performance of the proposed approach is evaluated with different dimensionality reduction algorithms: Principal Component Analysis (PCA), Orthogonal Locality Preserving Projection (OLPP), and combined PCA+EFM (Enhanced Fisher linear discriminant Model). Finally, a multi-class Support Vector Machine (SVM) is used as a classifier to carry out the verification between impostors and clients. The proposed method has been tested on the CASIA-3D face database, and the experimental results show that it achieves high verification performance.
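As a concrete reference point, the basic LBP descriptor evaluated above can be computed in a few lines. This is a plain 3x3-neighbourhood sketch of the standard operator, not the authors' implementation, and the tiny test image is made up for illustration.

```python
# Sketch of the basic 8-neighbour Local Binary Pattern (LBP) descriptor:
# each pixel is encoded by thresholding its 8 neighbours against it.
import numpy as np

def lbp_histogram(img):
    """Return a normalized 256-bin histogram of 3x3 LBP codes for a 2-D array."""
    h, w = img.shape
    codes = []
    # offsets of the 8 neighbours, clockwise from top-left
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(nbrs):
                if img[y + dy, x + dx] >= c:
                    code |= 1 << bit
            codes.append(code)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()          # normalized histogram, as used for matching

img = np.arange(25, dtype=np.uint8).reshape(5, 5)   # toy 5x5 "image"
hist = lbp_histogram(img)
print(hist.sum())                                   # 1.0
```

Feature-level combination as described in the abstract then amounts to concatenating such histograms from the different descriptors before dimensionality reduction.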

Local Similarity based Discriminant Analysis for Face Recognition

  • Xiang, Xinguang;Liu, Fan;Bi, Ye;Wang, Yanfang;Tang, Jinhui
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.11
    • /
    • pp.4502-4518
    • /
    • 2015
  • Fisher linear discriminant analysis (LDA) is one of the most popular projection techniques for feature extraction and has been widely applied to face recognition. However, it cannot be used in the single sample per person (SSPP) setting because the intra-class variations cannot be estimated. In this paper, we propose a novel method called local similarity based linear discriminant analysis (LS_LDA) to solve this problem. Motivated by the "divide-and-conquer" strategy, we first divide the face into local blocks, classify each block, and then integrate all the classification results to make the final decision. To make LDA feasible for the SSPP problem, we further divide each block into overlapping patches and assume that these patches belong to the same class. To improve the robustness of LS_LDA to outliers, we also propose local similarity based median discriminant analysis (LS_MDA), which uses the class median vector to estimate the class population mean in LDA modeling. Experimental results on three popular databases show that our methods not only generalize well to the SSPP problem but also are strongly robust to expression, illumination, occlusion, and time variation.
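The "divide" step that makes LDA feasible under SSPP, splitting the face into blocks and each block into overlapping patches treated as same-class samples, can be sketched as follows. The block, patch, and stride sizes are illustrative choices, not the paper's settings.

```python
# Sketch of the block/patch division used by LS_LDA: non-overlapping blocks,
# each further split into overlapping patches. Sizes are illustrative only.
import numpy as np

def overlapping_patches(block, patch=4, stride=2):
    """Extract patch x patch windows from a 2-D block with the given stride."""
    h, w = block.shape
    out = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            out.append(block[y:y + patch, x:x + patch].ravel())
    return np.array(out)              # each row is one vectorized patch

face = np.random.rand(16, 16)         # stand-in for a face image
# divide the face into four 8x8 non-overlapping blocks
blocks = [face[y:y + 8, x:x + 8] for y in (0, 8) for x in (0, 8)]
patches = overlapping_patches(blocks[0])
print(patches.shape)                  # (9, 16): a 3x3 grid of 4x4 patches
```

The rows of `patches` are then treated as multiple samples of one class, which is what lets LDA estimate intra-class scatter from a single face image.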

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.337-351
    • /
    • 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed, and a double-layer cascade structure is used to detect the face in each video frame. Two deep convolutional neural networks are then used to extract the temporal-domain and spatial-domain facial features of the video. The spatial convolutional neural network extracts spatial information features from each frame of the static expression images in the video, while the temporal convolutional neural network extracts dynamic information features from the optical flow across multiple frames of expression images. A multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method reach 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
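The multiplicative fusion step can be illustrated in isolation: the spatial and temporal networks each yield a feature vector for a clip, and the fused feature is their element-wise product. The vectors below are random placeholders standing in for the two CNNs' outputs, and the L2 normalization is an assumed preprocessing choice, not stated in the abstract.

```python
# Sketch of multiplicative (element-wise) fusion of two CNN feature vectors.
# The features are random stand-ins; only the fusion mechanics are shown.
import numpy as np

rng = np.random.default_rng(0)
spatial_feat = rng.random(128)        # stand-in for the spatial CNN feature
temporal_feat = rng.random(128)       # stand-in for the temporal (optical-flow) CNN feature

fused = spatial_feat * temporal_feat  # element-wise (Hadamard) product
fused /= np.linalg.norm(fused)        # assumed L2 normalization before the SVM
print(fused.shape)                    # (128,)
```

The fused vector would then be the input to the SVM classifier mentioned in the abstract.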

The Expression Pattern of the Tight Junction Protein Occludin in the Epidermal Context When Comparing Various Physical Samples (신체 부위별 표피에서 밀착연접 단백질 중 오클루딘의 발현도 연구)

  • Kim, Ji Sook;Jang, Hyung Seok
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.47 no.4
    • /
    • pp.267-272
    • /
    • 2015
  • Tight junctions (TJ) have recently been identified in the granular cell layer of the human epidermis, where they contribute to normal adhesion between keratinocytes and to the physiologic barrier function of the epidermis. Among the TJ proteins in the epidermis, occludin is an important transmembrane protein, considered a major component. The purpose of this study is to investigate whether regional variation exists in the expression of the tight junction protein occludin in normal human epidermis. Indirect immunofluorescence staining for occludin was performed on specimens taken from different areas of normal skin (4 from each of 7 anatomical sites: the scalp, face, posterior neck, upper arm, abdomen, lower back, and inner thigh). The expression intensity of each specimen was estimated as the reciprocal of the positive end-point titer of occludin in an indirect immunofluorescence study. The highest expression intensity of occludin among the different areas of normal epidermis was observed on the face and abdomen, with a titer of 600 (p=0.001). The lowest expression intensity was seen in the epidermis of the upper arm, while specimens from the scalp, neck, back, and leg showed intermediate expression intensities. The expression of occludin in skin samples from different body locations thus showed a statistically significant variation, suggesting a certain degree of regional variation in the expression intensity of the TJ protein occludin in the human epidermis.

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.11
    • /
    • pp.7-15
    • /
    • 2014
  • This paper proposes a recognition algorithm for human facial expressions using PCA and template matching. First, the face image is acquired from an input image using the Haar-like feature mask. The face image is divided into two images: an upper image including the eyes and eyebrows, and a lower image including the mouth and jaw. Extraction of the facial components, such as the eyes and mouth, begins by obtaining an eye image and a mouth image. An eigenface is produced by the PCA training process with learning images, and an eigeneye and an eigenmouth are derived from the eigenface. The eye image is obtained by template matching the upper image with the eigeneye, and the mouth image by template matching the lower image with the eigenmouth. The expression recognition then uses geometrical properties of the eyes and mouth. Simulation results show that the proposed method achieves a higher extraction ratio than previous methods; the extraction ratio for the mouth image in particular reaches 99%. A facial expression recognition system using the proposed method achieves a recognition ratio greater than 80% for three facial expressions: fright, anger, and happiness.
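The template-matching step, locating a component by sliding a template over the relevant face region, can be sketched with normalized cross-correlation. The arrays below are synthetic, and the "eigeneye" is simply cut out of the region so the expected match location is known; this is a generic sketch, not the paper's implementation.

```python
# Sketch of template matching via normalized cross-correlation: slide the
# template over the region and keep the best-correlated window.
import numpy as np

def match_template(region, tmpl):
    """Return (y, x) of the window in `region` most correlated with `tmpl`."""
    th, tw = tmpl.shape
    t = (tmpl - tmpl.mean()).ravel()
    best, best_pos = -np.inf, (0, 0)
    for y in range(region.shape[0] - th + 1):
        for x in range(region.shape[1] - tw + 1):
            w = region[y:y + th, x:x + tw]
            wc = (w - w.mean()).ravel()
            denom = np.linalg.norm(wc) * np.linalg.norm(t)
            score = wc @ t / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

rng = np.random.default_rng(1)
upper_face = rng.random((20, 40))              # stand-in for the upper face image
eigeneye = upper_face[5:11, 10:22].copy()      # plant the "template" at (5, 10)
print(match_template(upper_face, eigeneye))    # (5, 10)
```

In practice an implementation would use an optimized routine (e.g. OpenCV's `matchTemplate`), but the score being maximized is the same.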

Face Recognition using Eigenface (고유얼굴에 의한 얼굴인식)

  • 박중조;김경민
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.2
    • /
    • pp.1-6
    • /
    • 2001
  • The eigenface method is useful for face recognition due to its insensitivity to large variations in facial expression and facial details. However, its low recognition rate necessitates additional research. In this paper, we present an efficient method for improving the recognition rate of face recognition using eigenface features. To this end, we perform a comparative study of three classifiers: i) a single prototype (SP) classifier, ii) a nearest neighbor (NN) classifier, and iii) a standard feedforward neural network (FNN) classifier. By evaluating and analyzing the performance of these three classifiers, we show that the distribution of eigenface features of face images is not compact, and that the choice of classifier and of training samples is important for obtaining a higher recognition rate. Our experiments with the ORL face database show that the 1-NN classifier outperforms the SP and FNN classifiers. We achieved a recognition rate of 91.0% by selecting training samples properly and using the 1-NN classifier.

  • PDF
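The eigenface pipeline with the 1-NN classifier the paper found best can be sketched end to end. The data below are random stand-ins for face images, so the "eigenfaces" here carry no visual meaning; only the mechanics (centering, PCA via SVD, projection, nearest-neighbor lookup) are shown.

```python
# Sketch of eigenface feature extraction (PCA via SVD) plus 1-NN classification.
# Random vectors stand in for face images; sizes and counts are illustrative.
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((10, 64))                 # 10 "face" vectors (e.g. 8x8 images)
labels = np.arange(10) // 2              # two images per person, 5 people

mean = X.mean(axis=0)
Xc = X - mean
# eigenfaces = leading right singular vectors of the centered data matrix
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfaces = Vt[:5]                      # keep the top 5 components
feats = Xc @ eigenfaces.T                # project training faces into eigenspace

def predict_1nn(img):
    """Classify a face vector by its nearest neighbor in eigenface space."""
    f = (img - mean) @ eigenfaces.T
    d = np.linalg.norm(feats - f, axis=1)
    return labels[np.argmin(d)]

print(predict_1nn(X[3]))                 # a training image maps to its own label
```

The SP classifier the paper compares against would instead keep one mean feature vector per person and assign the label of the closest prototype.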