• Title/Summary/Keyword: Face expression

Detection of Face Direction by Using Inter-Frame Difference

  • Jang, Bongseog;Bae, Sang-Hyun
    • Journal of Integrative Natural Science
    • /
    • v.9 no.2
    • /
    • pp.155-160
    • /
    • 2016
  • Applying image processing techniques to education, the learner's face is photographed, facial expression and movement are detected from the video, and a system that estimates the learner's degree of concentration is developed. For a single learner, the system estimates the degree of concentration from the direction of the learner's gaze and the condition of the eyes. For multiple learners, the concentration level of every learner in the classroom must be measured, but dedicating one camera to each learner is inefficient. In this paper, the position of the face region is estimated from video of learners in class using inter-frame differences along the direction of motion, and a system that detects face direction through face-part detection by template matching is proposed. From the inter-frame difference computed on the first image of the video, frontal face detection is performed with the Viola-Jones method. The direction of motion arising in the face region is then estimated from the displacement, and the face region is tracked. The face parts are detected during tracking. Finally, the face direction is estimated from the results of face tracking and face-part detection.
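The inter-frame difference step described above can be sketched in a few lines; the threshold and the synthetic frames below are illustrative, not values from the paper:

```python
import numpy as np

def frame_difference_motion(prev_frame, curr_frame, threshold=25):
    """Return a binary motion mask and the centroid of the moving pixels.

    Frames are 2-D uint8 grayscale arrays; the threshold is illustrative.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return mask, None
    ys, xs = np.nonzero(mask)
    centroid = (float(xs.mean()), float(ys.mean()))  # (x, y) of the motion
    return mask, centroid

# Synthetic example: a bright 10x10 block moves 5 pixels to the right.
prev = np.zeros((100, 100), dtype=np.uint8)
curr = np.zeros((100, 100), dtype=np.uint8)
prev[40:50, 40:50] = 200
curr[40:50, 45:55] = 200
mask, centroid = frame_difference_motion(prev, curr)
```

In a full pipeline, the motion centroid would seed tracking of the face region first located by a Viola-Jones detector (e.g. OpenCV's `CascadeClassifier`).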

Face Detection Using Pixel Direction Code and Look-Up Table Classifier (픽셀 방향코드와 룩업테이블 분류기를 이용한 얼굴 검출)

  • Lim, Kil-Taek;Kang, Hyunwoo;Han, Byung-Gil;Lee, Jong Taek
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.9 no.5
    • /
    • pp.261-268
    • /
    • 2014
  • Face detection is essential to the full automation of face image processing application systems such as face recognition, facial expression recognition, age estimation, and gender identification. Local image features such as Haar-like, LBP, and MCT, combined with the Adaboost algorithm for classifier combination, have proven very effective for real-time face detection. In this paper, we present a face detection method using a local pixel direction code (PDC) feature and lookup table classifiers. The proposed PDC feature is much more effective for detecting faces than existing local binary structural features such as MCT and LBP. We found that our method's classification rate, as well as its detection rate at an equal false positive rate, is higher than that of the conventional methods.
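A minimal sketch of the local-code-plus-lookup-table idea, using the standard LBP code the abstract mentions (the paper's PDC feature itself is not reproduced here, and the weight table is random, purely for illustration):

```python
import numpy as np

def lbp_code(patch):
    """8-bit Local Binary Pattern code of a 3x3 patch (clockwise neighbor order)."""
    center = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= center:
            code |= 1 << bit
    return code

# A lookup-table classifier assigns each pixel position a 256-entry weight
# table indexed by the local code; the image score is the sum over positions.
rng = np.random.default_rng(0)
tables = rng.normal(size=(1, 256))   # one position, illustrative weights
code = lbp_code(np.array([[5, 9, 1], [4, 7, 2], [8, 7, 3]]))
score = tables[0, code]
```

The appeal of the lookup table is that classification reduces to indexed additions, which is why such classifiers suit embedded real-time detection.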

Realtime Face Recognition by Analysis of Feature Information (특징정보 분석을 통한 실시간 얼굴인식)

  • Chung, Jae-Mo;Bae, Hyun;Kim, Sung-Shin
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2001.12a
    • /
    • pp.299-302
    • /
    • 2001
  • Statistical analysis of extracted features and neural networks are proposed to recognize a human face. In the preprocessing step, a normalized skin color map with Gaussian functions is employed to extract the face candidate region. The feature information in the face candidate region is used to detect the face region. In the recognition step, as a test, 120 images of 10 persons are trained with the backpropagation algorithm. The images of each person are obtained under various directions, poses, and facial expressions. The input variables of the neural networks are the geometrical feature information and the feature information derived from the eigenface spaces. The simulation results for 10 persons show that the proposed method yields high recognition rates.
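The eigenface-space features used as network inputs can be sketched with plain PCA via SVD; the toy data below stands in for real face images:

```python
import numpy as np

def eigenface_features(images, k=3):
    """Project flattened face images onto their top-k principal components.

    images: (n_samples, n_pixels) float array. Returns (features, basis, mean).
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data gives the eigenface basis directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                  # top-k eigenfaces (orthonormal rows)
    features = centered @ basis.T   # coordinates in eigenface space
    return features, basis, mean

rng = np.random.default_rng(1)
faces = rng.normal(size=(12, 64))   # 12 toy "faces" of 64 pixels each
feats, basis, mean = eigenface_features(faces, k=3)
```

The low-dimensional `feats` vectors, concatenated with geometric measurements, would then form the input layer of the backpropagation network.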

3D Face Modeling based on FACS (Facial Action Coding System) (FACS 기반을 둔 3D 얼굴 모델링)

  • Oh, Du-Sik;Kim, Yu-Sung;Kim, Jae-Min;Cho, Seoung-Won;Chung, Sun-Tae
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.1015-1016
    • /
    • 2008
  • In this paper, a method that finds the characteristic features of a face and transforms them by FACS (Facial Action Coding System) for face modeling is suggested. FACS decomposes a facial expression into AUs (Action Units) and can compose various facial expressions from them. The system finds the accurate Action Units of a sample face and uses the preset AUs. It then computes the coefficients for transforming the face model by 2D AU matching.
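AU-driven deformation of this kind is commonly realized as a blendshape-style linear combination of AU target shapes; the tiny 3-vertex mesh and AU targets below are hypothetical, not the paper's model:

```python
import numpy as np

# Blendshape-style AU model: deformed = neutral + sum_i w_i * (AU_i - neutral).
neutral  = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # toy 3-vertex face
au_smile = np.array([[0.1, 0.0], [0.9, 0.0], [0.5, 1.2]])   # hypothetical AU target
au_brow  = np.array([[0.0, 0.2], [1.0, 0.2], [0.5, 1.0]])   # hypothetical AU target

def apply_aus(neutral, au_shapes, weights):
    """Blend weighted AU displacement vectors onto the neutral mesh."""
    out = neutral.copy()
    for shape, w in zip(au_shapes, weights):
        out += w * (shape - neutral)
    return out

face = apply_aus(neutral, [au_smile, au_brow], [1.0, 0.5])
```

Fitting the coefficients to 2D AU matches amounts to solving for the weights that best reproduce the observed feature displacements.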

Realtime Face Recognition by Analysis of Feature Information (특징정보 분석을 통한 실시간 얼굴인식)

  • Chung, Jae-Mo;Bae, Hyun;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.9
    • /
    • pp.822-826
    • /
    • 2001
  • Statistical analysis of extracted features and neural networks are proposed to recognize a human face. In the preprocessing step, a normalized skin color map with Gaussian functions is employed to extract the face candidate region. The feature information in the face candidate region is used to detect the face region. In the recognition step, as a test, 120 images of 10 persons are trained with the backpropagation algorithm. The images of each person are obtained under various directions, poses, and facial expressions. The input variables of the neural networks are the geometrical feature information and the feature information derived from the eigenface spaces. The simulation results for 10 persons show that the proposed method yields high recognition rates.

Reconstruction from Feature Points of Face through Fuzzy C-Means Clustering Algorithm with Gabor Wavelets (FCM 군집화 알고리즘에 의한 얼굴의 특징점에서 Gabor 웨이브렛을 이용한 복원)

  • 신영숙;이수용;이일병;정찬섭
    • Korean Journal of Cognitive Science
    • /
    • v.11 no.2
    • /
    • pp.53-58
    • /
    • 2000
  • This paper reconstructs local regions of a facial expression image from feature points extracted from the image using the FCM (Fuzzy C-Means) clustering algorithm with Gabor wavelets. Feature extraction from a face takes two steps. In the first step, the edges of the main components of the face are extracted using the average value of the 2-D Gabor wavelet coefficient histogram of the image; in the next step, the final feature points are extracted from the extracted edge information using the FCM clustering algorithm. This study shows that the principal components of facial expression images can be reconstructed with only a few feature points extracted by the FCM clustering algorithm. The method can also be applied to object recognition as well as facial expression recognition.
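The FCM step can be sketched directly from its update equations (fuzzifier m, alternating center and membership updates); the 2-D toy points below stand in for Gabor edge locations:

```python
import numpy as np

def fcm(points, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy C-Means: returns (centers, memberships) for n points in d dims."""
    rng = np.random.default_rng(seed)
    n = len(points)
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1
    for _ in range(iters):
        um = u ** m                             # fuzzified memberships
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        dist = np.maximum(dist, 1e-12)          # avoid division by zero
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

# Two well-separated toy clusters of candidate edge points.
rng = np.random.default_rng(4)
pts = np.vstack([rng.normal(0.0, 0.1, size=(10, 2)),
                 rng.normal(5.0, 0.1, size=(10, 2))])
centers, u = fcm(pts, c=2)
```

The cluster centers play the role of the final feature points, condensing many edge pixels into a few representative locations.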

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks (적대적 생성 신경망을 통한 얼굴 비디오 스타일 합성 연구)

  • Choi, Hee Jo;Park, Goo Man
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.11
    • /
    • pp.465-472
    • /
    • 2022
  • In this paper, a style synthesis network is trained to generate style-synthesized video by training StyleGAN for style synthesis and a video synthesis network for video synthesis. To address the problem that gaze or expression does not transfer stably, 3D face reconstruction technology is applied to control important features such as head pose, gaze, and expression using 3D face information. In addition, by training the discriminators of the Head2head network for dynamics, mouth shape, image, and gaze, a stable style-synthesized video that maintains plausibility and consistency can be created. Using the FaceForensic dataset and the MetFace dataset, it was confirmed that performance increased by converting one video into another while maintaining the consistent movement of the target face and by generating natural data through video synthesis using 3D face information from the source video's face.

Face Recognition using Emotional Face Images and Fuzzy Fisherface (감정이 있는 얼굴영상과 퍼지 Fisherface를 이용한 얼굴인식)

  • Koh, Hyun-Joo;Chun, Myung-Geun;Paliwal, K.K.
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.1
    • /
    • pp.94-98
    • /
    • 2009
  • In this paper, we deal with a face recognition method for emotional face images. Since face recognition is one of the most natural and straightforward biometric methods, there has been a great deal of research on it. However, most of it focuses on expressionless face images and runs into serious difficulty when facial expressions are considered. In real situations, however, emotional face images must be handled. Here, three basic human emotions, happiness, sadness, and anger, are investigated for face recognition. Since this situation requires a robust face recognition algorithm, we use a fuzzy Fisher's Linear Discriminant (FLD) algorithm with the wavelet transform. The fuzzy Fisherface is a statistical method that maximizes the ratio of the between-class scatter matrix to the within-class scatter matrix while also handling fuzzy class information. The experimental results obtained on the CBNU face databases reveal that the approach presented in this paper yields better recognition performance than other recognition methods.
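The scatter-ratio criterion at the core of Fisherface can be sketched for the crisp two-class case (the fuzzy membership weighting of the fuzzy Fisherface is omitted here, and the classes and data below are synthetic):

```python
import numpy as np

def fisher_direction(X1, X2):
    """Two-class Fisher LDA: w maximizes between- over within-class scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: sum of the per-class scatter matrices.
    Sw = np.cov(X1, rowvar=False) * (len(X1) - 1) \
       + np.cov(X2, rowvar=False) * (len(X2) - 1)
    # Optimal direction is Sw^{-1} (m1 - m2); small ridge for stability.
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m1)), m1 - m2)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(2)
happy = rng.normal([0, 0], 0.3, size=(50, 2))   # synthetic "happy" features
sad   = rng.normal([3, 0], 0.3, size=(50, 2))   # synthetic "sad" features
w = fisher_direction(happy, sad)
```

The fuzzy variant replaces the crisp class means and scatters with membership-weighted versions, which softens the influence of ambiguous expressions.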

On Parameterizing of Human Expression Using ICA (독립 요소 분석을 이용한 얼굴 표정의 매개변수화)

  • Song, Ji-Hey;Shin, Hyun-Joon
    • Journal of the Korea Computer Graphics Society
    • /
    • v.15 no.1
    • /
    • pp.7-15
    • /
    • 2009
  • In this paper, a novel framework that synthesizes and clones facial expressions in parameter spaces is presented. To overcome the difficulties in manipulating face geometry models with high degrees of freedom, many parameterization methods have been introduced. In this paper, a data-driven parameterization method is proposed that represents a variety of expressions with a small set of fundamental independent movements based on the ICA technique. The face deformation due to the parameters is also learned from the data to capture the nonlinearity of facial movements. With this parameterization, one can control the expression of an animated character's face through the parameters. By separating the parameterization from the deformation learning process, we believe this framework can be adopted for a variety of applications, including expression synthesis and cloning. The experimental results demonstrate the efficient production of realistic expressions using the proposed method.
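An ICA-based parameterization of this flavor can be sketched with a one-unit FastICA iteration; the two "independent movements" and the mixing matrix below are toy stand-ins for real motion data, not the paper's learned parameters:

```python
import numpy as np

def whiten(X):
    """PCA-whiten the data so components have unit variance."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return Xc @ vecs / np.sqrt(vals)

def fastica_one(X, iters=200, seed=0):
    """Extract one independent component from whitened data X (n x d)
    with the FastICA fixed-point iteration and tanh nonlinearity."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        wx = X @ w
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        w_new = (X * g[:, None]).mean(axis=0) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        w = w_new
    return w

# Toy "expressions": two independent movements mixed linearly.
rng = np.random.default_rng(3)
s = rng.uniform(-1, 1, size=(500, 2))      # independent sources
A = np.array([[2.0, 1.0], [1.0, 1.5]])     # mixing (e.g. mouth/brow coupling)
X = s @ A.T
Z = whiten(X)
w = fastica_one(Z)
```

Each recovered direction plays the role of one "fundamental independent movement", and its coefficient over time becomes one expression parameter.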

The improved facial expression recognition algorithm for detecting abnormal symptoms in infants and young children (영유아 이상징후 감지를 위한 표정 인식 알고리즘 개선)

  • Kim, Yun-Su;Lee, Su-In;Seok, Jong-Won
    • Journal of IKEEE
    • /
    • v.25 no.3
    • /
    • pp.430-436
    • /
    • 2021
  • A non-contact body temperature measurement system using optical and thermal imaging cameras is one of the key tools for managing febrile diseases in crowded facilities. Conventional systems can only perform simple body temperature measurement over the face area because they rely solely on a deep learning-based face detection algorithm, so there is a limit to detecting abnormal symptoms in infants and young children, who have difficulty expressing their condition. This paper proposes an improved facial expression recognition algorithm for detecting abnormal symptoms in infants and young children. The proposed method uses an object detection model to detect infants and young children in an image and then acquires the coordinates of the eyes, nose, and mouth, which are the key elements of facial expression recognition. Finally, facial expression recognition is performed by applying a selective sharpening filter based on the obtained coordinates. In the experiments, the proposed algorithm improved accuracy by 2.52%, 1.12%, and 2.29% for the neutral, happy, and sad expressions, respectively, on the UTK dataset.
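A selective sharpening step of this kind can be sketched with an unsharp mask restricted to a region of interest; the image, ROI, and amount below are illustrative, not the paper's filter:

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Sharpen a grayscale image with a 3x3 box-blur based unsharp mask."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    # 3x3 box blur built from shifted sums (no external dependencies).
    blur = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blur), 0, 255)

# Selective sharpening: apply only inside a region of interest
# (standing in for the detected eye/nose/mouth area).
img = np.zeros((8, 8))
img[:, 4:] = 100.0                  # vertical edge across the image
roi = (slice(2, 6), slice(2, 6))    # hypothetical facial-feature region
out = img.copy()
out[roi] = unsharp_mask(img, amount=1.0)[roi]
```

Restricting the filter to landmark regions boosts local contrast where expression cues live while leaving the rest of the frame untouched.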