• Title/Summary/Keyword: Facial Image

Development and Evaluation of a Self-Operated Face Capturing System (자가 안면영상 촬영장치 개발 및 검증)

  • Jeon, Young-Ju;Do, Jun-Hyeong;Kim, Jang-Wong;Kim, Sang-Gil;Lee, Hae-Jung;Lee, Yu-Jung;Kim, Keun-Ho;Kim, Jong-Yeol
    • Korean Journal of Oriental Medicine / v.17 no.2 / pp.115-120 / 2011
  • Objectives: The purpose of this study is to develop an apparatus that can take a facial image with a self-operated capturing technique, so that users can obtain their own facial image immediately after adjusting the facial tilt and focusing distance. The system is designed for classifying Sasang typology from facial images. Methods: The system is composed of a webcam, a one-way glass mirror, and a mini LCD. The webcam takes a facial image, which is displayed on the mini LCD; the user then sees the image reflected in the mirror and can adjust to the correct position in real time. An optical sensor is used to estimate the proper focusing distance. To verify the performance of the system, 11 characteristic points on the facial image are compared with those from a high-performance DSLR camera (D700) using the coefficient of variation and a Bland-Altman plot. Results: The developed system and the D700 agree closely enough, with small coefficients of variation, to analyse constitutional types from a facial image. However, the Bland-Altman plot shows that the width parameters are distorted owing to the short focusing distance. Conclusions: After the distortion in the width parameters is corrected, the system is expected to be used in u-healthcare services for the home environment.
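
As a rough illustration of the agreement statistics named in this abstract, the following sketch computes a coefficient of variation and Bland-Altman limits of agreement for one width parameter measured by two devices; the numbers are invented and this is not the authors' code.

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = standard deviation / mean, reported here as a percentage."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

def bland_altman(measure_a, measure_b):
    """Return the mean difference (bias) and 95% limits of agreement."""
    a = np.asarray(measure_a, dtype=float)
    b = np.asarray(measure_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical face-width measurements (mm) from the webcam system and the DSLR.
webcam = [142.1, 138.5, 145.0, 140.2, 143.8]
dslr   = [141.0, 139.2, 144.1, 141.0, 142.9]

print("CV (webcam): %.2f%%" % coefficient_of_variation(webcam))
bias, lo, hi = bland_altman(webcam, dslr)
print("Bland-Altman bias %.2f mm, limits of agreement [%.2f, %.2f]" % (bias, lo, hi))
```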

A study of age estimation from occluded images (가림이 있는 얼굴 영상의 나이 인식 연구)

  • Choi, Sung Eun
    • Journal of Platform Technology / v.10 no.3 / pp.44-50 / 2022
  • Research on facial age estimation is being actively conducted because it is used in various application fields. Facial images taken in real environments often contain occlusions, which degrade age estimation performance. We therefore propose an age estimation method that reconstructs the occluded part with image extrapolation in order to improve age estimation for occluded face images. To confirm the effect of occlusion on age estimation performance, images with occlusions are generated using mask images. The occluded part of the facial image is then restored with SpiralNet, an image extrapolation technique that generates the missing part while crossing the edge of the image. Experimental results show that age estimation performance for occluded facial images is significantly degraded, and that performance improves when face images whose occlusions have been reconstructed with SpiralNet are used.
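
The mask-based occlusion step described above can be illustrated as follows; this sketch only simulates an occlusion with OpenCV (SpiralNet itself is a trained model and is not reproduced here), and the file paths are placeholders.

```python
import cv2
import numpy as np

def occlude(image, top_left, bottom_right, fill=0):
    """Simulate occlusion by filling a rectangular region of the image."""
    occluded = image.copy()
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.rectangle(mask, top_left, bottom_right, 255, thickness=-1)
    occluded[mask == 255] = fill
    return occluded, mask

face = cv2.imread("face.jpg")  # placeholder path
if face is not None:
    # Hide the lower half of the face, e.g. to mimic a mask-wearing subject.
    h, w = face.shape[:2]
    occluded, mask = occlude(face, (0, h // 2), (w - 1, h - 1))
    cv2.imwrite("face_occluded.jpg", occluded)
    cv2.imwrite("face_mask.jpg", mask)
```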

Recognizing Human Facial Expressions and Gesture from Image Sequence (연속 영상에서의 얼굴표정 및 제스처 인식)

  • 한영환;홍승홍
    • Journal of Biomedical Engineering Research / v.20 no.4 / pp.419-425 / 1999
  • In this paper, we present a real-time facial expression and gesture recognition algorithm for gray-level image sequences. A combination of template matching and knowledge-based geometric constraints on the face is used to locate the face area in the input image, and an optical flow method is applied to that area to recognize facial expressions. We also propose a hand area detection algorithm that separates the hand from the background image by analyzing image entropy; with this modified hand area detection algorithm, hand gestures can be recognized as well. The experiments showed that the suggested algorithm recognizes facial expressions and hand gestures well by detecting the dominant motion area in the images, without being constrained by the background.
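
The abstract relies on optical flow over the located face area; the sketch below is an assumption-based illustration (using OpenCV's Farneback dense flow, not necessarily the flow method used in the paper) of finding the block with the dominant motion between two gray-level frames.

```python
import cv2
import numpy as np

def dominant_motion_area(prev_gray, curr_gray, block=16):
    """Compute dense optical flow and return the block with the largest mean motion."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    h, w = magnitude.shape
    best, best_score = None, -1.0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            score = magnitude[y:y + block, x:x + block].mean()
            if score > best_score:
                best, best_score = (x, y, block, block), score
    return best, best_score

# Placeholder gray-level frames standing in for two consecutive video frames.
prev_frame = (np.random.rand(128, 128) * 255).astype(np.uint8)
curr_frame = (np.random.rand(128, 128) * 255).astype(np.uint8)
print(dominant_motion_area(prev_frame, curr_frame))
```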

Facial Image Synthesis Considering Illumination Variations on Mobile Devices (모바일 기기에서 조명 변화를 고려한 얼굴 영상 합성)

  • Kwon, Ji-In;Lee, Sang-Hoon;Choi, Soo-Mi
    • Journal of the HCI Society of Korea / v.6 no.1 / pp.21-26 / 2011
  • This paper presents a robust method for facial image synthesis under varying illumination that combines illumination correction with Poisson image processing. The presented method automatically detects the skin area and corrects highly saturated regions that can degrade the final synthesized image. By correcting the illumination variations that frequently occur in photos taken with a camera phone, the method can be applied to various facial synthesis applications.
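
Poisson image processing of the kind combined here is available in OpenCV as seamless cloning; the following is a minimal sketch with placeholder file names, assuming the corrected face patch is smaller than the target photo, and is not the authors' implementation.

```python
import cv2
import numpy as np

src = cv2.imread("corrected_face.jpg")   # illumination-corrected face patch (placeholder)
dst = cv2.imread("target_photo.jpg")     # photo to blend the face into (placeholder)

if src is not None and dst is not None:
    # Blend the source patch into the target; a tighter mask could follow
    # the detected skin area instead of this rectangle with a small margin.
    mask = np.zeros(src.shape[:2], dtype=np.uint8)
    mask[10:-10, 10:-10] = 255
    center = (dst.shape[1] // 2, dst.shape[0] // 2)
    blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
    cv2.imwrite("blended.jpg", blended)
```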

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.7-15 / 2014
  • This paper proposes a facial expression recognition algorithm using PCA and template matching. First, the face image is acquired from an input image using the Haar-like feature mask, and is divided into two parts: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. Extraction of the facial components, such as the eyes and mouth, starts from these two images. An eigenface is produced by PCA training on the learning images, and an eigen-eye and an eigen-mouth are derived from it. The eye image is obtained by template matching the upper image with the eigen-eye, and the mouth image by template matching the lower image with the eigen-mouth. Expression recognition then uses geometric properties of the eyes and mouth. Simulation results show that the proposed method has a higher extraction ratio than previous methods; in particular, the extraction ratio for the mouth image reaches 99%. The expression recognition system using the proposed method achieves a recognition ratio greater than 80% for three facial expressions: fright, anger, and happiness.
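
The two components named in the title, PCA and template matching, can be prototyped with common libraries; the sketch below uses random placeholder data to train an eigenface basis with scikit-learn and to run OpenCV template matching on the upper half of a query face, and only illustrates the mechanics.

```python
import numpy as np
import cv2
from sklearn.decomposition import PCA

# Placeholder training set: 100 grayscale face images of size 64x64, flattened.
train_faces = np.random.rand(100, 64 * 64).astype(np.float32)

# Eigenfaces: principal components of the training faces.
pca = PCA(n_components=20)
pca.fit(train_faces)
eigenfaces = pca.components_.reshape(-1, 64, 64)

# Use the first eigenface (cropped to an eye-sized patch) as a template,
# and search for it in the upper half of a query face image.
query_face = (np.random.rand(64, 64) * 255).astype(np.uint8)
template = cv2.normalize(eigenfaces[0][:16, :32], None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
upper_half = query_face[:32, :]
result = cv2.matchTemplate(upper_half, template, cv2.TM_CCOEFF_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(result)
print("Best eye-template match at", max_loc)
```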

Knowledge based Text to Facial Sequence Image System for Interaction of Lecturer and Learner in Cyber Universities (가상대학에서 교수자와 학습자간 상호작용을 위한 지식기반형 문자-얼굴동영상 변환 시스템)

  • Kim, Hyoung-Geun;Park, Chul-Ha
    • The KIPS Transactions:PartB / v.15B no.3 / pp.179-188 / 2008
  • In this paper, a knowledge-based text-to-facial-sequence-image system for interaction between lecturers and learners in cyber universities is studied. The system synthesizes facial image sequences whose lip movements are synchronized with the text, based on the grammatical characteristics of Hangul. For the implementation of the system, we propose a method for converting text into phoneme codes, deformation rules for mouth shapes that depend on the phoneme codes, and a method for synthesizing facial image sequences using those deformation rules. In the proposed method, all Hangul syllables are represented by 10 principal mouth shapes and 78 compound mouth shapes, according to the pronunciation characteristics of the basic consonants and vowels and the articulation rules. To synthesize facial image sequences in real time on a PC, the 88 mouth shapes stored in a database are used instead of synthesizing a mouth shape for every frame. To verify the validity of the proposed method, various facial image sequences are synthesized from text, and a system that runs on a PC is implemented.
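
The core lookup described above, from phoneme codes to stored mouth shapes, can be pictured with a small table; the mapping below is entirely hypothetical (the paper's 10 principal and 78 compound shapes are not reproduced) and merely shows how a phoneme sequence would expand into mouth-shape frames.

```python
# Hypothetical mapping from phoneme codes to indices of mouth-shape images
# stored in a database; the real system defines 10 principal and 78 compound shapes.
MOUTH_SHAPE_INDEX = {
    "a": 0,   # wide-open vowel
    "i": 1,   # spread vowel
    "u": 2,   # rounded vowel
    "m": 3,   # closed-lip consonant
    "s": 4,   # narrow-opening consonant
}

def phonemes_to_frames(phoneme_codes, frames_per_phoneme=3):
    """Expand a phoneme code sequence into a sequence of mouth-shape frame indices."""
    frames = []
    for code in phoneme_codes:
        shape = MOUTH_SHAPE_INDEX.get(code, 3)  # default to closed lips
        frames.extend([shape] * frames_per_phoneme)
    return frames

print(phonemes_to_frames(["a", "m", "i"]))  # -> [0, 0, 0, 3, 3, 3, 1, 1, 1]
```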

Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed;Kabir, Md. Hasanul;Chae, Oksam
    • ETRI Journal / v.32 no.5 / pp.784-794 / 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction. However, they are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in eight directions at each pixel and encoding them into an 8-bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost savings and classification accuracy. Two well-known machine learning methods, template matching and support vector machines, are used for classification on the Cohn-Kanade and Japanese female facial expression databases. The higher classification accuracy demonstrates the superiority of the LDP descriptor over other appearance-based feature descriptors.
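
The LDP encoding described in this abstract is straightforward to prototype; the sketch below follows the stated definition (Kirsch edge responses in eight directions, bits set for the k strongest, here k = 3) but is an illustration rather than the authors' code.

```python
import numpy as np
from scipy.ndimage import convolve

# Eight Kirsch compass masks (east, north-east, ..., south-east).
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def ldp_codes(gray, k=3):
    """Encode each pixel by setting the bits of its k strongest Kirsch edge responses."""
    responses = np.stack([np.abs(convolve(gray.astype(float), m)) for m in KIRSCH])
    top_k = np.argsort(responses, axis=0)[-k:]   # indices of the k largest responses
    codes = np.zeros(gray.shape, dtype=np.uint8)
    for idx in top_k:
        codes |= (1 << idx).astype(np.uint8)
    return codes

image = (np.random.rand(32, 32) * 255).astype(np.uint8)        # placeholder face patch
hist = np.bincount(ldp_codes(image).ravel(), minlength=256)    # LDP histogram descriptor
```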

Automatic Estimation of 2D Facial Muscle Parameter Using Neural Network (신경회로망을 이용한 2D 얼굴근육 파라메터의 자동인식)

  • 김동수;남기환;한준희;배철수;권오홍;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.05a / pp.33-38 / 1999
  • Muscle-based face image synthesis is one of the most realistic approaches to realizing a life-like agent in a computer. The facial muscle model is composed of facial tissue elements and muscles. In this model, the forces acting on the facial tissue elements are calculated from the contraction strength of each muscle, so the combination of muscle parameters determines a specific facial expression. Currently, each muscle parameter is decided by a trial-and-error procedure that compares a sample photograph with the image generated by our Muscle-Editor in order to reproduce a specific face image. In this paper, we propose a strategy for automatically estimating facial muscle parameters from 2D marker movement using a neural network. This also allows 3D motion estimation from 2D point or flow information in the captured image under the constraints of a physics-based face model.
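
Estimating muscle parameters from 2D marker movement is a regression problem; the following sketch uses entirely hypothetical data dimensions and a small scikit-learn multi-layer perceptron to show the shape of such a mapping, not the network used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training data: 500 samples of 2D displacements for 20 facial markers
# (40 inputs) paired with contraction strengths for 18 facial muscles (18 outputs).
rng = np.random.default_rng(0)
marker_displacements = rng.normal(size=(500, 40))
muscle_parameters = rng.uniform(0.0, 1.0, size=(500, 18))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(marker_displacements, muscle_parameters)

# Estimate muscle parameters for a new set of observed marker displacements.
estimated = model.predict(marker_displacements[:1])
print(estimated.shape)  # (1, 18)
```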

Human Emotion Recognition based on Variance of Facial Features (얼굴 특징 변화에 따른 휴먼 감성 인식)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.16 no.4 / pp.79-85 / 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize human emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expressions and emotions from still images. The method has three main steps: (1) detection of facial areas with a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion using the Hausdorff distance. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying and distinguishing facial expressions and emotions.
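
Step (3) compares curve point sets with the Hausdorff distance; the sketch below classifies a mouth contour by its smallest symmetric Hausdorff distance to stored template curves using SciPy, with made-up curves standing in for the paper's Bezier fits.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2D point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def classify_curve(curve, templates):
    """Return the label of the template curve closest to `curve`."""
    return min(templates, key=lambda label: hausdorff(curve, templates[label]))

# Hypothetical mouth-contour point sets sampled along Bezier-like curves.
x = np.linspace(0.0, 1.0, 20)
templates = {
    "neutral":  np.stack([x, 0.05 * np.ones_like(x)], axis=1),
    "happy":    np.stack([x, 0.2 * np.sin(np.pi * x)], axis=1),
    "surprise": np.stack([x, 0.4 * np.sin(np.pi * x)], axis=1),
}
observed = np.stack([x, 0.18 * np.sin(np.pi * x)], axis=1)
print(classify_curve(observed, templates))  # expected: "happy"
```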

Landmark Selection Using CNN-Based Heat Map for Facial Age Prediction (안면 연령 예측을 위한 CNN기반의 히트 맵을 이용한 랜드마크 선정)

  • Hong, Seok-Mi;Yoo, Hyun
    • Journal of Convergence for Information Technology / v.11 no.7 / pp.1-6 / 2021
  • The purpose of this study is to improve the performance of an artificial neural network system for facial image analysis through an image landmark selection technique. For landmark selection, a CNN-based multi-layer ResNet model for classifying facial image age is required. From the configured ResNet model, a heat map that detects the change of the output node according to a change of the input node is extracted. By combining multiple extracted heat maps, facial landmarks related to age classification are created. The importance of each pixel location can be analyzed through these facial landmarks. In addition, by removing pixels with low weights, the amount of input data can be significantly reduced.
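
The heat map described above, which tracks how the output node responds to changes in the input, resembles gradient-based saliency; the sketch below is one possible interpretation, using an untrained torchvision ResNet as a stand-in for the paper's age classifier and keeping the highest-weight pixels as landmark candidates.

```python
import torch
from torchvision import models

# Saliency heat map: gradient of the predicted class score w.r.t. the input pixels.
model = models.resnet18(weights=None)   # stand-in for the paper's age-classification ResNet
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder face image
score = model(image).max()              # score of the strongest output node
score.backward()

# Collapse the per-channel gradients into a single 224x224 heat map.
heat_map = image.grad.abs().max(dim=1).values.squeeze(0)

# Keep only the most influential pixels as candidate landmark locations.
threshold = heat_map.flatten().quantile(0.99)
landmark_mask = heat_map >= threshold
print(int(landmark_mask.sum()), "candidate landmark pixels")
```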