• Title/Summary/Keyword: Facial image


Facial Region Tracking by Infra-red and CCD Color Image (CCD 컬러 영상과 적외선 영상을 이용한 얼굴 영역 검출)

  • Yoon, T.H.;Kim, K.S.;Han, M.H.;Shin, S.W.;Kim, I.Y.
    • Proceedings of the KIEE Conference
    • /
    • 2005.05a
    • /
    • pp.60-62
    • /
    • 2005
  • In this study, an automatic algorithm for tracking a human face is proposed, using YCbCr color information and thermal properties expressed as thermal indexes in an infrared image. Facial candidates are estimated separately in the CbCr color and infrared domains by applying morphological image-processing operations and geometrical shape measures that fit the elliptical features of a human face. A true face is identified by a logical 'AND' operation between the refined images in the CbCr color and infrared domains.
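
The dual-domain idea above can be sketched in a few lines: keep a pixel only when it passes both a CbCr skin-color test and an infrared thermal test, then combine the two binary masks with a logical AND. The threshold ranges below are illustrative assumptions, not the paper's tuned values, and the morphological refinement step is omitted.

```python
# Assumed CbCr skin-chrominance bounds and skin-temperature band (Celsius).
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)
TEMP_RANGE = (30.0, 38.0)

def skin_mask(cbcr_image):
    """Binary mask from the CbCr chrominance test; pixels are (Cb, Cr) pairs."""
    return [[CB_RANGE[0] <= cb <= CB_RANGE[1] and CR_RANGE[0] <= cr <= CR_RANGE[1]
             for cb, cr in row] for row in cbcr_image]

def thermal_mask(ir_image):
    """Binary mask from the infrared thermal-index test."""
    return [[TEMP_RANGE[0] <= t <= TEMP_RANGE[1] for t in row] for row in ir_image]

def face_candidates(cbcr_image, ir_image):
    """Logical AND of the two masks keeps only pixels that pass both tests."""
    sm, tm = skin_mask(cbcr_image), thermal_mask(ir_image)
    return [[a and b for a, b in zip(r1, r2)] for r1, r2 in zip(sm, tm)]

# Tiny 2x2 example: skin-colored but cold, or warm but wrongly colored,
# pixels are rejected; only pixels passing both tests survive.
cbcr = [[(100, 150), (50, 200)],
        [(90, 140), (100, 150)]]
ir = [[34.0, 35.0],
      [20.0, 33.5]]
print(face_candidates(cbcr, ir))  # [[True, False], [False, True]]
```

The AND step is what makes the method robust: a skin-colored background object fails the thermal test, and a warm non-face object fails the chrominance test.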


3D Head Pose Estimation Using The Stereo Image (스테레오 영상을 이용한 3차원 포즈 추정)

  • 양욱일;송환종;이용욱;손광훈
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.1887-1890
    • /
    • 2003
  • This paper presents a three-dimensional (3D) head pose estimation algorithm using stereo images. Given a pair of stereo images, several important facial feature points are extracted automatically using the disparity map, a Gabor filter, and the Canny edge detector. To detect the facial feature region, we propose a region-dividing method based on the disparity map: in an indoor head-and-shoulder stereo image, the face region has a larger disparity than the background, so the face can be separated from the background by the divergence of disparity. To estimate the 3D head pose, we propose a 2D-3D Error Compensated-SVD (EC-SVD) algorithm. The 3D coordinates of the facial features are estimated from the stereo correspondence, and the head pose of an input image is then estimated with the EC-SVD method. Experimental results show that the proposed method estimates pose accurately.
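
The core SVD step such a pose estimator builds on can be shown compactly: given matched 3D feature points in a reference frame and the current frame, recover the rotation and translation between them. This is the standard Kabsch/Procrustes solution; the paper's error-compensation refinement (EC-SVD) is not reproduced here.

```python
import numpy as np

def rigid_pose(ref, cur):
    """Least-squares R, t with cur_i ~ R @ ref_i + t, via SVD (Kabsch)."""
    ref, cur = np.asarray(ref, float), np.asarray(cur, float)
    rc, cc = ref.mean(axis=0), cur.mean(axis=0)
    H = (ref - rc).T @ (cur - cc)           # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ rc
    return R, t

# Synthetic check: rotate feature points 30 degrees about the vertical axis
# and shift them, then recover the motion.
theta = np.deg2rad(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
R, t = rigid_pose(pts, pts @ Rz.T + np.array([0.1, 0.2, 0.3]))
print(np.allclose(R, Rz), np.allclose(t, [0.1, 0.2, 0.3]))
```

In practice the 3D feature coordinates come from stereo triangulation, so each point carries reconstruction error; compensating for that error is exactly what the EC-SVD refinement addresses.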


Chronological Changes of Women's Ideal Beauty through Facial Image and Fashion of Korean Actress in the Late Twentieth Century (20세기 후반 한국 여성 스타의 얼굴 이미지와 패션을 통해 본 이상적 여성미의 변천)

  • Baek, Kyoung-Jin;Hahn, So-Won;Kim, Young-In
    • Journal of the Korean Society of Costume
    • /
    • v.62 no.5
    • /
    • pp.44-58
    • /
    • 2012
  • The purpose of this research is to examine the chronological changes in Korean actresses' facial images and fashion from the 1960s to the 1990s, and to identify the ideal beauty of Korean women reflected through those times. Adjectives describing representative actresses of each decade were collected from major newspapers and magazines. Korean women's ideal beauty was divided into four sub-types: youthful, pure, sophisticated, and sexy images. The analysis of actresses' facial images and fashion found that youthful and pure beauty appeared consistently throughout the studied periods, while the representative characteristics of sophisticated and sexy beauty changed over time under the influence of socio-cultural factors. The results of this research can provide meaningful sources for historical dramas, celebrity marketing strategies, and personal image consulting.

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.8 no.2
    • /
    • pp.105-110
    • /
    • 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need technologies that likewise recognize emotion from combined information. In this paper, five emotions (normal, happiness, anger, surprise, sadness) are recognized from speech signals and facial images, and a multimodal method is proposed that fuses the two recognition results. Emotion recognition from both the speech signal and the facial image uses Principal Component Analysis (PCA), and the multimodal stage fuses the two results by applying a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision-fusion method using an S-type membership function. With the proposed fusion method, the average recognition rate is 70.4%, showing that decision fusion outperforms either the facial image or the speech signal alone.
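
Decision-level fusion with an S-type (sigmoid) membership function can be sketched as follows: per-emotion scores from the speech and face classifiers are mapped through the membership function, combined with modality weights, and the emotion with the largest fused membership wins. The weights, centre, and slope here are illustrative assumptions, not the paper's tuned values.

```python
import math

EMOTIONS = ["normal", "happiness", "anger", "surprise", "sadness"]

def s_membership(x, a=0.5, b=10.0):
    """S-type membership: smooth 0-to-1 transition centred at a, slope b."""
    return 1.0 / (1.0 + math.exp(-b * (x - a)))

def fuse(speech_scores, face_scores, w_speech=0.6, w_face=0.4):
    """Weighted fusion of the two modalities' memberships per emotion."""
    fused = {e: w_speech * s_membership(speech_scores[e]) +
                w_face * s_membership(face_scores[e]) for e in EMOTIONS}
    return max(fused, key=fused.get)

# Hypothetical classifier outputs: speech strongly favours happiness,
# the face weakly favours anger; fusion resolves the disagreement.
speech = {"normal": 0.2, "happiness": 0.8, "anger": 0.3, "surprise": 0.1, "sadness": 0.2}
face   = {"normal": 0.3, "happiness": 0.6, "anger": 0.7, "surprise": 0.2, "sadness": 0.1}
print(fuse(speech, face))  # happiness
```

Weighting speech more heavily mirrors the paper's observation that the speech modality alone recognized emotion more reliably than the facial image.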

Development of Facial Emotion Recognition System Based on Optimization of HMM Structure by using Harmony Search Algorithm (Harmony Search 알고리즘 기반 HMM 구조 최적화에 의한 얼굴 정서 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.3
    • /
    • pp.395-400
    • /
    • 2011
  • In this paper, we propose a study of facial emotion recognition that considers the dynamic variation of emotional state across facial image sequences. The proposed system consists of two main steps: emotional feature extraction from facial images, and emotional state classification/recognition. First, we propose a method for extracting and analyzing the emotional feature region using a combination of the Active Shape Model (ASM) and Facial Action Units (FAUs). We then propose an emotional state classification and recognition method based on a Hidden Markov Model (HMM), a type of dynamic Bayesian network. To classify the emotional state more accurately, we adopt a heuristic optimization procedure based on the Harmony Search (HS) algorithm for learning the HMM parameters. Using these methods, we construct an emotion recognition system based on variations in dynamic facial image sequences and attempt to improve its recognition performance.
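
Harmony Search itself is simple to sketch: a memory of candidate parameter vectors (standing in here for an HMM parameter set) is improved iteratively by improvising a new harmony through memory consideration, pitch adjustment, or random selection, and replacing the worst member when the new one scores better. The objective below is a toy stand-in for the HMM likelihood the paper optimizes, and all rates are common textbook defaults, not the paper's settings.

```python
import random

def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Minimise objective over [lo, hi]^dim with basic Harmony Search."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                     # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:                  # pitch adjustment
                    x = min(hi, max(lo, x + rng.uniform(-bw, bw)))
            else:                                       # random selection
                x = rng.uniform(lo, hi)
            new.append(x)
        s = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                           # keep the better harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy objective: recover a known target parameter vector.
target = [0.2, 0.5, 0.8]
best, score = harmony_search(lambda h: sum((a - b) ** 2 for a, b in zip(h, target)),
                             dim=3, bounds=(0.0, 1.0))
print(best, score)
```

In the paper's setting the "harmony" would encode HMM transition and emission parameters and the objective would be the (negative) likelihood of the training sequences, which is why a derivative-free heuristic like HS is attractive.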

Emotion Training: Image Color Transfer with Facial Expression and Emotion Recognition (감정 트레이닝: 얼굴 표정과 감정 인식 분석을 이용한 이미지 색상 변환)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.4
    • /
    • pp.1-9
    • /
    • 2018
  • We propose an emotional training framework that can detect initial symptoms of schizophrenia by analyzing emotion through changes in facial expression. We use Microsoft's Emotion API to obtain facial expressions and emotion values at the present moment, analyze these values, and recognize subtle facial expressions that change over time. Emotional states were classified with a peak-analysis-based variance method in order to measure the emotions appearing in facial expressions over time. The proposed method analyzes deficits in emotion recognition and expressive ability using characteristics that differ from the emotional state changes classified according to the six basic emotions proposed by Ekman. Finally, the analyzed values are integrated into an image color transfer framework so that users can easily recognize and train their own emotional changes.

Facial Expression Animation Using 3D Face Modelling of Anatomy Base (해부학 기반의 3차원 얼굴 모델링을 이용한 얼굴 표정 애니메이션)

  • 김형균;오무송
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.2
    • /
    • pp.328-333
    • /
    • 2003
  • For facial expression animation, this paper works with 18 anatomically grounded muscle pairs that influence changes in facial expression and mixes their motions. After constructing a standard model and adjusting its mesh to an individual's image, the front and side images of the individual's face are mapped onto the mesh to increase realism. The muscle model that drives the animation is a modification of Waters' muscle model for facial expression generation. A textured, deformed face is created using these methods, and the six facial expressions proposed by Ekman are animated.
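
The Waters-style linear muscle the abstract builds on can be sketched roughly: each muscle has an attachment (bone) point and an insertion (skin) point, and mesh vertices inside its zone of influence are pulled toward the attachment with a falloff on distance. The cosine falloff below is a simplifying assumption standing in for Waters' angular and radial attenuation terms, and the mesh is 2D for brevity.

```python
import math

def apply_muscle(vertices, attachment, insertion, contraction, radius):
    """Displace 2D mesh vertices toward the muscle attachment point."""
    ax, ay = attachment
    moved = []
    for x, y in vertices:
        d = math.hypot(x - ax, y - ay)
        if 0 < d < radius:
            # Falloff is 1 near the attachment and 0 at the influence rim.
            falloff = math.cos(d / radius * math.pi / 2)
            pull = contraction * falloff / d   # scale for a unit direction
            moved.append((x + (ax - x) * pull, y + (ay - y) * pull))
        else:
            moved.append((x, y))               # outside the zone: unchanged
    return moved

# One vertex inside the influence radius, one outside.
mesh = [(1.0, 0.0), (3.0, 0.0)]
out = apply_muscle(mesh, attachment=(0.0, 0.0), insertion=(2.0, 0.0),
                   contraction=0.5, radius=2.0)
print(out)  # first vertex pulled toward the origin, second unchanged
```

Blending several such muscle displacements per vertex is what produces mixed expressions; the paper animates Ekman's six expressions by choosing contraction values per muscle pair.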

Linear accuracy of cone-beam computed tomography and a 3-dimensional facial scanning system: An anthropomorphic phantom study

  • Oh, Song Hee;Kang, Ju Hee;Seo, Yu-Kyeong;Lee, Sae Rom;Choi, Hwa-Young;Choi, Yong-Suk;Hwang, Eui-Hwan
    • Imaging Science in Dentistry
    • /
    • v.48 no.2
    • /
    • pp.111-119
    • /
    • 2018
  • Purpose: This study was conducted to evaluate the accuracy of linear measurements on 3-dimensional (3D) images generated by cone-beam computed tomography (CBCT) and facial scanning systems, and to assess the effect of scanning parameters, such as CBCT exposure settings, on image quality. Materials and Methods: CBCT and facial scanning images of an anthropomorphic phantom showing 13 soft-tissue anatomical landmarks were used in the study. The distances between the anatomical landmarks on the phantom were measured to obtain a reference for evaluating the accuracy of the 3D facial soft-tissue images. The distances between the 3D image landmarks were measured using a 3D distance measurement tool. The effect of scanning parameters on CBCT image quality was evaluated by visually comparing images acquired under different exposure conditions, but at a constant threshold. Results: Comparison of the repeated direct phantom and image-based measurements revealed good reproducibility. There were no significant differences between the direct phantom and image-based measurements of the CBCT surface volume-rendered images. Five of the 15 measurements of the 3D facial scans were found to be significantly different from their corresponding direct phantom measurements (P<.05). The quality of the CBCT surface volume-rendered images acquired at a constant threshold varied across different exposure conditions. Conclusion: These results indicate that existing 3D imaging techniques are satisfactorily accurate for clinical applications, and that optimizing the variables that affect image quality, such as the exposure parameters, is critical for image acquisition.

Local Appearance-based Face Recognition Using SVM and PCA (SVM과 PCA를 이용한 국부 외형 기반 얼굴 인식 방법)

  • Park, Seung-Hwan;Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.3
    • /
    • pp.54-60
    • /
    • 2010
  • The local appearance-based method is a face recognition approach that divides a face image into small areas, extracts features from each area using statistical analysis, and decides the identity of the face by integrating the per-area classification results with a voting scheme. The conventional local appearance-based method divides the face image into small pieces and uses all of them in the recognition process. In this paper, we propose a local appearance-based method that uses only the relatively important facial components, such as the eyes, nose, and mouth, which differ greatly from person to person. The proposed method detects the exact locations of these facial components using support vector machines (SVM), constructs a small image for each detected component, and then extracts features from each component image using principal component analysis (PCA). We compared the performance of the proposed method with that of conventional methods; the results show that the proposed method outperforms the conventional local appearance-based method while preserving its advantages.
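
The per-component PCA plus voting structure can be sketched as follows. The SVM component detector and the paper's classifier are replaced by stand-ins (random 16-dimensional "crops" and a nearest-neighbour match in the PCA subspace); only the divide-project-vote structure follows the abstract.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def pca_basis(X, k):
    """Mean and top-k principal directions of the training crops."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def nearest_identity(probe, gallery, mean, comps):
    """Nearest-neighbour match in the PCA subspace (stand-in classifier)."""
    p = comps @ (probe - mean)
    dists = {pid: np.linalg.norm(comps @ (crop - mean) - p)
             for pid, crop in gallery.items()}
    return min(dists, key=dists.get)

# Toy gallery: two identities with one 16-dim "crop" per facial component.
components = ["eye", "nose", "mouth"]
gallery = {c: {"A": rng.normal(0, 1, 16), "B": rng.normal(0, 1, 16)}
           for c in components}
bases = {c: pca_basis(np.stack(list(gallery[c].values())), k=2)
         for c in components}

# Probe resembling identity A, with a little noise in every component.
probe = {c: gallery[c]["A"] + rng.normal(0, 0.1, 16) for c in components}
votes = [nearest_identity(probe[c], gallery[c], *bases[c]) for c in components]
winner = Counter(votes).most_common(1)[0][0]
print(winner)  # A
```

Restricting the vote to discriminative components (eyes, nose, mouth) is the paper's key change: uninformative regions never get to outvote the informative ones.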

Robust Real-time Face Detection Scheme on Various Illumination Conditions (다양한 조명 환경에 강인한 실시간 얼굴확인 기법)

  • Kim, Soo-Hyun;Han, Young-Joon;Cha, Hyung-Tai;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.7
    • /
    • pp.821-829
    • /
    • 2004
  • Face recognition has been used for verifying and authorizing valid users, but its applications have been restricted by lighting conditions. To minimize these restrictions, this paper proposes a new algorithm for detecting a face in an input image obtained under irregular lighting. First, the proposed algorithm extracts an edge-difference image from the input image, in which skin color and the face contour may be lost due to the background color or the lighting direction. Next, it extracts a face region using the histogram of the edge-difference image together with intensity information. Using the intensity information, the face region is divided into horizontal regions likely to contain facial features; each horizontal region is classified into one of three groups (containing the eyes, nose, and mouth), and the facial features are extracted using their empirical properties. Only when the facial features satisfy their topological rules is the region accepted as a face. Experiments show that the proposed algorithm can detect faces even when a large portion of the face contour is lost due to inadequate lighting or when the background color is similar to the skin color.
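
The first two steps of such a pipeline can be illustrated compactly: compute an edge-difference image from neighbouring-pixel differences, then take its row histogram so that peaks indicate horizontal bands likely to contain facial features. The synthetic 6x6 "image" and the simple difference operator are illustrative assumptions, not the paper's exact operators.

```python
def edge_difference(image):
    """Sum of absolute horizontal and vertical neighbour differences."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            edges[y][x] = abs(image[y][x] - image[y][x + 1]) + \
                          abs(image[y][x] - image[y + 1][x])
    return edges

def row_histogram(edges):
    """Edge energy per row; peaks suggest feature-bearing horizontal bands."""
    return [sum(row) for row in edges]

# Synthetic face patch: two eye-like dark spots on row 2 of a bright field.
img = [[200] * 6 for _ in range(6)]
for x in (1, 4):
    img[2][x] = 40
hist = row_histogram(edge_difference(img))
print(hist.index(max(hist)))  # 2 -- the eye band has the strongest edges
```

Because the statistic is built from local differences rather than absolute brightness, it survives the global illumination shifts that defeat plain skin-color detectors, which is the motivation behind the edge-difference image.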