• Title/Summary/Keyword: Facial Color Model

Multiple Face Segmentation and Tracking Based on Robust Hausdorff Distance Matching

  • Park, Chang-Woo;Kim, Young-Ouk;Sung, Ha-Gyeong;Park, Mignon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.3 no.1
    • /
    • pp.87-92
    • /
    • 2003
  • This paper describes a system for tracking multiple faces in an input video sequence using facial convex-hull-based segmentation and a robust Hausdorff distance. The algorithm adopts a skin color reference map in the YCbCr color space and a hair color reference map in the RGB color space for classifying face regions. We then obtain an initial face model through preprocessing and a convex hull. For tracking, the algorithm computes the displacement of the point set between frames using a robust Hausdorff distance, and the best possible displacement is selected. Finally, the initial face model is updated using this displacement. We provide an example to illustrate the proposed tracking algorithm, which efficiently tracks rotating and zooming faces as well as multiple faces present in video sequences obtained from a CCD camera.
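
The abstract does not give the reference-map thresholds or the robust (rank-order) variant of the Hausdorff distance, so the following is only a minimal sketch of the two ingredients it names: skin-pixel classification in YCbCr space and a Hausdorff distance between point sets. The Cb/Cr bounds are commonly used illustrative values, not the authors'.

```python
import numpy as np
import cv2

# Assumed Cb/Cr bounds for the skin reference map; the paper's actual
# reference-map values are not given in the abstract.
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def skin_mask(bgr_image):
    """Classify skin pixels in YCbCr space (illustrative thresholds)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    mask = ((cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]) &
            (cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]))
    return mask.astype(np.uint8) * 255

def directed_hausdorff(a, b):
    """Directed Hausdorff distance between two point sets (N x 2 arrays)."""
    # For each point in a, distance to the nearest point in b; take the max.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance used to score candidate displacements."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

The paper's "robust" variant replaces the maximum with a rank-order statistic so that outlier points do not dominate the score; the plain version is shown here for brevity.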

Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.2373-2378
    • /
    • 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: an image processing stage, a facial feature extraction stage, and an emotion detection stage. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and a histogram analysis method. In the facial feature extraction stage, the features for emotion detection are extracted from the facial components. In the emotion detection stage, a fuzzy classifier is adopted to recognize the emotion from the extracted features. Experimental results show that the proposed algorithm detects emotion well.
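
As a rough illustration of the kind of fuzzy classifier the abstract mentions, the sketch below scores a few emotions with triangular membership functions over two hypothetical features (mouth openness and eyebrow raise). The rule base, feature set, and emotion labels are assumptions; the paper's actual classifier is not described in the abstract.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9),
                             (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def classify_emotion(mouth_openness, eyebrow_raise):
    """Return the emotion whose fuzzy rule fires most strongly.

    Features are assumed to be normalized to [0, 1]; the rules below are
    hypothetical, chosen only to show how min-composition of memberships works.
    """
    happy    = min(tri(mouth_openness, 0.4, 0.7, 1.0), tri(eyebrow_raise, 0.0, 0.3, 0.6))
    surprise = min(tri(mouth_openness, 0.5, 0.8, 1.0), tri(eyebrow_raise, 0.5, 0.8, 1.0))
    neutral  = min(tri(mouth_openness, 0.0, 0.2, 0.5), tri(eyebrow_raise, 0.0, 0.2, 0.5))
    scores = {"happy": happy, "surprise": surprise, "neutral": neutral}
    return max(scores, key=scores.get)

print(classify_emotion(0.75, 0.2))   # -> "happy" under these illustrative rules
```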

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.7
    • /
    • pp.159-170
    • /
    • 2009
  • It is very important to extract expression data and capture a face image from a video for online 3D face animation. Recently, there has been much research on vision-based approaches that capture the expression of an actor in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and traces a face and its expression data from real-time video input. Our system proceeds in three steps: face detection, facial feature extraction, and face tracking. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, and extract 10 feature points from the eye and lip areas following the FAPs (Facial Animation Parameters) defined in MPEG-4. We then trace the displacement of the extracted features across consecutive frames using a color probability distribution model. The experiments showed that our system can trace the expression data at about 8 fps.
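
A minimal sketch of a comparable pipeline is shown below: OpenCV's Haar cascade detects the face, and CamShift then tracks it using a hue histogram as the color probability distribution. This is a stand-in for the authors' tracker, not their implementation; the 10 MPEG-4 feature points and the eye/lip extraction are omitted.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
faces = face_cascade.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
x, y, w, h = faces[0]                       # assume at least one face was found

# Build a hue histogram of the detected face as the color probability model.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv[y:y+h, x:x+w]], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the search window to the color probability distribution.
    rot_rect, track_window = cv2.CamShift(backproj, track_window, term)
    box = np.int32(cv2.boxPoints(rot_rect))
    cv2.polylines(frame, [box], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:          # Esc to quit
        break
```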

Structural Design of Facial Contact Parts in Computerized Tongue Diagnosis System to Block Out External Light (외부광 차단을 위한 설진기 안면접촉부 설계)

  • Kim, Ji-Hye;Nam, Dong-Hyun
    • The Journal of the Society of Korean Medicine Diagnostics
    • /
    • v.17 no.3
    • /
    • pp.225-232
    • /
    • 2013
  • Objectives: The aim of this study is to design the facial contact part of a computerized tongue diagnosis system (CTDS) so that external light is effectively shielded even though facial shape and the degree of protrusion differ when a patient opens or closes the jaw. Methods: Each of 4 researchers manually produced clay models of the facial contact part of the CTDS. The shielding and contact feel of the clay models were evaluated by 20 assessors. Based on this evaluation, we selected the most appropriate model, produced the final silicone model, and then evaluated its shielding performance. We took tongue pictures of 60 participants with a CTDS fitted with the silicone model, both with and without external light. The RGB color values and gray-scale values of the tongue pictures taken with external light were compared with those taken without it. Results: There was no significant difference between the color values of the pictures taken with and without external light. Conclusions: We conclude that the produced facial contact part of the CTDS can effectively block out external light.
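
The comparison described in the Methods amounts to comparing mean RGB and gray-scale values of tongue images taken with and without external light. A minimal sketch follows; the file names are hypothetical and the paper's statistical test is not reproduced.

```python
import cv2

def mean_color_values(path):
    """Mean R, G, B and gray-scale values of an image (hypothetical file path)."""
    bgr = cv2.imread(path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    b, g, r = cv2.split(bgr)
    return {"R": r.mean(), "G": g.mean(), "B": b.mean(), "gray": gray.mean()}

with_light = mean_color_values("tongue_with_external_light.png")
without_light = mean_color_values("tongue_without_external_light.png")
diff = {k: with_light[k] - without_light[k] for k in with_light}
print(diff)   # small differences across subjects suggest effective shielding
```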

A Study on the Differences of Make-up Color Perception and Preference for the Development of Make-up Color System - Focused on a Female Model in Her Twenties - (메이크업 색채활용시스템 개발을 위한 화장색 이미지 지각 및 선호도 연구 - 20대 여성 모델을 중심으로 -)

  • Lee, Yon-Hee
    • The Research Journal of the Costume Culture
    • /
    • v.13 no.5 s.58
    • /
    • pp.712-728
    • /
    • 2005
  • To improve the efficiency of beauty education, this study used stimuli of a female model in her twenties wearing twenty-three different facial make-up looks and surveyed the perceived differences among them, for the development of a make-up color system based on color sense for Korean skin tones and make-up colors. The results and suggestions of this study are as follows. First, factor analysis of make-up color image perception yielded the factors Familiarity, Intelligence, Fitness, Charm, Tradition, and Youth. Second, the bare-face stimulus was evaluated as more familiar and intelligent than the stimuli with image make-up, but was perceived as unhealthy and non-traditional. Third, skin tone had a strong impact both on the lip colors applied in monotone make-up and on the image make-up applied with contrasting colors. These results confirm that skin tone and make-up colors were influential variables in the perception of and preference for facial images of a female model in her twenties, and that image evaluation and preference can change according to color contrast. This research can be used as a basic tool for developing a make-up color application system informed by image perception across demographic variables and by preference research, and it also aims to suggest alternatives for making current college make-up education more systematic and organized.
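
The factor analysis mentioned in the first result could be run as in the sketch below, which assumes a hypothetical respondents-by-items rating matrix and extracts six factors because six perception factors are reported; the actual questionnaire items and extraction settings are not given in the abstract.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical ratings matrix: rows are respondents, columns are
# semantic-differential items about the stimuli (values 1-7).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(120, 18)).astype(float)

# The abstract reports six perception factors (Familiarity, Intelligence,
# Fitness, Charm, Tradition, Youth), so six components are extracted here.
fa = FactorAnalysis(n_components=6, random_state=0)
scores = fa.fit_transform(ratings)       # factor scores per respondent
loadings = fa.components_.T              # item loadings on each factor
print(loadings.shape)                    # (18, 6)
```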

Recognition of Human Facial Expressions using Optical Flow of Feature Regions (얼굴 특징영역상의 광류를 이용한 표정 인식)

  • Lee Mi-Ae;Park Ki-Soo
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.6
    • /
    • pp.570-579
    • /
    • 2005
  • Facial expression recognition technology, which has potential applications in various fields, is being applied to man-machine interface development, human identification, and the restoration of facial expressions on virtual models. Using sequential facial images, this study proposes a simple method for detecting human facial expressions such as happiness, anger, surprise, and sadness. The proposed method can also detect facial expressions when the sequential facial images contain non-rigid motion. We identify the face and the elements of facial expression, and then estimate the feature regions of these elements using information about color, size, and position. In the next step, the direction patterns of the feature regions of each element are determined using optical flow estimated by gradient methods. Using the direction model proposed in this study, we match each direction pattern. The method identifies a facial expression based on the minimum combined score between the direction model and the pattern matching for each facial expression. Experiments verify the validity of the proposed method.
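
A minimal sketch of the gradient-based optical flow step is shown below: Lucas-Kanade flow is computed for corners inside one feature region and reduced to a dominant direction. The region coordinates, tracker parameters, and the subsequent direction-model matching are assumptions or omissions, not the authors' exact method.

```python
import cv2
import numpy as np

def dominant_direction(prev_gray, next_gray, region):
    """Estimate the dominant motion direction inside a feature region.

    prev_gray/next_gray are consecutive grayscale (uint8) frames and
    region is (x, y, w, h); flow is computed with a gradient-based
    (Lucas-Kanade) tracker on corners detected inside the region.
    """
    x, y, w, h = region
    mask = np.zeros_like(prev_gray)
    mask[y:y+h, x:x+w] = 255
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=3, mask=mask)
    if pts is None:
        return None
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    flow = (new_pts - pts)[status.flatten() == 1]
    if len(flow) == 0:
        return None
    dx, dy = flow.reshape(-1, 2).mean(axis=0)
    return np.degrees(np.arctan2(-dy, dx))   # mean flow angle in image coordinates
```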

Detection of Abnormal Region of Skin using Gabor Filter and Density-based Spatial Clustering of Applications with Noise (가버 필터와 밀도 기반 공간 클러스터링을 이용한 피부의 이상 영역 검출)

  • Jeon, Minseong;Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.2
    • /
    • pp.117-129
    • /
    • 2018
  • In this paper, we propose a new system that detects abnormal regions of skin. First, an illumination elimination algorithm based on the LAB color model is applied to the input facial image to obtain an image robust to illumination; a Gabor filter is then applied to detect discontinuity responses. Finally, the density-based spatial clustering of applications with noise (DBSCAN) algorithm is applied to classify areas of wrinkles, spots, and other skin conditions. This method allows a user to check the skin condition of images taken in daily life.
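
A minimal sketch of the Gabor-plus-DBSCAN stage follows: a small Gabor filter bank is applied to a grayscale facial image and pixels with strong responses are clustered with DBSCAN. The kernel and clustering parameters are illustrative guesses, and the LAB-based illumination elimination step is omitted.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def abnormal_regions(gray, response_threshold=0.6):
    """Cluster pixels with strong Gabor responses into candidate regions."""
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):      # 4 orientations (assumed)
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0,
                                    ktype=cv2.CV_32F)
        responses.append(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel))
    response = np.max(responses, axis=0)
    response = (response - response.min()) / (np.ptp(response) + 1e-9)

    ys, xs = np.where(response > response_threshold)
    coords = np.column_stack([xs, ys])
    if len(coords) == 0:
        return []
    labels = DBSCAN(eps=5, min_samples=10).fit_predict(coords)
    # Each non-noise label is one candidate wrinkle/spot/lesion region.
    return [coords[labels == k] for k in set(labels) if k != -1]
```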

Facial Boundary Detection using an Active Contour Model (활성 윤곽선 모델을 이용한 얼굴 경계선 추출)

  • Chang Jae Sik;Kim Eun Yi;Kim Hang Joon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.1
    • /
    • pp.79-87
    • /
    • 2005
  • This paper presents an active contour model for extracting accurate facial regions in complex environments. In the model, a contour is represented by the zero level set of a level set function φ and evolved via level set partial differential equations. Unlike general active contours, skin color information represented by a 2D Gaussian model is used for evolving and stopping the curve, which makes the proposed method robust to noise and varying pose. To assess its effectiveness, the proposed method was tested on several natural scenes, and the results were compared with those of geodesic active contours. Experimental results demonstrate the superior performance of the proposed method.
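
The sketch below illustrates the two ideas the abstract names: a 2D Gaussian skin-color likelihood in chrominance space and a contour evolved on that likelihood map. A morphological geodesic active contour from scikit-image is used as a stand-in for the paper's own level set PDE, and the Gaussian parameters are assumed to come from skin samples.

```python
import numpy as np
import cv2
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

def skin_likelihood(bgr, mean, cov):
    """2D Gaussian skin-color likelihood in (Cb, Cr) chrominance space.

    mean (2,) and cov (2x2) are assumed to be learned from skin samples;
    the paper's actual parameters are not given in the abstract.
    """
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    x = ycrcb[..., [2, 1]] - mean                 # per-pixel (Cb, Cr) deviations
    inv = np.linalg.inv(cov)
    mahal = np.einsum("...i,ij,...j->...", x, inv, x)
    return np.exp(-0.5 * mahal)

def face_boundary(bgr, mean, cov, num_iter=200):
    """Evolve a contour toward the face boundary on the skin likelihood map."""
    p = skin_likelihood(bgr, mean, cov)
    gimage = inverse_gaussian_gradient(p)         # edge-stopping function
    init = np.zeros(p.shape, dtype=np.int8)
    init[10:-10, 10:-10] = 1                      # initial contour near the border
    return morphological_geodesic_active_contour(gimage, num_iter,
                                                 init_level_set=init)
```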

Cold sensitivity classification using facial image based on convolutional neural network

  • Ilkoo Ahn;Younghwa Baek;Kwang-Ho Bae;Bok-Nam Seo;Kyoungsik Jung;Siwoo Lee
    • The Journal of Korean Medicine
    • /
    • v.44 no.4
    • /
    • pp.136-149
    • /
    • 2023
  • Objectives: Facial diagnosis is an important part of clinical diagnosis in traditional East Asian Medicine. In this paper, we propose a model that quantitatively classifies cold sensitivity using a fully automated facial image analysis system. Methods: We investigated cold sensitivity in 452 subjects. Cold sensitivity was determined using a questionnaire, and the Cold Pattern Score (CPS) was used for analysis. Subjects with a CPS below the first quartile (low-CPS group) were assigned to the cold non-sensitive group, and subjects with a CPS above the third quartile (high-CPS group) were assigned to the cold-sensitive group. After splitting the facial images into train/validation/test sets, the train and validation sets were fed into a convolutional neural network to train the model, and the classification accuracy was then calculated on the test set. Results: The classification accuracy between the low-CPS and high-CPS groups using facial images of all subjects was 76.17%. The classification accuracy by sex was 69.91% for females and 62.86% for males. It is presumed that the deep learning model used facial color or facial shape to classify the two groups, but it is difficult to determine which feature was more important. Conclusions: The experimental results show that the low-CPS and high-CPS groups can be classified with a modest level of accuracy using only facial images. More advanced models need to be developed to increase the classification accuracy.
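
A minimal sketch of such a binary classifier is shown below using a small Keras CNN and a train/validation/test directory split. The architecture, image size, and directory layout are assumptions for illustration; the paper's actual network is not described in the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (128, 128)   # assumed input resolution

def build_model():
    """A small CNN for binary low-CPS vs high-CPS classification (illustrative)."""
    return models.Sequential([
        layers.Input(shape=IMG_SIZE + (3,)),
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

# Hypothetical directory layout: faces/{train,val,test}/{low_cps,high_cps}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "faces/train", image_size=IMG_SIZE, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "faces/val", image_size=IMG_SIZE, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "faces/test", image_size=IMG_SIZE, label_mode="binary")

model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)
print(model.evaluate(test_ds))   # loss and accuracy on the held-out test set
```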

Facial Feature Tracking from a General USB PC Camera (범용 USB PC 카메라를 이용한 얼굴 특징점의 추적)

  • 양정석;이칠우
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.10b
    • /
    • pp.412-414
    • /
    • 2001
  • In this paper, we describe a real-time facial feature tracker that uses only a general USB PC camera without a frame grabber. The system achieves a rate of 8+ frames/second without any low-level library support and tracks the pupils, nostrils, and corners of the lips. The signal from the USB camera is in YUV 4:2:0 format. We convert the signal to the RGB color model to display the image, and interpolate the V channel of the signal to extract the facial region. We then analyze 2D blob features in the Y channel (the luminance of the image) with geometric restrictions to locate each facial feature within the detected facial region. Our method is simple and intuitive enough that the system works in real time.
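
A minimal sketch of the color-handling steps is shown below, assuming the 4:2:0 buffer is planar I420: the frame is converted to RGB for display, and the V plane is interpolated to full resolution so it can be thresholded to localize the skin-colored face region. The exact buffer layout of the authors' camera driver is an assumption.

```python
import numpy as np
import cv2

def i420_to_rgb(frame_bytes, width, height):
    """Convert a planar YUV 4:2:0 (I420) buffer to RGB (layout assumed)."""
    yuv = np.frombuffer(frame_bytes, dtype=np.uint8).reshape(height * 3 // 2, width)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB_I420)

def upsampled_v_plane(frame_bytes, width, height):
    """Return the V chrominance plane interpolated to full resolution.

    The interpolated plane can then be thresholded to localize the
    skin-colored face region, as the abstract describes.
    """
    n = width * height
    buf = np.frombuffer(frame_bytes, dtype=np.uint8)
    v = buf[n + n // 4:].reshape(height // 2, width // 2)   # V follows Y and U in I420
    return cv2.resize(v, (width, height), interpolation=cv2.INTER_LINEAR)
```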
